Starting Oracle Database on Linux 7 using systemd and making OEM start only after the database is up

Recently I created a new virtual machine for the Oracle Database Admin, Install and Upgrade class that I teach at ACC. Previously I’d used Oracle Virtual Box on my local machine and then uploaded the image to Oracle Cloud and used Ravello to give each of my students their own server.

It was actually pretty straightforward:

  1. Upload the latest Oracle Linux (7.6) ISO that I got from
  2. Create a new blank machine with the following:
    1. 4 CPUs
    2. 24 GB of RAM
    3. 200 GB of disk
    4. Mount the uploaded ISO as a CD-ROM
    5. An elastic IP
    6. Services as follows:
      1. SSH (port 22)
      2. HTTPS (port 443)
      3. VNC (port 5901)
      4. HTTPS (port 7803 for Cloud Control)
  3. Start the image, configuring Oracle Linux 7 with the following:
    1. "Server with GUI" as the software selection
    2. A static IP address (I used, but you could use anything).
    3. IP filtering so my ACC students could access the servers while they are in the labs at ACC and I could access the machines from home
    4. Partition the disk into a 16 GB swap partition, a 10 GB /home partition and then the rest of the disk as the root partition.
    5. When it came time to reboot the server, remove the CD-ROM image and update the configuration before rebooting so the image boots up using the disk.
  4. Install Oracle 12c and create an emrep repository database for OEM 13.3.
  5. Install OEM 13.3.
  6. Install Oracle 11g and create a database that will be upgraded during the course.

At this point everything was great, but since I teach 3-hour classes on Mondays and Wednesdays and shut the servers down between classes, my databases and OEM need to come up cleanly. Oracle has documentation on creating services to automatically start databases on Linux/Unix, but it uses the old System V method for starting services (which, to be fair, does still work on Linux 7). Since this was a Linux 7 server, I wanted to use the newer systemd method. Tim's rather fantastic site had the basic framework, but where he used scripts that he called from the service, I wanted to use dbstart and dbshut so that startup and shutdown can be maintained from a single file (/etc/oratab) rather than by modifying a script.
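For reference, dbstart and dbshut decide what to start and stop from /etc/oratab, where each entry has the form SID:ORACLE_HOME:Y|N and only entries flagged Y are managed. An entry for the emrep repository database in the 12.2 home on this server might look like this (a sketch; your paths may differ):

```text
# /etc/oratab entries: SID:ORACLE_HOME:Y|N
# Y means dbstart/dbshut will start and stop this instance
emrep:/u01/app/oracle/product/12.2.0/dbhome_1:Y
```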

I created the following file:

[root@dba ~]# vim /usr/lib/systemd/system/oracle-database.service
[Unit]
Description=The Oracle Database Service
After=syslog.target network.target

[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature.
LimitMEMLOCK=infinity
LimitNOFILE=65535

# Multiple ExecStart lines are only allowed with Type=oneshot, and
# RemainAfterExit keeps the unit active so ExecStop runs at shutdown.
Type=oneshot
RemainAfterExit=yes
User=oracle
Group=oinstall

ExecStart=/usr/bin/echo 'Starting Oracle Databases with Y in /etc/oratab'
ExecStart=/u01/app/oracle/product/12.2.0/dbhome_1/bin/dbstart /u01/app/oracle/product/12.2.0/dbhome_1
ExecStart=/usr/bin/echo 'dbstart has completed'

ExecStop=/usr/bin/echo 'Stopping Oracle Databases'
ExecStop=/u01/app/oracle/product/12.2.0/dbhome_1/bin/dbshut /u01/app/oracle/product/12.2.0/dbhome_1
ExecStop=/usr/bin/echo 'dbshut has completed'

[Install]
WantedBy=multi-user.target


I then enabled the service using the following:

[root@dba ~]# systemctl daemon-reload
[root@dba ~]# systemctl enable oracle-database

While the above worked great to start the database (tested with a reboot of the server), it didn't address another issue. Unlike the database, Oracle Enterprise Manager comes with out-of-the-box scripts to start and stop OEM. They are old-style System V scripts that run out of /etc/init.d, and it didn't really seem worth going through the trouble of converting them to the new systemd format. Unfortunately, the OEM scripts always assume that the database is already up and running. If your repository database runs on the same server as your OMS (which isn't really that big of a deal if your hardware can handle it), that can be fixed by modifying the OEM startup script and adding a 'check that your database is up and running before you start OEM' section. The content below was added to the out-of-the-box OEM script after the initial comments in the file.

[root@dba ~]# vim /etc/init.d/gcstartup
# 2019-03-05 Rich Soule
# OEM should only startup if the emrep database is already up and running
# on the local machine so the below was added to make sure that happens.
#################### Begin Rich Soule Added Lines #######################
if [ "$1" = "start" ]
then
  counter=0
  while [ $counter -le 24 ]
  do
    if ! /usr/bin/ps -e | /usr/bin/grep -q ora_pmon_emrep
    then
      echo 'OEM is waiting on Oracle database to start'
      sleep 10
      counter=$((counter+1))
    else
      break
    fi
  done
  if [ $counter -ge 24 ]
  then
    echo 'Oracle database did not start in time, exiting OEM startup'
    exit 1
  fi
  echo 'Oracle database started, waiting 20 more seconds for database to open'
  sleep 20
  echo 'OMS will now attempt to start as per remainder of the /etc/init.d/gcstartup script'
fi
####################  End Rich Soule Added Lines ########################

The above first checks that gcstartup was called with the start argument. If so, we check whether an ora_pmon_emrep process is running (emrep is the name of my OEM repository database). If it isn't running, we wait 10 seconds, increment a counter, and check again, for up to 240 seconds total. Once the process is found, we break out of the while loop. If we hit the 240-second limit, we exit the gcstartup script entirely; otherwise, we wait 20 more seconds for the database to open and then continue with the rest of the gcstartup script.
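The same poll-until-ready-or-give-up pattern generalizes well beyond init scripts. Here is a minimal sketch in Python; the predicate, attempt count, and interval names are placeholders for illustration, not part of the actual gcstartup script:

```python
import time

def wait_for(predicate, attempts=24, interval=10, sleep=time.sleep):
    """Poll predicate() up to `attempts` times, `interval` seconds apart.

    Returns True as soon as the predicate holds, False if the retry
    budget is exhausted -- the caller decides whether to abort (exit 1
    in the gcstartup case) or carry on.
    """
    for _ in range(attempts):
        if predicate():
            return True
        sleep(interval)
    return False

# Hypothetical usage: is_pmon_up would check for the ora_pmon_emrep process.
# if not wait_for(is_pmon_up):
#     raise SystemExit('database did not start in time')
```

Injecting the sleep function makes the retry logic testable without actually waiting.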

So far this has been working like a charm.

Note that you could do the same type of thing with OEM and instead have the script make a connection to a remote server every 10 seconds to make sure that the remote OEM repository was up before attempting to start OEM.

Happy Linux-ing and Oracle-ing!


Party like it’s 1979!

After 120,000+ track miles on my Lotus Exige, I upgraded to a Radical SR3. (Reason: The Exige was too big and heavy. Reason for the Exige upgrade in 2006: The Miata I was driving was too big and heavy. It’s going to be tough going lighter than the Radical, but it is theoretically possible…) I still had the Exige sitting in the garage and, in theory, I’d be able to drive it when I needed to drive a car. In practice, I ended up driving my F350 back and forth two nights a week where I teach Oracle DBA courses as a way to give back to the community. (The rest of the time I work from my home office.)

Stacey and I decided that it would probably be better for the Exige to go to a new home since we were not really using it anymore. (Stacey daily drives and instructs in a 911). A fellow instructor purchased it, fixed it up a bit and turned it over to someone else. He already had an Exige that he tracked quite a bit and built a few times over, so he had the experience to fix my old car up. It’s nice to know that someone else is using the Exige as it is a truly great car.

However, this meant that I had no choice but to drive the F350 around. It’s an old truck (year 2000) with a lot of miles (190K+) which really isn’t that big of a deal to me. However it is a long bed/dually, so it is not that fun to park. I decided that I should get another car just to drive around in. The smart choice would have been to pick up an old Honda Civic or something for a few K. Who really wants to be smart though…

Enter another track friend, Mark. Mark had a bit of a car problem (too many cars, according to his wife at least) and he had a solution to my dilemma:

1979 Blue Bird Trans Am

That’s a numbers-matching, 6.6L 1979 Bluebird Trans Am with a four-speed.

After getting approval from Stacey that this would be OK, Mark and I traded my cash for his car.

It’s been a bit of an adventure driving around in a 40-year-old car, but it’s fun.

I just had the original(!) driver’s seat cushions replaced with new foam by Nikito’s Upholstery here in Northwest Austin, and I’m about to go put the seat back in and take her for a drive.

Oracle Broke DICTIONARY in 12.2+ PDBs

When Oracle introduced pluggable databases in Oracle 12.1, they were supposed to be indistinguishable from non-pluggable databases as far as clients were concerned. Basically, this was true. But one thing that has since broken, at least in 12.2 and higher (confirmed in 18.3), is the comments on the DICTIONARY view.

I (and my students at ACC) use the DICTIONARY view or DICT for short quite a bit as it’s a quick way to find the data dictionary view that you are looking for.

DICTIONARY (or the DICT synonym) is great. It’s got two columns, TABLE_NAME and COMMENTS. Comments are very useful to search through (make sure to use UPPER as they are mixed case) because sometimes the table name can be a bit opaque.
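Because the comments are mixed case, a search has to normalize case on both sides (in SQL: where upper(comments) like upper('%term%')). The same idea can be sketched in Python over a couple of illustrative name/comment pairs (the comments below are paraphrased examples, not the exact dictionary text):

```python
# DICTIONARY maps view names to mixed-case descriptions, so matching
# must be case-insensitive on both the search term and the comment.
views = {
    "DBA_TABLES": "Description of all relational tables in the database",
    "DBA_USERS": "Information about all users of the database",
}

def search_dictionary(views, term):
    """Return view names whose comment contains term, ignoring case."""
    term = term.upper()
    return sorted(name for name, comment in views.items()
                  if term in comment.upper())
```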

The entry for DICTIONARY is pretty clear:

select * 
  from dict 
 where table_name = 'DICTIONARY';
DICTIONARY Description of data dictionary tables and views

The above is exactly what you get when you are connected to the root container (CDB$ROOT) of any version of a multitenant database.

However, if you connect to a pluggable database in any version other than 12.1 (12.1 worked as expected), the same query comes back with a null comment for DICTIONARY.

It turns out that in 12.2 databases and higher, there is only a single non-null comment in the DICTIONARY view when you are connected to a pluggable database:

select * 
  from dict 
 where comments is not null;
DBA_CONTAINER_DATA Describes default and object-specific CONTAINER_DATA attributes

And, interestingly enough, in a PDB, DBA_CONTAINER_DATA has no rows! So the only comment in the DICTIONARY view in a PDB is for a view that has no data in PDBs. Obviously, someone dropped the ball here.

In a 12.1 pluggable database, you’d get a LOT of comments in the DICTIONARY view:

Connected to:
Oracle Database 12c Enterprise Edition Release 64bit Production

SYS@devpdb > select count(*) from dict where comments is not null;


I pointed this out to Oracle Support back in August of 2018. After first not believing me (and asking what I might have done to my data dictionary), then claiming that they couldn't reproduce it, and finally admitting that they could, they logged a bug.

Currently, the status is:

“The bug is at Sev2 and will be worked upon by Dev as per the priority.”

The bug number is 28998679.

You can make the DICTIONARY view work in PDBs by creating a database link to the root container and getting the comments from there. Obviously (I hope), updating the actual view name (DICTIONARY) and/or public synonym (DICT) wouldn’t be supported, so you should create your own public synonym and/or view. Of course, I’m not sure how you’d get developers to use it since they are expecting DICTIONARY/DICT to just work…

As an aside, I came across this when building an APEX application for my students that would let them use an interactive report against DICTIONARY to search for interesting data dictionary views.

Names can be up to 50 BYTES in length…

…said no business user ever.

Business users don’t think in BYTES, they think in CHARACTERS. Tables in databases hold information for business users to use, not (generally) for geeky computer programmers who think in bytes.

Unfortunately, many databases (including Oracle) have their defaults configured to use bytes as the length for table columns. I believe that this is wrong…

Let’s first demonstrate the issue and then we’ll see how it should be fixed and finally how you actually have to fix it.

The Issue

Let's say we have a business requirement from our customer that reads like the following. (Note: this is a really bad requirement, and any competent DBA should never let a requirement like this get into a database they manage, but let's go with it.)

“For our business we allow all of our customers to rate each item we sell with 1 to 5 hearts. The rating should be stored using one of the following values:

  1. ♥
  2. ♥♥
  3. ♥♥♥
  4. ♥♥♥♥
  5. ♥♥♥♥♥

We understand that this is ridiculous and that we should really use numbers, but we really want it stored this way.”

Let’s create a table that will store the rating. Note, that this is a horrible table and it’s missing just about everything that makes a table a good table, but we want to focus on the character length issue instead of proper database modeling techniques.

create table t1(c1 varchar2(5));

With our table created, let’s start to add each of the ratings to the table:

insert into t1 values('♥');
one row inserted
insert into t1 values('♥♥');
one row inserted
insert into t1 values('♥♥♥');
ORA-12899: value too large for column

Hmmm…. Our table should store up to 5 characters, and yet we can’t actually store 5 characters. Why is this? Let’s use SQL Developer (or SQLcl) to take a look at our table using the new(er) info command:

info t1
TABLE: T1
	 ROWS         : 
	 COMMENTS     : 

Columns
NAME        DATA TYPE          NULL
 C1         VARCHAR2(5 BYTE)   Yes

As we can see, table t1 has been defined to allow up to 5 bytes of data in column c1. Since the heart symbol takes more than one byte to store, we can only fit two hearts into the column. When we try to add a row with three hearts, we get an error.
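The byte-versus-character gap is easy to see outside the database. In UTF-8 (the encoding behind AL32UTF8), the BLACK HEART SUIT character U+2665 takes three bytes, so character counts and byte counts diverge quickly. (The exact per-character byte cost in your database depends on its character set and on which heart glyph you actually store.)

```python
# One character can cost several bytes in UTF-8:
# ASCII stays at 1 byte per character, U+2665 costs 3.
heart = "\u2665"                         # the heart character
print(len(heart))                        # characters: 1
print(len(heart.encode("utf-8")))        # bytes: 3

# Five hearts fit in VARCHAR2(5 CHAR), but need far more than
# 5 bytes, which is why VARCHAR2(5 BYTE) rejects them.
five = heart * 5
print(len(five), len(five.encode("utf-8")))   # 5 15
```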

The Right Fix (that’s really the wrong fix)

The 'right' fix for this is to do the following as a SYSDBA user in the root container of your database (you have moved on from the officially deprecated non-CDB architecture, right?):

alter system set nls_length_semantics = 'CHAR';

However, you'll see in the documentation that you shouldn't do this, as it ends up breaking things, especially Oracle Text, which has quite a few notes on Oracle Support that basically say the only way to get Oracle Text to work is to NOT use the above statement.


In my opinion, this is a bug. Starting with Oracle Database 12c, Oracle finally defaults the character set of a newly created database to AL32UTF8 (which should be the character set used on all newly created databases), and in a future release it should also fix the NLS_LENGTH_SEMANTICS issue. (At least in my opinion it should! I'm pretty sure it's not part of the actual plans for future releases.) The default should be CHAR, and code that would break if CHARs were used instead of BYTEs (I'm looking at you, Oracle Text...) should be fixed to either specify BYTE directly or to work with CHAR.

The Actual Fix(es)

Since the right fix should work, but really doesn’t, how can you address the issue correctly? There are two different methods to fix the issue:

  1. Correctly set DDL definitions to use CHAR.
  2. Alter the user’s session setting the NLS_LENGTH_SEMANTICS to CHAR.

1. The Correct DDL Fix

This is actually the best way to fix the issue, as it guarantees that no matter what the defaults are in the database, your tables are always created correctly. In the same way that you should never trust the date format in a database to be the (also incredibly horrible) DD-MON-RR format, and should instead always use the to_date function with a correct format mask, you should always declare the length of a character-based column with the correct length semantics. The correct semantics are character based, not byte based (unless the business requirement really is to store something with a length in bytes, which would be strange indeed for 99.99% of business requirements).

So, instead of this:

create table t1(c1 varchar2(5));

Do this:

create table t1(c1 varchar2(5 char));

By explicitly creating the table correctly, everything will work correctly in any database:

drop table t1;
table dropped

create table t1(c1 varchar2(5 char));
table created

insert into t1 values('♥');
one row inserted
insert into t1 values('♥♥');
one row inserted
insert into t1 values('♥♥♥');
one row inserted
insert into t1 values('♥♥♥♥');
one row inserted
insert into t1 values('♥♥♥♥♥');
one row inserted

Great! Everything worked just the way we wanted.

2. The Alter Session Fix

The above fix could potentially require a lot of work if you have a few thousand, or tens of thousands, of columns defined without the CHAR attribute that should have been specified in each create table statement.

You could attempt to fix each statement by brute force, or instead just run the following line BEFORE you run all the create table statements that are not as semantically specific as they should be:

alter session set nls_length_semantics = 'CHAR';

Now let’s recreate our t1 table:

drop table t1;
table dropped
create table t1(c1 varchar2(5)); --Still semantically incorrect!
table created
info t1
TABLE: T1
	 ROWS         : 
	 COMMENTS     : 

Columns
NAME        DATA TYPE          NULL
 C1         VARCHAR2(5 CHAR)   Yes

By altering the NLS_LENGTH_SEMANTICS of our session, we've managed to make our semantically incorrect statement, “create table t1(c1 varchar2(5));”, work correctly.

Note that unless you work it out with Oracle Support ahead of time, you definitely shouldn't alter NLS_LENGTH_SEMANTICS in your session before running any Oracle-supplied scripts. For example, it would be wrong to do this before running the Oracle APEX install scripts. But for your OWN tables, you should always define them in the most semantically correct way, and if you can't do that for some reason, you can alter the session to get the correct storage of your data from semantically incorrect statements.

Remember… No business user ever said “Names can be 50 bytes in length.”


SQL Developer SSH Connections In Depth


Available now: A 29-page white paper on SQL Developer SSH Connections that explains in detail how SSH connections should be configured between your desktop/laptop and your organization’s database!

The vast majority of explanations out on the internet of how to configure SSH connections between Oracle SQL Developer and an Oracle Database are incredibly simplistic, often using the oracle OS account as the SSH user (absolutely a worst practice!) and omitting important things like how firewalls will impact connections, how your SSH server should be configured, etc.

I wrote a 29-page white paper which has been published on the website. It covers the above and more in exacting detail. If you click the link above you’ll be taken to a description of the white paper and then to a page where you’ll be able to download it after supplying a name and email (a fair trade for the amount of time and effort I put into the white paper…).

Note: The image above is from the white paper and is used for explanation purposes only… None of those things actually exist except for the WordPress external firewall that protects just this blog, or, actually, all the blogs at

Thoughts on the line at CotA

CotA Track Map
After many thousands of laps at CotA I can say for certain that the turns that most folks have trouble with are the following:
  1. Turn 3
    CotA Track Map - Turn 3.PNG
    You should enter turn 3 from as far LEFT as you feel comfortable. You should NOT go to the right side of the track to enter turn 3. Ideally you are carrying as much speed as possible through 2 and are brushing your driver's-side wheels (assuming a left-hand-drive car here) right over the edge of the apex curbing at 3 on your way into the apex of 4. Cars end up in the wall on the exit of turn 3/entrance to turn 4 because they tried to enter 3 from the right-hand side of the track. I believe the thought is "Turn 3 is a left-hand turn, so you should enter it from the right side of the track," but turn 3 leads directly into turn 4, so driving from apex to apex in a straight line through 3 and 4 is MUCH more efficient and much, much safer.
  2. Turn 7
    CotA Track Map - Turn 7-8-9
    Turn 7 is one of the two most important turns on the track, as it is one of the two INCREASING-radius turns (the other being turn 19, see below). Most drivers late apex 7; instead, you should take care NOT to late apex 7, track out of turn 7 to the right-hand side of the track, and then throw away turn 8. Unless you are driving a tricycle you are going to be traction limited going through turn 9, so entering turn 8 from the left-hand side of the track (to get a 'good run up the hill') is just plain silly.
  3. Turn 8
    See above. Enter turn 8 basically ON the curbing on the right hand side of the track and force the car over to the right as much as possible to open up turn 9. Turn 8 is the throw away turn.
  4. Turn 13
    CotA Track Map - Turn 13

    Turn 13 is often taken too fast and early apexed, which leads to throttle bobble as you exit 13 (or to going wide and off track). Instead, take 13 a bit deeper, brake a bit harder, and turn in a bit harder so that you apex 13 a bit later than you think you should; you'll then be able to constantly add throttle all the way into 15.
  5. Turn 19
    CotA Track Map - Turn 19

    Turn 19 is an increasing radius turn, so it can be taken much, much earlier and much, much faster than most do.

The rest of the track is mostly just ‘regular’ corners and most get them mostly right.

However, the above corners are where my green, first-day-ever students often end up being significantly more efficient around the track than just about everybody else in the run group, often to the point where a student will ask me "Why are they going over there to enter turn three?" or "Why are they going over there to enter turn eight?" while we coast behind them off throttle to give the car in front of us enough room to get through the corner. My answer: "I'm not really sure why, but as you can see, we are significantly more efficient taking this line, aren't we?" And every single time, I get a very enthusiastic "Yes!"

This also applies to MANY of the advanced students that I coach.

Data analysis bears out the above too…


Rich Soule Racing adds a new car to the fleet

It all began with a trip to the incredibly dangerous

I’d had a 2004 Radical SR3 for a number of years which I loved, but a 2008 Radical SR3 done up like a Mazda 787B came up for sale and on a lark, I threw in a very low bid. The 787B liveried Radical was four years newer and also had a very desirable (and expensive) paddle shifter upgrade and a spare set of wheels. Who knows, maybe nobody else would bid… Four days later the auction was coming to an end and the car still hadn’t hit market value. Maybe I could grab it? I put in a bid at way under market value and watched the timer go down. A new bid came in, so I put in a new slightly higher bid. This kept on going back and forth for the next (very stressful) 11 minutes (Bring A Trailer always leaves the auction open for two minutes after any bid, so there is no sniping…). At the end of that 11 minutes I saw this:



I’d gone from this:


to this:


This led to a number of issues:

  • I now had two Radicals.
  • I had to explain to my wife why this was a good thing.
  • The new Radical was in Los Angeles and I live in Austin.

A quick conversation with the seller solved issue 1. He’d sell my older Radical for me.

The 2nd issue was a bit harder to resolve, but having the first issue solved sort of helped me a bit. I also reminded my wife that I’d be very happy with the newer car and I’ve always appreciated how much she values my happiness. I also got her a dozen roses delivered to her office. I think I’ll be OK on this one.

Depending on what my first Radical goes for, I might come out of this pretty clean. I did get the 2nd car at the lower end of market value and it did have a very nice set of extra wheels with brand new tires. Add in the pneumatic paddle shifters and I don’t think I would have been able to add those upgrades to my current car without going over the purchase price of the new car minus the sales price of my old car (we’ll see what my old car goes for…).

The final issue led to this past weekend's great adventure. When I first purchased the white Radical, I had to get a truck and trailer. Since both my wife and I instruct for various organizations, I ended up getting a two-car covered trailer and an (older) F350 dually.


A friend of mine had a Radical Pro Sport that he had purchased from Bring A Trailer, but the organizations he was running with wanted him to only run it in the instructor run groups, so he was looking to let someone else have fun with it. His car was down in Houston. The seller of my new car said he’d also sell my friend’s Pro Sport. Thus a plan was born: Drive to Houston, pick up friend’s car, drive to LA, drop off both cars, load up new car and drive back to Austin. Planned travel time: Friday to (very, very early) Monday morning. I knew this was pushing it, but hey, why not?

Last Friday I took off from Austin and drove down to MSR Houston to pick up my friend's Pro Sport. It took a bit to pack everything up (lots of spares) and then I headed out to Los Angeles. Unfortunately, the dually had a blowout in one of the inside rear tires as I was driving on I-10, but luckily for me, this happened before I got past San Antonio.


I very carefully and slowly drove back to Austin (everywhere that could have fixed the tire was closed at this point), spent the night with Stacey, and then took the truck over to Don Rucker Tire and Wheel first thing in the morning. We decided to replace all the tires on the rear of the truck and I headed back out to LA.

It was quite a trip.

  • Times I ran out of gas: 1
    (GPS said there was a gas station just down the road, but when I got there they were out of diesel)
  • Times I almost ran out of gas: 2
    (Again, iExit/Waze/Google said there was a gas station, but when I got there… nothing.)
  • Times I got the truck and trailer stuck in mud: 1
    (The Wilcox AZ Holiday Inn and Suites had a big dirt lot that they made me park in. While I slept, rain happened. Then this happened:
    The tow truck barely had to help me get out, but help me out he did.)
  • Miles driven: 2,890
  • Miles driven on I-10: ~2,475
    (Goodness gracious this is a boring drive, but when you are towing a 44′ trailer behind an F350, boring isn’t all horrible.)
  • Dollar Amount of Gas Purchased: $814.48
    (I kind of think there might be some pending charges as this doesn’t seem like all that much).
  • Hotel stays: 2
  • Truck Naps: 1 (about an hour or so)
  • Hours spent over the whole trip: ~88.5
    (Between the stop in Austin to get the blowout fixed and the stops along the way, I didn't get back to Austin until 4:30 am on Tuesday morning.)
  • Daniel McFaddens seen: 1
    (I was Daniel's first ever instructor at an HPDE a long time ago. Daniel is now an instructor himself and lives out in LA. He came over to say hi when I picked up the new car.)
  • Number of Tuckers seen: 1
    This is Preston Tucker's actual car that he drove away from the courthouse after he won the case that cleared his name. Unfortunately, it was too late by that point, and the Tucker automobile never happened.

At this point I’m just looking forward to the next event at CotA (or somewhere else) so I can drive the car on track.

If you see me there, please feel free to ask for a ride.