Starting Oracle Database on Linux 7 using systemd and making OEM start only after the database is up

Recently I created a new virtual machine for the Oracle Database Admin, Install and Upgrade class that I teach at ACC. Previously I’d used Oracle Virtual Box on my local machine and then uploaded the image to Oracle Cloud and used Ravello to give each of my students their own server.

It was actually pretty straightforward:

  1. Upload the latest Oracle Linux (7.6) ISO that I got from
  2. Create a new blank machine with the following:
    1. 4 CPUs
    2. 24 GB of RAM
    3. 200 GB of disk
4. Mount the uploaded ISO as a CD-ROM
    5. An elastic IP
    6. Services as follows:
      1. SSH (port 22)
      2. HTTPS (port 443)
      3. VNC (port 5901)
      4. HTTPS (port 7803 for Cloud Control)
  3. Start the image, configuring Oracle Linux 7 with the following:
    1. Server with GUI for software
2. A static IP address (the specific address I used doesn’t matter; you could use anything).
    3. IP filtering so my ACC students could access the servers while they are in the labs at ACC and I could access the machines from home
    4. Partition the disk into a 16 GB swap partition, a 10 GB /home partition and then the rest of the disk as the root partition.
    5. When it came time to reboot the server, remove the CD-ROM image and update the configuration before rebooting so the image boots up using the disk.
  4. Install Oracle 12c and create an emrep repository database for OEM 13.3.
  5. Install OEM 13.3.
  6. Install Oracle 11g and create a database that will be upgraded during the course.

At this point everything was great, but since I teach 3-hour classes on Mondays and Wednesdays and shut the servers down between classes, my databases and OEM need to come up cleanly. Oracle has documentation on creating services to automatically start up databases on Linux/Unix, but it uses the old System V method for starting services (which, to be fair, does still work on Linux 7). Since this was a Linux 7 server, I wanted to use the new systemd method. Tim’s rather fantastic site had the basic framework, but where he used scripts that he called from the service, I wanted to use dbstart and dbshut so that we could maintain startup and shutdown from a single file (/etc/oratab) rather than modifying a script.
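Since dbstart and dbshut key off /etc/oratab, the whole startup policy lives in that one file: each entry is SID:ORACLE_HOME:Y|N, and only the :Y entries get auto-started. Here’s a minimal sketch of that filtering (the temp file path and the second SID are stand-ins so the sketch runs anywhere):

```shell
# Fake oratab in /tmp so this runs on any machine; the real file is /etc/oratab.
cat > /tmp/oratab.example <<'EOF'
emrep:/u01/app/oracle/product/12.2.0/dbhome_1:Y
orcl:/u01/app/oracle/product/11.2.0/dbhome_1:N
EOF

# The same filtering dbstart performs: skip comments, keep only :Y entries.
started=$(grep -v '^#' /tmp/oratab.example | awk -F: '$3 == "Y" {print $1}')
echo "Would start: $started"
```

Flipping a database between auto-start and manual is then just editing the Y/N flag, with no script changes.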

I created the following file:

[root@dba ~]# vim /usr/lib/systemd/system/oracle-database.service

[Unit]
Description=The Oracle Database Service
After=syslog.target network.target

[Service]
# systemd ignores PAM limits, so set any necessary limits in the service.
# Not really a bug, but a feature. (Values here are typical; adjust as needed.)
LimitNOFILE=65536
LimitNPROC=16384
Type=oneshot
RemainAfterExit=yes
User=oracle
Group=oinstall

ExecStart=/usr/bin/echo 'Starting Oracle Databases with Y in /etc/oratab'
ExecStart=/u01/app/oracle/product/12.2.0/dbhome_1/bin/dbstart /u01/app/oracle/product/12.2.0/dbhome_1
ExecStart=/usr/bin/echo 'dbstart has completed'

ExecStop=/usr/bin/echo 'Stopping Oracle Databases'
ExecStop=/u01/app/oracle/product/12.2.0/dbhome_1/bin/dbshut /u01/app/oracle/product/12.2.0/dbhome_1
ExecStop=/usr/bin/echo 'dbshut has completed'

[Install]
WantedBy=multi-user.target


I then enabled the service using the following:

[root@dba ~]# systemctl daemon-reload
[root@dba ~]# systemctl enable oracle-database

While the above worked great to start the database (tested with a reboot of the server), it didn’t address another issue. Unlike the database, Oracle Enterprise Manager comes with ‘out of the box’ scripts to start and stop OEM. They are the old style System V scripts that run out of /etc/init.d, and it didn’t really seem worth going through the trouble of converting them to the new systemd format. Unfortunately, the OEM scripts always assume that the database is already up and running. If your repository database is running on the same server as your OMS (which isn’t really that big of a deal if your hardware can handle it) that can be fixed by modifying the OEM startup script and adding in a ‘check to make sure your database is up and running before you start OEM’ section. The content below was added to the out of the box OEM script after the initial comments in the file.

[root@dba ~]# vim /etc/init.d/gcstartup
# 2019-03-05 Rich Soule
# OEM should only startup if the emrep database is already up and running
# on the local machine so the below was added to make sure that happens.
#################### Begin Rich Soule Added Lines #######################
if [ "$1" = "start" ]; then
  counter=1
  while [ $counter -le 24 ]; do
    if ! /usr/bin/ps -e | /usr/bin/grep -q ora_pmon_emrep; then
      echo 'OEM is waiting on Oracle database to start'
      sleep 10
      counter=$((counter + 1))
    else
      break
    fi
  done
  if [ $counter -gt 24 ]; then
    echo 'Oracle database did not start in time, exiting OEM startup'
    exit 1
  fi
  echo 'Oracle database started, waiting 20 more seconds for database to open'
  sleep 20
  echo 'OMS will now attempt to start as per remainder of the /etc/init.d/gcstartup script'
fi
####################  End Rich Soule Added Lines ########################

The above first checks to make sure that gcstartup was called with the start argument. If so, then we’ll check if there is an ora_pmon_emrep process running (emrep is the name of my OEM repository database). If it isn’t running, we’ll wait 10 seconds and check again, but only for 240 seconds total. Once that process is found, we break out of our while loop. If we did hit our 240-second limit, then we exit out of the gcstartup script totally; otherwise, we wait 20 seconds for the database to open and then continue along with the rest of the gcstartup script.
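This wait-loop pattern generalizes to any “don’t start B until A is up” dependency. Here’s a self-contained sketch of the same logic; check_cmd is a stand-in for the real check (the script above greps for ora_pmon_emrep), and the sleep is cut to 1 second so the sketch finishes quickly:

```shell
# Stand-in for: /usr/bin/ps -e | /usr/bin/grep -q ora_pmon_emrep
check_cmd() { true; }

counter=1
while [ $counter -le 24 ]; do
    if check_cmd; then
        break                      # dependency is up, stop polling
    fi
    sleep 1                        # the real script sleeps 10 seconds per check
    counter=$((counter + 1))
done
if [ $counter -gt 24 ]; then
    echo 'dependency never came up, giving up'
    exit 1
fi
echo "dependency up after $counter check(s)"
```

Because check_cmd succeeds immediately here, the loop exits on the first pass; swap in a real check and the 10-second sleep to get the 240-second budget described above.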

So far this has been working like a charm.

Note that you could do the same type of thing with OEM and instead have the script make a connection to a remote server every 10 seconds to make sure that the remote OEM repository was up before attempting to start OEM.
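A sketch of that remote variant: instead of grepping for a local pmon process, poll the remote listener port. The host and port are placeholders; for the sketch the loop probes a deliberately closed local port with the try count cut to 3 so it finishes fast, which is why it falls through to the “not reachable” branch:

```shell
host=127.0.0.1
port=1                             # placeholder: a port that should be closed
tries=0
up=no
while [ $tries -lt 3 ]; do         # the real script would use 24 tries x 10s sleeps
    # bash's /dev/tcp does a TCP connect; failure means no listener there yet
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
        up=yes
        break
    fi
    tries=$((tries + 1))
done
echo "listener reachable: $up"
```

In the real script you’d point host/port at the OEM repository listener and exit 1 if it never becomes reachable, exactly as in the local version.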

Happy Linux-ing and Oracle-ing!


Party like it’s 1979!

After 120,000+ track miles on my Lotus Exige, I upgraded to a Radical SR3. (Reason: The Exige was too big and heavy. Reason for the Exige upgrade in 2006: The Miata I was driving was too big and heavy. It’s going to be tough going lighter than the Radical, but it is theoretically possible…) I still had the Exige sitting in the garage and, in theory, I’d be able to drive it when I needed to drive a car. In practice, I ended up driving my F350 back and forth two nights a week where I teach Oracle DBA courses as a way to give back to the community. (The rest of the time I work from my home office.)

Stacey and I decided that it would probably be better for the Exige to go to a new home since we were not really using it anymore. (Stacey daily drives and instructs in a 911). A fellow instructor purchased it, fixed it up a bit and turned it over to someone else. He already had an Exige that he tracked quite a bit and built a few times over, so he had the experience to fix my old car up. It’s nice to know that someone else is using the Exige as it is a truly great car.

However, this meant that I had no choice but to drive the F350 around. It’s an old truck (year 2000) with a lot of miles (190K+) which really isn’t that big of a deal to me. However it is a long bed/dually, so it is not that fun to park. I decided that I should get another car just to drive around in. The smart choice would have been to pick up an old Honda Civic or something for a few K. Who really wants to be smart though…

Enter another track friend, Mark. Mark had a bit of a car problem (too many cars, according to his wife at least) and he had a solution to my dilemma:

1979 Blue Bird Trans Am

That’s a numbers-matching, 6.6L 1979 Blue Bird Trans Am with a four-speed.

After getting approval from Stacey that this would be OK, Mark and I traded my cash for his car.

It’s been a bit of an adventure driving around in a 40 year old car, but it’s been fun.

I just had the original(!) driver’s seat cushions replaced with new foam by Nikito’s Upholstery here in North West Austin, and I’m about to go put the seat back in and take her for a drive.

Oracle Broke DICTIONARY in 12.2+ PDBs

2023-01-06 Update: There is now also a fix for 19.17. Patch 34754803: REGRESSION TRACKING BUG FILED FROM MERGEREQ

2023-01-05 Update: There is a fix for 19.16! Patch 28998679: COMMENTS MISSING FROM DICTIONARY IN PDB

But that fix ONLY appears to work in 19.16.  When I tried to apply the patch to my 19.17 database, it told me that I was missing the 19.16 patch. So I’m off to log a bug against the patch and to see if we can get the patch into the actual database patches…

2019-08-09 Update: Still broken in 18.6 on Oracle Cloud.

When Oracle introduced pluggable databases in Oracle 12.1, they were supposed to be indistinguishable from non-pluggable databases as far as clients were concerned. Basically, this was true. But one thing that has since broken, at least in 12.2 and higher (confirmed in 18.3), is the comments on the DICTIONARY view.

I (and my students at ACC) use the DICTIONARY view or DICT for short quite a bit as it’s a quick way to find the data dictionary view that you are looking for.

DICTIONARY (or the DICT synonym) is great. It’s got two columns, TABLE_NAME and COMMENTS. Comments are very useful to search through (make sure to use UPPER as they are mixed case) because sometimes the table name can be a bit opaque.
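For example, a typical search (the search term here is just an illustration); UPPER matters because the comments are mixed case:

```sql
select table_name, comments
  from dict
 where upper(comments) like '%CONSTRAINT%';
```

Searching the comments like this often surfaces views whose names alone you’d never guess.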

The entry for DICTIONARY is pretty clear:

select * 
  from dict 
 where table_name = 'DICTIONARY';
DICTIONARY Description of data dictionary tables and views

The above is exactly what you get when you are connected to the root container (CDB$ROOT) of any version of a multitenant database.

However, if you connect to any pluggable database other than version 12.1 (12.1 worked as expected), you’ll get the following for the same query:

DICTIONARY

It turns out that in 12.2 databases and higher, there is only a single comment in the DICTIONARY view when you are connected to a pluggable database:

select * 
  from dict 
 where comments is not null;
DBA_CONTAINER_DATA Describes default and object-specific CONTAINER_DATA attributes

And, interestingly enough, in a PDB, DBA_CONTAINER_DATA has no rows! So the only comment in the DICTIONARY view in a PDB is for a view that has no data in PDBs. Obviously, someone dropped the ball here.

In a 12.1 pluggable database, you’d get a LOT of comments in the DICTIONARY view:

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production

SYS@devpdb > select count(*) from dict where comments is not null;


I pointed this out to Oracle Support back in August of 2018. After first not believing me and asking me what I might have done to my data dictionary, and claiming that they couldn’t reproduce it, and then admitting that they could reproduce it, a bug was logged.

Currently, the status is:

“The bug is at Sev2 and will be worked upon by Dev as per the priority.”

The bug number is 28998679.

You can make the DICTIONARY view work in PDBs by creating a database link to the root container and getting the comments from there. Obviously (I hope), updating the actual view name (DICTIONARY) and/or public synonym (DICT) wouldn’t be supported, so you should create your own public synonym and/or view. Of course, I’m not sure how you’d get developers to use it since they are expecting DICTIONARY/DICT to just work…
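A sketch of that workaround; every name here (the link, the common user, the view, the synonym) is hypothetical, and the connect string and password are placeholders you’d replace:

```sql
-- Link from the PDB back to the root container (names are hypothetical).
create database link root_dict_link
  connect to c##dict_reader identified by "change_me"
  using '//dbhost:1521/cdb_root_service';

-- Your own view and synonym, so DICTIONARY and DICT stay untouched.
create or replace view my_dictionary as
select table_name, comments
  from dictionary@root_dict_link;

create public synonym my_dict for my_dictionary;
```

The common user only needs to be able to query DICTIONARY in the root; the view then serves the root’s comments to sessions in the PDB.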

As an aside, I came across this when building an APEX application for my students that would let them use an interactive report against DICTIONARY to search for interesting data dictionary views.

2023-01-05 Update

I was able to verify that if you have a 19.16 database, the patch does fix the issue.

[oracle@databaseserver DB:dbacon ~]
$ opatch lspatches
34160635;OCW RELEASE UPDATE (34160635)
34086870;OJVM RELEASE UPDATE: (34086870)
34133642;Database Release Update : (34133642)

OPatch succeeded.

[oracle@databaseserver DB:dbacon ~]
$ sql sys@orcl as sysdba

SQLcl: Release 22.4 Production on Thu Jan 05 21:44:41 2023

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

Password? (**********?) *********
Connected to:
Oracle Database 19c Enterprise Edition Release - Production

SYS@orcl AS SYSDBA > select count(*) as "Total Comments" 
                       from dict 
                      where comments is not null;

Total Comments

However, this patch will then block you from upgrading to 19.17.

2023-01-06 Update

There is now also a fix for 19.17. If you apply patch 34754803 then life is good:

[oracle@databaseserver DB:dbacon ~]
$ opatch lspatches
34444834;OCW RELEASE UPDATE (34444834)
34411846;OJVM RELEASE UPDATE: (34411846)
34419443;Database Release Update : (34419443)

OPatch succeeded.

[oracle@databaseserver DB:dbacon ~]
$ sql / as sysdba

SQLcl: Release 22.4 Production on Fri Jan 06 15:37:33 2023

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release - Production

SYS@CDB$ROOT AS SYSDBA > select count(*) from dict where comments is not null;


SYS@CDB$ROOT AS SYSDBA > alter session set container=orcl;

Session altered.

SYS@CDB$ROOT AS SYSDBA > select count(*) from dict where comments is not null;


Names can be up to 50 BYTES in length…

…said no business user ever.

Business users don’t think in BYTES, they think in CHARACTERS. Tables in databases hold information for business users to use, not (generally) for geeky computer programmers who think in bytes.

Unfortunately, many databases (including Oracle) have their defaults configured to use bytes as the length for table columns. I believe that this is wrong…

Let’s first demonstrate the issue and then we’ll see how it should be fixed and finally how you actually have to fix it.

The Issue

Let’s say we have a business requirement from our customer that reads like the following (Note this is a really bad requirement and any competent DBA should never let a requirement like the below get into a database they manage, but let’s go with it):

“For our business we allow all of our customers to rate each item we sell with 1 to 5 hearts. The rating should be stored using one of the following values:

  1. ♥
  2. ♥♥
  3. ♥♥♥
  4. ♥♥♥♥
  5. ♥♥♥♥♥

We understand that this is ridiculous and that we should really use numbers, but we really want it stored this way.”

Let’s create a table that will store the rating. Note that this is a horrible table and it’s missing just about everything that makes a table a good table, but we want to focus on the character length issue instead of proper database modeling techniques.

create table t1(c1 varchar2(5));

With our table created, let’s start to add each of the ratings to the table:

insert into t1 values ('♥');
one row inserted
insert into t1 values ('♥♥');
one row inserted
insert into t1 values ('♥♥♥');

Hmmm…. Our table should store up to 5 characters, and yet we can’t actually store 5 characters. Why is this? Let’s use SQL Developer (or SQLcl) to take a look at our table using the new(er) info command:

info t1
TABLE: T1
	 ROWS         : 
	 COMMENTS     : 

Columns
 NAME        DATA TYPE          NULL
 C1          VARCHAR2(5 BYTE)   Yes

As we can see, table t1 has been defined to allow up to 5 bytes of data to be stored in the column c1. Since the heart symbol takes up more than one byte, we can only fit two hearts into the column. When we try to add a row with 3 hearts, we get an error.

The Right Fix (that’s really the wrong fix)

The ‘right’ fix for this is to do the following as a SYSDBA user in the root container of your database (You have moved on from the officially deprecated non-cdb architecture, right?):

alter system set nls_length_semantics = 'CHAR';

However, you’ll see in the documentation that you shouldn’t do this as it ends up breaking things, especially Oracle Text which has quite a few notes in Oracle Support that basically say the only way to get Oracle Text to work is to NOT use the above statement.


In my opinion, this is a bug. Oracle has finally (starting with Oracle Database 12c) defaulted the character set of a newly created database to AL32UTF8 (which should be the character set used on all newly created databases), and they should, in a future release, also fix the NLS_LENGTH_SEMANTICS issue. (At least in my opinion they should! I’m pretty sure it’s not part of the actual plans for future releases.) The default should be CHAR, and code that would break if CHARs were used instead of BYTEs (I’m looking at you, Oracle Text…) should be fixed to either specify BYTE directly, or fixed to work with CHAR.

The Actual Fix(es)

Since the right fix should work, but really doesn’t, how can you address the issue correctly? There are two different methods to fix the issue:

  1. Correctly set DDL definitions to use CHAR.
  2. Alter the user’s session setting the NLS_LENGTH_SEMANTICS to CHAR.

1. The Correct DDL Fix

This is actually the best way to fix the issue, as you can be assured that no matter what the defaults are in the database, your tables are always created correctly. In the same way that you should never trust the date format in a database to be the (also incredibly horrible) DD-MON-RR format and should instead always use the to_date function with a correct format mask, you should always declare character-based columns with the correct length, and the correct length is character based, not byte based (unless the business requirement is to REALLY store something with a length in bytes, which would be strange indeed for 99.99% of business requirements).

So, instead of this:

create table t1(c1 varchar2(5));

Do this:

create table t1(c1 varchar2(5 char));

By explicitly creating the table correctly, everything will work correctly in any database:

drop table t1;
table dropped

create table t1(c1 varchar2(5 char));
table created

insert into t1 values ('♥');
one row inserted
insert into t1 values ('♥♥');
one row inserted
insert into t1 values ('♥♥♥');
one row inserted
insert into t1 values ('♥♥♥♥');
one row inserted
insert into t1 values ('♥♥♥♥♥');
one row inserted

Great! Everything worked just the way we wanted.

2. The Alter Session Fix

Potentially the above fix could require a lot of work if you have a few thousand, or even tens of thousands, of columns defined without the CHAR attribute that should have been specified with each create table statement.

You could attempt to brute-force fix each statement, or instead just run the following line BEFORE you run all your create table statements that are not as semantically specific as they should be:

alter session set nls_length_semantics = 'CHAR';

Now let’s recreate our t1 table:

drop table t1;
table dropped
create table t1(c1 varchar2(5)); --Still semantically incorrect!
table created
info t1
TABLE: T1
	 ROWS         : 
	 COMMENTS     : 

Columns
 NAME        DATA TYPE          NULL
 C1          VARCHAR2(5 CHAR)   Yes

By altering the NLS_LENGTH_SEMANTICS of our session, we’ve managed to make our semantically incorrect statement of “create table t1(c1 varchar2(5));” work correctly.
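You can confirm which semantics a column actually ended up with by checking CHAR_USED in USER_TAB_COLUMNS (‘B’ for byte semantics, ‘C’ for character semantics):

```sql
select column_name, data_length, char_length, char_used
  from user_tab_columns
 where table_name = 'T1';
```

After the session-level fix, c1 should show CHAR_USED = 'C', with DATA_LENGTH reflecting the byte allocation and CHAR_LENGTH the 5 characters you asked for.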

Note that unless you work this out with Oracle Support directly ahead of time, you definitely shouldn’t alter your session and adjust your NLS_LENGTH_SEMANTICS with any Oracle supplied scripts. For example it would be wrong to do this before running the Oracle APEX install scripts. But for your OWN tables that you are creating in the database, you should always define them in the most semantically correct way, and if you can’t do that for some reason, then you can alter the session to get the correct storage of your data using semantically incorrect statements.

Remember… No business user ever said “Names can be 50 bytes in length.”


SQL Developer SSH Connections In Depth


Available now: A 29-page white paper on SQL Developer SSH Connections that explains in detail how SSH connections should be configured between your desktop/laptop and your organization’s database!

The vast majority of explanations out on the internet of how to configure SSH connections between Oracle SQL Developer and an Oracle Database use incredibly simplistic explanations, often using the oracle os account as the ssh user (absolutely a worst practice!), and not including important things like how firewalls will impact connections, how your ssh server should be configured, etc.

I wrote a 29-page white paper which has been published on the website. It covers the above and more in exacting detail. If you click the link above you’ll be taken to a description of the white paper and then to a page where you’ll be able to download it after supplying a name and email (a fair trade for the amount of time and effort I put into the white paper…).

Note: The image above is from the white paper and is used for explanation purposes only… None of those things actually exist except for the WordPress external firewall that protects just this blog, or, actually, all the blogs at

Thoughts on the line at CotA

CotA Track Map
After many thousands of laps at CotA I can say for certain that the turns that most folks have trouble with are the following:
  1. Turn 3
    CotA Track Map - Turn 3.PNG
    You should enter turn 3 from as far LEFT as you feel comfortable. You should NOT go to the right side of the track to enter turn 3. Ideally you are carrying as much speed as possible through 2 and are brushing your driver’s side (assuming left hand drive car here) wheels right over the edge of the apex curbing at 3 on your way into the apex of 4. Cars end up in the wall on the exit of turn 3/entrance to turn 4 because they tried to enter 3 from the right hand side of the track. I believe the thought is that “Turn 3 is a left hand turn, you should enter it from the right side of the track.”, but turn 3 leads directly into turn 4, so driving from apex to apex in a straight line through 3 and 4 is MUCH more efficient and much, much safer.
  2. Turn 7
    CotA Track Map - Turn 7-8-9
    Turn 7 is one of the two more important turns on the track as it is one of the two INCREASING radius turns on the track (the other being turn 19, see below). Most drivers late apex turn 7; instead you should take care to NOT late apex 7, to track out of turn 7 to the right hand side of the track, and then throw away turn 8. Unless you are driving a tricycle you are going to be traction limited going through turn 9, so entering turn 8 from the left hand side of the track (to get a ‘good run up the hill’) is just plain silly.
  3. Turn 8
    See above. Enter turn 8 basically ON the curbing on the right hand side of the track and force the car over to the right as much as possible to open up turn 9. Turn 8 is the throw away turn.
  4. Turn 13
    CotA Track Map - Turn 13

    Turn 13 is often taken too fast and early apexed which leads to throttle bobble as you are exiting 13 (or going wide and off track). Instead take 13 a bit deeper and brake a bit harder and turn in a bit harder so that you apex 13 a bit later than you think you should and you’ll be able to constantly add throttle all the way into 15.
  5. Turn 19
    CotA Track Map - Turn 19

    Turn 19 is an increasing radius turn, so it can be taken much, much earlier and much, much faster than most do.

The rest of the track is mostly just ‘regular’ corners and most get them mostly right.

However, the above corners are where my green, first day ever, students often end up being significantly more efficient around the track than just about everybody else in the run group, often to the point where the student will ask me “Why are they going over there to enter turn three?”, or “Why are they going over there to enter turn eight?” while we basically coast behind them off throttle so that we can give the car in front of us enough room to get through the corner. My answer: “I’m not really sure why, but as you can see we are significantly more efficient by taking this line, aren’t we?” And, every single time, I get a very enthusiastic “Yes!”

This also applies to MANY of the advanced students that I coach.

Data analysis bears out the above too…


Rich Soule Racing adds a new car to the fleet

It all began with a trip to the incredibly dangerous Bring A Trailer.

I’d had a 2004 Radical SR3 for a number of years which I loved, but a 2008 Radical SR3 done up like a Mazda 787B came up for sale and on a lark, I threw in a very low bid. The 787B liveried Radical was four years newer and also had a very desirable (and expensive) paddle shifter upgrade and a spare set of wheels. Who knows, maybe nobody else would bid… Four days later the auction was coming to an end and the car still hadn’t hit market value. Maybe I could grab it? I put in a bid at way under market value and watched the timer go down. A new bid came in, so I put in a new slightly higher bid. This kept on going back and forth for the next (very stressful) 11 minutes (Bring A Trailer always leaves the auction open for two minutes after any bid, so there is no sniping…). At the end of that 11 minutes I saw this:



I’d gone from this:


to this:


This led to a number of issues:

  • I now had two Radicals.
  • I had to explain to my wife why this was a good thing.
  • The new Radical was in Los Angeles and I live in Austin.

A quick conversation with the seller solved issue 1. He’d sell my older Radical for me.

The 2nd issue was a bit harder to resolve, but having the first issue solved sort of helped me a bit. I also reminded my wife that I’d be very happy with the newer car and I’ve always appreciated how much she values my happiness. I also got her a dozen roses delivered to her office. I think I’ll be OK on this one.

Depending on what my first Radical goes for, I might come out of this pretty clean. I did get the 2nd car at the lower end of market value and it did have a very nice set of extra wheels with brand new tires. Add in the pneumatic paddle shifters and I don’t think I would have been able to add those upgrades to my current car without going over the purchase price of the new car minus the sales price of my old car (we’ll see what my old car goes for…).

The final issue kicked off this past weekend’s great adventure. When I first purchased the white Radical, that meant I’d have to get a truck and trailer. Since both my wife and I instruct for various organizations, I ended up getting a two car covered trailer and an (older) F350 dually.


A friend of mine had a Radical Pro Sport that he had purchased from Bring A Trailer, but the organizations he was running with wanted him to only run it in the instructor run groups, so he was looking to let someone else have fun with it. His car was down in Houston. The seller of my new car said he’d also sell my friend’s Pro Sport. Thus a plan was born: Drive to Houston, pick up friend’s car, drive to LA, drop off both cars, load up new car and drive back to Austin. Planned travel time: Friday to (very, very early) Monday morning. I knew this was pushing it, but hey, why not?

Last Friday I took off from Austin and drove down to MSR Houston to pick up my friend’s Pro Sport. It took a bit to pack everything up (lots of spares) and then I headed out to Los Angeles. Unfortunately, the dually had a blowout in one of the inside rear tires as I was driving on I10, but luckily for me, this happened before I got past San Antonio.


I very carefully and slowly drove back to Austin (everywhere that could have fixed the tire was closed at this point), spent the night with Stacey and then took the truck over to Don Rucker Tire and Wheel first thing in the morning. We decided to replace all the tires on the rear of the truck and I headed back out to LA.

It was quite a trip.

  • Times I ran out of gas: 1
    (GPS said there was a gas station just down the road, but when I got there they were out of diesel)
  • Times I almost ran out of gas: 2
    (Again, iExit/Waze/Google said there was a gas station, but when I got there… nothing.)
  • Times I got the truck and trailer stuck in mud: 1
    (The Wilcox AZ Holiday Inn and Suites had a big dirt lot that they made me park in. While I slept, rain happened. Then this happened:
    The tow truck barely had to help me get out, but help me out he did.)
  • Miles driven: 2,890
  • Miles driven on I10: ~2,475
    (Goodness gracious this is a boring drive, but when you are towing a 44′ trailer behind an F350, boring isn’t all horrible.)
  • Dollar Amount of Gas Purchased: $814.48
    (I kind of think there might be some pending charges as this doesn’t seem like all that much).
  • Hotel stays: 2
  • Truck Naps: 1 (about an hour or so)
  • Hours spent over the whole trip: ~88.5
    (Between the stop in Austin to get the blowout fixed and the stops along the way, I didn’t get back in Austin until 4:30 am on Tuesday morning).
  • Daniel McFaddens seen: 1
    (I was Daniel’s first ever instructor at an HPDE a long time ago. Daniel is now an instructor himself and lives out in LA. He came over to say hi when I picked up the new car in LA.)
  • Number of Tuckers seen: 1
    This is Preston Tucker’s actual car that he drove away from the courthouse after he won the case that cleared his name. Unfortunately, it was too late at that point and the Tucker automobile never happened.

At this point I’m just looking forward to the next event at CotA (or somewhere else) so I can drive the car on track.

If you see me there, please feel free to ask for a ride.


Did your APEX upgrade just fail with ORA-04043: object WWV_DBMS_SQL does not exist?


Many folks have done upgrades of APEX to later versions of 5 or 18 without any issues at all, but there is a situation in which things can go wrong. I’m not exactly sure how much of an edge case it is, but I’ve now seen it ‘more than once’. I know that it requires a 12.2 or higher database, and I believe it might require an APEX installation that has been upgraded over the years from ‘previous releases’, somewhere between 3 and 5.0, to a 5.1 or higher version of APEX such as the current 18.1 version.

In Oracle Database 12.2 and higher, the status of three APEX packages changed: WWV_DBMS_SQL, WWV_FLOW_VAL, and WWV_FLOW_KEY. Note that I tried to find something in the official Oracle Documentation on this, but I couldn’t. (I’m sure someone with better Google-foo than I could, but it would seem that Oracle’s search capabilities for its documentation should really be a lot better than they are…) Here’s a link to Morgan’s Library where he lays out what’s different in 12.2, and you can see these three objects in the deprecated or dropped list. Update: Only WWV_DBMS_SQL was deprecated; WWV_FLOW_VAL and WWV_FLOW_KEY are not deprecated or dropped, they are just no longer installed by default since APEX itself is no longer installed by default. The issue that we’re going to run into is that WWV_DBMS_SQL is deprecated in Oracle Database 12.2.

During the APEX 18.1 install, it will look for a previous version of APEX and when it creates the new APEX_180100 schema, it will migrate all of your information from the previous APEX schema (let’s say APEX_050100) into the new schema. One VERY cool thing about this is that the install will leave all data in the previous schema so that if the installation fails, you can use these instructions to revert back to your previous installation. If you read the documentation that I linked you’ll see that for reverting to a lot of the older versions of APEX after the 18.1 install fails, you’ll actually add back the WWV_DBMS_SQL package by running the source which is in the $ORACLE_HOME/apex/core folder.

After the migration is complete, the installer will attempt to ‘clean up’ some artifacts from the previous versions of APEX that might otherwise get left behind, so you might run into the following situation if you have a 12.2 or higher database:

SYS@prodcon AS SYSDBA> @apexins apex apex_files temp /i/
[SNIP of vast amounts of output]
PL/SQL procedure successfully completed.

...Remove objects created from previous APEX installations
ERROR at line 1:
ORA-04043: object WWV_DBMS_SQL does not exist
ORA-06512: at line 9
ORA-06512: at line 23

Disconnected from Oracle Database 12c Enterprise Edition Release - 64bit Production

If I used the version of APEX that ships with the 12.2 database to add back in WWV_DBMS_SQL (from the $ORACLE_HOME/apex/core folder, run the wwv_dbms_sql.sql and wwv_dbms_sql.plb files as SYS), I’d get the ‘new updated versions’ (see below) that actually create a specific version of the package with the APEX schema name in it, instead of the old ‘generic’ WWV_DBMS_SQL. I had to go back to an older version of APEX (I grabbed one from our previous 12.1 version of the database, but of course you could just download an older version of APEX from the previous versions download page; I’d suggest anything in the 5.0 timeframe).

Here’s what I finally ended up with in my database after the now successful 18.1 upgrade from a 5.1 previous version, after running the correct WWV_DBMS_SQL code. Rather hilariously, the error that ended my previous upgrade attempt made it appear that the install failed because it couldn’t drop the WWV_DBMS_SQL package, but that doesn’t seem to have been the whole story, because I still have the WWV_DBMS_SQL package in my database:

select owner, status, object_name, object_type
  from dba_objects
 where object_name like 'WWV_DBMS_%'
   and owner='SYS';
SYS     VALID     WWV_DBMS_SQL               PACKAGE
SYS     VALID     WWV_DBMS_SQL               PACKAGE BODY
SYS     VALID     WWV_DBMS_SQL_APEX_050100   PACKAGE
SYS     VALID     WWV_DBMS_SQL_APEX_050100   PACKAGE BODY
SYS     VALID     WWV_DBMS_SQL_APEX_180100   PACKAGE
SYS     VALID     WWV_DBMS_SQL_APEX_180100   PACKAGE BODY

6 rows selected.

Rows 1 and 2 came from the 12.1 version of APEX that shipped with the database, rows 3 and 4 were from the shipped version of 12.2 APEX and rows 5 and 6 are from the 18.1 version of APEX.

At this point, I can now clean up the database by dropping the WWV_DBMS_SQL and WWV_DBMS_SQL_APEX_050100 packages from the database.
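The cleanup itself is straightforward (a sketch — dropping a package spec also drops its body, but verify the object names in your own database with the DBA_OBJECTS query above before dropping anything):

SYS@prodcon AS SYSDBA> drop package sys.wwv_dbms_sql;
SYS@prodcon AS SYSDBA> drop package sys.wwv_dbms_sql_apex_050100;

That leaves just WWV_DBMS_SQL_APEX_180100, the version that the running 18.1 install actually uses.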

Of course, I’ve asked Oracle Support to log a bug against the APEX install script. I’ll update this post with the bug number once Oracle Support is back online (they are doing upgrades this weekend, so Support has been offline a bit… Hmmm… Maybe they should be using #ThickDB with Edition-Based Redefinition to do live updates!)

Happy APEXing!


Beware: Oracle RMAN Password Only Encrypted Compressed Backups to Tape Might Not Decrypt

So, a few weekends ago I had a BIG scare. I was moving an Oracle Database Appliance from one data center to another. Oracle said the only supported method to do so was to wipe the machine back to bare metal and rebuild it in the new data center with all the new IP addresses and network settings. To prepare for this I did a cold backup of a database to tape (really Oracle Cloud Backup) and used the following RMAN commands:

RMAN> set encryption on identified by ******************* only;
RMAN> backup device type sbt incremental level 0
      tag '2018_06_29_SERVER_MOVE' force as compressed backupset database;

Unfortunately, when it came time to read that backup from Oracle Cloud Backup (our configured sbt device), I got the following error:

ORA-19913: unable to decrypt backup

As I worked through the issue with Oracle Support for a very long time I eventually started working with some support folks who said the following:

“I’ve seen issues in the past when you do a password only encrypted compressed backup to tape where it can’t be decrypted. You shouldn’t do that.”

If the issue is really there, I think this is potentially one of the biggest bugs in the history of Oracle (if you can’t trust a backup, then something is very, very wrong!). I’ve yet to be able to do any additional testing of this, but figured I’d give folks a warning.
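One way to catch this before it bites (a sketch, and not something I’ve yet verified catches this particular bug): after taking a password-encrypted backup, supply the password back to RMAN and ask it to validate that the backup pieces can actually be read and decrypted, without restoring any files:

RMAN> set decryption identified by *******************;
RMAN> restore database validate;

If the backup can’t be decrypted, you’d expect the validation to fail with the same ORA-19913 — while your source database is still intact, rather than after the server has been wiped.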

As an aside, I did take a disk backup before the tape backup and then copied the disk backup to a mounted NFS drive. I was able to move that disk backup over to the server and restore from it, so no customer data was harmed during the server move.

I’ll be speaking at KScope18!

Today I got an email that began like this:

Dear Richard,

Congratulations on being selected to speak at the premier Oracle user group conference—ODTUG Kscope18. We had more than 1,000 abstracts submitted this year, making this selection a very challenging process.

The following abstract has been accepted for presentation at ODTUG Kscope18, June 10-14 in Orlando, Florida. If you submitted multiple abstracts, you will receive multiple emails with the status of each abstract.

Texas Racing Commission: Lessons Learned from Migrating to Oracle Cloud


I also had another presentation that got accepted as a possible alternate:  Entity Relationship Modeling in the Age of Agile Development.

Hopefully I’ll see some of you at KScope in Orlando!

Remember to use code “insum” at checkout to get a discount off your registration.