
Names can be up to 50 BYTES in length…

…said no business user ever.

Business users don’t think in BYTES, they think in CHARACTERS. Tables in databases hold information for business users to use, not (generally) for geeky computer programmers who think in bytes.

Unfortunately, many databases (including Oracle) have their defaults configured to use bytes as the length for table columns. I believe that this is wrong…

Let’s first demonstrate the issue and then we’ll see how it should be fixed and finally how you actually have to fix it.

The Issue

Let’s say we have a business requirement from our customer that reads like the following. (Note: this is a really bad requirement, and a competent DBA should never let a requirement like this get into a database they manage, but let’s go with it.)

“For our business we allow all of our customers to rate each item we sell with 1 to 5 hearts. The rating should be stored using one of the following values:

  1. ♥
  2. ♥♥
  3. ♥♥♥
  4. ♥♥♥♥
  5. ♥♥♥♥♥

We understand that this is ridiculous and that we should really use numbers, but we really want it stored this way.”

Let’s create a table that will store the rating. Note that this is a horrible table, and it’s missing just about everything that makes a table a good table, but we want to focus on the character length issue instead of proper database modeling techniques.

create table t1(c1 varchar2(5));

With our table created, let’s start to add each of the ratings to the table:

insert into t1 values('♥');
1 row inserted
insert into t1 values('♥♥');
1 row inserted
insert into t1 values('♥♥♥');
ORA-12899: value too large for column

Hmmm…. Our table should store up to 5 characters, and yet we can’t actually store 5 characters. Why is this? Let’s use SQL Developer (or SQLcl) to take a look at our table using the new(er) info command:

info t1
TABLE: T1 
	 LAST ANALYZED: 
	 ROWS         : 
	 SAMPLE SIZE  : 
	 INMEMORY     :DISABLED 
	 COMMENTS     : 

Columns 
NAME         DATA TYPE          NULL  DEFAULT    COMMENTS
 C1          VARCHAR2(5 BYTE)   Yes

As we can see, table t1 has been defined to allow up to 5 bytes of data to be stored in column c1. Since the heart symbol takes more than one byte to store, the column fills up after just a couple of hearts, and when we try to add a row with 3 hearts, we get an error.
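The character-versus-byte gap is easy to demonstrate outside the database. Here is a minimal Python sketch (not part of the original post) showing that the heart symbol is one character but several bytes once encoded in UTF-8, which is the encoding behind Oracle’s AL32UTF8 character set (exact byte counts in your database depend on its character set):

```python
# One heart is a single character...
rating = '♥'
print(len(rating))                        # character count: 1

# ...but it takes more than one byte once encoded.
encoded = rating.encode('utf-8')          # AL32UTF8 is Oracle's name for UTF-8
print(len(encoded))                       # byte count: 3

# A five-heart rating blows well past a VARCHAR2(5 BYTE) limit.
five_hearts = '♥' * 5
print(len(five_hearts))                   # 5 characters
print(len(five_hearts.encode('utf-8')))   # 15 bytes
```

The business user asked for "up to 5 hearts"; only a character-based length can honor that request regardless of encoding.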

The Right Fix (that’s really the wrong fix)

The ‘right’ fix for this is to do the following as a SYSDBA user in the root container of your database (you have moved on from the officially deprecated non-CDB architecture, right?):

alter system set nls_length_semantics = 'CHAR';

However, you’ll see in the documentation that you shouldn’t do this as it ends up breaking things, especially Oracle Text which has quite a few notes in Oracle Support that basically say the only way to get Oracle Text to work is to NOT use the above statement.


In my opinion, this is a bug. Starting with Oracle Database 12c, Oracle finally defaults the character set of a newly created database to AL32UTF8 (which should be the character set used on all newly created databases), and in a future release they should also fix the NLS_LENGTH_SEMANTICS issue. (At least in my opinion they should! I’m pretty sure it’s not part of the actual plans for future releases.) The default should be CHAR, and code that would break if CHARs were used instead of BYTEs (I’m looking at you, Oracle Text…) should be fixed to either specify BYTE directly or to work with CHAR.

The Actual Fix(es)

Since the right fix should work, but really doesn’t, how can you address the issue correctly? There are two different methods to fix the issue:

  1. Correctly set DDL definitions to use CHAR.
  2. Alter the user’s session setting the NLS_LENGTH_SEMANTICS to CHAR.

1. The Correct DDL Fix

This is actually the best way to fix the issue, as you can be assured that no matter what the defaults are in the database, your tables are always created correctly. In the same way that you should never trust the date format in a database to be the (also incredibly horrible) DD-MON-RR format, and should instead always use the to_date function with an explicit format mask, you should always declare a character-based column with the correct length, and the correct length is character based, not byte based (unless the business requirement really is to store a certain number of bytes, which would be strange indeed for 99.99% of business requirements).
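As a quick aside on that date analogy, the same "explicit beats implicit" habit looks like this (a sketch, using an arbitrary example date):

```sql
-- Don't trust the session's default date format:
select to_date('2018-10-16', 'YYYY-MM-DD') from dual;
```

The explicit format mask works in any session, just as an explicit CHAR length works under any NLS_LENGTH_SEMANTICS setting.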

So, instead of this:

create table t1(c1 varchar2(5));

Do this:

create table t1(c1 varchar2(5 char));

By explicitly creating the table correctly, everything will work correctly in any database:

drop table t1;
table dropped

create table t1(c1 varchar2(5 char));
table created

insert into t1 values('♥');
1 row inserted
insert into t1 values('♥♥');
1 row inserted
insert into t1 values('♥♥♥');
1 row inserted
insert into t1 values('♥♥♥♥');
1 row inserted
insert into t1 values('♥♥♥♥♥');
1 row inserted

Great! Everything worked just the way we wanted.

2. The Alter Session Fix

The above fix could potentially require a lot of work if you have a few thousand, or tens of thousands, of columns defined without the CHAR attribute that should have been specified in each create table statement.

You could attempt to fix each statement by brute force, or you could instead run the following line BEFORE you run all of the create table statements that are not as semantically specific as they should be:

alter session set nls_length_semantics = 'CHAR';

Now let’s recreate our t1 table:

drop table t1;
table dropped
create table t1(c1 varchar2(5)); --Still semantically incorrect!
table created
info t1
TABLE: T1 
	 LAST ANALYZED: 
	 ROWS         : 
	 SAMPLE SIZE  : 
	 INMEMORY     :DISABLED 
	 COMMENTS     : 

Columns 
NAME         DATA TYPE          NULL  DEFAULT    COMMENTS
 C1          VARCHAR2(5 CHAR)   Yes

By altering the NLS_LENGTH_SEMANTICS of our session, we’ve managed to make our semantically incorrect statement of “create table t1(c1 varchar2(5));” work correctly.

Note that unless you work it out with Oracle Support directly ahead of time, you definitely shouldn’t alter NLS_LENGTH_SEMANTICS in your session before running any Oracle-supplied scripts. For example, it would be wrong to do this before running the Oracle APEX install scripts. But for your OWN tables that you are creating in the database, you should always define them in the most semantically correct way, and if you can’t do that for some reason, then you can alter the session to get the correct storage of your data from semantically incorrect statements.
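To make the difference between the two semantics concrete, here is a small Python sketch (an illustration of the idea, not Oracle code) of the length check a database effectively performs under BYTE versus CHAR semantics, assuming a UTF-8 character set:

```python
def fits(value: str, limit: int, semantics: str = 'BYTE') -> bool:
    """Return True if value fits in a VARCHAR2(limit) column
    under the given length semantics (UTF-8 assumed for BYTE)."""
    if semantics.upper() == 'CHAR':
        return len(value) <= limit                   # count characters
    return len(value.encode('utf-8')) <= limit       # count bytes

five_hearts = '♥' * 5
print(fits(five_hearts, 5, 'CHAR'))   # True  - 5 characters
print(fits(five_hearts, 5, 'BYTE'))   # False - 15 bytes in UTF-8
print(fits('hello', 5, 'BYTE'))       # True  - plain ASCII, 5 bytes
```

With CHAR semantics the check matches what the business user asked for; with BYTE semantics it depends on which characters happen to be in the value.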

Remember… No business user ever said “Names can be 50 bytes in length.”

Rich


SQL Developer SSH Connections In Depth


Available now: A 29-page white paper on SQL Developer SSH Connections that explains in detail how SSH connections should be configured between your desktop/laptop and your organization’s database!

The vast majority of explanations out on the internet of how to configure SSH connections between Oracle SQL Developer and an Oracle Database use incredibly simplistic explanations, often using the oracle OS account as the SSH user (absolutely a worst practice!), and not including important things like how firewalls will impact connections, how your SSH server should be configured, etc.

I wrote a 29-page white paper which has been published on the Insum.ca website. It covers the above and more in exacting detail. If you click the link above you’ll be taken to a description of the white paper and then to a page where you’ll be able to download it after supplying a name and email (a fair trade for the amount of time and effort I put into the white paper…).

Note: The image above is from the white paper and is used for explanation purposes only… None of those things actually exist except for the WordPress external firewall that protects just this blog, or, actually, all the blogs at WordPress.com.


Did your APEX upgrade just fail with ORA-04043: object WWV_DBMS_SQL does not exist?


Many folks have upgraded APEX to later versions of 5 or to 18 without any issues at all, but there is a situation in which things can go wrong. I’m not exactly sure how much of an edge case it is, but I’ve now seen it ‘more than once’. I know that it requires a 12.2 or higher database, and I believe it might also require an APEX installation that has been upgraded over the years from a ‘previous release’ (somewhere between 3 and 5.0) to a 5.1 or higher version of APEX, such as the current 18.1 release.

In Oracle Database 12.2 and higher, the following three APEX packages were affected: WWV_DBMS_SQL, WWV_FLOW_VAL, and WWV_FLOW_KEY. Note that I tried to find something in the official Oracle Documentation on this, but I couldn’t. (I’m sure someone with better Google-fu than I could, but it would seem that Oracle’s search capabilities for its documentation should really be a lot better than they are…) Here’s a link to Morgan’s Library where he lays out what’s different in 12.2, and you can see these three objects in the deprecated or dropped list. The issue that we’re going to run into is that WWV_DBMS_SQL has most definitely been deprecated in Oracle Database 12.2. Update: Only WWV_DBMS_SQL was deprecated. WWV_FLOW_VAL and WWV_FLOW_KEY are not deprecated or dropped; they are simply no longer installed by default, since APEX itself is no longer installed by default.

During the APEX 18.1 install, the installer will look for a previous version of APEX, and when it creates the new APEX_180100 schema it will migrate all of your information from the previous APEX schema (let’s say APEX_050100) into the new schema. One VERY cool thing about this is that the install leaves all of the data in the previous schema, so if the installation fails you can use these instructions to revert to your previous installation. If you read the documentation that I linked, you’ll see that for reverting to many of the older versions of APEX after a failed 18.1 install, you actually add back the WWV_DBMS_SQL package by running its source, which is in the $ORACLE_HOME/apex/core folder.

After the migration is complete, the installer will attempt to ‘clean up’ some artifacts from previous versions of APEX that might otherwise get left behind, so you might run into the following situation if you have a 12.2 or higher database:

SYS@prodcon AS SYSDBA> @apexins apex apex_files temp /i/
[SNIP of vast amounts of output]
PL/SQL procedure successfully completed.

...Remove objects created from previous APEX installations
declare
*
ERROR at line 1:
ORA-04043: object WWV_DBMS_SQL does not exist
ORA-06512: at line 9
ORA-06512: at line 23

Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

If I used the version of APEX that ships with the 12.2 database to add back WWV_DBMS_SQL (from the $ORACLE_HOME/apex/core folder, run the wwv_dbms_sql.sql and wwv_dbms_sql.plb files as sys), I’d get the ‘new updated versions’ (see below) that actually create a specific version of the package with the APEX schema name in it, instead of the old ‘generic’ WWV_DBMS_SQL. I had to go back to an older version of APEX (I grabbed one from our previous 12.1 version of the database, but of course you could just download an older version of APEX from the previous versions download page on Oracle.com). I’d suggest anything in the 5.0 timeframe.

Here’s what I finally ended up with in my database after the now successful 18.1 upgrade from a previous 5.1 version, once I ran the correct WWV_DBMS_SQL code. Rather hilariously, the error that ended my previous upgrade attempt, where it appeared that the install failed because it couldn’t drop the WWV_DBMS_SQL package, appears not to actually be the case, because I still have the WWV_DBMS_SQL package in my database:

select owner, status, object_name, object_type
  from dba_objects
 where object_name like 'WWV_DBMS_%'
   and owner='SYS';
OWNER   STATUS    OBJECT_NAME                OBJECT_TYPE    
SYS     VALID     WWV_DBMS_SQL               PACKAGE        
SYS     INVALID   WWV_DBMS_SQL               PACKAGE BODY   
SYS     VALID     WWV_DBMS_SQL_APEX_050100   PACKAGE        
SYS     VALID     WWV_DBMS_SQL_APEX_050100   PACKAGE BODY   
SYS     VALID     WWV_DBMS_SQL_APEX_180100   PACKAGE        
SYS     VALID     WWV_DBMS_SQL_APEX_180100   PACKAGE BODY   

6 rows selected.

Rows 1 and 2 came from the 12.1 version of APEX that shipped with the database, rows 3 and 4 came from the version of APEX that shipped with 12.2, and rows 5 and 6 are from the 18.1 version of APEX.

At this point, I can now clean up the database by dropping the WWV_DBMS_SQL and WWV_DBMS_SQL_APEX_050100 packages from the database.
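That cleanup amounts to something like the following sketch (run as a SYSDBA user, and only after confirming nothing in your database still references these packages):

```sql
-- Sketch only: check dependencies before dropping anything.
drop package sys.wwv_dbms_sql;
drop package sys.wwv_dbms_sql_apex_050100;
```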

Of course, I’ve asked Oracle Support to log a bug against the APEX install script. I’ll update this post with the bug number once Oracle Support is back online (they are doing upgrades this weekend, so Support has been offline a bit… Hmmm… Maybe they should be using #ThickDB with Edition-Based Redefinition to do live updates!)

Happy APEXing!

Rich


Beware: Oracle RMAN Password Only Encrypted Compressed Backups to Tape Might Not Decrypt

So, a few weekends ago I had a BIG scare. I was moving an Oracle Database Appliance from one data center to another. Oracle said the only supported method to do so was to wipe the machine back to bare metal and rebuild it in the new data center with all the new IP addresses and network settings. To prepare for this I did a cold backup of a database to tape (really Oracle Cloud Backup) and used the following RMAN commands:

RMAN> set encryption on identified by ******************* only;
RMAN> backup device type sbt incremental level 0
      tag '2018_06_29_SERVER_MOVE' force as compressed backupset database;

Unfortunately, when it came time to read that backup from Oracle Cloud Backup (our configured sbt device), I got the following error:

ORA-19913: unable to decrypt backup

As I worked through the issue with Oracle Support for a very long time, I eventually started working with some support folks who said the following:

“I’ve seen issues in the past when you do a password only encrypted compressed backup to tape where it can’t be decrypted. You shouldn’t do that.”

I think this is potentially one of the biggest bugs in the history of Oracle, if the issue is really there (if you can’t trust a backup, then something is very, very wrong!). I’ve yet to be able to do any additional testing of this, but I figured I’d give folks a warning.

As an aside, I did take a disk backup before the tape backup and then copied the disk backup to a mounted NFS drive. I was able to move that disk backup over to the server and restore from it, so no customer data was harmed during the server move.
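Until this behavior is fully understood, a reasonable precaution is to confirm that an encrypted backup can actually be read back before you depend on it. A sketch of that check in RMAN (using the same masked password as above; RESTORE ... VALIDATE reads the backup pieces without actually restoring any files):

```
RMAN> set decryption identified by *******************;
RMAN> restore database validate;
```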


I’ll be speaking at KScope18!

Today I got an email that began like this:

Dear Richard,

Congratulations on being selected to speak at the premier Oracle user group conference—ODTUG Kscope18. We had more than 1,000 abstracts submitted this year, making this selection a very challenging process.

The following abstract has been accepted for presentation at ODTUG Kscope18, June 10-14 in Orlando, Florida. If you submitted multiple abstracts, you will receive multiple emails with the status of each abstract.

Texas Racing Commission: Lessons Learned from Migrating to Oracle Cloud

——–

I also had another presentation that got accepted as a possible alternate:  Entity Relationship Modeling in the Age of Agile Development.

Hopefully I’ll see some of you at KScope in Orlando!

Remember to use code “insum” at checkout to get a discount off your registration.


Today C2 Consulting and Insum Solutions Merged


Today is my first day as Director of Consulting Services at Insum Solutions! I’m really excited to be part of the Insum team. Insum has been a (the?) leader in the APEX consulting services space for many years. C2 and Insum have had very similar philosophies and execution styles. I’m really looking forward to working with the new (much bigger) team.

https://www.insum.ca/insum-and-c2-consulting-unite-to-better-serve-the-american-market/


I’ll be speaking at AOUG on Friday December 15th


I’ll be speaking, but if you come, you’ll be building!

You’ve heard about the Internet of Things (IoT). C2 Consulting has put together a hands-on lab where you’ll get to build an IoT thermostat from electronic components, hook it up to REST endpoints in an Oracle Database, and then control everything from an APEX application.

You can register for the event at this link or at this url if that link doesn’t work for some reason:
https://docs.google.com/forms/d/e/1FAIpQLSdKP0TZ8KraEuAoSM1Ad0FSqkzBIP5QMCuc4uItKneeG7nBNA/viewform?c=0&w=1

This is the lab we’ll be working through and it’s pretty awesome (and award winning!): https://concept2completion.com/iotemp

Here are some details about the when and where:

Event Timing: Friday, December 15th from 11 am to 1:30 PM
Event Address: National Instruments at 11500 North Mopac Expressway, Building C, Rooms 1S13-1S15.
Parking available in the garage for building C.

If you are in the Austin area on Friday, December 15th, hopefully I’ll see you there.