Oracle APEX, get your act together!

Don’t get me wrong: I have worked with Oracle APEX for many years now, and with satisfaction. But in my humble opinion they are moving us, as developers, into an unsustainable situation.
Since the beginning, the focus of developing APEX applications has been on the single-developer, single-application, multiple-users paradigm. So we built applications… first in our free time, later perhaps for a customer. Then we paired up with some colleagues, formed several teams, and now we are looking at a landscape with a lot of applications, some of them very large, with a lot of pages.

So now we have large operations with one or many APEX applications, having to manage bug fixes and build new functionality, preferably in a continuous integration life cycle.
And here is where APEX falls short in my opinion.

The focus of APEX deployment has always been on full application exports, and still is. Even SQLcl has an APEX EXPORT command, but no APEX EXPORT PAGE command (as of this day; please correct me if I am wrong). Sure, you can export a single part of an application, like a theme or a page, but please do this exercise for me:

1) Create an application in a local APEX environment
2) Export the application
3) Import the application into, e.g., a second environment

All works well, but now:

4) Export a single page of the application locally
5) Try to import that page into the second environment

It won’t work (the page was exported from a workspace with a different workspace ID, etc.).

Now the savvy guys will tell you that you have to run some procedures from the APEX installation packages, and that you have to set the workspace ID and the offset before importing the page export.
But when you come to think of it, this is ridiculous: the full application import succeeds, while the partial page import fails.
Imagine getting the requirement in your next sprint that when a user wants to use part X of your application, she first has to align some internal keys… nonsense.
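For completeness, this is roughly what that workaround looks like. The apex_application_install package is documented, but the IDs below are environment-specific placeholders; you would run this in the target schema right before executing the page export script:

```sql
begin
  -- point the import at the target workspace and application
  apex_application_install.set_workspace_id( <target workspace id> );
  apex_application_install.set_application_id( <target application id> );
  -- compute offsets so the internal component ids do not collide
  apex_application_install.generate_offset;
end;
/
```

So a page import needs ceremony that a full application import does not. That is exactly my point.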

Today we are facing a development environment where alterations to the application can’t be managed once you have more than, say, five developers. Even when you lock a page, you’re still
able to export the application, even while someone is working on it. A bug fix on page X that isn’t done yet can still end up in an export. To catch these situations you have to rely on the test cycles afterwards, but the framework should help us prevent them.

Make the page lock more restrictive, and make the export/import process work without the nonsense of workspace IDs and offsets.

Let’s take APEX development further and strive for a page-per-page low-code development cycle.


Changing the status of an Oracle APEX application with PL/SQL

I needed this. I needed this badly. And for once the Oracle APEX forum gave me the answer to my need.

create or replace procedure set_application_status
  ( p_application in apex_applications.application_id%type
  , p_status      in varchar2
  )
is
begin
  if p_status not in ( 'AVAILABLE'        , 'AVAILABLE_W_EDIT_LINK', 'DEVELOPER_ONLY'
                     , 'RESTRICTED_ACCESS', 'UNAVAILABLE'          , 'UNAVAILABLE_PLSQL'
                     , 'UNAVAILABLE_URL' )
  then
    raise_application_error(-20000, 'Status '||p_status||' is not supported');
  end if;
  for r_i in ( select app.workspace_id
               ,      app.application_id
               from   apex_applications app
               where  app.application_id = p_application )
  loop
    -- switch to the workspace owning the application
    apex_util.set_security_group_id( p_security_group_id => r_i.workspace_id );
    -- set the new status (the documented apex_util call; the forum answer
    -- used internal wwv_flow equivalents with p_flow_id/p_flow_status)
    apex_util.set_application_status
      ( p_application_id     => r_i.application_id
      , p_application_status => p_status );
  end loop;
end set_application_status;
/
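Calling it is then a one-liner. A quick smoke test (application 100 is a made-up example id):

```sql
begin
  set_application_status( p_application => 100   -- hypothetical application id
                        , p_status      => 'UNAVAILABLE' );
end;
/
```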

Using a serviceworker with Oracle APEX

After a question from a colleague of mine about caching JavaScript, CSS, images, etc. in APEX, I started to look at the new way: service workers.
With service workers we have the opportunity to manage caching programmatically with JavaScript.

I’m not going to explain service workers here. There are a lot of people who know more about them and have excellent posts on blogs and YouTube.
This post is more about how I implemented a service worker in a website I developed.

First, in the template of the LOGIN page I added a script section with the code:

function printState(state) {
  // just some overhead for debugging
  console.log('Service worker state: ' + state);
}

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/m2b_service_worker.js', {
    scope : './'
  }).then(function (registration) {
    var serviceWorker;
    if (registration.installing) {
      serviceWorker = registration.installing;
    } else if (registration.waiting) {
      serviceWorker = registration.waiting;
    } else if ( {
      serviceWorker =;
    }
    if (serviceWorker) {
      printState(serviceWorker.state);
      serviceWorker.addEventListener('statechange', function (e) {
      });
    }
  }).catch(function (error) {
    console.log('Registration failed: ' + error);
  });
}

With this script I registered the service worker file m2b_service_worker.js. Notice that the printState function is just some overhead for debugging.
This all seems too simple, but it has one small pitfall: scoping.

As an APEX developer I wanted to upload the script as a static file in the framework. However, when you do that, the maximum scope of the caching would be something like domain/pls/dad/workspace/r/….
That’s no good: I also want to cache static files like domain/i/apex.min.css, and those files are outside the mentioned scope. Uploading it as a static file results in a console message:

DOMException: Failed to register a ServiceWorker: The path of the provided scope ('/i/') is not 
under the max scope allowed ('/pls/apex/workspace/r/****'). Adjust the scope, move the Service 
Worker script, or use the Service-Worker-Allowed HTTP header to allow the scope.

Luckily I use a reverse proxy in front of our ORDS, so I was able to install the script at the root of the reverse proxy and register it with scope /, so that all requests to the domain can be cached if necessary.
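If you can’t place files at the root of your proxy, the error message above already hints at the alternative: serve the worker from wherever you like and widen its scope with the Service-Worker-Allowed response header. A minimal nginx sketch (the location and root path are examples, not a drop-in config):

```nginx
location = /m2b_service_worker.js {
    root       /var/www/static;
    add_header Service-Worker-Allowed "/";
}
```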

The Service Worker:

var VERSIE = "2",
    CACHENAME = 'omy-cache-' + VERSIE,
    FILES = ['offline.html'],           // plus the offline image
    EXTENTIES = ['gif', 'jpg', 'ico', 'css', 'js', 'png'];

self.addEventListener('install', function (event) {
  // pre-cache the offline fallback page (and its image)
  event.waitUntil( (cache) {
    return cache.addAll(FILES);
  }));
});

self.addEventListener('activate', function (event) {
  // remove caches left behind by previous versions
  return event.waitUntil(caches.keys().then(function (keys) {
    return Promise.all( (k) {
      if (k != CACHENAME && k.indexOf('omy-cache-') == 0) {
        return caches.delete(k);
      } else {
        return Promise.resolve();
      }
    }));
  }));
});

self.addEventListener('fetch', function (event) {
  if (event.request.method !== 'GET') {
    return;
  }
  event.respondWith(
      .then(function (response) {
        // Cache hit - return the response from the cached version
        if (response) {
          return response;
        }
        // Not in cache - return the result from the live server
        return fetch(event.request).then(function (response) {
          var reqWithoutQuery = event.request.url.split("?")[0],
              ext = reqWithoutQuery.split(".").pop(),
              shouldCache = EXTENTIES.indexOf(ext) >= 0;
          if (shouldCache) {
            // before we return the response from the server
            // we cache the response for the next time
            return (cache) {
              cache.put(event.request, response.clone());
              return response;
            });
          } else {
            return response;
          }
        });
      })
      .catch(function () {
        // As I understand it, fetch throws an exception when offline,
        // but a valid HTTP response, e.g. 404, goes to then(), not to catch()
        return (cache) {
          return cache.match('offline.html');
        });
      })
  );
});

With APEX you don’t want to cache all GET requests. For example, all GETs to f are dynamic and depend on session state; your application will not behave as expected when you cache them.
I only want to cache the components that are truly static, hence the array with extensions.
I also want to show a static file (offline.html) when the user has no internet connection, to improve the user experience. This static file (and its image) is added to the cache on installation of the service worker.

The meat of the worker is the fetch event. When the request is found in the cache, the cached response is returned. When the request is unknown in the cache, the request is fetched from the server and when the requested item is within the array, it is cached for the next cycle.
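The decision whether a response is cacheable boils down to a small, pure check on the URL’s extension, which you can isolate and test on its own. A sketch (the extension list mirrors the worker above):

```javascript
var EXTENTIES = ['gif', 'jpg', 'ico', 'css', 'js', 'png'];

// Strip the query string, take the part after the last dot,
// and check it against the whitelist of static extensions.
function shouldCache(url) {
  var reqWithoutQuery = url.split("?")[0],
      ext = reqWithoutQuery.split(".").pop();
  return EXTENTIES.indexOf(ext) >= 0;
}
```

A call to /i/apex.min.css?v=5.1 is cacheable; a call to f?p=100:1 is not, because after stripping the query string there is no whitelisted extension left.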

When you look in Developer Tools > Application > Cache Storage you will notice your new cache with all the static files that were cached.

To emulate an offline connection, just set the checkbox “offline” in Network and hit F5. This should serve the mentioned offline.html from cache.

Component view in APEX 5.1

So I’m a dinosaur. I don’t care if I’m not one of the cool kids. I like component view, I started with component view, I’m comfortable in component view, and I get the “flow” of component view.
Of course I’m using page designer, but when things get “buggy”, I switch into dinosaur mode.

And then there is APEX 5.1….
Where is my component view?

First, there is Two Pane Mode.
For me that is a big step in the right direction. I’m not always in the position to ask for a 24″ monitor for development. So less is more, in this case.
And look:
Component view!

But it’s not the same. This Component View imposter doesn’t give me the original page for, e.g., a region; it jumps to the right section of the page designer.
Not fair! That’s cheating! … Granted, it is also very handy and perhaps my new MO.

But look in the upper right corner.
Here you can set some developer-specific preferences, like the layout of this panel.

This will change the original panel back to the familiar component view.

Legacy, deprecated, whatever. It’s still there.

Like I said, I will be using Two Pane Mode as my default, with component view as my preferred way of developing (it would be nice if I could set this somewhere in my preferences),
but it’s comforting to know I can still switch “back”.

ORA-00600: with arguments: [12811], [154970]

We encountered this bug with Enterprise Edition on Windows.

When you drop a table and get an ORA-00600 with arguments [12811] and [154970], check your application for the following sequence of events:

– You created the table with an identity column (GENERATED BY DEFAULT ON NULL AS IDENTITY)
– You have code that tries to alter the value of the underlying sequence with an ALTER TABLE statement in dynamic SQL

When the ALTER TABLE results in an error, the sequence is dropped and you are stuck with a corrupted database.

Don’t do this in a production database, as we have 😦

CREATE TABLE edwin_test2
   (  ID NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY
      MINVALUE 1 MAXVALUE 9999999999999999999999999999 
      INCREMENT BY 1 START WITH 1,
      CREATION_DATE date, 
      CONSTRAINT edwin_test_pk2 PRIMARY KEY (ID)
   );

declare
  l_max_id number:= 10;
begin
   insert into edwin_test2 
     ( creation_date )
   values
     ( sysdate );
   l_max_id := -1;  -- force the error
   -- Let's blow things up...
   execute immediate 
   'alter table edwin_test2  modify id number '
   ||'generated by default on null as identity '
   ||'( start with ' ||l_max_id || ' )';
end;
/

drop table edwin_test2;

You can check whether you have this corruption with the following script:

declare
  cursor c_seq
    ( cp_name in varchar2 )
  is
    select 'exists' bla
    from   dba_sequences seq
    where  seq.sequence_name = cp_name;
  r_seq c_seq%rowtype;
begin
  for r_i in 
    ( select col.table_name
      ,      col.data_default as seq
      from   dba_tab_columns col
      where  col.identity_column = 'YES'
      and    not exists ( select 1
                          from   dba_views vw
                          where  vw.view_name = col.table_name ) )
  loop
    -- data_default looks like "SCHEMA"."ISEQ$$_12345".nextval;
    -- cut out the bare sequence name
    r_i.seq := substr( r_i.seq, instr(r_i.seq,'.')+2);
    r_i.seq := substr( r_i.seq, 1, instr(r_i.seq,'"')-1);   
    r_seq := null;
    open c_seq(r_i.seq);
    fetch c_seq into r_seq;
    close c_seq;
    dbms_output.put_line
      ( rpad(r_i.table_name,40,' ')
      ||' '
      ||nvl(r_seq.bla, ' ------ohoh!')
      );
  end loop;
end;
/

To handle this corruption, just open a SR with Oracle. They should be able to help you.

No, recreating the sequence by hand doesn’t work. I tried. The sys.seq$.FLAGS column has a value of 8 for a hand-created sequence and 40 for an identity column; I see no problem in selecting from the data dictionary, but updating it is, even for me, a big NO.


ORA-01403 in wwv_flow_api.import_begin

Two new databases, two fresh new APEX installations. Yeah! One for development, one for test. Let’s create a new workspace X in each environment.
Everything is ready; let the development begin!

After a while the first iteration of the application was ready, we imported it into the second environment, and everything failed miserably.

The site exploded into an error page giving the dreaded ORA-01403 after the first statement in the install script, WWV_FLOW_API.IMPORT_BEGIN.

But after running the script in SQL*Plus with spooling enabled, I got the following:

ORA-02291: Integriteitsbeperking (APEX_050000.WWV_FLOWS_FK) is geschonden – bovenliggende sleutel
is niet gevonden.
ORA-06512: in “APEX_050000.WWV_FLOW_API”, regel 2750
ORA-06512: in regel 2

That’s Dutch for an integrity constraint violation: parent key not found.

Digging into dba_constraints told me that the referenced workspace was not found.

The provisioning_company_id in wwv_flow_companies for workspace X had a different value than in development!
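You can verify this yourself with a query like the following, run as a privileged user in both environments and compared side by side (the APEX_050000 schema name matches our APEX 5.0 installation; adjust it to your version):

```sql
select short_name
,      provisioning_company_id
from   apex_050000.wwv_flow_companies
where  short_name = 'X';
```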

Note to myself: when deploying into a new environment, don’t create the workspaces by hand, but import them from a source installation.

Note to the APEX development team: it wouldn’t kill you to stop the process with a simple message like ‘Workspace-id XXXXXX not found’ instead of “I didn’t find it” without giving the “it” some meaning.

Node.js scripts for Oracle Cloud Storage Service

Working with Oracle Cloud Storage Service I noticed that it’s not really customer-ready (in my humble opinion).

E.g., creation of a storage container is not yet supported from the dashboard. You’ll have to create one using a magical Java library or the REST API using curl.

But we are on Windows.

So we don’t have Curl.

And I refused to install Cygwin just for this purpose.

However, node.js is installed in our Windows environment, so I created a small repository of node.js scripts to handle some of the basics of the Oracle Cloud Storage Service.
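Creating a container via the REST API boils down to an authenticated PUT against the storage endpoint. A sketch of how such a request can be built in node.js; the URL layout and header follow the OpenStack Swift-style API that the Storage Cloud Service exposes, and the host, account, and token values are made-up examples:

```javascript
// Build the options object for an https.request() that creates a container.
// A PUT on /v1/<account>/<container> with an X-Auth-Token header
// creates the container (Swift-style API).
function buildCreateContainerOptions(host, account, container, authToken) {
  return {
    hostname: host,
    method: 'PUT',
    path: '/v1/' + account + '/' + container,
    headers: { 'X-Auth-Token': authToken }
  };
}

// Example with hypothetical values; pass the result to
// require('https').request(options, cb).end() to fire the actual call.
var options = buildCreateContainerOptions(
  'example.storage.oraclecloud.com', 'Storage-myidentity', 'mycontainer', 'AUTH_tk...');
```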

For everybody who is interested: