Drop DB startup time from 45 to 3 minutes in dockerized Oracle 19.3.0

Grzegorz Gajos
Jan 7 · 4 min read

When I built the Oracle EE 19.3.0 Docker image using the official guide, the first startup took more than 45 minutes. I wanted to use the image for integration testing, so that time was not acceptable. I managed to cut it to less than 3 minutes. Below is a guide on how to do this, plus a not-so-popular docker feature. Stay tuned!

Building and running oracle/docker-images 19.3.0

  1. Download the official project from GitHub: https://github.com/oracle/docker-images
  2. Download the official database installation file LINUX.X64_193000_db_home.zip from the Oracle website: https://www.oracle.com/database/technologies/oracle19c-linux-downloads.html
  3. Place it in the appropriate directory of the project: /OracleDatabase/SingleInstance/dockerfiles/19.3.0
  4. Important! Open the /OracleDatabase/SingleInstance/dockerfiles/19.3.0/Dockerfile file and remove the line VOLUME ["$ORACLE_BASE/oradata"]. I will explain why in a moment.
  5. Execute ./buildDockerImage.sh -v 19.3.0
  6. Wait.
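
The steps above can be sketched as one shell script. Treat it as a sketch, not the official procedure: the zip location is an assumption, and the sed expression for step 4 assumes the VOLUME instruction starts at the beginning of a line, as it does in the official Dockerfile.

```shell
#!/bin/sh
# Sketch of steps 1-5, wrapped in a function so you can source the file and
# run it when ready. The ~/Downloads path for the zip is an assumption.
build_oracle_image() {
  set -e
  git clone https://github.com/oracle/docker-images.git
  cd docker-images/OracleDatabase/SingleInstance/dockerfiles
  cp ~/Downloads/LINUX.X64_193000_db_home.zip 19.3.0/
  # Step 4: drop the VOLUME instruction so `docker commit` can later
  # capture the data files that live under $ORACLE_BASE/oradata.
  sed -i '/^VOLUME /d' 19.3.0/Dockerfile
  ./buildDockerImage.sh -v 19.3.0
}
```

Calling build_oracle_image still takes as long as the manual steps; the script only saves the copy-and-paste.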

When the last command finishes (and it will take some time), you should have your Oracle Database available in the local docker repository. Now you can start the image:

docker run -d -p 5500:5500 -p 1521:1521 -e ORACLE_PWD=GG_PASS oracle/database:19.3.0-ee

It should take about 45 minutes before your database is up and running. You might be a little surprised here, but yes, starting a fresh Oracle DB is not very fast. On the first run, the database performs something like an initial installation. This approach makes sense, as it saves a lot of disk space.

How to configure the database and avoid waiting for a startup?

When you are just stopping and starting the same container, the database startup is quite fast (a few minutes). But this benefit is lost in a testing process or CI pipeline that starts from a fresh image. During tests, you expect to start a database with some minimal configuration fast. As fast as possible. Not in a ridiculous 45 minutes. This is not acceptable. But let's get back to configuration first.

The "clean" way of configuration in the docker world is the Dockerfile: add some commands there and rebuild the image. Again, this will take an hour; if you make a mistake, a few hours. Here, I would like to focus on performance rather than best practices. The easiest way to configure the DB is to connect to the running instance and execute some SQL. Let's do this via sqlplus.

  1. Locate the id of the dockerized DB container: docker ps
  2. Open bash inside the DB container: docker exec -it 0f4acaaca803 /bin/bash
  3. Start the DB terminal: sqlplus sys/GG_PASS@//localhost:1521/ORCLCDB as sysdba

At this point, we can execute any commands we like to configure our database. Let's create a user and grant some permissions.
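
For example, a throwaway test user might be provisioned like this. Everything here is illustrative (user name, password, privileges, file path). Note that in the CDB root plain user names are rejected with ORA-65096, so the sketch switches to the image's default pluggable database ORCLPDB1 first.

```shell
#!/bin/sh
# Write an illustrative provisioning script to a file, then feed it to the
# sqlplus session from step 3. User name and privileges are assumptions.
cat > /tmp/provision.sql <<'SQL'
-- Plain user names are invalid in the CDB root; switch to the PDB first.
ALTER SESSION SET CONTAINER = ORCLPDB1;
CREATE USER app_user IDENTIFIED BY app_pass;
GRANT CONNECT, RESOURCE TO app_user;
ALTER USER app_user QUOTA UNLIMITED ON USERS;
SQL
# Run it inside the container (see step 3 above):
# sqlplus sys/GG_PASS@//localhost:1521/ORCLCDB as sysdba @/tmp/provision.sql
```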


Congratulations! Configuration completed. But wait. The configuration has been applied during container runtime, so we cannot start that container somewhere else, right? Wrong! Let me introduce docker commit.

Very convenient container configuration

To save a snapshot of the current configuration, execute the following command:

docker commit --author "Author <your@email>" --message "Quick snapshot" 0f4acaaca803 ggajos/oracle-ee:19c-quick

This command pauses our DB container and takes a snapshot. The result is saved as an image tagged ggajos/oracle-ee:19c-quick. Now we can recreate the container from that specific moment in time, anywhere. It is also very useful when you want to quickly reset the DB state. To run our snapshot, we use the tag:

docker run -d -p 5500:5500 -p 1521:1521 ggajos/oracle-ee:19c-quick

Done! The startup time is now very short (under 3 minutes in my case), because the database is already installed and configured, including all the run-time customization applied before the docker commit command.
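
If you use the snapshot for test isolation, resetting the database is just a matter of removing the container and running the tag again. A minimal sketch, where the container name oracle-test is hypothetical and the guard makes the script a no-op on machines that lack docker or the snapshot image:

```shell
#!/bin/sh
IMAGE="ggajos/oracle-ee:19c-quick"   # tag created by `docker commit` above
NAME="oracle-test"                   # hypothetical container name
if command -v docker >/dev/null 2>&1 && docker image inspect "$IMAGE" >/dev/null 2>&1; then
  docker rm -f "$NAME" 2>/dev/null || true   # throw away the dirty container
  docker run -d --name "$NAME" -p 1521:1521 -p 5500:5500 "$IMAGE"
fi
```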

Why did we remove VOLUME from the Dockerfile?

None of this would work if we used external volumes, and that is why the VOLUME instruction was removed from the initial Dockerfile. The commit command cannot take a snapshot of data in an external volume (and shouldn't). Usually, you want your data files outside of the container, but not in this case: we have to keep all the files inside the container so we can capture the entire state.

Attention! Do not Overdose!

While we achieved killer performance (by Oracle standards), there are a few drawbacks.

  1. We can no longer pass parameters at startup. The official Oracle docker image accepts arguments such as -e ORACLE_PWD=GG_PASS, but the snapshot contains an already-initialized database, so those arguments have already been consumed.
  2. The image is much bigger. The vanilla Oracle EE docker image is about 5GB; the operations above can produce an image larger than 10GB. In my case it was 13GB.
  3. The image is not built from a Dockerfile, so it cannot be reproduced automatically and its life-cycle becomes manual.

You need to consider whether these drawbacks are acceptable in your case. Being able to perform full integration testing against the target database is a great benefit. Testing time matters. Running a fresh Oracle DB in minutes, locally or on CI, is definitely a great asset.
