When I built an Oracle EE 19.3.0 Docker image using the official guide, the startup time was more than 45 minutes. I wanted to use the image during integration testing, so that was not acceptable. I managed to drop the number to less than 3 minutes. Below is a guide on how to do it, plus a not-so-popular Docker feature along the way. Stay tuned!
Building and running oracle/docker-image 19.3.0
- Download the official project from GitHub: https://github.com/oracle/docker-images
- Download the official database installation file
LINUX.X64_193000_db_home.zip from the Oracle website: https://www.oracle.com/database/technologies/oracle19c-linux-downloads.html
- Place it under the appropriate directory in the project (OracleDatabase/SingleInstance/dockerfiles/19.3.0)
- Important! Open the
OracleDatabase/SingleInstance/dockerfiles/19.3.0/Dockerfile file and remove the line
VOLUME ["$ORACLE_BASE/oradata"] (I will tell you why in a moment).
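If you prefer not to edit the file by hand, the same change can be scripted. A minimal sketch, assuming your working directory is the root of the cloned docker-images repository:

```shell
# Delete the VOLUME instruction from the 19.3.0 Dockerfile so that the
# data files stay inside the container (required for docker commit later).
sed -i '/^VOLUME /d' OracleDatabase/SingleInstance/dockerfiles/19.3.0/Dockerfile
```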
./buildDockerImage.sh -v 19.3.0
When the last command finishes (and it will take some time), you should have your Oracle Database image in the local Docker repository. Now you can start it.
docker run -d -p 5500:5500 -p 1521:1521 -e ORACLE_PWD=GG_PASS oracle/database:19.3.0-ee
The command will take about 45 minutes before the database is up and running. You might be a little surprised, but yes, starting a fresh Oracle DB is not fast. On the first run the database performs something like an initial installation. This approach makes sense for the image, as it saves a lot of disk space.
How to configure the database and avoid waiting for a startup?
Once the container exists, stopping and starting it again is reasonably fast (minutes). That benefit does not help a testing process or CI that starts from a fresh image. During tests you expect a database with some minimal configuration to start fast, as fast as possible, not in a ridiculous 45 minutes. That is not acceptable. But let's get back to configuration first.
The "clean" way of configuration in the Docker world is the Dockerfile: add some commands there and rebuild the image. Again, that will take an hour; if you make a mistake, a few hours. Here I would like to focus on performance rather than best practices. The easiest way to configure the DB is to connect to the running instance and execute some SQL. Let's do this via sqlplus:
- Locate the ID of the dockerized DB:
docker ps
- Open bash inside the DB container:
docker exec -it 0f4acaaca803 /bin/bash
- Start DB terminal:
sqlplus sys/GG_PASS@//localhost:1521/ORCLCDB as sysdba
At this point, we can execute any commands we like to configure our database. Let's create a user and grant some permissions.
CREATE USER GG_TEST identified BY GG_TEST;
GRANT CREATE ANY MATERIALIZED VIEW, CREATE JOB, CONNECT, RESOURCE, CREATE ANY VIEW TO GG_TEST;
GRANT UNLIMITED TABLESPACE TO GG_TEST;
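For CI you will usually want to script this step instead of typing into an interactive session. A sketch of the same configuration applied non-interactively from the host, reusing the example container ID and credentials from above:

```shell
# Pipe the SQL into sqlplus inside the container; no interactive shell needed.
docker exec -i 0f4acaaca803 sqlplus sys/GG_PASS@//localhost:1521/ORCLCDB as sysdba <<'SQL'
CREATE USER GG_TEST IDENTIFIED BY GG_TEST;
GRANT CREATE ANY MATERIALIZED VIEW, CREATE JOB, CONNECT, RESOURCE, CREATE ANY VIEW TO GG_TEST;
GRANT UNLIMITED TABLESPACE TO GG_TEST;
EXIT;
SQL
```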
Congratulations! Configuration completed. But wait: the configuration has been applied during container runtime. We are not able to start that container somewhere else, right? Wrong! I would like to present
Very convenient container configuration
To save a snapshot of our current configuration, execute the following command:
docker commit --author "Author <your@email>" --message "Quick snapshot" 0f4acaaca803 ggajos/oracle-ee:19c-quick
This command pauses our DB container and takes a snapshot. The result is saved as an image tagged
ggajos/oracle-ee:19c-quick. Now we can recreate the container from that specific moment in time anywhere. It is also very useful when you want to quickly reset the DB state. To run our snapshot, we use the tag:
docker run -d -p 5500:5500 -p 1521:1521 ggajos/oracle-ee:19c-quick
Done! The startup time is very short (minutes), as the database is already installed and configured, including all runtime customization applied before running the
docker commit command.
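Putting it all together, the whole start-configure-commit flow can be automated. A sketch, assuming the official image's health check (it reports healthy once the first-run installation has finished) and the example names and password used in this article:

```shell
#!/bin/sh
set -e

# 1. Start the vanilla image (first run: ~45 minutes).
CID=$(docker run -d -p 1521:1521 -e ORACLE_PWD=GG_PASS oracle/database:19.3.0-ee)

# 2. Wait until the container's health check reports the DB as ready.
until [ "$(docker inspect -f '{{.State.Health.Status}}' "$CID")" = "healthy" ]; do
  sleep 30
done

# 3. Apply the runtime configuration.
docker exec -i "$CID" sqlplus sys/GG_PASS@//localhost:1521/ORCLCDB as sysdba <<'SQL'
CREATE USER GG_TEST IDENTIFIED BY GG_TEST;
GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO GG_TEST;
EXIT;
SQL

# 4. Snapshot the configured, running database into a reusable image.
docker commit --author "Author <your@email>" --message "Quick snapshot" "$CID" ggajos/oracle-ee:19c-quick
```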
Why did we remove VOLUME from the Dockerfile?
None of this would work with external volumes. That is why the VOLUME instruction was removed from the initial Dockerfile. The commit command cannot (and should not) snapshot data stored on an external volume. Usually you want your data files outside of the container, but not in this case: we have to keep all the files inside the container so we can manage the entire state.
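You can verify that the rebuilt image really declares no volumes. If the VOLUME line was removed before building, this should print an empty map:

```shell
# Prints the volumes declared by the image; expect "map[]" when none remain.
docker inspect -f '{{ .Config.Volumes }}' oracle/database:19.3.0-ee
```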
Attention! Do not Overdose!
While we achieved killer performance (in the Oracle world), there are a few drawbacks.
- We are no longer able to pass parameters at startup. The official Oracle Docker image allows passing the password via
-e ORACLE_PWD=GG_PASS and other arguments. The snapshot contains an already started database, so those arguments have already been consumed.
- The size of the image is much bigger. The vanilla Oracle EE Docker image is about 5 GB; the operations above can produce an image larger than 10 GB. In my case it was 13 GB.
- The image doesn't have a Dockerfile behind it, so there is no reproducible recipe to rebuild it or to cleanly base other images on it via a
FROM clause. The life cycle becomes manual.
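As a partial workaround for the first drawback, the password can still be changed after the snapshot, at runtime through SQL instead of ORACLE_PWD. A sketch; the container ID and the new password are placeholders:

```shell
# ORACLE_PWD is ignored by the snapshot, but SQL still works; this uses
# OS authentication available inside the container.
docker exec -i <container-id> sqlplus / as sysdba <<'SQL'
ALTER USER SYS IDENTIFIED BY "NewPassword1";
EXIT;
SQL
```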
You need to consider whether these drawbacks are acceptable in your case. Being able to perform full integration testing against the target database is a great benefit. Testing time matters. Running a fresh Oracle DB in minutes, locally or on CI, is definitely a great asset.