Charitha
May 4, 2023

Oracle DB migration to Azure Database for Postgres (PG)

This is a quick reference based on a successful project that used Ora2PG for the migration, including a few tables that were terabytes in size. The overall process is outlined below:

Migration Process

· Run the Ora2PG assessment against the Oracle DBs to evaluate migration effort and complexity.

· Items that Ora2PG cannot convert automatically need manual remediation.

· Extract the database objects, then import the schema and data into PG.

· The tool also compares the source Oracle server against the PG server and gives a summary of the validation.

· The tool is very mature and streamlines the migration into a simple process; a sketch of a typical command flow follows.
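As a rough sketch, assuming a configured ora2pg.conf holding the Oracle DSN and schema settings (the action names are real Ora2PG export types; file and directory names are illustrative):

```bash
# Assessment: migration report with an effort/complexity cost estimate
ora2pg -t SHOW_REPORT --estimate_cost -c ora2pg.conf

# Export schema DDL and data (as COPY statements) to local files
ora2pg -t TABLE -b ./schema -o schema.sql -c ora2pg.conf
ora2pg -t COPY  -b ./data   -o data.sql   -c ora2pg.conf

# After import: compare object counts between Oracle and PostgreSQL
ora2pg -t TEST -c ora2pg.conf
```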

Learnings from the project:

Listed below are the migration challenges that Ora2PG did not address, along with the technical details needed to resolve them.

Pre-migration:

1. Splitting the Oracle DB into multiple Postgres DBs as part of a modernized data model.

· Table mappings from Oracle to the Postgres DBs need to be finalized in the planning phase.

· Address any foreign key dependencies where related tables end up spanning two PG databases, since foreign keys cannot cross database boundaries in Postgres.

· Schema validation is an additional step that needs to cover all corresponding objects such as PKs, FKs, indexes, and triggers; a sample check follows this list.
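A minimal sanity check of this kind, run with psql against each target PG database and compared with the object inventory taken from Oracle (PG_URI is a placeholder connection string):

```bash
psql "$PG_URI" <<'SQL'
-- Constraint counts by type: p = primary key, f = foreign key, u = unique, c = check
SELECT contype, count(*) FROM pg_constraint GROUP BY contype;
-- User index and trigger counts
SELECT count(*) AS indexes FROM pg_indexes
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
SELECT count(*) AS triggers FROM pg_trigger WHERE NOT tgisinternal;
SQL
```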

2. The application server version does not support the target PG version.

· Identify web servers and application hosting servers during the assessment phase and upgrade them as needed.

3. PL/SQL embedded in the application code.

· Identify these occurrences in the assessment phase; additional remediation is needed to convert them to PL/pgSQL (a minimal example follows this item).

· Migration window: estimate the migration duration and the system downtime based on actual load times for these tables/objects. This helps define the maintenance window for loading the DB and cutting over.
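A trivial illustration of such a conversion, assuming a simple hypothetical Oracle function: NVL becomes COALESCE, SYSDATE becomes now(), and the body is rewritten in PL/pgSQL:

```bash
psql "$PG_URI" <<'SQL'
-- Hypothetical example converted by hand from Oracle PL/SQL
CREATE OR REPLACE FUNCTION days_open(opened_on date) RETURNS integer
LANGUAGE plpgsql AS $fn$
BEGIN
  -- date - date yields an integer day count in Postgres
  RETURN now()::date - COALESCE(opened_on, now()::date);
END;
$fn$;
SQL
```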

4. Data types: LOBs, SecureFile LOBs, and LONG RAW formats

· These Oracle formats need close attention, and manual intervention may be required.

· Manual intervention can include schema validation that compares exact field sizes, and temporary tables for the data migration. Type mappings can also be overridden in the Ora2PG configuration, as sketched below.
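One way to handle this in Ora2PG is to override the default type mapping and enlarge the LOB read buffer in ora2pg.conf; the directives below are real Ora2PG settings, with illustrative values:

```bash
# Appended to ora2pg.conf before the schema/data export
cat >> ora2pg.conf <<'CONF'
# Map Oracle LOB/LONG RAW types explicitly to PG types
DATA_TYPE	CLOB:text,BLOB:bytea,LONG RAW:bytea
# Buffer size (bytes) used when fetching LONG/LOB columns
LONGREADLEN	10485760
CONF
```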

Migration:

1. Data should be imported before the objects listed below are created. Following this sequence when enabling constraints gives faster import times.

· Indexes, then constraints (PK, NULL/NOT NULL, CHECK, and UNIQUE), then foreign keys (kept separate from the other constraints), then triggers. A sketch of the load order follows.
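A sketch of this order, assuming the schema objects were exported by Ora2PG into separate SQL files (the file names are illustrative):

```bash
psql "$PG_URI" -f data.sql         # bulk data first, before any constraints exist
psql "$PG_URI" -f indexes.sql      # then indexes
psql "$PG_URI" -f constraints.sql  # PK, NOT NULL, CHECK, UNIQUE
psql "$PG_URI" -f fkeys.sql        # foreign keys, separated from other constraints
psql "$PG_URI" -f triggers.sql     # triggers last
```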

2. Data migration was very slow and aborted a few times.

· Review the server parameters and adjust them for migration performance tuning.

· A few parameters that are commonly updated for bulk loads are sketched below.
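On Azure Database for PostgreSQL Flexible Server these can be changed with the Azure CLI. The parameter names are standard Postgres settings, while the values here are only illustrative assumptions; tune them to the SKU and workload and revert them after the migration:

```bash
RG=my-rg; SRV=my-server   # placeholder resource group and server names

# More memory for index builds, a bigger WAL ceiling, fewer checkpoints during bulk load
az postgres flexible-server parameter set -g "$RG" --server-name "$SRV" \
  --name maintenance_work_mem --value 2097151     # in kB (~2 GB)
az postgres flexible-server parameter set -g "$RG" --server-name "$SRV" \
  --name max_wal_size --value 65536               # in MB (64 GB)
az postgres flexible-server parameter set -g "$RG" --server-name "$SRV" \
  --name checkpoint_timeout --value 3600          # in seconds
```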

3. Transaction log (WAL) file sizes kept increasing, and the database size reached its limit.

· Turn off AUTOVACUUM on the platform and application PG Flexible Servers. Autovacuum automatically recovers disk space from dead rows in tables and returns it to the database; during a bulk load, where new tables are being written and no records are deleted (so there is no space to reclaim), it only adds processing delay. Disable it before the migration process starts.

· ALTER TABLE to set intermediate tables UNLOGGED for the migration (see the sketch below).
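A sketch of both steps, with placeholder names: autovacuum is turned off server-wide on the Flexible Server, and intermediate tables skip WAL entirely while UNLOGGED:

```bash
# Server-wide autovacuum off for the migration window (re-enable afterwards)
az postgres flexible-server parameter set -g my-rg --server-name my-server \
  --name autovacuum --value off

# Intermediate/staging tables write no WAL while UNLOGGED
psql "$PG_URI" -c "ALTER TABLE staging_interim SET UNLOGGED;"
```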

4. Large table migrations

· The tables were pre-loaded with data up to five days before the migration, and the delta was loaded during the migration window.

· Increase the session timeout so long-running loads can complete.

· Other utilities such as pg_dump and pg_restore were leveraged for the data migration of a handful of tables whose size was huge or whose migration time was too long (see the sketch below).
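A sketch of that path, assuming the big tables were first staged into a PostgreSQL instance and then moved with a parallel, directory-format dump (host, database, and table names are placeholders):

```bash
# Dump only the huge table, directory format, 8 parallel jobs
pg_dump -h staging-pg -d appdb -Fd -j 8 -t big_fact_table -f /backup/big_fact

# Restore into the Azure target with the same parallelism
pg_restore -h my-server.postgres.database.azure.com -d appdb -j 8 /backup/big_fact
```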

Post-migration:

1. Validation

· Use the Ora2PG comparison to ensure all objects were recreated.

· Ensure the data loads are complete using the statistics on the Postgres database for each table and object.

· Custom scripts may be needed for thorough data quality checks between the Oracle and Postgres DBs; a quick count check is sketched below.
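A quick count check on the PG side, to be compared with the equivalent counts taken from Oracle (n_live_tup is an estimate, so run ANALYZE first, or use count(*) where exact figures are required):

```bash
psql "$PG_URI" -c "SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY relname;"
```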

2. Processes that were functional on the Oracle server need to be set up again:

· The Oracle loader (SQL*Loader) is replaced with Postgres COPY. Ensure the process works in production after the cutover to the PG servers, with monitoring in place to confirm the loads stay within their SLAs for about a week. (An example follows this list.)

· Use pg_cron for the server maintenance routines (also shown below).

· Replace UTL_SMTP jobs and other Oracle alerts with Postgres/Azure alerting.

· Azure Logic Apps or Azure Functions can be used to automate these processes.

· Python code: custom scripts for quality checks/validations were built in Python and helped with automation.

· The Azure Postgres portal was used to set up and update the server configurations and for server monitoring/management.

· Ensure all downstream applications, reports, and interfaces are tested.
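Two sketches for the first items above, with placeholder table, file, and schedule values (pg_cron must first be allow-listed on the Flexible Server via azure.extensions and shared_preload_libraries):

```bash
# SQL*Loader job replaced by a client-side COPY through psql
psql "$PG_URI" -c "\copy sales.orders FROM '/data/orders.csv' CSV HEADER"

# Nightly maintenance routine scheduled with pg_cron
psql "$PG_URI" -c "SELECT cron.schedule('0 3 * * *', 'VACUUM ANALYZE sales.orders');"
```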

3. VACUUM ANALYZE needs to be run on all objects after the migration process. Re-enable AUTOVACUUM for the DB and ALTER the unlogged tables back to LOGGED (see below).
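Reversing the migration-time settings might look like this (table and server names are placeholders):

```bash
az postgres flexible-server parameter set -g my-rg --server-name my-server \
  --name autovacuum --value on

psql "$PG_URI" -c "ALTER TABLE staging_interim SET LOGGED;"  # resume WAL logging
psql "$PG_URI" -c "VACUUM ANALYZE;"                          # refresh stats DB-wide
```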

4. Check bloat: updates and deletes mark existing records as invisible (dead tuples), and they stay on disk. To manage the disk efficiently, use the pg_stat views to check dead tuple percentages (a sample query follows).

· To keep bloat in check, use CLUSTER, REINDEX, or the pg_repack extension. Choose the appropriate method based on the cost of the operation and on the table locks that prevent read/write operations during the process. Use VACUUM FULL only when needed.
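A sample dead-tuple check using pg_stat_user_tables, the statistics view behind this advice:

```bash
psql "$PG_URI" <<'SQL'
SELECT relname,
       n_live_tup,
       n_dead_tup,
       round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_pct
FROM pg_stat_user_tables
ORDER BY dead_pct DESC NULLS LAST
LIMIT 20;
SQL
```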

References:

· New Oracle to Postgres Migration Guide for Azure (Microsoft Community Hub)

· Oracle to Azure Database for PostgreSQL Migration Guide (microsoft/OrcasNinjaTeam, github.com)

· Ora2Pg: Migrates Oracle to PostgreSQL (darold.net)

· Oracle to Azure Database for PostgreSQL: Migration guide (Microsoft Learn)

· Oracle to Azure PostgreSQL Migration Cookbook (microsoft/DataMigrationTeam, github.com)

· Oracle to Azure Database for PostgreSQL Migration Workarounds (microsoft/DataMigrationTeam, github.com)
