Oracle Integration Platform disaster recovery considerations

Omid Izadkhasti
14 min read · Sep 4, 2024


Oracle Integration Cloud (OIC) stands out as a leading integration platform due to its strong market position, robust SLA, scalability, and fully managed nature. Its comprehensive feature set, coupled with Oracle Cloud infrastructure and enterprise focus, makes it a top choice for businesses looking to integrate disparate systems seamlessly and reliably.

Disaster recovery is a critical concern for customers using Oracle Integration Cloud (OIC), especially when considering the potential impact of losing an entire region. Ensuring business continuity in such a scenario requires a well-defined disaster recovery strategy that includes the ability to quickly switch to another region.

Designing a holistic disaster recovery (DR) solution for an integration platform like OIC is crucial, given its role in connecting various external systems and ensuring seamless data flow across an organization’s ecosystem. A holistic DR strategy should address the complexities of maintaining connectivity and data integrity across multiple systems in the event of a disaster.

Because OIC plays such a pivotal role in connecting systems, applications, and data sources across an organization, a DR solution for it must address several key considerations to be robust, resilient, and effective. A well-thought-out DR solution is essential for maintaining business continuity in the event of an outage or disaster.

Here is a high-level architecture diagram of an Active-Passive DR setup for OIC.

When designing a Disaster Recovery (DR) solution for Oracle Integration Cloud (OIC), it’s essential to extend the DR strategy beyond just the core integration platform to include the other key components that support your end-to-end integrations. These components, such as Oracle Streaming, databases, API Gateway, and others, are integral to the overall integration architecture and need to be included in the DR plan to ensure full system resilience and business continuity.

To thoroughly design and implement a Disaster Recovery (DR) solution for Oracle Integration Cloud (OIC), it’s important to consider each component within the architecture. Below, I’ll walk you through the key considerations for each one:

1. Infrastructure

In designing and implementing a Disaster Recovery (DR) solution for Oracle Integration Cloud (OIC), it’s crucial to have an efficient and automated method for provisioning infrastructure resources across multiple regions.

While the specifics of writing Terraform code are not covered in this blog, consider using this approach to automate the provisioning of your OIC instances and associated infrastructure. By leveraging Terraform, you can create a more resilient, reliable, and easily manageable DR solution for Oracle Integration Cloud.

Please refer to this post to understand how to use Terraform to provision an OIC instance.
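For illustration, here is a minimal OCI CLI sketch of provisioning an OIC instance in the DR region. The region, compartment OCID, and names are placeholders; Terraform remains the recommended approach for repeatable multi-region provisioning:

# Provision an OIC instance in the DR region (placeholders in angle brackets).
oci integration integration-instance create \
--region <dr-region> \
--compartment-id <compartment-ocid> \
--display-name oic-dr \
--integration-instance-type ENTERPRISE \
--is-byol false \
--message-packs 1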

2. Oracle Integration Cloud (OIC)

In this section, I’ll outline the high-level steps necessary to configure an Active-Passive Disaster Recovery (DR) solution for Oracle Integration Cloud (OIC) using a custom domain. This setup involves provisioning infrastructure resources in both the primary and DR regions and configuring them to ensure seamless failover in case of a disaster.

High-Level Steps for Active-Passive DR Solution for OIC

1. Update OIC Instances with Custom Endpoint

  • Objective: Ensure that both OIC instances (in the primary and DR regions) use the same custom endpoint.
  • Action: Configure both OIC instances to use a custom domain (e.g., oic.mydomain.com). This ensures that traffic directed to the custom domain can be rerouted between the primary and DR regions as needed.

2. Provision OCI Vault and RSA Master Key

  • Objective: Securely manage certificates and encryption keys.
  • Action: Use Terraform to provision OCI Vault and create an RSA master key in both the primary and DR regions. This key will be used for encryption and decryption tasks, ensuring data security across regions. A CLI sketch of the equivalent steps follows.
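As an illustration of what the Terraform code needs to create, the following OCI CLI commands sketch the vault and RSA key creation. Run them in each region; all OCIDs and endpoints are placeholders:

# Create a vault, then an RSA master key against the vault's management endpoint.
oci kms management vault create \
--compartment-id <compartment-ocid> \
--display-name oic-dr-vault \
--vault-type DEFAULT

# A key-shape length of 256 bytes corresponds to a 2048-bit RSA key.
oci kms management key create \
--compartment-id <compartment-ocid> \
--display-name oic-custom-domain-key \
--key-shape '{"algorithm":"RSA","length":256}' \
--endpoint <vault-management-endpoint>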

3. Create and Import SSL Certificate

  • Objective: Secure the custom domain with SSL/TLS.
  • Action: Obtain an SSL certificate for your custom domain (oic.mydomain.com). Import this certificate into the OCI Certificate Service in both regions. This ensures that your OIC instances are accessible via HTTPS.

4. Configure OCI Load Balancer

  • Objective: Distribute traffic to OIC instances.
  • Action: Use Terraform to provision OCI Load Balancers in both regions, including necessary resources such as listeners, hostnames, and backend sets. Configure the load balancer to listen on port 443 (HTTPS) for the custom domain and route traffic to the OIC public IP address as the backend in each region.

5. Provision Web Application Firewall (WAF)

  • Objective: Enhance security by protecting the OIC instances from common web attacks.
  • Action: Provision a WAF policy in both regions using Terraform and add the load balancer as an enforcement point. This step ensures that all incoming traffic is inspected and filtered before reaching the OIC instances.

6. Configure DNS

  • Objective: Manage the custom domain and direct traffic to the appropriate region.
  • Action:
  • Create a DNS zone in your chosen DNS service (e.g., OCI DNS Service).
  • Create an A record for your custom endpoint (oic.mydomain.com) that points to the IP address of the primary load balancer.
  • In the event of a disaster, update the A record to point to the IP address of the DR load balancer (a CLI sketch of this failover step follows).
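As a sketch, the failover update can be scripted with the OCI CLI so that it becomes a single, repeatable step during a DR event. The zone name, domain, and IP are placeholders; a low TTL keeps client caches short:

# Repoint the custom endpoint's A record to the DR load balancer's public IP.
oci dns record rrset update \
--zone-name-or-id mydomain.com \
--domain oic.mydomain.com \
--rtype A \
--items '[{"domain": "oic.mydomain.com", "rtype": "A", "rdata": "<dr-lb-public-ip>", "ttl": 60}]' \
--force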

7. Test the Configuration

  • Objective: Verify that the setup is functional and traffic is correctly routed.
  • Action: Access your OIC instance using the custom URL (https://oic.mydomain.com). Ensure that the traffic is correctly routed to the primary OIC instance and that the DR instance can take over if the primary instance is unavailable.

Please refer to this post for more information.

3. Oracle Integration Data

After the platform itself, the next consideration is migrating OIC design-time data, such as integrations and B2B resources, from the primary (source) instance to the DR (target) instance. The following options cover the main resource types.

1. Export/Import Integrations

One of the simplest solutions for migrating integrations from the primary (source) instance to the DR (target) instance is to utilize the bulk export and import functionality. This method involves exporting all integration resources from the source instance to an Object Storage bucket located in the DR region. Once the resources are securely stored, you can easily import them from the bucket into the DR instance.

For scenarios where you require more granular control over the migration process, you can also export and import individual integrations using the OIC REST APIs. This allows for more customized management of specific integration flows, ensuring that only the necessary components are transferred during the migration.


This can be done using the following steps:

Note: You need to generate an access token in order to invoke Oracle Integration Cloud (OIC) REST APIs. The access token is essential for authenticating API requests. For more information on how to generate the access token and authenticate using OAuth 2.0, please refer to the relevant documentation here.
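As a sketch, the token can be requested from your IDCS/identity domain token endpoint with the client credentials grant. The tenant URL, client credentials, and scope below are placeholders taken from the confidential application configured for your OIC instance:

# Request an OAuth 2.0 access token for the OIC REST APIs.
curl --location --request POST 'https://<idcs-tenant>.identity.oraclecloud.com/oauth2/v1/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--user '<client-id>:<client-secret>' \
--data-urlencode 'grant_type=client_credentials' \
--data-urlencode 'scope=<oic-scope>'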

Step 1: Bulk Export Integrations

First, create a bucket in the DR region and configure the storage settings in both OIC instances. Then, use the following command to export integrations from the source instance to the Object Storage bucket in the DR region:

curl --location 'https://design.integration.ap-sydney-1.ocp.oraclecloud.com/ic/api/common/v1/exportServiceInstanceArchive?integrationInstance=<instance name>' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <Auth Token>' \
--data '{"description":"","exportSecurityArtifacts":true,"jobName":"export_syd_280824","storageName":"OIC-BUCKET"}'

Step 2: Bulk Import Integrations

After exporting the integrations, you can import them into the DR instance using the following command. Note that integrations in the DR instance must be in a deactivated state first, because the import will fail for integrations that are currently active:

curl --location 'https://design.integration.ap-sydney-1.ocp.oraclecloud.com/ic/api/common/v1/importServiceInstanceArchive?integrationInstance=<instance name>' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <Auth Token>' \
--data '{"archiveFile":"export_syd_280824","description":"","importActivateMode":"importOnly","importSecurityArtifacts":true,"jobName":"Import Integrations 220824","storageName":"OIC-BUCKET","startSchedules":false}'

Step 3: Export/Import Integrations One by One

Alternatively, if you prefer to export and import integrations individually, you can do so using the following commands:

Export Single Integration:

curl --location --request GET 'https://design.integration.ap-sydney-1.ocp.oraclecloud.com/ic/api/integration/v1/integrations/<Integration Name>|<Integration Version>/archive?integrationInstance=<Instance Name>' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <Auth Token>' \
--output <outputfile.iar>

Import Single Integration:

curl --location --request PUT -F file=@myIntegration.iar -F type=application/octet-stream 'https://design.integration.ap-sydney-1.ocp.oraclecloud.com/ic/api/integration/v1/integrations/archive?integrationInstance=<Instance Name>' \
--header 'Authorization: Bearer <Auth Token>'

Step 4: Automate the Process with CI/CD Solutions

To streamline the export and import process, consider using Continuous Integration/Continuous Deployment (CI/CD) solutions like OCI DevOps. These tools can automate the export and import of integrations, making the process more efficient and less prone to error. A minimal wrapper-script sketch follows.
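The following bash sketch shows the shape of such a pipeline step, chaining the bulk export and import calls from Steps 1 and 2. Hostnames, instance names, and the token are placeholders, and a production pipeline should poll the export job status rather than sleep:

#!/bin/bash
# Sketch: trigger a bulk export from the primary instance, then import into DR.
set -euo pipefail

TOKEN="<auth-token>"
JOB="export_$(date +%d%m%y)"

curl --fail --location "https://design.integration.<primary-region>.ocp.oraclecloud.com/ic/api/common/v1/exportServiceInstanceArchive?integrationInstance=<primary-instance>" \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer ${TOKEN}" \
--data "{\"jobName\":\"${JOB}\",\"exportSecurityArtifacts\":true,\"storageName\":\"OIC-BUCKET\"}"

# Simplified wait; poll the export job status in a real pipeline.
sleep 300

curl --fail --location "https://design.integration.<dr-region>.ocp.oraclecloud.com/ic/api/common/v1/importServiceInstanceArchive?integrationInstance=<dr-instance>" \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer ${TOKEN}" \
--data "{\"archiveFile\":\"${JOB}\",\"importActivateMode\":\"importOnly\",\"importSecurityArtifacts\":true,\"storageName\":\"OIC-BUCKET\",\"startSchedules\":false}"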

Step 5: Post-Import Configuration

After importing integrations into the target environment (DR instance), you need to:

  • Update Connection Details: Adjust the connection settings in the DR instance to match the new environment.
  • Activate Integrations: Activate the imported integrations after the switchover to the DR environment. This step can also be automated using OIC REST APIs, as sketched below.
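Here is a sketch of the activation call, using the method-override pattern of the OIC Integrations REST API; verify the exact payload against the API documentation for your OIC version:

# Activate one imported integration in the DR instance.
curl --location --request POST 'https://design.integration.<dr-region>.ocp.oraclecloud.com/ic/api/integration/v1/integrations/<Integration Name>|<Integration Version>?integrationInstance=<Instance Name>' \
--header 'Content-Type: application/json' \
--header 'X-HTTP-Method-Override: PATCH' \
--header 'Authorization: Bearer <Auth Token>' \
--data '{"status":"ACTIVATED"}'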

2. Export/Import B2B resources

Similar to the migration of integrations, B2B resources can also be migrated from the source (primary instance) to the target (DR instance). Below are the steps for migrating B2B Trading Partners and Schemas.

Step 1: Export/Import B2B Trading Partners

Export B2B Trading Partner

curl --location 'https://design.integration.ap-sydney-1.ocp.oraclecloud.com/ic/api/b2b/v1/tpm/partners/<Trading Partner Name>/export?integrationInstance=<Instance Name>' \
--header 'Authorization: Bearer <Auth Token>' \
--output <Filename>

Import B2B Trading Partner

curl --location --request PUT -F file=@tradingPartner.zip -F type=application/octet-stream 'https://design.integration.ap-sydney-1.ocp.oraclecloud.com/ic/api/b2b/v1/tpm/partners/import?integrationInstance=<Instance Name>&isOverwrite=true' \
--header 'Authorization: Bearer <Auth Token>'

Step 2: Export/Import B2B Schemas

Export B2B Schema

curl --location 'https://design.integration.ap-sydney-1.ocp.oraclecloud.com/ic/api/b2b/v1/schemas/<Schema ID>/export?integrationInstance=<Instance Name>' \
--header 'Authorization: Bearer <Auth Token>' \
--output <Filename>

Import B2B Schema

curl --location --request PUT -F file=@schemaName.zip -F type=application/octet-stream 'https://design.integration.ap-sydney-1.ocp.oraclecloud.com/ic/api/b2b/v1/schemas/import?integrationInstance=<Instance Name>&isOverwrite=true' \
--header 'Authorization: Bearer <Auth Token>'

3. Export Activity Stream Logs

One of the challenges that customers often face during disaster recovery (DR) planning is the migration of logs from the source (primary) to the target (DR) environment. Logs are crucial for understanding what happened in the source integration environment, especially for support and troubleshooting purposes.

Approach to Address This Challenge:

Step 1. Enable Logging in Both Environments:

  • Ensure that detailed logging is enabled in both the source (primary) and target (DR) OIC instances.
  • Configure logging levels and settings according to the organization’s needs to capture all relevant data before and after the migration.

Step 2. Exporting Logs from Source Environment:

  1. Create Object Storage Buckets:
  • Primary Region: Create an object storage bucket in the primary region where your source OIC instance resides.
  • DR Region: Create a corresponding object storage bucket in the DR region.
  • Enable Cross-Region Replication: Configure cross-region replication so that logs written to the primary region’s bucket are replicated to the DR region’s bucket (a CLI sketch appears at the end of this section).

2. Configure OCI Connector Hub:

  • Primary Region: Set up a rule in OCI Connector Hub in the primary region to export OCI logs to the object storage bucket in the primary region.
  • DR Region: Set up a similar rule in OCI Connector Hub in the DR region to export OCI logs to the object storage bucket in the DR region.
  • Note: Ensure that the OCI Connector Hub configurations are aligned for both regions to maintain consistency.

3. Ingest Logs into OCI Log Analytics:

  • Object Storage Integration: Follow the instructions provided in the OCI Log Analytics documentation to ingest logs from the object storage buckets into OCI Log Analytics.
  • Configuration: Configure the ingestion process to pull logs from both the primary and DR buckets as needed.

4. Create Dashboards in OCI Log Analytics:

  • Dashboard Creation: Set up a dashboard in OCI Log Analytics to visualize and monitor logs from both the primary and DR environments.
  • Custom Views: Create custom views and queries to filter and analyze logs according to your requirements.

By implementing these strategies, customers can maintain visibility into what happened in the source integration environment even after a switchover to the DR instance, thereby ensuring continuity of support and troubleshooting capabilities.
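As a sketch, cross-region replication can be enabled on the primary bucket with the OCI CLI. Bucket and region names are placeholders, and the flag names should be confirmed with oci os replication create-replication-policy --help:

# Replicate objects from the primary log bucket to the DR bucket.
oci os replication create-replication-policy \
--bucket-name oic-logs-primary \
--name replicate-logs-to-dr \
--destination-region <dr-region> \
--destination-bucket oic-logs-dr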

4. Export Runtime Data

To ensure that you can view the latest integration executions and other runtime data from the source environment before a disaster occurs, you need to create a solution that exports this data on a scheduled basis to a database. You can then develop a custom application to view and analyze this data.

Steps to Implement Custom Runtime Data Export Solution

Step 1: Create a table in the database

First, create the oic_instances table in Oracle Database to store the exported runtime data. Here’s the SQL command:


CREATE TABLE "INTEGRATION"."oic_instances"
( "dataFetchTime" TIMESTAMP (6),
"creationDate" DATE,
"instanceDate" DATE,
"instanceDuration" NUMBER(19,0),
"flowType" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"hasRecoverableFaults" CHAR(1 BYTE) COLLATE "USING_NLS_COMP",
"id" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP" NOT NULL ENABLE,
"instanceId" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"instanceReportingLevel" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"integration" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"integrationId" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"integrationName" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"integrationVersion" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"invokedBy" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"isDataAccurate" CHAR(1 BYTE) COLLATE "USING_NLS_COMP",
"isLitmusFlow" CHAR(1 BYTE) COLLATE "USING_NLS_COMP",
"isLitmusSupported" CHAR(1 BYTE) COLLATE "USING_NLS_COMP",
"isPurged" CHAR(1 BYTE) COLLATE "USING_NLS_COMP",
"lastTrackedTime" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"status" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"litmusResultStatus" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"mepType" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"nonScheduleAsync" CHAR(1 BYTE) COLLATE "USING_NLS_COMP",
"opcRequestId" VARCHAR2(4000 BYTE) COLLATE "USING_NLS_COMP",
"processingEndDate" DATE,
"projectFound" CHAR(1 BYTE) COLLATE "USING_NLS_COMP",
"receivedDate" DATE,
"replayable" CHAR(1 BYTE) COLLATE "USING_NLS_COMP",
"replayed" CHAR(1 BYTE) COLLATE "USING_NLS_COMP",
"INSTANCENAME" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP",
"REGION" VARCHAR2(100 BYTE) COLLATE "USING_NLS_COMP",
CONSTRAINT "instances_PK" PRIMARY KEY ("id")
);

Step 2: Export Integration Instances and Runtime Data

Next, create a stored procedure that inserts new records and updates existing ones based on the id field. Here’s the procedure:

create or replace PROCEDURE insert_update_instances
(
p_dataFetchTime IN INTEGRATION."oic_instances"."dataFetchTime"%TYPE,
p_creationDate IN INTEGRATION."oic_instances"."creationDate"%TYPE,
p_instanceDate IN INTEGRATION."oic_instances"."instanceDate"%TYPE,
p_instanceDuration IN INTEGRATION."oic_instances"."instanceDuration"%TYPE,
p_flowType IN INTEGRATION."oic_instances"."flowType"%TYPE,
p_hasRecoverableFaults IN INTEGRATION."oic_instances"."hasRecoverableFaults"%TYPE,
p_id IN INTEGRATION."oic_instances"."id"%TYPE,
p_instanceId IN INTEGRATION."oic_instances"."instanceId"%TYPE,
p_instanceReportingLevel IN INTEGRATION."oic_instances"."instanceReportingLevel"%TYPE,
p_integration IN INTEGRATION."oic_instances"."integration"%TYPE,
p_integrationId IN INTEGRATION."oic_instances"."integrationId"%TYPE,
p_integrationName IN INTEGRATION."oic_instances"."integrationName"%TYPE,
p_integrationVersion IN INTEGRATION."oic_instances"."integrationVersion"%TYPE,
p_invokedBy IN INTEGRATION."oic_instances"."invokedBy"%TYPE,
p_isDataAccurate IN INTEGRATION."oic_instances"."isDataAccurate"%TYPE,
p_isLitmusFlow IN INTEGRATION."oic_instances"."isLitmusFlow"%TYPE,
p_isLitmusSupported IN INTEGRATION."oic_instances"."isLitmusSupported"%TYPE,
p_isPurged IN INTEGRATION."oic_instances"."isPurged"%TYPE,
p_lastTrackedTime IN INTEGRATION."oic_instances"."lastTrackedTime"%TYPE,
p_status IN INTEGRATION."oic_instances"."status"%TYPE,
p_litmusResultStatus IN INTEGRATION."oic_instances"."litmusResultStatus"%TYPE,
p_mepType IN INTEGRATION."oic_instances"."mepType"%TYPE,
p_nonScheduleAsync IN INTEGRATION."oic_instances"."nonScheduleAsync"%TYPE,
p_opcRequestId IN INTEGRATION."oic_instances"."opcRequestId"%TYPE,
p_processingEndDate IN INTEGRATION."oic_instances"."processingEndDate"%TYPE,
p_projectFound IN INTEGRATION."oic_instances"."projectFound"%TYPE,
p_receivedDate IN INTEGRATION."oic_instances"."receivedDate"%TYPE,
p_replayable IN INTEGRATION."oic_instances"."replayable"%TYPE,
p_replayed IN INTEGRATION."oic_instances"."replayed"%TYPE,
p_region IN INTEGRATION."oic_instances"."REGION"%TYPE,
p_instanceName IN INTEGRATION."oic_instances"."INSTANCENAME"%TYPE
) AS
BEGIN
UPDATE INTEGRATION."oic_instances"
SET
"dataFetchTime" = p_dataFetchTime,
"creationDate" = p_creationDate,
"instanceDate" = p_instanceDate,
"instanceDuration" = p_instanceDuration,
"flowType" = p_flowtype,
"hasRecoverableFaults" = p_hasRecoverableFaults,
"instanceId" = p_instanceId,
"instanceReportingLevel" = p_instanceReportingLevel,
"integration" = p_integration,
"integrationId" = p_integrationId,
"integrationName" = p_integrationName,
"integrationVersion" = p_integrationVersion,
"invokedBy" = p_invokedBy,
"isDataAccurate" = p_isDataAccurate,
"isLitmusFlow" = p_isLitmusFlow,
"isLitmusSupported" = p_isLitmusSupported,
"isPurged" = p_isPurged,
"lastTrackedTime" = p_lastTrackedTime,
"status" = p_status,
"litmusResultStatus" = p_litmusResultStatus,
"mepType" = p_mepType,
"nonScheduleAsync" = p_nonScheduleAsync,
"opcRequestId" = p_opcRequestId,
"processingEndDate" = p_processingEndDate,
"projectFound" = p_projectFound,
"receivedDate" = p_receivedDate,
"replayable" = p_replayable,
"replayed" = p_replayed,
"REGION" = p_region,
"INSTANCENAME" = p_instanceName
WHERE "id" = p_id;

IF SQL%ROWCOUNT = 0 THEN
INSERT INTO INTEGRATION."oic_instances" ("id", "dataFetchTime", "creationDate", "instanceDate", "instanceDuration", "flowType", "hasRecoverableFaults", "instanceId", "instanceReportingLevel", "integration", "integrationId", "integrationName", "integrationVersion", "invokedBy", "isDataAccurate", "isLitmusFlow", "isLitmusSupported", "isPurged", "lastTrackedTime", "status", "litmusResultStatus", "mepType", "nonScheduleAsync", "opcRequestId", "processingEndDate", "projectFound", "receivedDate", "replayable", "replayed", "REGION", "INSTANCENAME")
VALUES (p_id, p_dataFetchTime,p_creationDate, p_instanceDate, p_instanceDuration, p_flowtype, p_hasRecoverableFaults, p_instanceId, p_instanceReportingLevel, p_integration, p_integrationId, p_integrationName, p_integrationVersion, p_invokedBy, p_isDataAccurate, p_isLitmusFlow, p_isLitmusSupported, p_isPurged, p_lastTrackedTime, p_status, p_litmusResultStatus, p_mepType, p_nonScheduleAsync, p_opcRequestId, p_processingEndDate, p_projectFound, p_receivedDate, p_replayable, p_replayed, p_region, p_instanceName);
END IF;
END insert_update_instances;

Step 3: Create a scheduled integration in the source OIC to export instance data to the database

  1. Create a Scheduled Integration:

In OIC, create a new scheduled integration that will run at a defined interval (e.g., every 10 minutes).

2. Invoke OIC Monitoring REST API:

Use the OIC REST API to fetch integration instance data, for example via the following endpoint (a fuller request sketch follows this list):

/ic/api/integration/v1/monitoring/instances

3. Transform and Send Data to Database:

  • After fetching the data, transform it into the format required by your stored procedure.
  • Use an integration action to invoke the insert_update_instances stored procedure, passing the fetched data as parameters.
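As a sketch, a full request against the monitoring API might look like the following. The -g flag stops curl from glob-interpreting the braces, and the timewindow value should match your schedule interval (valid values are documented with the API):

# Fetch recent integration instances from the source OIC instance.
curl -g --location --request GET "https://design.integration.<Region>.ocp.oraclecloud.com/ic/api/integration/v1/monitoring/instances?integrationInstance=<Instance Name>&q={timewindow:'1h'}" \
--header 'Accept: application/json' \
--header 'Authorization: Bearer <Auth Token>'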

Step 4: Create a custom application to view records in the database

Using low-code/no-code platforms like Oracle APEX or Oracle Visual Builder is a great way to develop user-friendly applications for viewing and interacting with the records in your database table. These platforms can simplify the creation of dashboards and interfaces without needing extensive programming expertise.

5. Export/Import internal File Server resources

To migrate internal File Server resources within Oracle Integration Cloud (OIC) from a source environment to a target environment, you can utilize the OIC File server REST APIs. These APIs allow you to interact with the File Server, including listing, exporting, and importing folders, files, users, and permissions.

Here’s an example of how you can use the OIC File server REST API to get a list of all folders under the root directory in the OIC File server:

curl --location 'https://design.integration.<Region>.ocp.oraclecloud.com/ic/api/fileserver/v1/filesystem/root/?integrationInstance=<Instance Name>&filterPattern=**&listFiles=true&patternType=File' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer <Access Token>'

4. Oracle Streaming

The Pub/Sub (Publish/Subscribe) pattern is indeed a common integration approach, especially in scenarios involving streaming services like Oracle Cloud Infrastructure (OCI) Streaming or Apache Kafka. As part of a Disaster Recovery (DR) solution, it is crucial to ensure that messages published to a streaming service in the primary environment are replicated to a DR environment. This ensures that in the event of a regional outage, consumers in the DR region can continue to process messages with minimal disruption.

  • Cross-Region Replication: To ensure data resilience and availability during a regional outage, you can configure Oracle Streaming to replicate data streams to the DR region. This can be achieved by leveraging MirrorMaker 2 (MM2), a tool from the open-source Apache Kafka ecosystem that replicates topics between Kafka-compatible clusters, in this case the streaming services in the primary and DR regions (a configuration sketch follows this list).
  • Offset Tracking: To ensure that integrations continue from the correct point after a DR event, track the latest consumed offset in the streaming service and write it to a database. After a failover, the OCI Streaming adapter’s ability to read from a specific offset allows the integration to resume from the last processed point without data loss or duplication, keeping the platform robust even in DR scenarios.
  • Failover Readiness: Ensure that any integration flows relying on streaming data are configured to switch to the DR region’s streams automatically.
  • Data Retention and Recovery: Set appropriate data retention policies to keep historical data available for recovery needs. Regularly test the failover process to validate the integrity of replicated data.
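Here is a minimal MM2 sketch, assuming OCI Streaming’s Kafka-compatible endpoints and SASL/PLAIN authentication with a stream-pool-scoped username; bootstrap servers, credentials, and regions are placeholders:

# Write a minimal MirrorMaker 2 config, then start it with the script
# that ships in the Apache Kafka distribution.
cat > mm2.properties <<'EOF'
clusters = primary, dr
primary.bootstrap.servers = cell-1.streaming.<primary-region>.oci.oraclecloud.com:9092
dr.bootstrap.servers = cell-1.streaming.<dr-region>.oci.oraclecloud.com:9092

# OCI Streaming requires SASL_SSL/PLAIN; the username is <tenancy>/<user>/<stream-pool-ocid>
primary.security.protocol = SASL_SSL
primary.sasl.mechanism = PLAIN
primary.sasl.jaas.config = org.apache.kafka.common.security.plain.PlainLoginModule required username="<tenancy>/<user>/<stream-pool-ocid>" password="<auth-token>";
dr.security.protocol = SASL_SSL
dr.sasl.mechanism = PLAIN
dr.sasl.jaas.config = org.apache.kafka.common.security.plain.PlainLoginModule required username="<tenancy>/<user>/<stream-pool-ocid>" password="<auth-token>";

# Replicate every topic from the primary stream pool to the DR stream pool.
primary->dr.enabled = true
primary->dr.topics = .*
EOF

./bin/connect-mirror-maker.sh mm2.properties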

5. Oracle Database

  • Active Data Guard/GoldenGate: Use Oracle Active Data Guard or Oracle GoldenGate to replicate databases in real-time to a secondary region. This ensures that your data remains synchronized and available in the event of a disaster (a broker switchover sketch follows).
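With the Data Guard broker configured, a planned switchover to the standby in the DR region reduces to a couple of DGMGRL commands; the connect strings and database names below are placeholders:

# Verify the broker configuration, then switch roles to the DR standby.
dgmgrl sys/<password>@primary_db "show configuration"
dgmgrl sys/<password>@primary_db "switchover to standby_db"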

6. API Gateway

  • Multi-Region API Deployment: Deploy the API Gateway in multiple regions, ensuring that APIs are available from either the primary or DR region.
  • Global Traffic Management: Implement global traffic management to route requests to the most appropriate region. During a disaster, traffic should automatically shift to the DR region.
  • API Configuration Synchronization: Keep API configurations, such as deployments, security policies, and routing rules, synchronized across regions (using OCI REST APIs or the OCI CLI, as sketched below) to ensure consistency.
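One possible CLI sketch for keeping a deployment specification in sync: fetch the spec from the primary region and recreate it on the DR gateway. All OCIDs and the path prefix are placeholders; confirm the flag names with oci api-gateway deployment --help:

# Export the API deployment specification from the primary region.
oci api-gateway deployment get \
--deployment-id <primary-deployment-ocid> \
--region <primary-region> \
--query 'data.specification' > api-spec.json

# Recreate it on the DR gateway.
oci api-gateway deployment create \
--compartment-id <compartment-ocid> \
--gateway-id <dr-gateway-ocid> \
--path-prefix </path-prefix> \
--specification file://api-spec.json \
--region <dr-region>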

7. Oracle Object Storage

  • Cross-Region Replication: Enable cross-region replication for critical object storage buckets. This ensures that stored files and data are accessible from the DR region.
  • Failover Planning: Ensure your integration processes can seamlessly switch to using replicated storage buckets in the DR region during a disaster.

Conclusion

In this article, I aimed to explain how to implement a Disaster Recovery (DR) environment for the Oracle Integration Cloud (OIC) platform. As you know, an integration platform interacts with several external systems, and ensuring resiliency for all these systems is essential but beyond the scope of this article.

I hope this article gives you a solid foundation and insights into implementing your DR solution for OIC.
