Cloud scalability using Customer Managed Oracle REST Data Services with Autonomous JSON

Phantompete · Published in Oracle Developers
7 min read · May 11, 2023

Co-authored with Javier de la Torre

Autonomous JSON — Why use JSON with SODA for REST?

The main benefit of using cloud services is the ability to increase or decrease resources as needed to meet demand. Imagine you have a mobile application that becomes very popular: many people are registering, and there are peaks of usage.

Oracle provides a great solution for developers called Autonomous JSON. Autonomous JSON is a cloud document database service that makes it simple to develop JSON-centric applications. Using Database Actions, we can create a new collection called customers.

Once we have the collection created, we can easily add new JSON documents like the following one, which registers a new customer:
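For illustration, such a customer registration document might look like this (the field names are purely hypothetical, not taken from the original screenshot):

{
  "firstName": "Maria",
  "lastName": "Lopez",
  "email": "maria.lopez@example.com",
  "phone": "+34 600 000 000",
  "registeredAt": "2023-05-11T10:15:00Z",
  "plan": "free"
}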

Autonomous JSON provides an API called Simple Oracle Document Access (SODA). SODA lets you work with JSON documents without needing to know any SQL at all. You can implement any CRUD operation on documents and use pattern-matching query-by-example (QBE). There are separate SODA implementations for different languages.

SODA for REST makes it easy to consume JSON collections using standard REST verbs such as GET, POST, PUT, and DELETE. SODA for REST is deployed as part of ORDS.
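As a minimal sketch of how this looks in practice, assuming a REST-enabled schema named admin and our customers collection (the host, schema name, and credentials below are placeholders, and the authentication method depends on how your schema is REST enabled):

# Insert a new document into the customers collection
curl -X POST -u admin:<password> \
  -H "Content-Type: application/json" \
  -d '{"firstName":"Maria","email":"maria.lopez@example.com"}' \
  https://<adb-host>/ords/admin/soda/latest/customers

# List the documents in the collection
curl -u admin:<password> \
  https://<adb-host>/ords/admin/soda/latest/customers

# Query by example (QBE): find customers matching a filter document
curl -X POST -u admin:<password> \
  -H "Content-Type: application/json" \
  -d '{"email":"maria.lopez@example.com"}' \
  "https://<adb-host>/ords/admin/soda/latest/customers?action=query"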

When you provision an Autonomous Database instance, Oracle REST Data Services (ORDS) is preconfigured and available for the instance by default. ORDS makes it easy to develop REST interfaces.

Autonomous JSON — Scalability

We have been talking about ORDS scalability, but what about Autonomous JSON itself? There are two main ways of scaling it.

The first one is adding CPU cores or storage manually. By clicking the “Manage Scaling” button, you can define how many resources you want to use.

This is great if you know when your peak workloads will occur, or if you have a stable workload, but what happens if you can’t predict them? For those unexpected workloads you can enable automatic scaling. With this capability enabled, Autonomous JSON will increase the OCPUs up to three times the provisioned amount and automatically scale back down once the peak is over. It is important to mention that neither manual nor automatic scaling causes any downtime!

You can also enable automatic storage scaling, which adds more storage without any downtime.
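For reference, the same scaling settings can also be driven from the OCI CLI. The sketch below is only an assumption of the relevant flags; check oci db autonomous-database update --help for your CLI version before using it:

# Manually scale the instance to 2 OCPUs (illustrative; verify flag names for your CLI version)
oci db autonomous-database update \
  --autonomous-database-id <ADB_OCID> \
  --cpu-core-count 2

# Enable compute auto scaling (up to 3x the provisioned OCPUs)
oci db autonomous-database update \
  --autonomous-database-id <ADB_OCID> \
  --is-auto-scaling-enabled true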

Scaling — Why customer managed ORDS?

With the default ORDS, Oracle performs any required configuration, patching, and maintenance.

When you use the default ORDS on Autonomous Database, you cannot modify any of the ORDS configuration options.

For example, with the default configuration, the JDBC connection pools have a maximum of 100 connections and the connections for ORDS are preconfigured to use the LOW database service. What if we want to use more connections for our application?

With customer managed ORDS we have greater control over how the session pool of our database is utilized, because we can adjust the connection limits from the ORDS side.

As per the reference architecture, the load balancer is yet another point where we can spread the weight and reduce the load on our database, especially when dealing with applications aimed at a large user base. This can be further enhanced by enabling Application Continuity for your Autonomous Database.

When ORDS runs in a customer managed environment, you are responsible for configuration, patching, and maintenance of ORDS in the customer managed environment.

Note: After you configure Autonomous Database to use your customer managed ORDS in addition to the existing autonomously managed ORDS, you can route ORDS HTTPS traffic through your environment. The default Autonomous Database web server and ORDS are still running and ORDS traffic goes to the ORDS running in the customer managed environment. This provides an additional and alternative HTTPS solution for Autonomous Database.

Read more in the official documentation for ORDS 23.1.

Scaling — The bigger picture

Let’s take a look at the key configuration parameters that will allow us to optimize our mid-tier (a combined example of setting them follows these lists):

· jdbc.InitialLimit (10): Specifies the initial number of connections created in the pool.

· jdbc.MaxLimit (20): Specifies the maximum number of connections in the pool.

· cache.metadata.timeout (5m): Specifies how long a metadata record remains in the cache. The longer the duration, the longer it takes for applied changes to become visible.

· cache.metadata.enabled (false/true): Enables or disables metadata caching. With caching enabled, the database lookup is performed only once, which can greatly improve performance (more on this here).

Additional variables worth considering:

· jdbc.InactivityTimeout (1800): Specifies how long, in seconds, an available connection can remain idle before it is closed.

· jdbc.statementTimeout (900): Specifies a timeout period on a statement. An abnormally long-running query or script executed by a request may leave it hanging unless a timeout is set on the statement. Setting a statement timeout ensures that queries automatically time out if they do not complete within the specified period.

· soda.maxLimit (10000): When using the SODA REST API, specifies the maximum number of documents returned for a GET request on a collection URL, regardless of any limit specified in the URL.

· soda.defaultLimit (100): When using the SODA REST API, specifies the default number of documents returned for a GET request on a collection when no limit is specified in the URL.

If you want to know more about the available ORDS settings, click here.
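Putting it together, tuning these values with the ords CLI might look like the sketch below. The numbers are only examples, not recommendations, and should be sized to your own workload:

# Illustrative pool and cache settings; size them to your own workload
ords config set jdbc.InitialLimit 50
ords config set jdbc.MaxLimit 200
ords config set jdbc.InactivityTimeout 1800
ords config set jdbc.statementTimeout 900
ords config set cache.metadata.enabled true
ords config set cache.metadata.timeout 5m
ords config set soda.defaultLimit 100
ords config set soda.maxLimit 10000
# ORDS typically needs a restart for configuration changes to take effect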

Testing — Demonstrating scaling of ORDS with ADB

Let’s take a look at how the points discussed in the section above apply in practice. We will be looking at two factors: inserting and fetching data.

In order to measure scalability, we can use tools such as JMeter, which let us configure the number of users attempting to access our application or insert data. In this case we are going to insert objects using the REST-enabled SODA API. Let’s jump into the results…
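As a minimal sketch, a run like the ones below can be launched in JMeter’s non-GUI mode. The test plan file name and the property names used to pass the thread and loop counts are hypothetical and depend on how the plan is parameterized:

# Run the (hypothetical) SODA insert test plan headless: 1000 threads, 100 loops
jmeter -n -t soda_insert_test.jmx \
  -Jthreads=1000 -Jloops=100 \
  -l results.jtl -e -o report/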

Testing Initial Setup

Autonomous Database Shared: 1 OCPU

Compute (ORDS server): 4 CPUs, 64 GB RAM

ORDS initial configuration:

· jdbc.InitialLimit (50)

· jdbc.MaxLimit (100)

· cache.metadata.enabled (false)

The configuration for the POST SODA request can be seen below:
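An approximation of that request, written as a curl call rather than the original JMeter sampler (the host, schema name, and payload fields are assumptions):

# Approximation of the SODA POST used in the load test
curl -X POST -u admin:<password> \
  -H "Content-Type: application/json" \
  -d '{"firstName":"Maria","lastName":"Lopez","email":"maria.lopez@example.com"}' \
  https://<ords-host>/ords/admin/soda/latest/customers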

How quickly can we insert data? Here are sample results from a JMeter test showing the throughput achieved for 100K samples: 1,000 concurrent users looped 100 times.

Example of changing the ORDS configuration by using the command:

ords config set jdbc.MaxLimit 100

Looking at the ORDS console, we can always see output for debugging purposes and to verify whether we have reached the connection limit:

If the connection limit of the pool is exceeded, warnings like the one below can be seen in the ORDS console/log:

Results summary table

Each simulation was looped 100 times, giving a range of 10K to 60K inserts into the Autonomous Database!

For each added OCPU, the InitialLimit and MaxLimit were adjusted in the ORDS server configuration, up to a total maximum of 400.
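For example, raising the pool for the largest configuration uses the same command shown earlier (the intermediate values per step are not reproduced here, and the InitialLimit figure is illustrative):

# Raise the pool limits as OCPUs are added (illustrative values)
ords config set jdbc.InitialLimit 200
ords config set jdbc.MaxLimit 400
# Restart the ORDS server so the new pool sizes take effect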

Metrics

Let’s take a look at how the metrics of the ORDS server and the Autonomous Database correspond to the tests carried out:

Charts: ORDS CPU usage · Autonomous Database CPU usage · Autonomous Database sessions

The most important thing to see in the metrics above is that, despite peaks in CPU usage, the number of sessions remained fairly constant. This is thanks to the ORDS configuration, which manages the connections and lets us take complete control of our session pool.

Conclusion

We can see a clear increase in transactions processed when we increase the number of OCPUs in the Autonomous Database. This comes as no surprise: the service runs on Exadata infrastructure and performs as it should.

However, looking at the ORDS configuration, we can see how altering a few parameters impacts transaction throughput, going from ~550 tx/sec to almost 1,000 tx/sec with a single Autonomous Database OCPU, nearly doubling the throughput from the consumer’s perspective. The higher we go in scale, the more negligible the difference appears; this will vary on a case-by-case basis, as each application is different and requires fine-tuning of the configuration to reach its full potential.

As an interesting fact, by the end of the testing a considerable number of inserts had been done: over 2 million. This additionally demonstrates that performance did not change as the collection grew.

For more information and to stay up to date, you can find Javier and myself on LinkedIn.

Phantompete
Oracle Developers

New technologies, IoT & Blockchain enthusiast. Opinions expressed are solely my own and do not express the views or opinions of my employer Oracle.