Oracle Converged Database 23ai — Let’s talk about Your Productivity…

Loïc Lefèvre · Published in db-one · 3 min read · May 24, 2024
Bring your use cases: any of them, even combined…

When talking about the “Converged Database”, I often have to explain what it means, what the implications are, and what benefits to expect.

Looking at the picture above, one should immediately understand that an Oracle database can not only manage relational data, but also shines at storing and analyzing JSON documents at scale. And not just JSON documents, but XML as well. If you have property graph or RDF data, it is the very same story. And if you want to store and analyze spatial data (aka GIS), you can do that too.

The lesser-known capabilities include:

- Vector embeddings, available since early May 2024 with the 23ai release;
- Text data (PDF, Word, PowerPoint, etc.), which comes with full-text indexes;
- Columnar analytics, which provides tremendous performance through an in-memory column store;
- Blockchain tables, which are immutable and tamper-resistant (insert-only, no dropping the table, cryptographically verifiable);
- Transactional Event Queues, which allow consistent event/message exchange between processes;
- In-database machine learning, which lets you create and score models inside the database without data movement;
- Internet of Things (IoT) support, which provides very fast asynchronous ingestion as well as time-series processing;
- External data, which is basically a way to access any other data (local files, object storage, other databases…) as if it were a standard table.
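To make one of these concrete, here is a sketch of a blockchain table declared directly in SQL. The table and column names are made up for illustration; the retention clauses shown are the documented syntax, but check your 23ai version for the exact options available.

```sql
-- Insert-only ledger: rows cannot be updated or deleted before the
-- retention period, and the table cannot be dropped while recently used.
-- (table and column names are illustrative)
CREATE BLOCKCHAIN TABLE bank_ledger (
  tx_id    NUMBER,
  account  VARCHAR2(64),
  amount   NUMBER,
  tx_time  TIMESTAMP DEFAULT SYSTIMESTAMP
)
NO DROP UNTIL 31 DAYS IDLE
NO DELETE LOCKED
HASHING USING "SHA2_512" VERSION "v1";
```

Rows are chained together with cryptographic hashes, which is what makes the table verifiable after the fact.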

In terms of architecture, one could deploy a single database to store and analyze all of these data types; or one container database holding pluggable databases, each dedicated to a distinct use case; or entirely separate databases.
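The middle option looks like this in SQL: a pluggable database per use case inside one container database. The PDB name, admin user, and password below are placeholders.

```sql
-- One container database (CDB), one pluggable database (PDB) per use case.
-- Names and password are placeholders, not recommendations.
CREATE PLUGGABLE DATABASE json_apps
  ADMIN USER pdb_admin IDENTIFIED BY "ChangeMe_123";

ALTER PLUGGABLE DATABASE json_apps OPEN;
```

Each PDB is patched, cloned, and secured as part of the same container, which is where the consolidation savings come from.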

As you can imagine, reducing data movement to zero helps cut complexity (no ETL, no data synchronization across databases). It also improves security, since all of this data can now be protected by the Maximum Security Architecture. Likewise, high availability and disaster recovery can be improved for all of it by applying the Maximum Availability Architecture.

Now, to the point of increased productivity: having all these data use cases in the very same software lets you, if needed and permitted by your security policies, manage all of this data together using the SQL language.

Examples:

I need to query real-time JSON messages sent through a Kafka topic to detect anomalies and perform predictive maintenance; yes, you can do that!
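A sketch of that first case, assuming a hypothetical `sensor_events` table fed from a Kafka topic via Transactional Event Queues, with a JSON column named `payload` (all names are made up):

```sql
-- Flag readings whose temperature exceeds a threshold, using
-- simple dot notation and a SQL/JSON path filter over the JSON column.
-- (table and field names are illustrative)
SELECT e.payload.deviceId,
       e.payload.temperature
FROM   sensor_events e
WHERE  JSON_EXISTS(e.payload, '$.temperature?(@ > 90)');
```

From there, an in-database ML model could score the same rows for predictive maintenance, with no data leaving the database.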

I need to do Retrieval-Augmented Generation by leveraging vector embeddings to provide fresh facts to the LLM; no problem, this way!
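The retrieval step of RAG can be a single similarity query. This sketch assumes a hypothetical `doc_chunks` table with a `VECTOR` column holding precomputed embeddings, and a bind variable carrying the question's embedding:

```sql
-- Return the 5 chunks closest to the question's embedding.
-- (table and column names are illustrative)
SELECT chunk_text
FROM   doc_chunks
ORDER  BY VECTOR_DISTANCE(embedding, :question_vector, COSINE)
FETCH  FIRST 5 ROWS ONLY;
```

The retrieved chunks are then passed to the LLM as context alongside the user's question.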

I need to analyze relational data as if it were a property graph to look for sub-graph patterns, and this should be accessible through a REST service; of course, the REST service is provided from within the database!
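With SQL/PGQ in 23ai, a property graph defined over existing relational tables can be pattern-matched directly in SQL. The graph, label, and column names below are hypothetical:

```sql
-- Find transfers originating from overdrawn accounts in a property
-- graph built on top of relational tables. (names are illustrative)
SELECT *
FROM GRAPH_TABLE ( bank_graph
  MATCH (a IS account) -[t IS transfer]-> (b IS account)
  WHERE a.balance < 0
  COLUMNS (a.account_no AS from_acct, b.account_no AS to_acct)
);
```

Exposing this query as a REST endpoint is then a matter of configuration with Oracle REST Data Services, not extra application code.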

I need to work on relational data as if it were JSON documents in both read and write mode and restrict some JSON fields to being non-updatable; sure, we can!
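That last case is what JSON-Relational Duality Views are for: the relational row is exposed as a JSON document that is readable and writable, with some fields locked down. A sketch, with made-up table and column names; the exact annotation syntax may vary across 23ai versions:

```sql
-- Expose employees as updatable JSON documents, but forbid
-- updates to the "name" field. (names are illustrative)
CREATE JSON RELATIONAL DUALITY VIEW employee_dv AS
  SELECT JSON {
           '_id'    : e.employee_id,
           'name'   : e.last_name WITH NOUPDATE,
           'salary' : e.salary
         }
  FROM employees e WITH UPDATE INSERT DELETE;
```

Applications read and write documents through the view while the data stays relational underneath, consistent for every other consumer.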

The possibilities are endless, and all of this can be done using SQL or simple APIs. Think about the complexity of using two, three, or four distinct tools: the integration required, the hard work to secure them and make them highly available, and the cost of the skills needed to do so. One last aspect is the energy impact of evolving a complex system versus a simpler one…

Get a glimpse of these new possibilities:
