What an in-memory database is and how it persists data efficiently
Denis Anikin

So nice! “let’s forget about caching for a minute”, huh :)

First, relational databases have used caches for ages, and most of them (with the notable exception of PostgreSQL) do not perform in-place updates on disk at commit time; instead, they use sequential writes to the transaction log. Logical reads are not free, but they are cheap enough. Logical reads become even cheaper (that is, fewer of them are needed) when combined with proper physical design optimization: through partitioning, indexing and materialized views, or with modern columnar data structures.

Second, the main “feature” that enables the linear scalability of NoSQL databases is the simplified (feature-limited) consistency model. The need to provide any guarantees about the database-level consistency of different records (and, even more so, of records from different tables/collections) leads to the need for either a multi-versioning mechanism with conflict detection (Oracle, PostgreSQL) or record-level locking (DB2). Both mechanisms increase the cost of logical reads and logical writes, but this has nothing to do with disk speed, as the word “logical” implies that no real I/O occurs.
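The multi-versioning side of that trade-off can be sketched in a few lines (a toy snapshot scheme of my own, not Oracle's or PostgreSQL's actual implementation): every commit gets a new version number, readers see the latest version at or below their snapshot, and the extra bookkeeping is exactly the added cost of each logical read and write.

```python
import threading

class MVCCStore:
    """Toy multi-version store: writers never block readers,
    at the price of version bookkeeping on every logical access."""

    def __init__(self):
        self._lock = threading.Lock()   # protects the commit counter
        self._version = 0
        self._data = {}                 # key -> list of (version, value)

    def write(self, key, value):
        # Logical write: append a new version instead of overwriting.
        with self._lock:
            self._version += 1
            self._data.setdefault(key, []).append((self._version, value))
            return self._version

    def snapshot(self):
        # A reader's snapshot is just the current commit counter.
        with self._lock:
            return self._version

    def read(self, key, snapshot):
        # Logical read: scan versions newest-first and return the
        # first one visible to this snapshot. Pure CPU work, no I/O.
        for version, value in reversed(self._data.get(key, [])):
            if version <= snapshot:
                return value
        return None
```

A reader holding an old snapshot keeps seeing the old value even after a later commit, which is the guarantee NoSQL systems typically drop to scale linearly.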
