A series on Cassandra: Part 1: Getting rid of the SQL mentality.

Flavian Alexandru
Outworkers
8 min read · Jan 15, 2019


At Outworkers, Cassandra and DataStax Enterprise are core technologies in our area of competence, and some of our favourite technologies in existence. There aren't many databases that can easily give you 100,000 writes per second, but if that's what you are looking for, you are in the right place.

We have learned a lot about Cassandra by using it in multiple projects, and especially while writing our very own Scala DSL for it, phantom.

From its humble beginnings as an afternoon project, phantom has become the de facto standard for Scala-based adopters of Cassandra, with full support for a type-safe CQL flavour, testing automation, automated service discovery via ZooKeeper and many other cool features.

In the series of posts to follow, we share some of our Cassandra experience and help you transition from a traditional SQL background to a NoSQL mindset, through an example-driven series of do's and don'ts. Cassandra is an incredible combination of power and simplicity, and we'd love to show you all the nitpicks, and possibly challenge some of our own beliefs while we're at it.

The top 5 mindset changes in Cassandra adoption

1. Normalisation of data is not necessary, duplication is ok.

In Cassandra, data is meant to be completely denormalised. Duplication of data is not only possible, it is encouraged, and often the only way to model the SQL equivalent. In Cassandra, writing is considered an extremely cheap operation, so whether you do 10 writes per second or 100,000 writes per second at capacity, Cassandra will offer you horizontal scalability at the remarkable rate of about 7,000 writes per second per node (as measured by Netflix). More details on performance can be found in this Planet Cassandra post or in the famous Netflix benchmarking test, where at 288 nodes in a cluster the number of writes approached 1.1 million per second (no, not a typo).

2. Query expressiveness is traded for performance.

If you were to compare CQL to the traditional SQL standard, you would notice an incredible decrease in complexity. We get to feel that pain first hand with our upcoming reactive SQL DSL, morpheus. CQL is very similar to SQL in terms of keywords, so it is familiar ground for most engineers, if not all. However, you can no longer compare and search for matches on arbitrary columns, there are no joins, and barely any complexity will ever exist in your queries. There is no such thing as a stored procedure.

What you get instead is a really simple and concise query language that is both apparently limited and yet wonderfully empowering. For those who agree that simplicity is a good thing, CQL definitely fits the bill, and while some may argue more complexity is necessary, the language is ample enough to express anything you could possibly want, and it takes about two hours to master in its entirety. That's a pretty good use of an afternoon.
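To give a rough feel for that simplicity, here is a sketch of more or less the entire query vocabulary in phantom, against a hypothetical users table. The names and setup are ours, not from the schemas in this post, and we assume the usual implicit session, keyspace and execution context are in scope.

import java.util.UUID
import scala.concurrent.Future
import com.outworkers.phantom.dsl._

case class User(id: UUID, name: String)

abstract class Users extends Table[Users, User] {
  object id extends UUIDColumn with PartitionKey
  object name extends StringColumn

  // Insert, select, update and delete, always addressed through the
  // primary key: this is close to the whole CQL repertoire.
  def save(user: User): Future[ResultSet] = store(user).future()
  def byId(id: UUID): Future[Option[User]] = select.where(_.id eqs id).one()
  def rename(id: UUID, newName: String): Future[ResultSet] =
    update.where(_.id eqs id).modify(_.name setTo newName).future()
  def remove(id: UUID): Future[ResultSet] = delete.where(_.id eqs id).future()
}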

3. In Cassandra, you need to plan your queries in advance.

Thanks to newer technologies like MongoDB, and to the traditional relational model, most people have query-by-any-index deeply ingrained in their thought process. You'd think it trivial to query by any field in your database, but that's not really how things work in Cassandra. The performance boost has to come from somewhere, and it comes from Cassandra's ability to do extremely little work to retrieve the data for your beloved queries.

Let's take the very simple use case below, where you have a table of people, and the fields are id, name and firstName. Simple enough for a textbook default. Now say you want to query people by their id. Quite simple: make id the "Partition Key", and done, you can query by id. A "Partition Key" is the way rows are allocated inside Cassandra storage. It has some interesting properties, which we will cover in a later post. For now, just know the first part of the "Primary Key" is the "Partition Key".

CREATE TABLE People(
  id uuid,
  name text,
  firstName text,
  PRIMARY KEY (id)
);

And the Phantom DSL equivalent:

package com.outworkers.phantom.example

import java.util.UUID
import com.outworkers.phantom.dsl._

case class Person(id: UUID, name: String, firstName: String)

abstract class People extends Table[People, Person] {
  object id extends UUIDColumn with PartitionKey
  object name extends StringColumn
  object firstName extends StringColumn
}

Now, you want to query your People table by a first name, with a simple query as follows:


// What you would like to write, but phantom will refuse to compile:
def peopleByFirstName(firstName: String): Future[Seq[Person]] =
  people.select.where(_.firstName eqs firstName).limit(5000).fetch()

It looks quite simple and straightforward, reminiscent of the SQL equivalent SELECT * FROM People WHERE firstName = 'whatever'. However, this is not possible in Cassandra, and phantom won't even let you compile it. Why? Because firstName cannot be serialised to form the primary key hash. In other words, it's not an index or even part of an index; as far as Cassandra is concerned it's just "stuff to store", not "stuff to query". Cassandra has no way to re-create, from the firstName alone, the hash of the row where your data lives. Simple as that.

You really have to think of Cassandra as one giant and overpowered java.util.HashMap when you want to build indexes. We hope the Cassandra team doesn't hold this against us, but it's a good way to simplify. What does a HashMap do, in essence? Jump to reference. That's exactly what Cassandra does, and although it has some clever ways to build that "jump-to-reference", or the hash, such as compound keys and composite keys, it's still a single-hash-per-match model.
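To make the analogy concrete, here is the same idea in plain Scala; purely illustrative, nothing Cassandra-specific:

import java.util.UUID

object HashMapAnalogy extends App {
  val johnId = UUID.randomUUID()

  val people: Map[UUID, String] = Map(
    johnId -> "John",
    UUID.randomUUID() -> "Jane"
  )

  // By key: hash, jump, done. This is how Cassandra reads a row.
  println(people.get(johnId))

  // By value: every entry has to be inspected. Cassandra refuses to
  // do this implicitly, because at scale it cannot be done cheaply.
  println(people.find { case (_, name) => name == "Jane" })
}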

That's why you plan in advance and think really hard about how to simplify, making as few columns as possible part of your Primary Key. You will need to produce the full Primary Key every time you want to query, as otherwise the Murmur3 hash (the default partitioner) cannot be recomputed.
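As a sketch of why fewer key columns buy you flexibility, consider a hypothetical table with a composite partition key (the table and column names are ours): every part of that key must be supplied before Cassandra can rebuild the hash.

import java.util.UUID
import scala.concurrent.Future
import com.outworkers.phantom.dsl._

case class Reading(sensor: UUID, day: String, value: Double)

abstract class Readings extends Table[Readings, Reading] {
  // Both columns form the partition key, so both go into the hash.
  object sensor extends UUIDColumn with PartitionKey
  object day extends StringColumn with PartitionKey
  object value extends DoubleColumn

  // Fine: the full partition key is supplied, the hash can be rebuilt.
  def byDay(sensor: UUID, day: String): Future[List[Reading]] =
    select.where(_.sensor eqs sensor).and(_.day eqs day).fetch()

  // Querying by sensor alone cannot be hashed, and phantom would
  // reject such a query at compile time.
}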

A few tricks of the trade.

Avoid adding a column to an index wherever possible; the more columns you have, the less flexibility you get, and the harder it becomes to keep producing all the Primary Key data at query time. The values of any column that is part of a Primary Key cannot be updated either, so you are "stuck" with whatever you write.

Complex querying is expensive, and the secret is simplicity. You can only do very simple things at a very large scale. The full-page stored procedures of SQL are bad old memories in Cassandra. Querying several tables at a time to fetch your data, and composing Futures to do so, is quite common.

Avoid secondary indexes. Somewhat like SQL, Cassandra gives you an index column. This was implemented more as a marketing decision than as a technical reality. Steer clear of secondary indexes for anything remotely performance critical.

Querying those often requires the dreaded ALLOW FILTERING, which means the right matches will still be found by Cassandra, but in memory, at query time. You can see how this gets really messy after the first few thousand records. Simply turn on tracing at query time (TRACING ON in cqlsh) and you can witness the scale of the damage yourself.
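If you want to see this for yourself from code rather than cqlsh, here is a sketch using the DataStax Java driver from Scala; the contact point, keyspace and query are hypothetical:

import scala.collection.JavaConverters._
import com.datastax.driver.core.{Cluster, SimpleStatement}

object TraceTheDamage extends App {
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
  val session = cluster.connect("demo")

  // Force the slow path on purpose, with tracing enabled.
  val stmt = new SimpleStatement(
    "SELECT * FROM People WHERE firstName = 'John' ALLOW FILTERING"
  ).enableTracing()

  val rs = session.execute(stmt)

  // Every trace event is a step Cassandra had to take on your behalf;
  // watch this list grow with the size of the table.
  rs.getExecutionInfo.getQueryTrace.getEvents.asScala.foreach(println)

  cluster.close()
}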

4. Duplicate data and maintain consistency at application level

"OK, so indexing and querying by random columns is difficult, but I just want to query by firstName." There is a very simple solution to that problem: data duplication.

Basically, you create another table where the column you'd like to query by becomes the primary key, and the other column (or columns) point back to the matching data in the original table.

In this example, we relate the firstName to the id of the person in the People table. The CQL looks like this:

CREATE TABLE PeopleByFirstName (
  firstName text,
  id uuid,
  PRIMARY KEY (firstName)
);

And the Phantom DSL equivalent:

abstract class PeopleByFirstName extends Table[PeopleByFirstName, (String, UUID)] {
  object firstName extends StringColumn with PartitionKey
  object id extends UUIDColumn
}

Now what you can do is quite nice. If you are using Scala with phantom, you can compose Futures to achieve consistency at application level, but the same goes for any client-side application capable of async execution.

The example below is intentionally verbose, but you can of course condense the for-yield, or, if you are particularly trusting of your network, perform the writes in parallel. In the case below we wait for the first write to complete before initiating the second. In the same pattern, you sync up every subsequent operation, with a few bumps along the way.

Deletes now need to run side by side for consistency purposes, and updates to the PeopleByFirstName table are actually an INSERT followed by a DELETE, since you can no longer update the firstName in that table: it's part of the primary key now, or more specifically the partition key. But with any decent client this is remarkably simple and surprisingly satisfying; we sketch the read and update paths right after the insert example below.

def insertPerson(person: Person): Future[ResultSet] = {
  for {
    // Write the person to the main table first.
    storeNormally <- People.store(person).future()
    // Then duplicate the lookup data into the by-name table.
    storeByName <- PeopleByFirstName.store(person.firstName -> person.id).future()
  } yield storeByName
}
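The read and update paths implied above follow the same pattern. The sketch below is ours: findByFirstName and rename are hypothetical helpers built on the two tables defined earlier, with the usual implicits assumed in scope.

// Read path: resolve the id through the lookup table, then fetch
// the full row from the main table.
def findByFirstName(firstName: String): Future[Option[Person]] =
  for {
    idOpt <- PeopleByFirstName.select(_.id).where(_.firstName eqs firstName).one()
    person <- idOpt match {
      case Some(id) => People.select.where(_.id eqs id).one()
      case None     => Future.successful(None)
    }
  } yield person

// "Update" path: the partition key of the lookup table cannot be
// updated, so a rename is an INSERT of the new row followed by a
// DELETE of the old one, plus a plain update on the main table.
def rename(person: Person, newName: String): Future[ResultSet] =
  for {
    _ <- People.update.where(_.id eqs person.id)
          .modify(_.firstName setTo newName).future()
    _ <- PeopleByFirstName.store(newName -> person.id).future()
    deleted <- PeopleByFirstName.delete
                 .where(_.firstName eqs person.firstName).future()
  } yield deleted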

5. Consistency is important

Now that you've come a long way in your journey to CQL, it's time to divest yourself completely of SQL's performance limitations. Your local Postgres is perfectly capable of taking things to 20 million records and giving you decent sort performance and query capability, all on an average machine, no fancy gear required.

But if you are doing "serious business", you didn't waste all this time just for fun. That's where the last of the big issues comes into play: consistency. More detailed information is available here, but if you are looking for a rule of thumb, data that's required to be immediately available lends itself to high consistency levels such as LOCAL_QUORUM or ALL. If you expect real-time API calls over the writes you make, set the consistency high into the sky.
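As a sketch of what this looks like per query, phantom exposes a consistencyLevel_= setter (assuming a session on native protocol v2 or later, and the usual implicits in scope; the method names below are ours):

import java.util.UUID
import scala.concurrent.Future
import com.outworkers.phantom.dsl._

// Writes that must be immediately readable: pay for LOCAL_QUORUM.
def storeCritical(person: Person): Future[ResultSet] =
  People.store(person)
    .consistencyLevel_=(ConsistencyLevel.LOCAL_QUORUM)
    .future()

// Analytical reads that may lag behind: ONE is cheap and usually fine.
def readRelaxed(id: UUID): Future[Option[Person]] =
  People.select.where(_.id eqs id)
    .consistencyLevel_=(ConsistencyLevel.ONE)
    .one()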

Don't be afraid to pay the price of consistency, even if Cassandra has to run around a bit more under the hood to ensure it; it's often well worth the time cost when you come across things like large discrepancies between nodes, where some nodes still have the data while others have successfully performed a delete (the so-called "zombies"). Cassandra advertises a model of eventual consistency, with "tunable consistency" bundled, but the default consistency level is often not enough.

Coming back to performance costs, sometimes it's not worth paying for the extra network round trips and wait time, as at large scale it may cost quite a lot of money. But the rule of thumb is again simple and rewarding. If you are dealing with analytical data, reports and things you are going to process in Hadoop or Spark, and you're happy to get the results at a later point in time, you can ease off and save yourself a buck. That generally eases the workload on the clusters enough to keep the P&L statements looking great, and with the spare cash you can get yourself a DataStax Enterprise licence and a whole lot of really cool features, many of which we will cover in this series.

THE END

That marks the end of our introduction to Apache Cassandra. We look forward to your feedback and comments and we hope you’ve found it interesting! Stay tuned for more in this series.
