Aerospike — total garbage?

Draining Sun
Jan 16, 2017 · 7 min read

Warning: angry rant incoming. Also, it applies to versions up to 3.11, which are quite old as of 2018.

From my experience — a definitive yes! Why, you may ask? Aerospike is fast, scalable, and amazing in so many other ways. Right…

For the record, I used Aerospike in a hybrid configuration, meaning data on disk (SSD, obviously) and indexes in RAM. The data topped out at around 1.5 billion records of various sizes, usually pretty small (1–10KB), but with high cardinality. Cluster size varied between 3 and 6 nodes. I tried both Google Cloud and dedicated servers. Keep in mind, this was not a test or a benchmark, but a real-life application under serious load (30k QPS), and the whole experience spans almost two years, if I recall correctly.
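For reference, a hybrid-storage namespace in a 3.x-era aerospike.conf looked roughly like the sketch below. The namespace name, device path and sizes here are made up, not my actual setup:

    namespace prod {
        replication-factor 2
        memory-size 48G                # primary and secondary indexes live here
        default-ttl 0                  # never expire records

        storage-engine device {
            device /dev/nvme0n1        # raw SSD, no filesystem on top
            write-block-size 128K
            data-in-memory false       # data stays on disk, only indexes in RAM
        }
    }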

So here it goes. All the crap I had to deal with while trying to make Aerospike usable.

Back when Aerospike was hovering somewhere around v3.5, it was unusable on Google Cloud. Nodes kept dropping out all the time, which in turn caused rebalances. And those are very slow. If I tried to speed them up, I hit the SSD drive limits. The fact that these were Google Cloud SSD disks (slow!) did not help Aerospike's case at all. So I either had to wait or make the system unusable. Imagine that happening at least once a week. Fun!

The wait by itself would not have been an issue if it didn't cause problems with query results. It was more common than not to get duplicate records during a rebalance. Any aggregation code had to be temporarily disabled if I didn't want to mess up the summarized data.

Did I mention nodes crashed? Like, a lot! They just died for one reason or another. But that's not the worst part, even taking the rebalancing issue into consideration! The worst part was the cold starts. Oh my God! Trying to load half a billion records on a node and then building indexes*. I am not talking about a few minutes. Hours! And sometimes two nodes crashed at the same time. That's 40% of a 5-node cluster. What if the cluster's memory was nearly full? The replicated data's indexes had to be loaded into memory, except there wasn't any memory left! So I quickly realized: secondary indexes cost a lot of RAM. I couldn't just add one for whatever reason I needed. I had to consider the extra RAM and the longer cold starts (a sketch of what adding one looked like follows the footnote).

*To be fair, the situation did improve with dedicated servers with good SSDs, but only so-so.
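For context, this is roughly what adding a secondary index looked like from the Node.js client (v2-era API; the namespace, set, bin and index names are hypothetical). One innocent-looking call, and you have signed up for extra RAM on every node plus longer cold starts:

    const Aerospike = require('aerospike')

    Aerospike.connect({ hosts: '127.0.0.1:3000' }, (error, client) => {
      if (error) throw error
      // Index every record in prod/events by its integer 'timestamp' bin.
      // The index itself lives entirely in RAM, on every node.
      const options = { ns: 'prod', set: 'events', bin: 'timestamp', index: 'idx_events_ts' }
      client.createIntegerIndex(options, (error) => {
        if (error) throw error
        client.close()
      })
    })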

Aerospike touts its speed and so on, but they fail to mention that beyond a simple key->value cache, its performance sucks. And hey, if I needed just a simple cache, I would have gone with Memcached or Redis.
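To be fair, the one path that was genuinely fast is plain key->value access, something like this (v2-era Node.js client; names made up):

    const Aerospike = require('aerospike')

    Aerospike.connect({ hosts: '127.0.0.1:3000' }, (error, client) => {
      if (error) throw error
      const key = new Aerospike.Key('prod', 'events', 'user:42')

      // Direct access by primary key: the fast path, and pretty much the only one.
      client.put(key, { name: 'foo', visits: 7 }, (error) => {
        if (error) throw error
        client.get(key, (error, record) => {
          if (error) throw error
          console.log(record)
          client.close()
        })
      })
    })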

Now, queries on secondary indexes are relatively slow, and they do not scale horizontally, meaning all nodes are queried. So once a limit is hit, you're stuck with it. Now what? Nothing. Aerospike does not have ANY scalable way of letting me get multiple records based on a query (range or otherwise). The primary index (the key) only supports direct access.
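Here is a sketch of such a query with the v2-era Node.js client (made-up names again). The where() filter needs a secondary index on the bin, and the query fans out to every single node in the cluster:

    const Aerospike = require('aerospike')

    Aerospike.connect({ hosts: '127.0.0.1:3000' }, (error, client) => {
      if (error) throw error

      const query = client.query('prod', 'events')
      // Range filter on the secondary index; executed on ALL nodes at once.
      query.where(Aerospike.filter.range('timestamp', 1451606400, 1483228800))

      const stream = query.foreach()
      stream.on('data', (record) => { /* process one matching record */ })
      stream.on('error', (error) => { console.error(error) })
      stream.on('end', () => client.close())
    })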

They have a way to store multiple records in one with the LIST functionality, so theoretically it could help. But only if the record size does not exceed the maximum allowed, which is based on the write block size of the SSD disk (128KB by default, up to a recommended 1MB). Trust me, this is very low for anything serious. I'm not questioning the reason for the limit, only saying that it does not help.
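Packing several logical entries into one record's list bin looked something like the sketch below (names are made up, and the list-operation helpers were shuffled around between client versions, so treat the exact calls as approximate). It works right up until the record outgrows the write block size:

    const Aerospike = require('aerospike')

    Aerospike.connect({ hosts: '127.0.0.1:3000' }, (error, client) => {
      if (error) throw error
      const key = new Aerospike.Key('prod', 'events', 'user:42')

      // Append one more entry to the 'history' list bin. Once the whole
      // record exceeds write-block-size, writes start failing.
      const ops = [Aerospike.lists.append('history', { url: '/foo', ts: 1484524800 })]
      client.operate(key, ops, (error) => {
        if (error) throw error
        client.close()
      })
    })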

Then they introduced LargeList, with virtually no limit on data stored per key. Except. They recently deprecated it. The reason being that it was unmaintainable, or something along those lines. Imagine if I had decided to move the application's logic onto it. But I didn't, because its performance was unpredictable and poor. Pat on my shoulder for that.

So what other options did I have? Scans! The holy grail. Or so I thought. Scans are a great way to get all the records, until the disk speed limit is hit, that is. Even if I could apply a UDF to filter results on the server side, which would help reduce the load on the client side, all of the records were still scanned. The only thing I saved was network traffic, which was never an issue for me, by the way. My other option was to read only a percentage of the records. But that's hardly useful, since it can't be used to scale. Why? Well, I can't read every other record so that another client could read the other half. So what's the damn point of this percentage?!
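For the record, the percentage option is just a property on the scan (v2-era client, made-up names). It samples records; it does not partition them, so two clients cannot split a set between themselves:

    const Aerospike = require('aerospike')

    Aerospike.connect({ hosts: '127.0.0.1:3000' }, (error, client) => {
      if (error) throw error

      const scan = client.scan('prod', 'events')
      scan.percent = 50        // read roughly half the records, chosen by the server
      scan.concurrent = true   // hit all nodes in parallel

      const stream = scan.foreach()
      stream.on('data', (record) => { /* still one disk read per record */ })
      stream.on('end', () => client.close())
    })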

OK, back to crashes. Quite recent versions got some interesting perks, and by perks I mean bugs that rendered my cluster completely unusable a few times. One of those bugs was related to scans. Apparently some conditions could cause the server to crash during a scan operation. Imagine all of the servers in the cluster crashing at once. That was a very good day indeed. System offline and nothing I could do about it. Just wait for the cold starts to finish and put it back online afterwards.

Except. Some of the nodes decided to join the cluster before their secondary indexes were fully built. Until built, the indexes were write-only, so all of my queries were failing, and since I still relied on them (remember? No way to get around that), the system had to stay offline. To boot, rebuilding a secondary index on a live node was much slower.

This particular feature/bug is one of my favorites. When a node crashes or is restarted and the cold start reloads all the data, well, it reloads all of it. And I do mean all of it. Even the deleted records. There is no such thing as a durable delete. Well, one was recently released, but for Enterprise only. And somehow I don't feel like coughing up $10k–20k just for this feature/bug.
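For what it's worth, on Enterprise the durable delete is just a policy flag on the delete call. I never got to use it on Community, so the sketch below (v2-era Node.js client, made-up names) is approximate:

    const Aerospike = require('aerospike')

    Aerospike.connect({ hosts: '127.0.0.1:3000' }, (error, client) => {
      if (error) throw error
      const key = new Aerospike.Key('prod', 'events', 'user:42')

      // durableDelete writes a tombstone to disk (Enterprise only), so the
      // record stays deleted after a cold start instead of resurrecting.
      client.remove(key, { durableDelete: true }, (error) => {
        if (error) throw error
        client.close()
      })
    })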

Damn, so many bugs… How can such software be considered Enterprise-worthy (with their pricing, too!)? Unfortunately for me, this is not the end. In addition to the Aerospike server, one has to use a client, and in my case that was the Node.js client. Let's just say v1.x was so bad it's not even worth mentioning. Never in my life have I seen so many segfaults and complete Node.js process crashes (with no way of catching the errors) from any Node.js package. I mean, holy mother of God, it should not have been released to the public under any circumstances.

And when it didn't crash, it didn't work. Queries failed, and random query timeouts popped up left and right. Then came v2. Well, it did improve a lot of things. I haven't seen a segfault or an uncatchable crash in a while. Except it still lacks some fundamental things, such as stream control (though this is a server problem). So if I have to deal with large numbers of records, I have no way of pausing the stream for processing or just iterating with next(). And I also have no way of reading less data, because scans read all of it and queries are both slow and non-scalable.
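To make the stream-control complaint concrete: the record stream just pushes 'data' events at you, and there is nothing like pause() or a pull-based next() to slow it down. A sketch of the problem (made-up names):

    const Aerospike = require('aerospike')

    Aerospike.connect({ hosts: '127.0.0.1:3000' }, (error, client) => {
      if (error) throw error

      const stream = client.scan('prod', 'events').foreach()
      const backlog = []

      stream.on('data', (record) => {
        backlog.push(record)
        // What I want here is stream.pause() until the backlog drains, or an
        // `await stream.next()`. Neither exists, so the backlog grows as fast
        // as the server can read the disk.
      })
      stream.on('end', () => {
        console.log('scan done,', backlog.length, 'records buffered')
        client.close()
      })
    })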

What exactly Aerospike is trying to solve is an enigma to me. A key->value cache? Sure, but I think there are better options. A NoSQL DB? This must be a joke. Applications with high write/read throughput over billions of records? Yes, that is true, but only for key->value mapping. There is nothing else I can do with the data at scale. Then what is Aerospike for? I don't know, but I've been stuck with it for the last few years and am extremely eager to find an alternative.

Originally I believed they would improve drastically, because updates were frequent and there were a lot of promises. But after a while I realized something about those frequent updates. They were just bug fix after bug fix! And not even good bug fixes. Recent releases have at least one bug that is being fixed for the third or fourth time already. Makes one wonder what kind of QA they have there… And since the software is so buggy, everyone needs to do the updates. OK if it's a cluster of 5, right? What if it's 50? And the nodes are large? What then? How long does a rolling upgrade take to complete? This is just beyond stupid.

And what's more stupid is that they know about some of their shortcomings, but do not try to solve them. They just provide workarounds. That would be all well and good, but those workarounds do not solve the problems, they just hide them. They may function on a smaller scale, but they definitely fail on a larger one. My favorite is their take on a URL tracker. Who could actually use that in a system tracking millions of URLs?

Had enough? No? Here are a few more features/bugs:

  • No security in the Community Edition. Make sure that firewall is well configured!
  • No ability to drop a set (delete all of its contents), just really slow deletes through the client. And then the data comes back after a cold restart!
  • No ability to gracefully remove a namespace; it just errors out if one tries a rolling removal from the configuration file.

I could probably dig up a few more of my exquisite experiences with Aerospike, but it's tiring just to recall them. I just wonder why they have so many problems keeping it bug-free and why some highly requested features can't be implemented. My money is on two things. First, they are working at a very low level. I'm no expert, but maybe working with the SSD disk directly (no filesystem) forces design decisions that prevent, say, stream control. Maybe it's simply hard. Hard as threads in C? Who knows. Second, their choice of data distribution algorithm (consistent hashing), which again has its own set of problems, particularly with replication.

But there's no point in speculating. It is what it is, and I'm done with Aerospike. Now, what are the alternatives? For that, my friends, till next time!

