Memory Matters: Benchmarking Caching Servers with Membench

Vladimir Rodionov
Carrot Data Engineering Blog
Sep 27, 2024 · 13 min read
Thank you, DALL-E … but you misspelled Memcarrot :)

I asked ChatGPT to generate a picture for this blog post. Unfortunately, it’s still learning how to spell Memcarrot 😄. On the bright side, Memcached and Redis were spelled correctly! That’s a huge step forward. Hopefully, ChatGPT will learn soon!

Today, we’ll take a deep dive into the memory usage of three caching servers: Memcached, Redis, and Memcarrot. I don’t think I need to explain why this is so important, but I’ll summarize briefly. If Server X uses 30% less RAM than Server Y to store the same amount of data while maintaining an equivalent hit rate, we need 30% fewer servers to support our application. Fewer servers (or cloud instances) translate directly into reduced costs. It’s simple math.

Ironically, until recently there wasn’t a single tool available to help us perform such a benchmark. The reason was simple: Memcached and Redis have similar memory usage, since neither implements server-side compression (they do differ slightly in efficiency, and the differences are well above statistical error). If every caching server has roughly the same memory footprint, why would anyone want to measure and compare it? That has now changed. A new caching technology has emerged, and its name is Memcarrot. Memcarrot features a technology we call SmartReal Compression, which works exceptionally well on data that is naturally compressible, whether human-generated or machine-generated. How well does it work? Let’s find out.

Introducing Membench

Membench is a pioneering benchmark tool designed to evaluate the memory usage and performance of Memcached- and Redis-compatible servers. This client application offers unique insight into how caching servers handle a wide range of data sets, making it an essential tool for developers and system administrators. Key features include:

  • Wide Diversity of Data Sets: Tests with 10 different data sets, ranging from small tweet messages (76 bytes) to large JSON objects (over 3KB), providing a comprehensive assessment of server performance across various scenarios.
  • Memory Usage Measurement: The first tool to measure the memory consumption of caching servers, ensuring that memory efficiency is as closely monitored as pure performance.
  • Performance Metrics: Measures throughput, giving a holistic view of the server’s capabilities.
  • Real-World Relevance: Data sets reflect real-world usage patterns, making the results highly applicable to actual deployment environments.

By combining memory usage metrics with performance measurements, Membench provides a thorough evaluation of caching servers, helping optimize their configuration and usage for diverse applications.

About the datasets

We have compiled 10 different data sets covering structured, semi-structured, and unstructured text objects. The available datasets are:

  • Amazon Product Reviews. Random samples of book reviews. Format: CSV. Average object size — 528 bytes. Example:
10,B00171APVA,A21BT40VZCCYT4,Carol A. Reed,0,0,5,1351209600,Healthy Dog Food,This is a very healthy dog food. Good for their digestion. Also good for small puppies. My dog eats her required amount at every feeding
  • Airbnb. Airbnb property descriptions. Format: CSV. Average object size — 1367 bytes. Example:
12422935,4.442651256490317,Apartment,Private room,"{TV,""Wireless Internet"",Heating,""Smoke detector"",""Carbon monoxide detector"",""First aid kit"",""Fire extinguisher"",Essentials,Hangers,""Laptop friendly workspace""}",2,1.0,Real Bed,strict,True,SF,"Beautiful private room overlooking scenic views in San Francisco's upscale Noe Valley neighborhood. You'll have your own bedroom, queen bed, dresser, wardrobe, smart TV, workstation, desk&chair, WiFi, and kitchen. Fresh towels and linens are provided for your convenience. MUNI bus stop and all tech shuttles are around the corner, J-Church subway is 5-minute walk away. Restaurants, bars, and cafes are around the corner. Ideal location for experiencing SF or commuting to Silicon Valley. We'll provide 100% cotton sheets and fresh towels. You'll have access to the full size bathroom as well as the kitchen. We'll also provide high speed business class WiFi internet, as well as HBO and Net""ix on your private TV. Guests will have access to kitchen and bathroom in addition to their private bedroom. We'll interact with you as much as you'd like to make your stay as comfortable as possible. We are available if you need us but will not disturb you otherwise. One of the sunniest neighborhoods in S",2017-08-27,t,t,100%,2017-06-07,t,2017-09-05,37.7531640472884,-122.4295260773271,Comfort Suite San Francisco,Noe Valley,3,100.0,https://a0.muscache.com/im/pictures/82509143-4b21-44eb-a556-e3c1e0afac60.jpg?aki_policy=small,94131,1.0,1.0
  • Arxiv. A large collection of scientific publication metadata from the arxiv.org site. Format: JSON. Average object size — 1643 bytes. Example:
{"id":"0704.0008","submitter":"Damian Swift","authors":"Damian C. Swift","title":"Numerical solution of shock and ramp compression for general material\n  properties","comments":"Minor corrections","journal-ref":"Journal of Applied Physics, vol 104, 073536 (2008)","doi":"10.1063/1.2975338","report-no":"LA-UR-07-2051, LLNL-JRNL-410358","categories":"cond-mat.mtrl-sci","license":"http://arxiv.org/licenses/nonexclusive-distrib/1.0/","abstract":"  A general formulation was developed to represent material models for\napplications in dynamic loading. Numerical methods were devised to calculate\nresponse to shock and ramp compression, and ramp decompression, generalizing\nprevious solutions for scalar equations of state. The numerical methods were\nfound to be flexible and robust, and matched analytic results to a high\naccuracy. The basic ramp and shock solution methods were coupled to solve for\ncomposite deformation paths, such as shock-induced impacts, and shock\ninteractions with a planar interface between different materials. These\ncalculations capture much of the physics of typical material dynamics\nexperiments, without requiring spatially-resolving simulations. Example\ncalculations were made of loading histories in metals, illustrating the effects\nof plastic work on the temperatures induced in quasi-isentropic and\nshock-release experiments, and the effect of a phase transition.\n","versions":[{"version":"v1","created":"Sat, 31 Mar 2007 04:47:20 GMT"},{"version":"v2","created":"Thu, 10 Apr 2008 08:42:28 GMT"},{"version":"v3","created":"Tue, 1 Jul 2008 18:54:28 GMT"}],"update_date":"2009-02-05","authors_parsed":[["Swift","Damian C.",""]]}
  • DBLP. A citation network dataset, with citation data extracted from DBLP, ACM, MAG (Microsoft Academic Graph), and other sources. Format: JSON. Average object size — 396 bytes. Example:
{ "_id" : { "$oid" : "595c2c48a7986c0872f8ba53" }, "mdate" : "2017-05-25", "author" : [ "Gabriele Moser", "Michaela De Martino", "Sebastiano B. Serpico" ], "ee" : "https://doi.org/10.1109/IGARSS.2013.6723567", "booktitle" : "IGARSS", "title" : "A multiscale contextual approach to change detection in multisensor VHR remote sensing images.", "pages" : "3435-3438", "url" : "db/conf/igarss/igarss2013.html#MoserMS13", "year" : "2013", "type" : "inproceedings", "_key" : "conf::igarss::MoserMS13", "crossref" : [ "conf::igarss::2013" ] }
  • GitHub. Public GitHub user profiles. Format: JSON. Average object size — 821 bytes. Example:
{"login":"justinkadima","id":5258,"avatar_url":"https://avatars.githubusercontent.com/u/5258?v=3","gravatar_id":"","url":"https://api.github.com/users/justinkadima","html_url":"https://github.com/justinkadima","followers_url":"https://api.github.com/users/justinkadima/followers","following_url":"https://api.github.com/users/justinkadima/following{/other_user}","gists_url":"https://api.github.com/users/justinkadima/gists{/gist_id}","starred_url":"https://api.github.com/users/justinkadima/starred{/owner}{/repo}","subscriptions_url":"https://api.github.com/users/justinkadima/subscriptions","organizations_url":"https://api.github.com/users/justinkadima/orgs","repos_url":"https://api.github.com/users/justinkadima/repos","events_url":"https://api.github.com/users/justinkadima/events{/privacy}","received_events_url":"https://api.github.com/users/justinkadima/received_events","type":"User","site_admin":false}
  • Ohio. The Ohio state education department employee salary database (public). Format: CSV. Average object size — 102 bytes. Example:
"Don Potter","University of Akron","Assistant Lecturer","Social Work",2472.0,2019
  • Reddit. Subreddit metadata records. Format: JSON. Average object size — 3044 bytes. Example:
{"_meta":{"earliest_comment_at":1134365188,"earliest_post_at":1119552233,"num_comments":14230966,"num_comments_updated_at":1707541748,"num_posts":9120981,"num_posts_updated_at":1707519565},"accept_followers":true,"accounts_active":null,"accounts_active_is_fuzzed":false,"active_user_count":null,"advertiser_category":"","all_original_content":false,"allow_discovery":true,"allow_galleries":false,"allow_images":true,"allow_polls":true,"allow_prediction_contributors":false,"allow_predictions":false,"allow_predictions_tournament":false,"allow_talks":false,"allow_videogifs":true,"allow_videos":true,"allowed_media_in_comments":[],"banner_background_color":"#0dd3bb","banner_background_image":"https://styles.redditmedia.com/t5_6/styles/bannerBackgroundImage_yddlxq1m39r21.jpg?width=4000&s=f91d1be5c5a1ea6e492818ecb8a846ea4978563c","banner_img":"","banner_size":null,"can_assign_link_flair":false,"can_assign_user_flair":false,"collapse_deleted_comments":false,"comment_contribution_settings":{"allowed_media_types":null},"comment_score_hide_mins":0,"community_icon":"https://styles.redditmedia.com/t5_6/styles/communityIcon_a8uzjit9bwr21.png?width=256&s=d28ea66f16da5a6c2ccae0d069cc4d42322d69a9","community_reviewed":true,"created":1137537905,"created_utc":1137537905,"description":"To report a site-wide rule violation to the Reddit Admins, please use our [report forms](https://www.reddit.com/report) or message [/r/reddit.com modmail](https://www.reddit.com/message/compose?to=%2Fr%2Freddit.com).\n\nThis subreddit is [archived and no longer accepting submissions.](https://redditblog.com/2011/10/18/saying-goodbye-to-an-old-friend-and-revising-the-default-subreddits/)","disable_contributor_requests":false,"display_name":"reddit.com","display_name_prefixed":"r/reddit.com","emojis_custom_size":null,"emojis_enabled":false,"free_form_reports":true,"has_menu_widget":false,"header_img":null,"header_size":null,"header_title":"","hide_ads":false,"icon_img":"","icon_size":null,"id":"6","is_crosspostable_subreddit":true,"is_enrolled_in_new_modmail":null,"key_color":"","lang":"en","link_flair_enabled":false,"link_flair_position":"","mobile_banner_image":"","name":"t5_6","notification_level":null,"original_content_tag_enabled":false,"over18":false,"prediction_leaderboard_entry_type":1,"primary_color":"#0079d3","public_description":"The original subreddit, now archived.","public_traffic":false,"quarantine":false,"restrict_commenting":false,"restrict_posting":true,"retrieved_on":1707425156,"should_archive_posts":true,"should_show_media_in_comments_setting":true,"show_media":false,"show_media_preview":true,"spoilers_enabled":true,"submission_type":"any","submit_link_label":"","submit_text":"","submit_text_html":null,"submit_text_label":"","subreddit_type":"archived","subscribers":987905,"suggested_comment_sort":null,"title":"reddit.com","url":"/r/reddit.com/","user_can_flair_in_sr":null,"user_flair_background_color":null,"user_flair_css_class":null,"user_flair_enabled_in_sr":false,"user_flair_position":"right","user_flair_richtext":[],"user_flair_template_id":null,"user_flair_text":null,"user_flair_text_color":null,"user_flair_type":"text","user_has_favorited":false,"user_is_banned":false,"user_is_contributor":false,"user_is_moderator":false,"user_is_muted":false,"user_is_subscriber":false,"user_sr_flair_enabled":null,"user_sr_theme_enabled":true,"videostream_links_count":0,"whitelist_status":"all_ads","wiki_enabled":true,"wls":6}
  • Spotify. Spotify top 40 charts by Country, Week. Format: CSV. Average object size — 904 bytes. Example:
0,Chantaje (feat. Maluma),1,2017-01-01,Shakira,https://open.spotify.com/track/6mICuAdrwEjh6Y6lroV2Kg,Argentina,top200,SAME_POSITION,253019.0,6mICuAdrwEjh6Y6lroV2Kg,El Dorado,78.0,195840.0,False,2017-05-26,"['AR', 'AU', 'AT', 'BE', 'BO', 'BR', 'BG', 'CA', 'CL', 'CO', 'CR', 'CY', 'CZ', 'DK', 'DO', 'DE', 'EC', 'EE', 'SV', 'FI', 'FR', 'GR', 'GT', 'HN', 'HK', 'HU', 'IS', 'IE', 'IT', 'LV', 'LT', 'LU', 'MY', 'MT', 'MX', 'NL', 'NZ', 'NI', 'NO', 'PA', 'PY', 'PE', 'PH', 'PL', 'PT', 'SG', 'SK', 'ES', 'SE', 'CH', 'TW', 'TR', 'UY', 'US', 'GB', 'AD', 'LI', 'MC', 'ID', 'JP', 'TH', 'VN', 'RO', 'IL', 'ZA', 'SA', 'AE', 'BH', 'QA', 'OM', 'KW', 'EG', 'MA', 'DZ', 'TN', 'LB', 'JO', 'PS', 'IN', 'BY', 'KZ', 'MD', 'UA', 'AL', 'BA', 'HR', 'ME', 'MK', 'RS', 'SI', 'KR', 'BD', 'PK', 'LK', 'GH', 'KE', 'NG', 'TZ', 'UG', 'AG', 'AM', 'BS', 'BB', 'BZ', 'BT', 'BW', 'BF', 'CV', 'CW', 'DM', 'FJ', 'GM', 'GE', 'GD', 'GW', 'GY', 'HT', 'JM', 'KI', 'LS', 'LR', 'MW', 'MV', 'ML', 'MH', 'FM', 'NA', 'NR', 'NE', 'PW', 'PG', 'PR', 'WS', 'SM', 'ST', 'SN', 'SC', 'SL', 'SB', 'KN', 'LC', 'VC', 'SR', 'TL', 'TO', 'TT', 'TV', 'VU', 'AZ', 'BN', 'BI', 'KH', 'CM', 'TD', 'KM', 'GQ', 'SZ', 'GA', 'GN', 'KG', 'LA', 'MO', 'MR', 'MN', 'NP', 'RW', 'TG', 'UZ', 'ZW', 'BJ', 'MG', 'MU', 'MZ', 'AO', 'CI', 'DJ', 'ZM', 'CD', 'CG', 'IQ', 'LY', 'TJ', 'VE', 'ET', 'XK']",0.852,0.773,8.0,-2.921,0.0,0.0776,0.187,3.05e-05,0.159,0.907,102.034,4.0
  • Twitter. Tweets with a large amount of meta information. Format: JSON. Average object size — 2581 bytes. Example:
{ "_id" : { "$oid" : "59435062a7986c085b072088" }, "text" : "@shaqdarcy hahaha.. soya talaga bro.. :)", "in_reply_to_user_id_str" : "238156878", "id_str" : "133582462910611456", "contributors" : null, "in_reply_to_user_id" : 238156878, "created_at" : "Mon Nov 07 16:31:55 +0000 2011", "in_reply_to_status_id" : { "$numberLong" : "133582357465792513" }, "entities" : { "hashtags" : [  ], "user_mentions" : [ { "screen_name" : "shaqdarcy", "indices" : [ 0, 10 ], "id_str" : "238156878", "name" : "Darcy Nicolas", "id" : 238156878 } ], "urls" : [  ] }, "geo" : null, "source" : "web", "place" : null, "favorited" : false, "truncated" : false, "coordinates" : null, "retweet_count" : 0, "in_reply_to_screen_name" : "shaqdarcy", "user" : { "profile_use_background_image" : true, "favourites_count" : 13, "screen_name" : "JaybeatBolido", "id_str" : "255006912", "default_profile_image" : false, "geo_enabled" : false, "profile_text_color" : "333333", "statuses_count" : 467, "profile_background_image_url" : "http://a0.twimg.com/images/themes/theme1/bg.png", "created_at" : "Sun Feb 20 13:31:09 +0000 2011", "friends_count" : 92, "profile_link_color" : "0084B4", "description" : "I want to be a JEDI.", "follow_request_sent" : null, "lang" : "en", "profile_image_url_https" : "https://si0.twimg.com/profile_images/1614465172/jp_normal.jpg", "profile_background_color" : "C0DEED", "url" : null, "contributors_enabled" : false, "profile_background_tile" : false, "following" : null, "profile_sidebar_fill_color" : "DDEEF6", "protected" : false, "show_all_inline_media" : false, "listed_count" : 1, "location" : "Phillipines-Manila", "name" : "Japhette Pulido", "is_translator" : false, "default_profile" : true, "notifications" : null, "profile_sidebar_border_color" : "C0DEED", "id" : 255006912, "verified" : false, "profile_background_image_url_https" : "https://si0.twimg.com/images/themes/theme1/bg.png", "time_zone" : null, "utc_offset" : null, "followers_count" : 42, "profile_image_url" : "http://a1.twimg.com/profile_images/1614465172/jp_normal.jpg" }, "retweeted" : false, "id" : { "$numberLong" : "133582462910611456" }, "in_reply_to_status_id_str" : "133582357465792513" }
  • Twitter sentiment. The Twitter sentiment data set (we extract only the tweet texts). Format: JSON. Average object (tweet) size — 76 bytes. Example:
"@alielayus I want to go to promote GEAR AND GROOVE but unfornately no ride there  I may b going to the one in Anaheim in May though"

Test setup

Hardware: Mac Studio M1, 64GB RAM

OS: macOS Sonoma 14.6

Servers: Memcached 1.6.29 vs Redis 7.2.5 vs Memcarrot 0.15

If you’re interested in reproducing these test results, please visit the Membench GitHub repository, where you’ll find detailed instructions and all the necessary resources. The repository contains information on how to set up the environment, configure the servers, and run the benchmarks for yourself.
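
If you’d like a head start, cloning the repository is all it takes to begin (I won’t duplicate the build and run steps here, since they are documented in the README):

bash$ git clone https://github.com/carrotdata/membench.git
bash$ cd membench
bash$ # follow README.md for build steps, dataset downloads, and server configs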

Below, you’ll find a visual representation of the results from running all 10 datasets across the three competing caching servers. This single, comprehensive image presents the performance comparison, making it easy to see how each server handles memory usage with different data types.

Note: For Memcached and Redis, we ran the benchmark with client-side compression enabled (codec = zlib, level = 3, the default). Memcarrot has compression enabled by default.

Picture 1. Memory usage (GB)

This is truly a case where one picture is worth a thousand words 😄. Across all 10 datasets, Memcarrot proves to be 2–5.4x more memory efficient than Memcached (zlib), and 2.2–5.9x more efficient than Redis (zlib). This substantial memory efficiency doesn’t just look good on paper — it directly translates to reduced costs for operating your caching infrastructure. With fewer resources needed to achieve the same (or better) results, Memcarrot can significantly cut down on the number of servers or cloud instances you need to run. In short, Memcarrot is here to save you money — while delivering top-tier caching performance.

If you’re curious about server performance in terms of records per second, don’t worry — we’ve got that covered too. Our benchmark includes detailed performance data, so you can see exactly how each server stacks up when it comes to throughput and speed.

Picture 2. Load throughput in Kops (thousands of operations per second)

Please note that these results compare Memcarrot with compression enabled to Memcached without client-side compression. You may also notice that there is no performance data for Redis. That’s because Redis is somewhat slower than both Memcached and Memcarrot when request pipelining is not used, and Membench does not yet support operation pipelining, which can significantly boost Redis performance in real-world scenarios.

So while Redis might show slower performance in our current tests, keep in mind that with proper pipelining, the results could be quite different. We’ll update the benchmark once Membench is optimized for Redis.
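
To get a feel for how much pipelining matters, you can use redis-benchmark, which ships with Redis and exposes the pipeline depth via the -P flag (numbers will vary by machine):

bash$ redis-benchmark -t set -n 100000 -P 1
bash$ # now with 16 requests in flight per round trip
bash$ redis-benchmark -t set -n 100000 -P 16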

So, what is the magic?

You may be wondering: “How does it work?” Let me explain. Memcarrot applies compression server-side, which is far more efficient than client-side compression (where data is compressed by the cache client). Here’s why:

1. Content-Aware Compression

Client-side compression has one significant limitation: the client application lacks the full context of the data. It compresses data objects independently, unaware of any larger patterns or similarities across objects. This limitation is evident in our “twitter_sentiment” benchmark, where the average size of a data object (a tweet) is only 76 bytes.

For example, take this tweet:

I want to go to promote GEAR AND GROOVE but unfortunately no ride there I may b going to the one in Anaheim in May though.

As a standalone sentence, this text cannot be compressed effectively by any available compression algorithm — it’s simply too small. But imagine if this sentence were part of a large novel. Large texts generally achieve a good compression ratio, often between 3:1 and 4:1.

For Memcarrot, this sentence is part of a larger “novel.” Memcarrot continuously analyzes incoming data objects and builds an optimized compression dictionary based on all the data it has seen previously. This dictionary is then used to compress every new object. As new data patterns emerge, the dictionary is dynamically updated to stay efficient. This context-aware approach enables Memcarrot to compress even small objects by leveraging knowledge of the broader dataset.

2. Block-Based Compression

One fundamental principle of data compression is:
“The larger the data, the better the compression ratio.”

Compression algorithms perform better with more data to analyze, as they can identify repeatable patterns and construct efficient dictionaries where sequences of characters are replaced by shorter codewords. This is why compressing a single sentence yields poor results, but compressing an entire book is highly effective.

Memcarrot groups all cached objects into contiguous memory blocks and applies compression at the block level. Even if a single object is small, it becomes part of a block, which is typically 4–8KB in size (configurable). This block-based compression gives an additional 30–50% boost in compression efficiency for small-to-medium-sized objects, as it allows the algorithm to analyze larger chunks of data at once.
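
You can reproduce the block effect with any general-purpose compressor. A rough sketch using zstd, assuming tweets.txt holds one tweet per line (exact numbers will differ):

bash$ # compress 50 tweets one at a time and sum the compressed sizes
bash$ head -50 tweets.txt | while read -r t; do printf '%s\n' "$t" | zstd -q -19 | wc -c; done | awk '{ s += $1 } END { print s " bytes total" }'
bash$ # compress the same 50 tweets as one contiguous block
bash$ head -50 tweets.txt | zstd -q -19 | wc -c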

SmartReal vs Standard compression

Our block-based, content-aware, continuously adapting compression algorithm has its own name — SmartReal Compression. Now, let’s measure the effects of content awareness and block size in our algorithm and compare it to a standard approach using the zstd compression library. We’ll begin with a single tweet, our favorite example.

bash$ echo "I want to go to promote GEAR AND GROOVE but unfortunately no ride there I may b going to the one in Anaheim in May though." >> tweet.txt
bash$ zstd -19 tweet.txt
tweet.txt : 86.99% ( 123 B => 107 B, tweet.txt.zst)

Using a single tweet, without context and without grouping into blocks (similar to client-side compression), we can barely achieve a 1.15x compression ratio (123/107).

Next, we trained the compression codec on the Twitter sentiment dataset and compressed the same tweet again, this time using the trained dictionary (dict.data, 1MB in size).
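
Dictionary training is built into zstd. Here is a sketch of how such a dictionary could be produced, assuming the dataset has been split into one tweet per file under samples/ (the trainer expects individual sample files):

bash$ zstd --train samples/* -o dict.data --maxdict=1048576

With the dictionary in hand, we compress the tweet again: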

bash$ zstd -D dict.data -19 tweet.txt
tweet.txt : 52.84% ( 123 B => 65B, tweet.txt.zst)

Nice! Now we’ve achieved almost a 1.9x compression ratio (123/65).

This is the result of applying knowledge from the entire dataset to a single object — the power of content-aware compression. Yes, this is a single tweet, and we compressed it at almost a 2x ratio.

But what happens if we group tweets into blocks of approximately 4KB? To test this, we created a 3852-byte file by randomly selecting 50 tweets from the Twitter sentiment dataset (available from our Membench repository).
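
One way to build such a file, again assuming tweets.txt holds one tweet per line (shuf is part of GNU coreutils):

bash$ shuf -n 50 tweets.txt > tweet4kb.txt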

bash$ zstd -D dict.data -19 tweet4kb.txt
tweet4kb.txt : 38.44% ( 3852B => 1481B, tweet4kb.txt.zst)

The compression ratio increased to almost 2.6x.

This is the result of grouping objects into a contiguous block of memory, demonstrating the power of block-based compression.

Why Memcarrot Outperforms Client-Side Compression

By combining content-aware and block-based compression, Memcarrot overcomes the inherent limitations of client-side approaches. It ensures that even small data objects like tweets achieve impressive compression ratios by leveraging the broader context of all cached data and processing objects in larger blocks. This makes Memcarrot the ideal solution for scenarios where efficient storage and bandwidth utilization are critical.

The Hidden Cost of Redis: The Memory Tax You Pay

When choosing a technology for caching data, people often opt for Redis over Memcached, not because it’s cooler 😄 (although that might be what some think), but because it offers features that Memcached lacks — most notably, data persistence and replication support. Of course, I haven’t conducted a large-scale customer poll, so this is just my observation, but the fact remains: the majority of users tend to choose Redis over Memcached.

What many don’t realize, however, is the memory tax they pay to run Redis. It requires more memory to cache the same amount of data compared to Memcached. From our benchmark results, we can clearly see that Redis uses 5–25% more memory than Memcached across all 10 datasets.

But that’s not the whole story. Redis users also need to account for at least a 25% increase in memory usage during data snapshots. This is due to how the Unix fork() system call works. When Redis performs a data snapshot to disk, it forks the current process to create an atomic memory snapshot. If the parent process’s memory is modified while the snapshot is being taken, the operating system uses a copy-on-write approach to preserve the original version of the memory pages.

Memory modifications can occur even during read operations, because Redis must update the last accessed time for objects to support LRU (Least Recently Used) eviction. In extreme cases, the parent process’s entire memory can be modified during the snapshot, leading to a 100% memory overhead — meaning Redis could require double the amount of memory to complete the snapshot.
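
You don’t have to take this on faith: Redis reports the copy-on-write cost of its most recent snapshot, so you can measure the overhead on your own workload:

bash$ redis-cli BGSAVE
bash$ # bytes duplicated by copy-on-write during the last RDB save
bash$ redis-cli INFO persistence | grep rdb_last_cow_size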

Therefore, the total Redis memory tax can range from 31% (1.05 * 1.25) to as much as 56% (1.25 * 1.25), and even higher in the extreme copy-on-write scenario described above. That’s a substantial increase, and it’s something many users might not fully consider when opting for Redis over Memcached.

Oh, and by the way, Memcarrot’s data snapshots are fork-less, require no additional memory, and are significantly faster. But that’s a topic for a future blog post — stay tuned!

References:

  1. Memcarrot: https://github.com/carrotdata/memcarrot
  2. Membench: https://github.com/carrotdata/membench
  3. Carrot Data: https://trycarrots.io
  4. Redis: https://redis.io
  5. Memcached: https://memcached.org
