Ruby is Still a Diamond
Matz is nice and so we are nice.
— Ruby Community Motto
It was December 24th of 2020 when Matz and his team released Ruby 3.0, a gift to Rubyists everywhere, and a bright day in an otherwise dark year. Ruby 3.0 was the first major version release for Ruby since 2.0 was released in 2013, and it promised to take a good hard look at some of the language’s most fundamental flaws. Was there ever such a Christmas?
Since 2.0’s release, the industry has gone through a number of priority-upending changes:
- Node.js exploded onto the scene as a hugely popular full-stack JavaScript runtime, far surpassing Ruby’s performance benchmarks
- Machine learning took a major foothold in the market, favoring parallel or otherwise asynchronous languages capable of handling concurrent workloads.
- A huge surge in cloud infrastructure, where scalability is king.
- Languages like Go and Elixir, established after the need for parallelization became clear, were designed with asynchronous communication in mind.
I have seen a staggering number of “Ruby is Dead” missives in the last few years, and a decline, or at least an often-discussed decline, in the language’s popularity and ranking. But what makes Ruby so much worse than other languages?
Retention Science is, certainly, not the only company with a full-stack Ruby on Rails application at its core. We leverage RoR in our primary Cortex service, as well as a variety of integrations and microservices. Despite the tradeoffs, many companies are choosing to write new code in RoR for its productivity, organizational, and developer-happiness benefits.
The truth is, we have heard the same song every year since the release of Ruby 1.0. It is a language that, by its nature, prioritizes quick prototyping, the flexibility to experiment, and logical clarity over speed and resourcefulness. These are some among the many reasons that Ruby is said to focus on developer happiness. Often, developer happiness means better, more secure software; long-term, seasoned contributors investing in support; healthy projects and symbiotic relationships with customers.
There are some straightforward answers in addition to the above ways that technology has “moved on”: performance, data-science incompatibility, and simple popularity and adoption. Even world-famous services like AWS Lambda did not officially support Ruby for quite some time.
However, I’ve also observed a lack of exposure to and enthusiasm for the solutions 3.0 offers to address the more commonly criticized pitfalls of Ruby as a language. A major version is no small thing; Python’s shift from version 2 to 3 took over a decade, and many of those improvements enabled the capabilities we favor Python for today.
So let’s talk contemporary Ruby.
Performance is a complaint often levied against Ruby, and with good reason. Ruby 2.x sits in the slower tier of mainstream languages, especially compared to low-level languages, and most other popular stacks easily surpass Ruby on Rails in raw benchmarks. As businesses scale up, the priority of top-shelf performance increases.
There are plenty of ways to scale intelligently with Ruby by prioritizing tradeoffs of languages. Storing business logic in a Ruby microservice where it is easily understood and communicated, exposed via API, while something like Node.js manages server connections between clients and the API, is entirely possible. These articles shaming Ruby often get bogged down by their all-or-nothing approach.
That being said, the Ruby team has done great work responding to critiques of Ruby. If you’re more interested in new functionality, feel free to skip down to Parallelism & Concurrency.
The team behind Ruby has been working to improve memory usage and execution speed. Matz’s hope was that Ruby 3.0 would be 3x faster than Ruby 2.0. Whether or not that was accomplished is a little hard to measure objectively, but many cumulative improvements were made throughout the various minor versions of 2.x. Comparing 3.0 starkly to 2.0, Ruby is, yes, nearly 3x faster. A lot of that is thanks to the Just-In-Time (JIT) compiler and its refined relative, the MJIT compiler.
One of the primary elements of Ruby’s implementation that causes slowdown is the way in which it handles garbage: failing to clean up objects that are no longer needed is a major resource hog. Ruby 3 comes with an enhanced garbage collector, as well as a new buffer-like memory API reminiscent of Python’s buffer protocol.
Automatic Garbage Compaction
Garbage compaction is a relatively new arrival in Ruby, introduced for the first time in 2.7, but as a manual process invoked by the developer. In 3.0, it can happen automatically, alongside the rest of the silent garbage-collection behavior: objects on the heap are “compacted” when it is deemed safe to do so.
During compaction, changes in 3.0 allow groups of live objects to be stored together in a single region of memory in order to “defragment” the heap: think placing all your trash in one bin as it accumulates, so you can bring it all out to the curb in one trip. This allows better utilization of the available memory, and less overhead in the cleanup stages.
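As a minimal sketch of the API surface (assuming Ruby 2.7+ for manual compaction; the automatic mode shipped as an opt-in setting in 3.0):

```ruby
# Manual compaction, available since Ruby 2.7: defragment the heap on demand.
GC.compact

# Ruby 3.0+ can also compact automatically during major GC cycles; the
# setting is opt-in, so older call sites keep their existing behavior.
GC.auto_compact = true if GC.respond_to?(:auto_compact=)
puts "auto_compact enabled: #{GC.auto_compact}" if GC.respond_to?(:auto_compact)
```

In practice you would enable automatic compaction once at boot (or via your framework’s initializer) rather than calling `GC.compact` by hand.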
- JIT compiler to an MJIT compiler, by Sudeep Tarlekar
- JIT vs MJIT, by Noah Gibbs
- Ruby Garbage Collection: More Exciting than it Sounds, by Dan Moore
- Ruby Garbage Collection Deep Dive: Compaction, by Jemma Issroff
As your business begins to scale, the performance differential between any two languages matters far less than the architecture of the overarching multiservice system and the handful of tradeoffs that always must be made; it should not be enough to make or break a project.
Languages are a tool we use to speak to computers. There are horrible ways to write any language; likewise, every language has at least one project that is sheer genius. The language itself doesn’t produce bad code, nor does it prevent you from writing great code. Do not be discouraged by the constant discussions about speed; the differential between benchmarks (which should already be taken with a grain of salt) is, quite honestly, a meaningless number by the time we get large enough to be concerned with major scaling concepts like database sharding, load balancers, rate limiters, and data read-consistency tradeoffs.
The truth about performance optimizations in the software industry: the underlying system supporting the (arbitrary) choice of language is typically desperately in need of a spokesperson, and is likely holding the application back from more impressive and successful improvements if no one vouches for its importance. If the drive is always toward releasing new features, these “known” flaws in the fundamental design can be left for years, and there’s not much Ruby or any other language can do about that. Architecture needs a caretaker who is passionate about bringing systems up to speed with modern technologies at regular intervals. To give the code the room and the resources to grow, there has to be a controlled burn of the structural elements holding it back.
I believe that as the caretakers of these services we can best create that space for new growth by starting at the bottom of the stack. What is your oldest tech-debt ticket? What is the most fundamental flaw of the web-server config you’re using? Do yourself a solid: prepare for the future in the spirit of Matz. Try to get a pain point from your team’s past prioritized. If your team has a major and frankly emotional dependency on an ancient and nameless web server implemented 100 years prior to the creation of Puma, consider not waiting until you no longer have a choice to fix it.
The only thing worse than not performing tech debt maintenance regularly is waiting until you have to perform tech debt maintenance in order to do anything else on your list. Whenever I’ve been a part of a team performing a major update, by choice and on time, to a core or fundamental system, it almost always acts as an explosive catalyst, facilitating major changes and improvements we did not even consider because they were not within the realm of possibility. As I write this, I know Matz is somewhere right now, in front of a laptop and making huge changes to his timeless language, upending his multi-decade architectural philosophy, focused on bringing Ruby into a new generation, and I love him for that.
Ultimately it’s the bureaucratic tradeoffs we make every day, if and when we have to prioritize feature development over tech debt, that hold us back from seeing what the application could become. The language you chose is not tech debt; the way you are using it, and how the system is consuming it, might be. Those few-second differences that seem so large at small benchmark scales immediately come out in the wash as the focus shifts to ensuring that high-volume distributed systems are supported by the logical structure of the services around them.
I’d be remiss if I didn’t mention Satwik’s narrative in his wonderful Ruby vs. Python article. The performance gap is simply not significant enough to draw any formal conclusion, which Satwik mentions before writing four simple bullets that encapsulate this often-discussed but ultimately unimportant concern:
There are certain cases where one language shines over the other [in the benchmarks], but performance alone doesn’t seem like a good reason to pick one of these languages over the other, because:
1. Developers matter more: The per hour CPU costs in the cloud are cheaper than per hour developer time.
2. In most business cases solving the problem first (getting product-market-fit) is more important than focusing on performance.
3. For large scale web applications, performance is more of a design-architecture game than of picking one language among the two.
4. If language-performance is really what you want, then there are other low-level languages (probably the compiled ones) which can do much better.
— Satwik Kansal, Python vs. Ruby
Parallelism & Concurrency
We hear these terms thrown around a lot, but they have very specific meanings in the field:
Parallelism is the task of running multiple computations on two different threads simultaneously: when one problem is solved by multiple processors.
Concurrency is the task of managing multiple computations at once: when two or more problems are solved by a single processor.
A vast majority of APIs have some level of concurrency, since processing many requests per second is typically a necessity of the API, even if they do not resolve or complete immediately. Parallelism is one step beyond concurrency in the sense that the system supports actually executing two distinct flows simultaneously.
The illustration below shows the distinction between concurrent non-parallel execution and concurrent parallel execution. Note that the top illustration is essentially what multi-threaded Ruby looks like; the threads never run simultaneously, even though they start and end concurrently.
Both concurrency and parallelism work to make things faster, or to increase throughput.
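To make the distinction concrete, here is a minimal sketch (the `fake_io` helper is hypothetical) showing that Ruby threads are concurrent but not parallel: the interpreter releases its lock during blocking waits, so I/O-bound work overlaps and finishes sooner even though only one thread runs Ruby code at a time.

```ruby
require "benchmark"

# Stand-in for a blocking I/O call (e.g., a network request). MRI releases
# the interpreter lock while a thread sleeps or waits on I/O.
def fake_io
  sleep 0.2
end

serial = Benchmark.realtime { 4.times { fake_io } }

concurrent = Benchmark.realtime do
  4.times.map { Thread.new { fake_io } }.each(&:join)
end

# The concurrent version takes roughly one sleep's worth of wall-clock time,
# not four: concurrency without parallelism.
puts format("serial: %.2fs, concurrent: %.2fs", serial, concurrent)
```

CPU-bound work would see no such speedup under MRI, which is exactly the limitation the next section is about.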
Explanations & Thoughts: The Global Interpreter Lock (GIL)
“I regret adding Threads.”
— Matz
In order to avoid race conditions, the Global Interpreter Lock was implemented. You may also hear it called the Global VM Lock (GVL).
In Ruby MRI, threads (i.e., independent execution paths of statements) are concurrent but not parallel. As you execute two threads, they are essentially “fighting” to execute themselves, which can lead to race conditions: scenarios where the result depends on the order in which the threads are executed.
The GIL prevents any two threads from executing simultaneously, functioning as an interpreter-level mutex lock that blocks operations for all but one thread, effectively preventing true parallelism from ever being achieved under the current architecture. It is not perfect (race conditions can still occur), though it occasionally acts as a last line of defense against poor multithreaded development practices. Thread-safe development practices are still absolutely a necessity when using Ruby threads, and you should not rely on the GIL for thread safety.
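A minimal sketch of why explicit synchronization still matters: `counter += 1` compiles to several VM instructions (read, add, write), and a thread switch can land between them, losing increments. Wrapping the update in a `Mutex` makes it atomic:

```ruby
counter = 0
lock = Mutex.new

threads = 4.times.map do
  Thread.new do
    100_000.times do
      # Without the mutex, another thread could interleave between the
      # read and the write of `counter`, silently losing increments.
      lock.synchronize { counter += 1 }
    end
  end
end

threads.each(&:join)
p counter #=> 400000
```

Drop the `lock.synchronize` and the final count may come up short on some interpreters; the GIL does not save you here.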
The best approach is to use an existing and highly endorsed library via the incredible Ruby community. One of my favorite “Laws of Software Development” is:
“Any custom developed system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of the industry standard you refused to adopt.”
— Universal NIH-Rule, 10 Software Engineering Laws Everybody Loves to Ignore
To support multithreading, many of us reach for a well-received project like Sidekiq (at Retention Science, it’s even better if it’s a solution already used within the company, so that the knowledge we gain from utilizing it can be grown and shared among peers). It is typical to place Sidekiq on a server external to the single-threaded web-app code, giving it its own set of CPU resources to manage; separation of concerns is key when contributing to both single- and multi-threaded applications. Work we’ve identified as requiring or benefiting from a multithreaded approach is then moved into a Sidekiq job to be kicked off as needed. But until Ruby 3.0, there was absolutely no way to circumvent the requirements of the GIL. Some would say Ruby’s potential for supporting parallelism was kneecapped the second it was born. But not Matz.
Edited: I previously had a line in here saying that we “don’t hear about” Python’s GIL, to which I received a number of messages from Python developers who assure me that it is all they talk about. Sorry Python!
Fibers are the response to the complexity introduced by threads when it comes to concurrency. Fibers are “thread-likes”: essentially lightweight workers within the web server. They offer finer-grained control over asynchronicity, context switching, and I/O handling, with a new scheduler interface capable of handling non-blocking operations dynamically. Fibers are the next iteration of concurrency in Ruby, and are distinct in that they emulate sequential execution, with less context-switching overhead and more predictable behavior.
```ruby
puts "1: Start program."

f = Fiber.new do
  puts "3: Entered fiber."
  Fiber.yield # pause here until resumed again
  puts "5: Resumed fiber."
end

puts "2: Resume fiber first time."
f.resume
puts "4: Resume fiber second time."
f.resume
puts "6: Finished."
```
Fibers, at Ruby GitHub
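That cooperative pausing also makes fibers tidy lazy generators: each `resume` runs the body until the next `Fiber.yield`, and local state simply survives in between. A small sketch:

```ruby
# A Fiber-based Fibonacci generator: state lives inside the fiber between
# resumes, with no explicit locking or scheduling required.
fib = Fiber.new do
  a, b = 0, 1
  loop do
    Fiber.yield a
    a, b = b, a + b
  end
end

p 6.times.map { fib.resume } #=> [0, 1, 1, 2, 3, 5]
```

The fiber never runs unless asked to, which is the “emulate sequential execution” property in miniature.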
Ractors are the response to the complexity introduced by the GIL when it comes to parallelism. They are designed to provide parallel execution without GIL restrictions.
```ruby
# Math.sqrt(number) in ractor1, ractor2 run in parallel
ractor1, ractor2 = *(1..2).map do
  Ractor.new do
    number = Ractor.recv
    Math.sqrt(number)
  end
end

# send parameters
ractor1.send 3**71
ractor2.send 4**51

p ractor1.take #=> 8.665717809264115e+16
p ractor2.take #=> 2.251799813685248e+15
```
Ractors vs. Threads
While the GIL is not removed in any way in Ruby 3.0, ractors work around it to offer parallelism.
- Threads share everything, Ractors only share some things.
Most objects instantiated in a ractor context are not shared across ractors, preventing race conditions.
- Ractors have their own global lock.
Each ractor has 1 (or more) threads. Threads in a ractor share a ractor-wide global lock so that they cannot run in parallel, but threads in different ractors can run in parallel.
- Ractors can communicate with each other without violating GIL constraints.
Ractors can communicate with each other using a queue-based system (push/pull message passing), so ractors (and the threads within) are capable of “waiting” on each other.
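A minimal sketch of that push/pull message passing (assuming Ruby 3.0+, which still prints an experimental warning for Ractors):

```ruby
# A ractor acting as a worker: pull messages in, push results out.
doubler = Ractor.new do
  loop do
    msg = Ractor.receive  # blocks until a message is sent to this ractor
    Ractor.yield msg * 2  # blocks until another ractor takes the result
  end
end

doubler.send 21
p doubler.take #=> 42
```

`send`/`receive` form the push side of the queue and `yield`/`take` the pull side, which is how ractors “wait” on each other without sharing mutable state.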
Ractors are an intelligent step in the right direction. They take the GIL concept and contextualize it to the scope of the ractors, allowing further flexibility when introducing parallel processes within a web server, or parallel threads across or within ractors.
This ability to contextualize the GIL is revolutionary for Ruby, as the GIL has long been one of the core restraints on the language’s abilities. Ractors reduce the obligation to reach for background-job infrastructure before your CPU is actually saturated.
Ractors, at Ruby GitHub
Ruby 3.0 Will Change the Way You Think of Ruby
This article does not touch on many other offerings from the Ruby 3.0 team, particularly RBS and TypeProf. Once no longer considered experimental, these tools will effectively bring the TypeScript approach to Ruby, enabling developer type-awareness and static analysis. There are also a handful of syntactic improvements, like pattern matching, available in the newest iteration.
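For a taste of that syntax, here is a small pattern-matching sketch (the hash shape is hypothetical), stable as of 3.0:

```ruby
config = { db: { host: "localhost", port: 5432 } }

# `case/in` destructures and type-checks in one step: the pattern both
# asserts the shape of the hash and binds `host` and `port`.
result =
  case config
  in { db: { host: String => host, port: Integer => port } }
    "#{host}:#{port}"
  end

p result #=> "localhost:5432"
```

A non-matching shape would raise `NoMatchingPatternError`, which makes malformed input fail loudly instead of propagating `nil`s.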
For this article, I wanted to focus on the two most criticized elements of Ruby: its ability to handle processes simultaneously, and its overall performance, in order to illustrate just how much Ruby 3.0 has to offer by re-evaluating its own constraints.
I come to Ruby like a multilingual speaker comes home to a native language. Ruby is where I can finally be myself, set pen to paper and just write what comes naturally. 3.0 brings Ruby into the next generation, to sit in a more well-rounded, contemporary skin. Ruby will never be the fastest language. There will always be jobs it is poorly suited for (do not write your load balancer in Ruby, please). Tradeoffs should always be evaluated when choosing any technology, and choice of language is no exception. No language is always the right choice.
But what Ruby excels at — communication, flexibility, developer happiness, ease of use — are just as important to its creators now as they ever were.
A new iteration of Ruby will refocus Ruby development, no longer on modernizations and playing catch up, but back to its roots: the spirit of the language, and the community, that brought us to this point in the first place.
Edited: I wanted to give further context to why the speed of the language doesn’t matter so much in the end, so I have refactored a bit of dialogue from GIL to a Scaling section. I’ve also refined the terminology in the GIL section to be less vague.