What’s new in Ruby 2.6?
Ruby 2.1 was released on Christmas 2013, and the tradition has held ever since, with a new version released each following Christmas, which leads me to believe that Ruby 2.6 will be released next month. So let’s see what’s new in this version of Ruby.
Update
Wow, I never imagined that Matz himself would thank me on stage in his opening keynote at RubyConf and drive this post forward. Thank you, Matz, for creating this awesome programming language and this awesome community.
Just In Time compilation (MJIT)
Vladimir Makarov, who optimized the Hash code in Ruby 2.4 and is a core maintainer of the GCC project, and Takashi Kokubun, a Ruby core maintainer who rewrote the Ruby VM from 1.9 to 2.0, have proposed a JIT compiler for the Ruby VM. Roughly, the idea behind a JIT compiler is to “inspect” the code at run time and optimize the currently running code more intelligently, as opposed to an Ahead Of Time (AOT) compiler.
There were several proposals for how to build the JIT compiler itself, but, in an oversimplified explanation, it was decided to reuse the C compilers already available on the system (gcc or clang) and to turn bytecode into C code and compile it. It’s pretty elegant: the MJIT implementation finds hot spots in the code, takes the compiled bytecode, uses an ERB template to turn it into a .c file, compiles it into a shared object, and points the VM to run the shared object code instead of the bytecode.
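If you want to try it yourself, the JIT is opt-in; as far as I know it is enabled with the --jit flag (with --jit-verbose=1 logging what gets compiled), and RubyVM::MJIT.enabled? reports whether it is active:
ruby --jit --jit-verbose=1 script.rb # run a script with MJIT enabled and log compiled methods
RubyVM::MJIT.enabled? #=> true when the interpreter was started with --jit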
Some initial benchmarks show significant results, with Optcarrot (a NES emulator benchmark) running 1.77x slower on Ruby 2.5.3 and 2.48x slower on Ruby 2.0.0 than on Ruby 2.6 with the JIT enabled.
Moreover, some micro-benchmarks of the MJIT implementation itself show even more significant results, such as the Mandelbrot benchmark being 1.27x faster, the Fibonacci benchmark being 3.19x faster, and the const and const2 benchmarks being almost 4x faster.
John Hawthorn shows what it looks like internally in his post from ten months ago.
Endless ranges
Ruby 2.6 introduces endless ranges such as (0..) and makes the following possible:
ary[1..] # identical to ary[1..-1]
(1..).each {|index| ... } # infinite loop from index 1
ary.zip(1..) {|elem, index| ... } # ary.each.with_index(1) { }
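Since an endless range enumerates forever, it pairs naturally with lazy enumerators; a small sketch of my own (not from the changelog):
(1..).lazy.select(&:even?).first(3) #=> [2, 4, 6]
(1..).lazy.map { |x| x * x }.first(5) #=> [1, 4, 9, 16, 25]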
Array#union and Array#difference
There is now an easier way to take the difference and union of multiple arrays.
[1, 1, 2, 2, 3, 3, 4, 5 ].difference([1, 2, 4]) #=> [ 3, 3, 5 ]
["a", "b", "c"].union(["c", "d", "a"]) #=> [ "a", "b", "c", "d" ]
["a"].union([["e", "b"], ["a", "c", "b"]]) #=> [ "a", "e", "b", "c" ]
Array#filter is a new alias for Array#select
Much like in other commonly used languages such as JavaScript, PHP, Haskell, Java 8, Scala, and R, filter was added as an alias, and this is now possible:
[:foo, :bar].filter { |x| x == :foo } # => [:foo]
Enumerable#to_h now accepts a block that maps keys to values
There are many ways to create a hash out of an array in Ruby; some of them are:
(1..5).map { |x| [x, x ** 2] }.to_h
#=> {1=>1, 2=>4, 3=>9, 4=>16, 5=>25}
(1..5).each_with_object({}) { |x, h| h[x] = x ** 2 }
#=> {1=>1, 2=>4, 3=>9, 4=>16, 5=>25}
Starting with 2.6, it is possible to use a block, which eliminates the intermediate array:
(1..5).to_h { |x| [x, x ** 2] } #=> {1=>1, 2=>4, 3=>9, 4=>16, 5=>25}
Hash#merge, merge! now accept multiple arguments
No more jumping through hoops doing stuff like this to merge multiple hashes:
hash1.merge(hash2).merge(hash3)
[hash1, hash2, hash3].inject do |result, part|
  result.merge(part) { |key, value1, value2| value1 + value2 }
end
We can now pass a variable number of arguments when merging hashes:
hash1.merge(hash2, hash3)
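The optional conflict-resolution block still works and is applied pairwise across the arguments; a minimal sketch with made-up hashes:
hash1 = { a: 1 }
hash2 = { a: 2, b: 2 }
hash3 = { a: 3, c: 3 }
hash1.merge(hash2, hash3) { |key, old, new| old + new } #=> {:a=>6, :b=>2, :c=>3}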
The #then method
Back in Ruby 2.5, the yield_self method was introduced. It made it possible to pass a block to any object and get that object inside the block as the argument.
"Hello".yield_self { |str| str + " World" } #=> "Hello World"
It might not sound useful at first, but when you look at two methods that are already available, Ruby’s tap and Rails’ try, you start to see that this pattern is used quite a lot. Moreover, it opens the possibility for more readable code, as Michal Lomnick shows in his post.
"https://api.github.com/repos/rails/rails"
.yield_self { |url| URI.parse(url) }
.yield_self { |url| Net::HTTP.get(url) }
.yield_self { |response| JSON.parse(response) }
.yield_self { |repo| repo.fetch("stargazers_count") }
.yield_self { |stargazers| "Rails has #{stargazers} stargazers" }
.yield_self { |string| puts string }
Or the usual Rails-like controller code
events = Event.upcoming
events = events.limit(params[:limit]) if params[:limit]
events = events.where(status: params[:status]) if params[:status]
events
can become
Event.upcoming
.yield_self { |events| params[:limit] ? events.limit(params[:limit]) : events }
.yield_self { |events| params[:status] ? events.where(status: params[:status]) : events }

Or even

Event.upcoming
.yield_self { |_| params[:limit] ? _.limit(params[:limit]) : _ }
.yield_self { |_| params[:status] ? _.where(status: params[:status]) : _ }

Or even

def with_limit(events)
  params[:limit] ? events.limit(params[:limit]) : events
end

def with_status(events)
  params[:status] ? events.where(status: params[:status]) : events
end

Event.upcoming
.yield_self(&method(:with_limit))
.yield_self(&method(:with_status))
Okay, so yield_self is nice, but what about then? Well, the then method is just an alias for yield_self, so it makes the code even a little bit more readable.
Event.upcoming
.then { |events| params[:limit] ? events.limit(params[:limit]) : events }
.then { |events| params[:status] ? events.where(status: params[:status]) : events }

Or

Event.upcoming
.then(&method(:with_limit))
.then(&method(:with_status))
Some people were concerned that it might resemble A+ Promises too much, but eventually it was merged as then.
Random.bytes
There’s already
Random.new.bytes(10) # => "\xD7:R\xAB?\x83\xCE\xFAkO"
and now there’s
Random.bytes(8) # => "\xAA\xC4\x97u\xA6\x16\xB7\xC0\xCC"
As Matz pointed out, better late than never.
Range#=== now uses cover? rather than include?
As pointed out by Zverok Kha on Reddit, using cover? in case statements now brings possibilities such as:
case DateTime.now
when Date.today..Date.today + 1
'win!'
else
'fail'
end
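To see why this matters: include? walks the range element by element (here, whole Dates) and checks equality, so a DateTime with a time-of-day component never matches, while cover? only compares against the endpoints. A small illustration of my own:
require 'date'
range = Date.today..Date.today + 1
range.include?(DateTime.now) #=> false, DateTime.now never equals a whole Date
range.cover?(DateTime.now)   #=> true, it falls between the endpoints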
Other notable speed improvements
- Proc#call is now around 1.4x faster.
- Transient Heap support for Hash has been added. This reduces the memory footprint of short-lived objects; the benchmark shows memory consumption of short-lived Hash objects reduced by about 7%.
I’ve started a newsletter to share my stories and interesting posts I find: http://eepurl.com/gcld-T. Don’t worry, I won’t send unwanted or promotional emails.