Take a step back in history with the archives of PragPub magazine. The Pragmatic Programmers hope you’ll find that learning about the past can help you make better decisions for the future.

FROM THE ARCHIVES OF PRAGPUB MAGAZINE FEBRUARY 2013

Deploying with JRuby in the Cloud:
Adjusting the Settings on Your Java Machine

By Joe Kutner


If you’ve been hesitant to switch to JRuby due to lack of familiarity with the JVM, Joe has some good news for you.

https://pragprog.com/newsletter/

Cornelius Vanderbilt made his millions in the shipping industry. He started with regional steamboat lines on the eastern seaboard of the United States but knew that he would have to ship goods to California in order for his business to succeed. In 1849, the year of the California gold rush, he switched his fleet to ocean-going steamships that carried enormous loads in one trip without the maintenance overhead of hundreds of smaller ships. Deploying Ruby applications on the Java Virtual Machine (JVM) via JRuby has many of the same advantages.

The JVM can service thousands of requests in parallel without the memory or maintenance overhead of an equivalent number of single-threaded MRI instances. (You probably know that MRI means Matz’s Ruby Interpreter, the reference implementation of the Ruby programming language.) Yet, many developers and software companies have been hesitant to switch to JRuby. The biggest hurdle is often a lack of familiarity with the JVM and its unfamiliar ecosystem. But some new cloud products released in the last few months have done much to alleviate this problem.

In December, Heroku announced dedicated support for JRuby, which adds their product to a growing list of JRuby-supporting cloud platforms including those from Google, Red Hat, and Engine Yard. In this article, we’ll discuss how you can use these platforms to get the most out of your applications. We’ll assume that you have JRuby installed, and that you have some familiarity with the Ruby language and its ecosystem. Beyond those prerequisites, there isn’t much else you need to know. Deploying JRuby apps to the cloud isn’t that different from deploying MRI-based applications.

Getting Ready for JRuby

For all of the cloud options we’ll discuss, you’ll need to first make sure your application is ready for JRuby. If you’re starting from scratch, the rails new command or the equivalent in another web framework will generate exactly what you need. If you’re porting an existing application, you’ll need to do two things:

  • Download the jruby-lint gem and run the jrlint tool against your project (see the commands after this list).
  • Replace your web server with a JRuby web server.
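
For the first item, installing and running the linter looks roughly like this (the application directory name is just a placeholder):

$ gem install jruby-lint
$ cd yourapp
$ jrlint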

If you need more information, both of these are described in great detail in my book Deploying with JRuby from the Pragmatic Bookshelf. But for our present purposes, just note that the jrlint tool will alert you to any incompatibilities in your code (the most common is the need to replace your database adapter or any other gem that uses native code). We’ll assume you’re past that. Now all that’s left is choosing a server. There are five dominant options:

  • Trinidad: a lightweight server built on the Apache Tomcat server.
  • TorqueBox: a JRuby application server (AS) built on JBoss AS.
  • TorqueBox-lite: Only the web bits of the TorqueBox AS.
  • Puma: a pure Ruby web server.
  • Warbler: not really a server, but a tool for packaging a Ruby application to run on any Java server.

Your choice of server will be largely dictated by the cloud platform you decide to use. Fortunately, here too you have several choices, and we’ll begin with the newest of the bunch.

JRuby on Heroku

Heroku is a cloud application platform that has supported both Ruby and Java applications for several years, but it’s only recently introduced dedicated support for combining the two with JRuby.

Setting up a JRuby app to run on Heroku is largely the same as setting up an MRI app. The primary difference is that you will need to add a line like the one below to your Gemfile.

ruby '1.9.3', :engine => 'jruby', :engine_version => '1.7.1'

This instructs Heroku to use JRuby version 1.7.1 and run it in 1.9.3 compatibility mode.

Next, you’ll need to pick your web server. Heroku does not provide direct support for any single server, but it does recommend Puma. That’s because Puma has the smallest memory footprint of the options described above, and it can fit on a single dyno with ease. (A dyno is the basic unit of composition on Heroku, “a lightweight container running a single user-specified command.”) To use Puma, replace your existing server with this line in your Gemfile.

gem 'puma'
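
Putting these pieces together, a JRuby-ready Gemfile might look roughly like the sketch below; the Rails version and the PostgreSQL JDBC adapter are illustrative, not prescriptive:

source 'https://rubygems.org'

ruby '1.9.3', :engine => 'jruby', :engine_version => '1.7.1'

gem 'rails', '3.2.11'                        # example version
gem 'activerecord-jdbcpostgresql-adapter'    # JDBC adapter in place of the native pg gem
gem 'puma'                                   # JRuby-friendly, multi-threaded web server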

Trinidad and TorqueBox-lite also work on Heroku. To use them, replace the puma gem with the trinidad or torquebox-lite gem, respectively. Be aware, though, that the underlying Java servers they are built on will consume more memory, which could be a problem if your app is particularly large.

In any case, the next step is modifying your Procfile to start the JRuby server you’ve chosen. For Puma, it would look like this:

web: bundle exec rails server puma -p $PORT -e $RACK_ENV
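
If you want finer control over Puma’s thread pool, which is where JRuby’s real parallelism pays off, you can also start Puma directly rather than going through rails server; the thread counts below are only a starting point, not a recommendation:

web: bundle exec puma -t 8:16 -p $PORT -e $RACK_ENV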

Finally, create a Heroku account and download the command-line tool. Then run the following commands from your application’s root directory:

$ heroku create --remote jruby-master
Creating severe-mountain-793... done, stack is cedar
http://severe-mountain-793.herokuapp.com/ | git@heroku.com:severe-mountain-793.git
Git remote jruby-master added
$ git push jruby-master master
Counting objects: 692, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (662/662), done.
Writing objects: 100% (692/692), 141.01 KiB, done.
Total 692 (delta 379), reused 0 (delta 0)
-----> Heroku receiving push
-----> Ruby/Rails app detected
-----> Using Ruby version: ruby-1.9.3-jruby-1.7.1
-----> Installing JVM: openjdk7-latest
-----> Installing dependencies using Bundler version 1.2.1
# ...

Your app is now running on JRuby in the cloud, and you can start taking advantage of all that JRuby has to offer.

Aside from the servers mentioned earlier, it’s also possible to run the full TorqueBox server on Heroku. However, there is a bit of finagling required to get past issues with slug size, timeouts and memory limits. Even more unfortunate is that deploying TorqueBox to Heroku compromises many of the TorqueBox features, such as clustering and STOMP support.

If TorqueBox is what you’re after, a better solution may be the OpenShift platform.

TorqueBox on OpenShift

OpenShift is a platform-as-a-service (PaaS) offering from Red Hat. The main advantage of OpenShift over other PaaS options is that the underlying technology is open source, which means you could potentially set up an OpenShift cloud within your own organization.

Because OpenShift and TorqueBox are maintained by the same organization (Red Hat), the two technologies align well with each other. OpenShift has only recently come out of beta, and there have been many gyrations in the set-up steps for TorqueBox. The current standard is to use the openshift-quickstart tool, which is maintained by the TorqueBox team. To use it, you’ll need an existing JBoss AS7 application, which can be created with the rhc command-line tool provided by OpenShift. You can download it when you create an account at http://openshift.redhat.com. Then run the following command:

$ rhc app create -a yourapp -t jbossas-7

This creates an empty JBoss AS7 application — not a JRuby application. So you’ll need to delete the scaffolding with these commands:

$ cd yourapp
$ rm -rf pom.xml src

Then merge the openshift-quickstart code into your project like this:

$ git remote add upstream -m master git://github.com/torquebox/openshift-quickstart.git
$ git pull -s recursive -X theirs upstream master

Once you git push the changes to your remote OpenShift repo, which will already be configured, you’ll be running TorqueBox in the cloud. Now you can leverage background jobs, scheduled jobs, daemon services, and many other enterprise-class features that are integrated into your application. You won’t need to set up additional infrastructure like workers or slave instances to run them.
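
As a small taste of what that looks like, a TorqueBox background job is just a Ruby class with a run method; the class and schedule below are hypothetical, and the schedule itself lives in your deployment descriptor (torquebox.yml or the Ruby configuration DSL):

# app/jobs/newsletter_job.rb
class NewsletterJob
  # TorqueBox invokes #run on the cron schedule declared in the descriptor,
  # inside the same deployment as the web app -- no separate worker instances.
  def run
    Newsletter.deliver_pending   # hypothetical model call, for illustration only
  end
end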

While OpenShift and Heroku are the most promising JRuby cloud environments, they are both relatively new options. Several other vendors have products that have been around for a while.

Engine Yard, CloudBees, Google, and More

Engine Yard’s JRuby cloud platform was first released in 2011 with dedicated support for the Trinidad server. To use it, simply select the JRuby and Trinidad options when creating a new instance. Then deploy your Trinidad app as you would deploy any other Ruby or Rails application to their cloud platform.

If you’re using Warbler to package your application into an archive file, then you have even more cloud deployment options. CloudBees, the maintainer of the Jenkins CI server, supports code-hosting, databases, and WAR files in the cloud. Jelastic and Google AppEngine (GAE) provide similar support. To get started with these services, you’ll need to install the warbler gem, and then run the warble war command from your application’s root directory.
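
In practice, that amounts to something like the following (run from your application’s root; the directory name is a placeholder):

$ gem install warbler
$ cd yourapp
$ warble war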

The warble war command generates a WAR file, which is essentially a Zip archive file that follows a few conventions. This archive contains everything your application needs to run (including its gem dependencies) on CloudBees, GAE, or Jelastic. All that’s left to do is to deploy the WAR file with the respective platform’s tools.

Each of these solutions, including Heroku and OpenShift, has its own approach to background jobs, databases, and other external components. But you’ll essentially get the same benefits no matter what web server and cloud vendor powers your JRuby application. So that brings us to the basic question: is JRuby in the cloud worth it?

Why Use JRuby in the Cloud?

You may be aware that many of the advantages JRuby has over MRI relate to infrastructure and scalability. But these features lose their importance when deploying to a PaaS where infrastructure and scalability are managed for you. Instead, the JRuby capabilities that are most worthwhile in the cloud include integrating Java libraries into your apps, writing multi-threaded code (that actually executes in parallel), and having applications interact in ways that were not possible before (such as sharing resources).
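
A quick sketch of those first two points: the snippet below calls directly into a JDK class and then runs CPU-bound work on plain Ruby threads, which JRuby maps to real JVM threads with no global interpreter lock (the hashing workload is contrived, purely for illustration):

require 'java'     # JRuby's Java integration
require 'digest'

# Call straight into the JDK from Ruby
cores = java.lang.Runtime.runtime.available_processors
puts "Running on #{cores} cores"

# On JRuby these threads execute in parallel (no GIL), so CPU-bound
# work like hashing can actually use more than one core at a time.
digests = Array.new(cores) do |i|
  Thread.new { Digest::SHA256.hexdigest("payload-#{i}" * 100_000) }
end.map(&:value)

puts digests.first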

These are significant advantages. But there’s also this: you’ll be preparing yourself for the future. MRI won’t become multi-threaded anytime soon, and without concurrency or additional infrastructure the user demand on your application will likely overpower your ability to scale. The easiest way to avoid this problem down the road is to use the best platform now.

Looking Forward

Cloud computing is young, and moving to the cloud can be a bumpy ride. None of the vendors we’ve discussed here support the full spectrum of JRuby’s power. Notable features that are missing include clustering (that is, the ability to replicate data and share computation across instances) and tooling (such as the Java Management Extensions). But these features don’t exist in an MRI environment anyway. Hopefully, these JRuby vendors will provide them soon.

The missing features shouldn’t stop you from getting your Ruby or Rails application on JRuby, though. Only the JVM can provide your products with the highest levels of performance and up-time. With JRuby cloud support, you’ll be up and running in seconds and you won’t miss the next California gold rush.

Cover from PragPub magazine, February 2013


PragPub
The Pragmatic Programmers

The Pragmatic Programmers bring you archives from PragPub, a magazine on web and mobile development (by editor Michael Swaine, of Dr. Dobb’s Journal fame).