Ruby Proctor is a web app I wrote to help Ruby developers write Rubocop rules more efficiently. The app itself is pretty simple (and is a thin wrapper around the Rubocop runner), but since it relies on executing user-provided code, making it safe was fairly challenging. The main risks I was concerned about were:
- Users rewriting classes/methods in my app
- Using Ruby Proctor to make HTTP requests and DDoS other sites
- Tying up the CPU on my server with an infinite loop or other expensive operation
- Removing files on the server, or writing files and causing the disk to run out of space
For context, Ruby Proctor is a simple Sinatra app deployed on Kubernetes using Google Container Engine. Some of these solutions will also work on other platforms.
Rewriting classes/methods in my app
Addressing this one was easy: instead of running eval in the same process as my Sinatra server, I start a new Ruby process on every request to execute the Rubocop rule being tested.
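A minimal sketch of the idea (run_isolated and the inline program are illustrative, not the app's actual code):

```ruby
require 'open3'

# Run untrusted code in a throwaway child process instead of eval'ing it
# inside the server. Anything the code redefines dies with the child.
def run_isolated(user_code)
  stdout, status = Open3.capture2('ruby', '-e', user_code)
  [stdout, status.success?]
end

# The child sees its own monkey-patched String...
out, ok = run_isolated(%q{class String; def hijacked?; true; end; end; puts 'patched'})
# ...but the server process is untouched:
''.respond_to?(:hijacked?) # => false here
```

Because the patch lives only in the child's memory, the server's classes and methods can't be rewritten out from under it.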
Using Ruby Proctor to make http requests and DDoS other sites
Since a Ruby program can always shell out to other programs, the only way to definitively prevent an application from making external network requests is to block them at the network level. I was able to accomplish this on my Kubernetes cluster using Istio, which by default prevents any traffic in or out of the containers in your cluster. I added a simple ingress rule to allow traffic into the container:
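As a rough sketch, a Kubernetes Ingress allowing HTTP traffic into the service looked something like this (the resource name, service name, and port are placeholders, not the app's real values):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rubyproctor-ingress       # illustrative name
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: rubyproctor  # illustrative service name
          servicePort: 80           # illustrative port
```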
Kubernetes 1.8 has support for setting Network Policies to block egress traffic, which is a simpler way of accomplishing this.
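A sketch of what that looks like with the Network Policy API (this assumes a network plugin that enforces policies; the name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress   # illustrative name
spec:
  podSelector: {}         # applies to every pod in the namespace
  policyTypes:
  - Egress                # with no egress rules listed, all outbound traffic is denied
```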
Tying up the CPU on my server with an infinite loop or other expensive operation
When spawning external processes from a Ruby script, you can set resource limits on them. Through this mechanism, I limit the process I spawn to run the Rubocop rule to 2 CPU seconds. A couple of other issues can arise here: if the user includes code that spawns off another process, the runtime of that process will not count towards the CPU time of the process running the user's code, and could potentially hose the container. The solution to this one is putting a limit on the nproc resource, which caps the number of processes that process can start.
Open3.capture2('ruby', '-e', program, arg1, arg2, rlimit_cpu: [2, 2], rlimit_nproc: 1)
This leaves one other way for an attacker to potentially crash the container: code that sleeps the process for a long time consumes no CPU time, but the idle processes could still pile up and exhaust the container's memory. I was able to get around this by using the timeout Ruby module to run the code, killing anything that takes longer than two seconds:
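The snippet below sketches that wall-clock guard (run_with_timeout is an illustrative name, and the two-second limit is the one described above). Since rlimit_cpu never fires for a sleeping child, the parent also kills anything that outlives the deadline:

```ruby
require 'timeout'

# Spawn the child with a CPU limit, then enforce a wall-clock limit too:
# Timeout interrupts the wait, and the parent kills the runaway child.
def run_with_timeout(program, seconds: 2)
  reader, writer = IO.pipe
  pid = Process.spawn('ruby', '-e', program, out: writer,
                      rlimit_cpu: [seconds, seconds])
  writer.close
  Timeout.timeout(seconds) { Process.wait(pid) }
  reader.read
rescue Timeout::Error
  Process.kill('KILL', pid)  # a sleeping process uses no CPU, so kill it by hand
  Process.wait(pid)          # reap the zombie
  'execution timed out'
ensure
  reader&.close
end
```

With both limits in place, a busy loop is stopped by rlimit_cpu and a long sleep is stopped by the wall-clock kill.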
Removing files on the server, or writing files and causing the disk to run out of space
Kubernetes Security Contexts were helpful in solving this one. In the container spec for the container running the server code, I set a security context with the readOnlyRootFilesystem option enabled, so that no process running in the container can make changes to the filesystem.
The spec for the container looks like this:
- name: CONTAINER_NAME
  securityContext:
    readOnlyRootFilesystem: true
  ports:
  - containerPort: CONTAINER_PORT
Other avenues I investigated included running the code in a Ruby sandbox; however, the most popular sandbox is written for JRuby, while I was interested in running MRI 2.4.1 code (what I use). The other sandboxes I found haven't been updated recently.
Another option I considered was running the Rubocop rules in an AWS Lambda function. It looks like it is possible to invoke Lambda functions synchronously, which this use case would require, and to upload arbitrary files (a Ruby interpreter) and run them.
While you cannot modify the filesystem on the container running the server code, it is still possible to read the filesystem. This isn’t a big issue since I’m planning on open sourcing the code for this project anyway, but I’m interested in buttoning this up and plan on investigating running the Rubocop runner as a unix process group that has read access to a limited part of the filesystem. The main thing it needs to be able to read are the directories where gems on the system are installed.
Thanks for reading, and let me know if you think I missed anything!