Implementing Web Services
Thundering Web Requests: Part 2
This is the second post in a series exploring web service technologies. It documents my observations from using different technologies to implement web services.
Recently, I wanted to explore web service technologies. I decided to do this exploration by simulating and handling a thundering herd of web requests.
As part of this exercise, one of the tasks was to
Implement a web service that services HTTP GET requests with an optional query parameter N (default 10) by returning N random numbers from the closed interval [0, 999,999] as a list of N 6-character strings in JSON format.
The service reports the time taken to service each HTTP GET request.
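To make the task concrete, the service logic boils down to a few lines. The following is a minimal sketch in Python; the helper names and the choice of zero-padding to obtain 6-character strings are mine, not taken from any of the implementations discussed here:

```python
import json
import random
import time


def random_numbers_payload(n=10):
    """Return a JSON string of n random numbers from [0, 999999],
    each zero-padded to a 6-character string."""
    numbers = [f"{random.randint(0, 999999):06d}" for _ in range(n)]
    return json.dumps(numbers)


def handle_request(n=10):
    """Service one request and report the time taken, per the spec."""
    start = time.monotonic()
    payload = random_numbers_payload(n)
    elapsed = time.monotonic() - start
    print(f"Serviced request for {n} numbers in {elapsed:.6f}s")
    return payload
```

Every implementation in this exercise wraps essentially this logic in a framework-specific handler.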
Choice of Technologies
First, as Tornado, a Python web framework, got me into this rabbit hole, I added Tornado to the list of technologies to implement the web service. Then, a quick search revealed Cyclone as a fast Tornado-like Twisted-based library to implement web services in Python. So, I added Cyclone to the list. Since I had recently used Flask to implement a web server to test Android apps, I added Flask to the list.
At this point, to break the homogeneity of technologies, I decided to explore a few non-Python technologies.
I had used Vert.x in the past, and I was aware that both Vert.x and Ratpack supported implementation of web services. So, I added them to the list with Kotlin as the language of implementation. Since I picked Kotlin as an implementation language, I then added Ktor, a framework to build asynchronous servers and clients in Kotlin, to the list.
Since I had used Erlang to implement a web client, I wanted to explore an Erlang-based web service technology. For this purpose, I added Cowboy to the list. Then I learned about Yaws, an Erlang web server. After a little digging, I found the approaches to implementing a web service were a bit different in Cowboy and Yaws. So, I added Yaws to the list. As I had been curious about Elixir and had heard about Phoenix, I added Phoenix to the list. As I learned about Phoenix, it seemed a bit heavy for my purpose. So, the search for a lighter library/framework led me to Trot.
My curiosity to toy around with Crystal language led me to add Kemal, a web framework, to the list. A similar reason led to the addition of Micronaut, a JVM-based framework to build microservices, to the list.
Finally, given my recent experience with Go and the widespread use of Go in service-rich environments, I added Go to the list as well.
In short, the choice of technologies was dictated by my interest in exploring new programming languages, the current popularity of languages/technologies used to implement web services, and a desire to explore the effect (if any) of heterogeneity of languages/technologies.
As in the case of implementing web clients, I wanted to keep the source artifacts and the process to use them simple and easy while jumping through the necessary “hurdles” of using a technology. So, given the simplicity of the web services, I approached developing the variants of web services as a hacking exercise with minimal use of software engineering practices outside coding.
Compared to implementing web clients, I had to use more build tools to manage dependencies and build the services in this exercise. Specifically, I used Cargo to build the Rust variant, Gradle to build the Micronaut-based Kotlin variant, Make to build the Cowboy-based Erlang variant, Mix to build the Elixir variants, and Shards to build the Crystal variant.
My experience with the language features of Erlang, Elixir, Go, and Kotlin was the same as when I implemented the web clients. As I dabbled more with Elixir, I grew to like it more, but I still preferred Erlang over Elixir, specifically for its simplicity.
As for programming in Crystal, having programmed in Groovy, I really enjoyed the simple and succinct Ruby-like syntax along with the good type inference system.
Of all the considered languages, Rust was the most troublesome in terms of the time taken to implement the service. This was due to two reasons. First, after my recent use of programming languages that supported dynamic typing or type inference, the use of explicit types seemed cumbersome (at least) for this exercise. Second, it took me some time to come to grips with Rust's syntax and with using unwrap to deal with results. Interestingly, while I thought the ownership type system would trip me up, it did not; maybe it was the simplicity of the web service and not me :)
In terms of out-of-the-box performance, almost every language/technology was impressive, with Crystal, Erlang, Elixir, Go, and Rust really shining; more on this in the next blog post.
Every considered technology provided great support for concurrent processing of requests — it required no extra work :) In every case, once I figured out how and where to plug in the service logic, I could rely on the technology to invoke the service logic to service concurrent requests. As for concurrency control, I did not have to deal with any due to the simplicity of the service logic.
That said, the Node.js implementation does use a global variable that is accessed by concurrent requests. However, since Node.js is single-threaded, this was not an issue.
Also, I found that concurrent request processing triggered some issues in the Crystal implementation on Raspberry Pi. More on that below.
Support for Web Services/APIs (Libraries)
Every technology provided a way to specify a function/method as a handler. Once I figured this out and plugged in the service logic at the appropriate location, no extra coding magic was required.
Most often, figuring out the way to specify the handler was easy. The main difference across the technologies was the mechanism used to specify the handler: explicitly in code (e.g., Tornado, Ktor), via code annotations (e.g., Flask, Micronaut), or via a configuration file (e.g., Yaws).
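To make the "explicitly in code" style concrete, here is a rough sketch using only Python's standard library http.server. None of the frameworks above use this module; it just illustrates wiring a handler to the server in code and reading the optional N query parameter:

```python
import json
import random
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import parse_qs, urlparse


class RandomNumbersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read the optional query parameter N (default 10).
        query = parse_qs(urlparse(self.path).query)
        n = int(query.get("N", ["10"])[0])
        # Service logic: N random numbers as 6-character strings.
        body = json.dumps([f"{random.randint(0, 999999):06d}" for _ in range(n)])
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # silence per-request logging in this sketch


def make_server(port=0):
    # Explicit wiring: the handler class is handed to the server in code,
    # analogous in spirit to how Tornado or Ktor register handlers.
    return ThreadingHTTPServer(("127.0.0.1", port), RandomNumbersHandler)
```

Annotation-based frameworks such as Flask express the same wiring as a decorator on the handler function instead.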
Scaffolding (Skeleton Code)
Of the considered technologies, I found Phoenix to be the toughest to use, as creating the app via the Mix tool generated a lot of scaffolding artifacts, e.g., folders for static content and models. While such scaffolding can clearly help the development of web applications, it is overkill for simple web services/APIs. As a Phoenix newbie, identifying and clearing out unnecessary artifacts required some effort. Looking back, I wonder if there are other flags such as
--no-webpack to the Mix tool that can be used to generate slimmer scaffolding for web services/APIs.
While the scaffolding generated by Micronaut and Cowboy also required some cleaning, this effort was far less compared to the effort required to clean up the scaffolding generated by Phoenix.
The remaining tools either required minimal scaffolding (e.g., Trot, Cowboy) or none (e.g., Ratpack).
With the exception of Actix, every technology provided good documentation that was easily accessible. The information in the documentation (e.g., examples) was immediately usable. As a newbie to most of these technologies, these examples cut down the time to start using a technology.
With Actix, the actix-web library changed version from 0.7 to 1.0.0 while I was implementing the service using Actix/Rust. Consequently, the more accessible documentation available at https://actix.rs/docs/ was lagging behind the more precise but less accessible documentation available at https://doc.rust-lang.org/. As a Rust newbie, this led to quite a bit of experimentation to get a working implementation. The situation was aggravated by the change to the Actix API used to access query parameters.
Managing dependencies was a breeze with all technologies. Cargo for Rust, Shards for Crystal, Mix for Elixir, and Gradle for Micronaut were really easy to use.
Every technology supported the execution of the web service via a single simple command. This is ideal to get started with the technologies.
In terms of support for debugging the service implementation, Phoenix and Kemal really shined. With no additional development cost, the service implementations in Phoenix and Kemal have built-in support to provide helpful HTML responses for “incorrect” requests, e.g., a request to read an undefined route: http://127.0.0.1:1234.
While I had heard about the richness of error messages from the Rust compiler, I encountered them first-hand in this exercise, and I was really impressed. The messages were informative about the issues. At times, the provided suggestions were sufficient to fix the issue :)
The only glitch in terms of tooling was with Crystal. At the time of this exercise, Crystal was not supported on Raspberry Pi and building the compiler from source was complicated by the fact that the compiler is bootstrapped. The only option was to rebuild the compiler incrementally starting from the earlier pre-bootstrap version of the compiler or cross compile the compiler. While the former approach failed, the latter succeeded after many trials. Even so, the executable generated by the resulting compiler exhibited buggy concurrent behavior.
Every technology was accompanied by very good API documentation. While many had immediately accessible examples along with discussion of how to realize basic use cases using the technology, I thought more short examples covering more features would further help jump start a newbie’s first brush with a technology.
As with Erlang documentation, once I got the hang of it, I really liked the structure of the Rust API documentation scheme. However, I wished the documentation web pages were easier to navigate using the keyboard.
Given the simplicity of the web service, every implementation was pretty short and simple. The implementations were a bit lengthy in instances where build tools (e.g., Mix) were required and the technologies were more geared toward web applications rather than web services (e.g., Phoenix).
In terms of choosing a technology to develop web services, all of these technologies were really good and easy to use while offering comparable features. So, the choice really boils down to personal comfort with the language, the technology, and the desired performance.
The code for all services is available on GitHub. I will discuss the implementation of the services in a future post focused on performance evaluation of the services.
Nov-15–2019: Dropping Yaws-Erlang implementation
During the rerun, the Ansible script used to execute every service implementation failed to execute the Yaws-Erlang implementation. Hence, I could not collect data about the Yaws-Erlang implementation. Consequently, I will not be making further observations about the Yaws-Erlang implementation.
Nov-14–2019: Dropping Cyclone-Python implementation
Since I had to rerun the experiments due to a bug, I decided to ignore Cyclone-Python in my experiments, as Cyclone depends on Python 2 and Python 2 is reaching end-of-life at the end of 2019.
Aug-21–2019: Improving concurrency/parallelism of Ratpack server
While the Ratpack implementation was parallel out of the box, I found that it could be made more parallel with minor tweaks. The change was to merely move the computation associated with a request onto a non-request-processing thread by adding two lines. Again, a simple enough change to get better performance.
Aug-16–2019: Improving concurrency/parallelism of Vert.x server
While setting up the evaluation of the service implementations on my Raspberry Pi cluster, I found the Vert.x implementation was using only one processor core. One way to use all cores was to execute the request handler as blocking code in a worker thread using
Vertx.executeBlocking(). Another way was to use multiple instances of the server, either by directly creating these instances or by using the deployment feature of Vert.x. I chose to directly create multiple instances of the server within a simple for loop.
All said, while Vert.x does not provide service-level parallelism by default, enabling such parallelism in Vert.x is straightforward.
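The pattern behind both of these fixes — keeping the event loop free by pushing blocking computation onto worker threads — is not specific to Vert.x or Ratpack. A rough, language-neutral analogue of the idea, sketched with Python's standard library asyncio (not code from any implementation in this series):

```python
import asyncio
import time


def blocking_computation(n):
    """Stand-in for request-handling work that would stall an event loop."""
    time.sleep(0.01)  # simulate CPU- or IO-bound work
    return n * n


async def handle(n):
    loop = asyncio.get_running_loop()
    # Analogous to Vert.x executeBlocking: run the work on a worker
    # thread so the event loop can keep accepting requests.
    return await loop.run_in_executor(None, blocking_computation, n)


async def main():
    # Several "requests" serviced concurrently despite blocking work.
    return await asyncio.gather(*(handle(i) for i in range(4)))
```

The two-line Ratpack tweak and the executeBlocking approach in Vert.x both amount to this handoff from the request-accepting thread to a worker.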