Observations about Web Service Technologies: Using Custom Web Clients

Thundering Web Requests: Part 5

This is the fifth post in a series exploring web service technologies.

Having implemented custom web clients and evaluated the web service technologies using Apache Bench, I evaluated the same web service implementations using the custom web clients. This post documents both the server-side and client-side observations from this evaluation.

Setup

The setup from the previous AB experiment was used with the following changes.

Benchmarking Tools

Instead of Apache Bench, three custom web clients were used: HTTPoison-Elixir, Go, and Vertx-Kotlin.

Each of these clients reported the time taken by each request. They also reported whether a request succeeded or failed, based on the error reported by the underlying web client technology. Unlike Apache Bench, they did not flag failures due to incomplete payloads.
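As a rough sketch of what such a client does (this is not the actual client code, and the endpoint, request count, and concurrency level below are made up), a Go client in this spirit would time each request and count a request as failed only when the HTTP library reports an error:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
	"time"
)

// run issues 'total' GET requests against 'url' using 'concurrency' workers.
// Each request's latency is reported, and a request is counted as failed only
// if the HTTP library returns an error; payload completeness is not checked.
func run(url string, total, concurrency int) {
	jobs := make(chan struct{})
	var wg sync.WaitGroup

	for w := 0; w < concurrency; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs {
				start := time.Now()
				resp, err := http.Get(url)
				if err != nil {
					// Failure, as reported by the underlying library.
					fmt.Printf("failed after %v: %v\n", time.Since(start), err)
					continue
				}
				io.Copy(io.Discard, resp.Body) // drain the body; completeness is not verified
				resp.Body.Close()
				fmt.Printf("ok in %v\n", time.Since(start))
			}
		}()
	}

	for i := 0; i < total; i++ {
		jobs <- struct{}{}
	}
	close(jobs)
	wg.Wait()
}

func main() {
	// Hypothetical endpoint and load; the actual experiments used different values.
	run("http://192.168.1.20:8080/", 1000, 100)
}
```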

External to these clients, the total wall-clock time taken by all requests in an execution of a client was captured by executing the clients using the time command on Linux.

Derived Requests per Second

The requests per second metric was calculated across all clients in an Ansible script execution using the total number of issued requests and the total wall-clock time reported by the time command. So, this metric includes any warm-up time required by the clients.
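For example (with made-up numbers), if the client nodes in one Ansible script execution together issued 15,000 requests and the total wall-clock time reported by the time command came to 120 seconds, the derived metric would be 15,000 / 120 = 125 requests per second, warm-up time included.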

Performance

Since the clients were based on different language platforms and web technologies, they likely exhibited different performance and behavioural characteristics. I will explore this aspect in the next blog post.

Execution

The execution of this experiment was similar to the previous AB experiment barring one difference: in each Ansible script execution, on each client node, one of the three web clients was chosen at random and executed. So, while 963 Ansible script executions involved at least two different clients, seven executions involved only the Go web client and five executions involved only the Vertx-Kotlin web client.

Observations about Performance

Client-side Observations

For each network traffic configuration and concurrent requests configuration pair, the Ansible script execution with the highest number of requests per second was considered in making the following observations.

Client-side performance at 10, 6, and 2 MBps

A likely reason for observations 2 and 3 could be the behavioural differences between the custom web clients.

None of the web services could service 10K requests per second on a Raspberry Pi 3B

Server-side Observations

The server-side observations in this experiment were similar to those from the previous AB experiment. The only difference was that, at every network traffic configuration, most web service implementations did better at the lower concurrent requests configurations than they did in the previous AB experiment.

Server-side performance at 10, 6, and 2 MBps

Actix-Rust and Go implementations consistently performed better than other implementations

Observations about Failure/Reliability

As in the previous AB experiment, the client-side performance of each web service implementation improved as the number of concurrent requests increased. To understand this, I again examined failures during execution.

Based on Failed Clients

Unlike in the previous AB experiment, every execution of custom clients completed without crashes, i.e., there were no failing clients.

Based on Raw Least Number of Failed Requests

The table below lists the least number of failed requests across all five Ansible script executions against a service implementation in a configuration. (Execution) Instances with no failed requests are not shown, i.e., empty cells and absent columns. Significant instances, i.e., those where the number of failures was more than 5% of the maximum number of requests, are shown highlighted in red.

Raw least number of failed requests across all five executions. A/B/C/D/E denotes the number of requests that failed with checkout_timeout (A), connect_timeout (B), timeout (C), closed (D), and unknown-reason (E) errors. (Best case)

From the above table, we observe

Types of Errors (Failures)

In the execution instances with failed requests, there were 5 types of errors.

Of the three web clients used, only the Vertx-Kotlin and Go clients did not report the reason for failed requests (the unknown-reason error). The number of failed requests due to this error is highlighted in green in the table. The remaining types of errors were reported only by the HTTPoison-Elixir client.

Based on Corrected Least Number of Failed Requests

Since checkout_timeout errors indicate that a client could not check out a socket from its own connection pool within the default timeout, they reflect client-side limits rather than service failures. So, instances involving failures only from checkout_timeout errors need to be eliminated. With this elimination/correction (illustrated below), the above table changes as follows.
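For example (hypothetical counts), an instance whose failure breakdown reads 4/0/0/0/0, i.e., failures only from checkout_timeout errors, is dropped by this correction, whereas an instance reading 4/2/0/0/0 is retained because it also involves connect_timeout failures.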

Corrected least number of failed requests across all five executions. A/B/C/D/E denotes the number of requests that failed with checkout_timeout (A), connect_timeout (B), timeout (C), closed (D), and unknown-reason (E) errors. (Best case)

In the above corrected best-case table,

Based on Corrected Most Number of Failed Requests (Worst-case)

I also considered the most number of failed requests (worst-case) data after correcting it for checkout_timeout errors.

Corrected most number of failed requests across all five executions. A/B/C/D/E denotes the number of requests that failed with checkout_timeout (A), connect_timeout (B), timeout (C), closed (D), and unknown-reason (E) errors. (Worst case)

In the above corrected worst-case table,

Some Explanations

Failure and Performance: Overall, compared to the previous AB experiment, fewer failed requests were observed in this experiment. This could be attributed to the overall reduced performance of the web services, which in turn could be attributed to the performance of the web clients, either in isolation or in combination; this is something to consider/explore.

Failures with Ktor-Kotlin Service Implementation: Of the four Go clients and one HTTPoison-Elixir client involved in the worst-case execution against the Ktor-Kotlin implementation, one of the Go clients timed out on four requests after 30 seconds, which is the default timeout for establishing HTTP connections in Go. While a 540-second timeout was used in the previous AB experiment, the default timeouts provided by the underlying web client technologies were used in this experiment. These default timeouts are a likely reason for the failures encountered by the Ktor-Kotlin implementation.
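As a minimal Go sketch of where that 30-second default comes from (the endpoint below is hypothetical, and the 540-second value merely mirrors the previous AB experiment), http.DefaultTransport dials new connections with a 30-second timeout, so a longer budget has to be configured explicitly:

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	// http.Get uses http.DefaultClient, and http.DefaultTransport dials new
	// connections via net.Dialer{Timeout: 30 * time.Second}, i.e., a 30-second
	// default connection timeout.
	if _, err := http.Get("http://192.168.1.20:8080/"); err != nil { // hypothetical endpoint
		fmt.Println("default client:", err)
	}

	// To approximate the 540-second budget of the previous AB experiment, both
	// the dial timeout and the overall request timeout have to be raised explicitly.
	client := &http.Client{
		Timeout: 540 * time.Second, // bound on the entire request
		Transport: &http.Transport{
			DialContext: (&net.Dialer{Timeout: 540 * time.Second}).DialContext,
		},
	}
	if resp, err := client.Get("http://192.168.1.20:8080/"); err != nil {
		fmt.Println("custom client:", err)
	} else {
		resp.Body.Close()
	}
}
```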

Failures with HTTPoison-Elixir Web Client: The HTTPoison library is built on top of the Hackney library. In Hackney, the default timeout to check out a socket from the connection pool and the default timeout to connect to a service are 8 seconds each, and the default timeout for receiving data over a connection is 5 seconds. In total (8 + 8 + 5 = 21 seconds), this is much less than the 540-second timeout used in the previous AB experiment. So, again, the default timeouts are a likely reason for the failures encountered with the HTTPoison-Elixir web client.

Summary

Unlike the previous AB experiment, this experiment was a bit flawed due to its reliance on default configuration settings, and this resulted in data that needed some correction. So, one big takeaway is

While using a feature of a library, understand how various options and their (default) values influence the behaviours of the feature

Despite the flawed experimentation, all but one of the observations about web service implementations from the previous AB experiment held true in this experiment.

Next Up

In my next post, I will examine the three technologies used to implement the web clients used in this experiment.

Venkatesh-Prasad Ranganath

