Load Testing using Tsung

By Swaroop C H & Divyanshu

Helpshift Engineering Team
5 min read · May 1, 2014

--

Background

Elasticsearch is one of the pillars of the Helpshift platform. We were adding a new feature related to live notifications for new issues, so we decided to use the new distributed percolator engine introduced in Elasticsearch 1.0. Before jumping into using percolators, we wanted to do load-testing to get an idea of the performance.

The load-testing tool most recommended by our colleagues was Tsung, an open source tool written in Erlang.

Note: The percolator engine is the “inverse” of a search engine. Usually, you register data, send search queries and get back results containing data. With a percolator engine, you register queries, send data and get back results containing the queries that matched. This is useful when you want to trigger events for new data that matches queries of interest. For more information, see the Elasticsearch reference on percolators.

Installation

Installing Tsung on Linux is fairly straightforward.

The installation steps we followed for Tsung version 1.5.0 are:

On Ubuntu Linux:
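
Something along these lines works; Tsung is packaged for Ubuntu, and if the packaged version is older than 1.5.0 you can build it from source instead (the tarball URL below assumes the project's usual dist location):

    # install Erlang and Tsung from the Ubuntu repositories
    sudo apt-get install erlang tsung

    # or, to get exactly 1.5.0, build from source
    sudo apt-get install erlang build-essential
    wget http://tsung.erlang-projects.org/dist/tsung-1.5.0.tar.gz
    tar xzf tsung-1.5.0.tar.gz
    cd tsung-1.5.0
    ./configure && make && sudo make install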

On Arch Linux:
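
On Arch, Tsung typically comes from the AUR rather than the official repositories, so something like this (using an AUR helper such as yaourt) should work:

    # Erlang from the official repositories, Tsung from the AUR
    sudo pacman -S erlang
    yaourt -S tsung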

Run tsung -v to check that tsung is indeed installed.

Getting started with Tsung

Note: Do keep the Tsung user manual open for reference when you are trying this out.

Creating a basic Tsung load-test is just a matter of creating an XML file:
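
A minimal configuration for hitting a single Elasticsearch search endpoint looks roughly like this; the index name, file name, hosts and arrival rates below are placeholders to adjust for your own setup (and the DTD path may be under /usr/local/share for a source install):

    <?xml version="1.0"?>
    <!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
    <tsung loglevel="notice">

      <!-- the machine generating the load -->
      <clients>
        <client host="localhost" use_controller_vm="true" maxusers="30000"/>
      </clients>

      <!-- the Elasticsearch node under test -->
      <servers>
        <server host="localhost" port="9200" type="tcp"/>
      </servers>

      <!-- one minute of load, 100 new users arriving per second -->
      <load>
        <arrivalphase phase="1" duration="1" unit="minute">
          <users arrivalrate="100" unit="second"/>
        </arrivalphase>
      </load>

      <!-- each user POSTs the search query stored in query.json -->
      <sessions>
        <session name="es-search" probability="100" type="ts_http">
          <request>
            <http url="/myindex/_search" method="POST"
                  content_type="application/json"
                  contents_from_file="query.json"/>
          </request>
        </session>
      </sessions>

    </tsung>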

If you want to sample the actual traffic being generated to make sure it looks correct, add the dumptraffic="true" attribute to the top-level tsung tag, but do not use this for actual test runs, because it slows Tsung down to a crawl.

The contents of query.json are:
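
For illustration, a simple match query of the sort we sent to the search endpoint; the field name and text are placeholders:

    {
      "query": {
        "match": {
          "title": "unable to login"
        }
      }
    }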

To run tsung:
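
Assuming the configuration above is saved as load-test.xml:

    tsung -f load-test.xml start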

It takes some time to generate reports after the load is done, so be patient — for example, if you run the actual load-testing for 1 minute, then tsung finishes running in about 2.5 minutes.

You can watch its progress by tailing the tsung.log file in the output directory that tsung mentions when you start it:
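
By default the logs for each run go into a timestamped directory under ~/.tsung/log/ (the timestamp below is just an example), so something like:

    tail -f ~/.tsung/log/20140501-1200/tsung.log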

Once tsung finishes running, you can generate a report using:
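
The report comes from the tsung_stats.pl script that ships with Tsung; run it from inside the log directory of that run (the script's path below is where the Ubuntu package puts it, while a source install usually has it under /usr/local/lib/tsung/bin):

    cd ~/.tsung/log/20140501-1200
    /usr/lib/tsung/bin/tsung_stats.pl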

You should see some pretty graphs in the html page. Take some time to become familiar with them. Example graphs are:

Transactions and Pages
HTTP Return Code Status rate

Dynamic requests

If you want to generate some random input with each query, you can use Tsung's dynamic variables.

For example, we wanted to register a lot of queries with the percolator engine under different document-ids:
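
A sketch of such a session, assuming Elasticsearch 1.0's .percolator registration endpoint and a hypothetical index name; a random docid is drawn for each simulated user and substituted into the URL:

    <sessions>
      <session name="register-percolator" probability="100" type="ts_http">

        <!-- pick a random document-id for this user -->
        <setdynvars sourcetype="random_number" start="1" end="10000000">
          <var name="docid"/>
        </setdynvars>

        <!-- subst="true" enables %%_docid%% substitution in the URL -->
        <request subst="true">
          <http url="/myindex/.percolator/%%_docid%%" method="PUT"
                content_type="application/json"
                contents_from_file="queries.json"/>
        </request>

      </session>
    </sessions>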

The queries JSON file is:
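
For example, a percolator query that matches on a particular phrase; the field name and text are again placeholders:

    {
      "query": {
        "match": {
          "message": "payment failed"
        }
      }
    }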

As before, run tsung and generate the stats:
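
Assuming the configuration above is saved as register-percolators.xml:

    tsung -f register-percolators.xml start
    # replace <run-directory> with the log directory printed when tsung started
    cd ~/.tsung/log/<run-directory>
    /usr/lib/tsung/bin/tsung_stats.pl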

Distributed Tsung

To run Tsung across machines, you have to:

  1. Install Erlang and Tsung on all the machines, making sure every machine has the same versions of both Erlang and Tsung
  2. Open ports between the machines so they can reach each other: port range 0–65535
  3. Configure hostnames on the machines for each other, because the Tsung configuration requires hostnames rather than IPs, and the machines need to be able to reach each other using those hostnames

Now expand the client list in the tsung configuration xml file:
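
With hypothetical hostnames loadgen-1 and loadgen-2 for the load-generating machines:

    <clients>
      <!-- the cpu attribute starts one Erlang VM per CPU on that machine -->
      <client host="loadgen-1" maxusers="30000" cpu="2"/>
      <client host="loadgen-2" maxusers="30000" cpu="2"/>
    </clients>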

Running tsung and generating graphs is the same as before.

Additional Notes

If you want to generate more than one kind of query, then use sessions and specify multiple requests.
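
For example, a mix of regular searches and percolate calls can be weighted with session probabilities (the names, URLs and the 70/30 split below are illustrative; the probabilities must add up to 100):

    <sessions>
      <!-- 70% of simulated users run a regular search -->
      <session name="search" probability="70" type="ts_http">
        <request>
          <http url="/myindex/_search" method="POST"
                content_type="application/json"
                contents_from_file="search-query.json"/>
        </request>
      </session>

      <!-- 30% of simulated users percolate a document -->
      <session name="percolate" probability="30" type="ts_http">
        <request>
          <http url="/myindex/mytype/_percolate" method="POST"
                content_type="application/json"
                contents_from_file="percolate-doc.json"/>
        </request>
      </session>
    </sessions>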

If you’re generating a large number of requests, ensure that maxusers attribute of client is high (see above). Relatedly, ensure that the ulimit for file descriptors is high on the client machines as well.

If you want more flexibility in generating the body of the requests, you will need to write Erlang code and use the %% sigil. For example:
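
A sketch of how this fits together, with a hypothetical mylib module. When subst="true" is set, Tsung replaces %%Module:Function%% in the request with the string returned by that function (which Tsung calls with {Pid, DynVars}); the compiled .beam has to be on the code path of the Tsung Erlang nodes.

    %% mylib.erl -- hypothetical helper module
    -module(mylib).
    -export([random_doc/1]).

    %% Called by Tsung with {Pid, DynVars}; returns the request body as a string
    random_doc({_Pid, _DynVars}) ->
        UserId = random:uniform(1000000),
        lists:flatten(io_lib:format("{\"doc\": {\"user_id\": ~p}}", [UserId])).

The corresponding request in the Tsung configuration would then look like:

    <request subst="true">
      <http url="/myindex/mytype/_percolate" method="POST"
            content_type="application/json"
            contents="%%mylib:random_doc%%"/>
    </request>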

Postscript

In the end, we realized that registering a lot of percolators has a performance hit on regular search queries, so we decided to run a separate percolator cluster where we only register our search queries and make percolator API calls; no data is stored there. Keeping the clusters separate ensures that regular search performance is not degraded and that both searches and live notifications stay fast and scalable.

If you’re interested in working with Erlang, Tsung, Elasticsearch and related topics, we are hiring, join us!
