Basic Steps of a Load Test with Tomcat

Bogeun Kim

--

South Korea has a recruitment system that is quite different from those in other countries.

Our web service supports this system to make the whole recruitment process more efficient. Some events generate heavy traffic for a very short time, such as 100-150k requests in one minute when successful candidates are announced.

We have to figure out the performance of the system in advance, and we have to be able to decide how many servers/Tomcats are needed to handle that traffic. The first step is measuring the performance of a single Tomcat.

So I’m going to explain how to do this.

1. Estimate what the result should be

Throughput (transactions/sec) and latency (response time) are good indicators of performance. I already had some experience with how many requests a single Tomcat can process.

  • Expected Throughput: over 1,000 TPS
  • Expected Latency: under 100 ms

Throughput and latency are also affected by whether a given web page runs heavy logic (business logic, DB queries, and so on), so I set the minimum performance targets as above.
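For a rough sense of scale based on the numbers above: 150,000 requests in one minute is about 2,500 requests per second on average, so if a single Tomcat meets the 1,000 TPS target, roughly three instances (plus headroom for bursts) would be enough to cover the peak, assuming the traffic is spread evenly across them.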

2. Set up the test environment

JMeter supports a client-server (distributed) structure to generate a large number of requests stably.

Architecture for load test with JMeter
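
The exact setup depends on your environment, but as a minimal sketch of the client-server structure (the IP addresses below are placeholders, and disabling RMI-over-SSL is only appropriate on a closed test network):

# On each JMeter server (load generator): start the RMI server process.
# Since JMeter 4.0, RMI traffic is encrypted by default, so either create
# a keystore with bin/create-rmi-keystore.sh or disable SSL as below.
./jmeter-server -Jserver.rmi.ssl.disable=true

# On the JMeter client: list the generators in bin/jmeter.properties
# (placeholder addresses) and disable SSL there as well if you did above.
#   remote_hosts=10.0.0.11,10.0.0.12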

3. Configure Tomcat to process a large number of requests

I want to test only one Tomcat, because we can estimate the performance of multiple Tomcats based on that. Tomcat has many options, but I will introduce the representative ones that affect performance.

Tomcat Info

  • apache-tomcat-8.5.53 (Binary Distributions)

conf/server.xml

<Connector port="8080"
           protocol="HTTP/1.1"
           connectionTimeout="630000"
           redirectPort="8443"
           maxPostSize="52428800"
           URIEncoding="UTF-8"
           keepAliveTimeout="10000"
           maxThreads="2000"
           minSpareThreads="20"
           processorCache="2000"
           server="bws" />
  • keepAliveTimeout:
    The number of milliseconds this Connector will wait for another HTTP request before closing the connection
  • maxThreads: The maximum number of request processing threads
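
If you want to confirm that the connector's thread pool actually grows under load, one hypothetical check (assuming the default NIO connector on port 8080 and the standard Tomcat startup class) is to count its worker threads in a thread dump:

# Count HTTP worker threads of the running Tomcat process (thread names
# follow the "http-nio-8080-exec-N" pattern for the default NIO connector).
jcmd $(pgrep -f org.apache.catalina.startup.Bootstrap) Thread.print \
  | grep -c 'http-nio-8080-exec'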

bin/setenv.sh

CATALINA_OPTS="$CATALINA_OPTS -Xms2G"
CATALINA_OPTS="$CATALINA_OPTS -Xmx2G"
  • -Xms: Sets the initial size (in bytes) of the heap
  • -Xmx: Specifies the maximum size (in bytes) of the heap
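
One quick way to confirm the heap options were actually picked up after a restart (a simple sketch; the grep pattern assumes a standard Tomcat process) is to look at the running JVM's arguments:

# Print only the -Xms/-Xmx arguments of the running Tomcat process.
ps -ef | grep '[c]atalina' | tr ' ' '\n' | grep -E '^-Xm[sx]'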

4. Make JMeter scripts (*.jmx) that describe the requests to send to the server, and execute them

JMeter recommends using GUI mode to create the scripts because it is the easiest way. But for the actual test, it is better to use CLI mode.

The JMeter user’s manual says that CLI mode must be used for load testing.
JMeter GUI mode in Windows.

The created script is placed on the JMeter client. Then we can execute a command like the one below to send the requests as described above.

TS=$(date '+%Y%m%d_%H%M%S')
sh jmeter.sh -n \
  -t scripts/sample-test.jmx \
  -l output/${TS}/result_${TS}.jtl \
  -j output/${TS}/log_${TS}.log \
  -e -o output/${TS}/report \
  -r -X

The scripts and output directories are ones I created myself to keep things organized, and the results of each test are stored under a path made up of the date and time.

  • -t: JMeter scripts (*.jmx) path
  • -l: Name of JTL file to log sample results to
  • -j: Name of JMeter run log file
  • -e: Generate report dashboard after load test
  • -o: Output folder where to generate the report dashboard after load test.
  • -r: Run the test in the servers specified by the JMeter property “remote_hosts”
  • -X: Means exit the servers at the end of the test

You can check the Tomcat logs or the CPU utilization to confirm that it is really receiving requests from JMeter.
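
For example, on the Tomcat host you can watch the access log and the CPU at the same time (the access log file name depends on your AccessLogValve settings; the one below is the default pattern):

# Watch incoming requests and system load while JMeter is running.
tail -f logs/localhost_access_log.$(date '+%Y-%m-%d').txt
top    # or: vmstat 1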

5. Check the load test results with the report dashboard

After the load test, JMeter creates a report at the output path specified with the -e/-o options. It is an HTML report, so we can view the results in a browser.

Load Test Report Dashboard created from JMeter.
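
If you already have a .jtl results file but skipped the -e/-o options, the same dashboard can also be generated afterwards; the paths below are just examples:

# Generate the HTML dashboard from an existing results file.
# (the -o folder must be empty or not exist yet)
sh jmeter.sh -g output/20200401_1200/result_20200401_1200.jtl \
  -o output/20200401_1200/report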

You can analyze it from various perspectives (TPS, latency, active threads, and so on).

6. Analyze the result

First, we can see the statistics of the test. The test is only valid if the error rate is 0%; if there are failed responses, you have to find out why before anything else.
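
If the error rate is not 0%, the raw .jtl file is a good place to start digging. As a rough sketch, assuming the default CSV result format (where the label is the 3rd field, the response code the 4th, and the success flag the 8th):

# Summarize failed samples by response code and label.
awk -F, 'NR > 1 && $8 == "false" { print $4, $3 }' result.jtl | sort | uniq -c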

Statistics of the result.

Average response time (latency) is also an important indicator of Tomcat's processing performance. We want it to be less than 100 ms. The '0.46 ms' in the picture above came out well because it was a simple index page.

TPS means transactions per second: the number of requests processed in one second. TPS and average response time (latency) are the most commonly used indicators for grasping performance.

There are many other graphs that make the results easier to understand and show how the test progressed.

  • Response Times Over Time (Latency)
  • Codes per Second (HTTP status)
  • Active Threads Over Time (per JMeter Server)
  • Total Transactions Per Second

7. Conclusion

Finally, we got the results of the load test with a single Tomcat.

With a simple index page

  • TPS: 12,100
  • Latency: 99.8 ms
  • Active Users: 1,500

With a page that does some DB processing
(selecting 10 rows from 1 table)

  • TPS: 1,460
  • Latency: 92.8 ms
  • Active Users: 140
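
As a rough cross-check, these three numbers are tied together by Little's Law (concurrent users ≈ TPS × response time): for the DB page, 1,460 × 0.0928 s ≈ 135, which is close to the 140 active users reported.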

Environment

  • AWS EC2 m5.large (1 core, 2 vCPU, Memory: 8 GiB)
  • Tomcat-8.5.53 (Thread: 1,000, Heap: 2GB)

With the overall process above, we can run the load test. Cheer up! :)

8. Issue

No matter how hard I tried, CPU utilization on the AWS EC2 instance (m5.large) would not go above 80% with only the Tomcat process running. We can make some guesses about why: there might be limiting factors such as network bandwidth or the maximum number of file descriptors on Linux. So far we haven't found the cause.

Furthermore, on an AWS EC2 c5.xlarge (2 cores, 4 vCPU, 8 GiB), it didn't go above 50% even though we increased the active users to over 20,000. There were spare resources for network bandwidth, file descriptors, and so on. AWS Support couldn't find the reason either.

CPU utilization on the AWS EC2 c5.xlarge (2 cores, 4 vCPU, 8 GiB) didn't go above 50%. Why did system CPU time occupy half of the total?
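
When the CPU refuses to saturate like this, a few generic places to look on the Tomcat host are per-core user/system/softirq time, socket states, and file-descriptor limits. For example (assuming the sysstat and iproute2 packages are installed):

mpstat -P ALL 1              # user vs. system vs. softirq time per core
ss -s                        # socket summary (e.g. TIME_WAIT buildup)
cat /proc/sys/fs/file-nr     # system-wide file descriptors in use / limit
ulimit -n                    # per-process descriptor limit for this shell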

If you want to use a server optimized for cost or performance, you should also make sure that the server can actually make full use of its own resources.
