Performance Testing of Event-Driven Microservices
Let’s go back to an earlier time in software development…
… the quest to deliver best-in-class products, coupled with intensifying competition, led to an increased pace in software development cycles. These cycles allowed little time to think about the repercussions of building complex monolithic applications. As they continuously evolved, the complexity of these applications constantly increased, making them ever more difficult to maintain…
Fast forward to the present. Tech professionals have experienced firsthand the pitfalls of building such complex monolithic applications. Issues include, but are not limited to:
- Large monoliths are difficult to maintain, and tracking down software bugs becomes a tedious task.
- They share common libraries. Updating or adding libraries can lead to dependency issues that, if not handled properly, create an inconsistent system. Updates can also require a whole-system restart, which in turn leads to increased downtime.
- Monoliths represent a technology lock-in for developers, since they're bound to the software language and frameworks of the original application.
- Monoliths limit scalability. The usual strategy of creating multiple instances to handle incoming traffic may not work here due to the complexity and interdependency of resources.
Gradually, the need to build software applications from smaller, more maintainable services became apparent. This gave rise to anti-monolithic patterns, microservices-based architecture being one of them.
What Is Microservices-Based Architecture?
A microservice is a “minimal independent process interacting via messages.” A microservice can be more granular and task-oriented than a traditional service, meaning microservices are generally smaller, dedicated, independently deployable, easier to test, scalable, maintainable, and able to take advantage of a distributed architecture.
A microservices architecture becomes “a distributed application where all its processes are microservices, which are smaller, independent, dedicated and are independently deployable, testable, scalable and maintainable.”
Microservices-based architecture is a more recent extension of service-oriented architecture (SOA). However, a key difference is that SOA focuses more on reusability and segregation, whereas microservices-based architecture focuses more on replacing large applications with systems that can incrementally evolve and are easier to manage using smaller services.
This previous Capital One DevExchange article, Delivering Microservices for Enterprise with DevOps, gives insight into microservices architecture from an enterprise standpoint.
Often, microservices have smart endpoints and are event-driven; i.e., based on an event, they receive a request, process it, and generate a response accordingly. Less frequently, microservices with HTTP endpoints expose APIs to external applications. From an application perspective, an application consists of numerous microservices, each performing a dedicated task in a system of well-coordinated, loosely coupled services.
But this raises a question: “How does one test an event-driven microservice, which does not have an HTTP endpoint, and is not accessible directly from any web interface?”
Microservices in Action
Nowadays, applications in e-commerce, finance, healthcare, and many other sectors use microservices-based architecture. Let’s illustrate how a microservice works in the financial sector.
Consider a money transfer transaction, which involves various components and interfaces: a web application, a REST API, a microservice, a message broker, and a NoSQL database.
1. User logs into the web application
2. User performs a transaction
3. Message is sent to the TRANSACTION INITIATED topic of the message broker
4. All the subscribed queues receive the message, and the RECORDCREATOR microservice (listening to one of those queues) analyzes it
5. The microservice creates a transaction record in the database and sends a message to the RECORD CREATED topic of the message broker
6. The FILECREATOR microservice then picks up the message and generates a data file
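The message published in step 3 can be sketched as a small payload class. This is purely illustrative: the field names, amounts, and JSON shape below are assumptions, not taken from any real schema.

```java
import java.util.Locale;

// Hypothetical payload for the TRANSACTION INITIATED topic. Field names
// and the JSON layout are illustrative assumptions, not a real schema.
public class TransactionInitiatedMessage {
    private final String transactionId;
    private final double amount;
    private final long timestampMillis;

    public TransactionInitiatedMessage(String transactionId, double amount, long timestampMillis) {
        this.transactionId = transactionId;
        this.amount = amount;
        this.timestampMillis = timestampMillis;
    }

    // Serialize to a simple JSON string for publishing to the message broker.
    public String toJson() {
        return String.format(Locale.ROOT,
            "{\"transactionId\":\"%s\",\"amount\":%.2f,\"timestamp\":%d}",
            transactionId, amount, timestampMillis);
    }
}
```

A message like this travels unchanged from the web application through the broker to every subscribed queue; only the consuming microservice interprets it.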
How to Functional Test a Microservice
Before jumping into performance testing, let’s examine an approach to functionally test a microservice.
In the example above, each microservice is triggered when an event occurs, and all interactions are message-based. To test the RECORDCREATOR microservice, a quality engineer can send a message to the message broker and validate the data inserted into the NoSQL database. This confirms the microservice’s functionality!
If the sole functionality of a microservice is to send a message to other queues, subscribe multiple queues to the same topic: one queue dedicated to the intended functionality, and another “test queue” to check whether the message broker forwarded the message.
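The shape of such a functional test can be simulated end to end. In the sketch below, an in-memory queue stands in for the broker and a map stands in for the NoSQL database; both stand-ins, and all names, are assumptions for illustration. A real test would publish to the actual broker and query the actual database.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal simulation of functionally testing an event-driven microservice:
// publish a message, let the service consume it, then validate the "database".
public class RecordCreatorFunctionalTest {

    static boolean runScenario() throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();  // stands in for the SQS queue
        Map<String, String> database = new ConcurrentHashMap<>();   // stands in for the NoSQL database

        // Stand-in for the RECORDCREATOR microservice: consume one message
        // and create a transaction record keyed by transactionId.
        Thread recordCreator = new Thread(() -> {
            try {
                String message = queue.take();
                String transactionId = message.split(":")[0];
                database.put(transactionId, "CREATED");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        recordCreator.start();

        // The test itself: send a message to the broker...
        queue.put("txn-42:250.00");
        recordCreator.join(1000);

        // ...and validate the data inserted into the database.
        return "CREATED".equals(database.get("txn-42"));
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runScenario() ? "functional test passed" : "functional test failed");
    }
}
```

The key idea carries over directly: the test never calls the microservice; it only triggers the event and asserts on the observable side effect.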
How to Performance Test a Microservice
Functional testing of a microservice guarantees the quality of the intended requirement. However, a best-in-class application undergoes functional, performance, and security tests. Additionally, in a microservices-based architecture, there is a need to performance test all the individual components of an application (the microservice, API, database, and web application) as well as the application as a whole. The performance tests of these individual components lay the foundation for the overall application performance.
Let’s look at how we can performance test an event-driven microservice. Below is the list of the components and tools involved in the performance test for a RECORDCREATOR microservice:
- Amazon SNS/SQS — the message broker (topics and queues)
- NoSQL database — where the microservice writes transaction records
- JMeter — A Java application designed to load test functional behavior and measure the performance of an application under test.
- InfluxDB — An open-source time series database.
- Grafana — An open-source metrics analytics and visualization suite, most commonly used for visualizing time series data for infrastructure and application analytics.
- CloudWatch — Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications that run on AWS.
Performance testing encompasses a wide variety of tests, including but not limited to load tests, stress tests, and soak tests. In this example, we will perform a load test for a fixed duration to understand the throughput (i.e., messages processed per unit of time) of a microservice.
1. Microservice is deployed on an AWS EC2 instance and CloudWatch monitoring is enabled for the EC2 instance and SQS queue.
2. Message size conforms to the requirements of SQS message size.
3. Amazon SQS is subscribed to a valid SNS topic.
4. Test execution is performed on an independent AWS EC2 instance.
5. An InfluxDB instance is available, and the parameters in the influxdb.conf file are updated to support Graphite as an input source.
6. A Grafana dashboard instance is available.
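The influxdb.conf change in step 5 typically amounts to enabling the Graphite input (shown below for InfluxDB 1.x; the database name is an example, not from the original setup):

```toml
# influxdb.conf — enable the Graphite input so JMeter's Backend Listener
# can write time series data over the Graphite protocol.
[[graphite]]
  enabled = true
  bind-address = ":2003"   # default Graphite listener port
  protocol = "tcp"
  database = "jmeter"      # example database name; metrics land here
```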
To performance test the RECORDCREATOR microservice, send numerous messages to the Amazon SNS topic using JMeter. Then calculate the service throughput with the help of SQS metrics such as NumberOfMessagesSent, NumberOfMessagesDeleted, and ApproximateNumberOfMessagesVisible from Amazon CloudWatch, and the response time based on the timestamp difference between the message arriving in Amazon SQS and the transaction record being created in the NoSQL database. A point to note: a valid message is deleted from SQS only when the microservice processes and acknowledges the message before its visibility timeout expires.
For reference, CloudWatch metric definitions for Amazon Simple Queue Service are:
- NumberOfMessagesSent — The number of messages added to a queue.
- NumberOfMessagesDeleted — The number of messages deleted from a queue.
- ApproximateNumberOfMessagesVisible — The number of messages available for retrieval from the queue.
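The arithmetic described above is straightforward; a minimal sketch follows. The class and method names are mine, not from any AWS API, and the inputs would come from CloudWatch metrics and application timestamps.

```java
// Sketch of the throughput and response-time calculations described above.
public class MicroserviceMetrics {

    // Throughput in messages/second over the test window, using the
    // NumberOfMessagesDeleted metric (messages acknowledged by the service).
    public static double throughputPerSecond(long numberOfMessagesDeleted, long testDurationSeconds) {
        return (double) numberOfMessagesDeleted / testDurationSeconds;
    }

    // Response time for one transaction: time between the message landing
    // in SQS and the corresponding record appearing in the NoSQL database.
    public static long responseTimeMillis(long sqsMessageTimestampMillis, long dbRecordTimestampMillis) {
        return dbRecordTimestampMillis - sqsMessageTimestampMillis;
    }
}
```

For example, 7,200 messages deleted over a 1,200-second test gives a throughput of 6 messages per second.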
1. Create a Java class that takes a transactionId and sends a message to the Amazon SNS topic.
2. Have this class extend AbstractJavaSamplerClient to make it compatible with JMeter’s Java Request sampler.
3. Convert this Java class into a jar file and place it in the lib/ext folder of the JMeter installation directory. Also place the dependent jar files in the same directory, especially the jars required by the imports in the class.
4. Open JMeter and create a new test plan. Add a thread group to the plan; this lets us configure the number of users (threads), the test duration, and the loop count.
5. Add a Random Variable component to the thread group. This helps in creating a unique transactionId.
6. Add a Java Request sampler and select the class from the jar file (created in step 3) in the class name dropdown. Pass the transactionId (created in step 5) as a parameter to the class.
7. Add a Backend Listener to the Java Request sampler and choose GraphiteBackendListenerClient as the implementation (JMeter > 2.9). Configure the parameters to point it to the InfluxDB instance. This saves the time series metrics to InfluxDB, which in turn are viewed on the Grafana dashboard.
8. To perform a load test for a fixed time, in the thread group set the number of threads to 6, the duration to 1200 seconds, and the loop count to forever.
9. Execute the test, and it will populate messages in the desired SQS queue, which is subscribed to the SNS topic as specified in the Java class.
10. Open CloudWatch and monitor the queue. (If the microservice you’re testing is active, it should process all the incoming messages.)
11. See real-time results in Grafana by creating a dashboard that takes time series metrics from InfluxDB and CloudWatch.
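Steps 1 to 3 can be sketched as follows. This is a sketch, not the original implementation: it assumes the AWS SDK for Java v1, and the topic ARN and parameter names are examples. It only compiles with the JMeter and AWS SDK jars on the classpath, and both jars must ship alongside it in lib/ext.

```java
import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.model.PublishRequest;

// Sketch of a JMeter Java Request sampler that publishes one message
// per sample to an SNS topic.
public class SnsPublishSampler extends AbstractJavaSamplerClient {

    private AmazonSNS sns;

    @Override
    public Arguments getDefaultParameters() {
        Arguments args = new Arguments();
        // Example ARN; replace with the real TRANSACTION INITIATED topic.
        args.addArgument("topicArn", "arn:aws:sns:us-east-1:123456789012:TRANSACTION-INITIATED");
        // Fed by the Random Variable component configured in step 5.
        args.addArgument("transactionId", "${transactionId}");
        return args;
    }

    @Override
    public void setupTest(JavaSamplerContext context) {
        sns = AmazonSNSClientBuilder.defaultClient(); // uses the default credential chain
    }

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult();
        result.sampleStart();
        try {
            String message = "{\"transactionId\":\"" + context.getParameter("transactionId") + "\"}";
            sns.publish(new PublishRequest(context.getParameter("topicArn"), message));
            result.setSuccessful(true);
        } catch (Exception e) {
            result.setSuccessful(false);
            result.setResponseMessage(e.getMessage());
        } finally {
            result.sampleEnd();
        }
        return result;
    }
}
```

Each thread in the thread group calls runTest in a loop, so the sample count JMeter reports maps one-to-one to messages published to the topic.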
Through JMeter, we continuously send messages to the SNS topic, and these messages are forwarded to the SQS queues. A microservice listening to the desired queue analyzes each incoming message and completes the intended task. Once the task is completed, it acknowledges the message, and the message is deleted from the queue. The NumberOfMessagesDeleted metric from AWS CloudWatch then gives us the throughput of the microservice.
Performance Test Visualization
Add CloudWatch and InfluxDB as data sources in the Grafana instance. JMeter results are sent to InfluxDB via the Graphite listener; these, along with CloudWatch metrics from AWS, are then viewed on the Grafana dashboard by configuring the necessary panels. Metrics such as the number of threads come from InfluxDB, while metrics such as NumberOfMessagesSent, NumberOfMessagesDeleted, ApproximateNumberOfMessagesVisible, EC2 CPUUtilization, ECS CPUUtilization, and ECS MemoryUtilization come from CloudWatch.
Install the Jolokia plugin on the server and send JVM metrics to understand system behavior during the performance test, including but not limited to heap memory usage and service thread count. This additional feature makes it feasible to see the real-time performance of the ecosystem on the Grafana dashboard. Note that AWS CloudWatch sends metrics to the Grafana dashboard in 5-minute periods; to send metric data in 1-minute periods, detailed monitoring needs to be enabled on the instance.
A similar approach can be followed to test any microservice that listens to a queue and writes to a database.
A performance test can also be run in a Docker container and integrated into the microservice’s CI/CD pipeline, supporting the independently deployable, testable, and scalable nature of the microservice.
Microservices-based architecture is evolving, and developers are embracing the change. Increasing business needs and competition demand a best-in-class product, which makes functional and performance testing of microservices-based applications more crucial than ever. Event-based microservices depend on events, and they are testable as long as one can trigger an event and validate the intended behavior. Each microservice needs to be thoroughly performance tested and validated to determine application stability and elasticity. This ensures the microservice can handle a high volume of event traffic and the application can qualify as a best-in-class product.
For more on APIs, open source, community events, and developer culture at Capital One, visit DevExchange, our one-stop developer portal. https://developer.capitalone.com/