Performance Tests — Better, Faster, Stronger

We use a continuous delivery methodology, which means we deploy several times a day thanks to our feature development cycle. The cycle includes different types of automated tests that run on the developers’ branches in the CI server after each commit. This provides fast feedback to the developer and helps find bugs during feature development.

One of our goals is to provide the fastest user experience in the e-commerce industry throughout the year, and especially under the high load of the holiday season. The combination of this goal and the many code changes we ship every day forces us to run performance tests on production code at least once a day.

Performance test once a day — is it enough?!

As part of our work, we strive to establish a performance testing culture across the company.
We wanted to give developers fast feedback on performance, the same way we do with functional tests. That means we wanted to run performance tests for each commit.
While establishing this culture, we encountered some challenges that made us question whether we should integrate performance testing into the development cycle at all.


There are many reasons not to do it; here are some of the ones that held us back:

“I’m a developer, leave the testing to the QA”

Running performance tests requires new tools and an expansion of the common developer’s skill set.
Tools like JMeter and LoadRunner don’t seem very attractive to developers.

Comparing “apples to apples”

Environment stability is influenced by many factors, and any of them can lead to inconclusive results.
Maintaining a stable environment is extremely hard, so you are likely to encounter inconsistency between test runs.
Some will pass and some will fail for the wrong reasons, making them unreliable.

Why the fuss?

Performance testing is a complex process.
You define the flow, write a script, create data, analyze the results and then sanitize your environment for the next iteration.
This process inflates the feature development cycle, which is already complex.
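
To give a sense of the “write a script” step, here is a minimal sketch of a load-test flow using Locust (just one of many possible tools; the user behavior, endpoints, and weights are hypothetical):

```python
from locust import HttpUser, task, between

class ShopperUser(HttpUser):
    # Simulated "think time" between user actions
    wait_time = between(1, 3)

    @task(3)  # weight: browsing happens 3x more often than searching
    def browse_category(self):
        # Hypothetical endpoint; replace with a real flow from your product
        self.client.get("/categories/shoes")

    @task(1)
    def search(self):
        self.client.get("/search", params={"q": "red sneakers"})
```

Running it is a one-liner, e.g. `locust -f perf_flow.py --host https://staging.example.com --users 50 --spawn-rate 5`, which already hints at the remaining steps: you still need data, result analysis, and a clean environment afterwards.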

It *feels* fast enough…

Sometimes you have no prior test results to compare against, and no performance SLA has been defined by the product.
Given no benchmark or requirements, if it *feels* fast enough, you tend to go with your gut.

Move fast and break things

Continuous Delivery has changed our lives.
Our progress is fast and gradual, so we can fix things on the go.
On the other hand, testing every change takes its toll, and it doesn’t seem worth it.


And yet, performance testing is worth it!

Here is why you should integrate performance testing into your development cycle:

Tests protect your code

Performance tests protect your code the same way functional tests do

Your code was designed perfectly according to the initial requirements and system state.
However, the system has changed over time, making your component slower and damaging the user experience.
A simple performance test will protect it and reflect the impact of your (or someone else’s) changes on your code.
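
As a sketch of what such a protective test could look like (the endpoint, sample size, and the 800 ms budget are made-up values for illustration), a simple latency assertion can live right next to your functional tests:

```python
import time
import requests

# Hypothetical values: pick a real endpoint and a budget you agree on
URL = "https://staging.example.com/search?q=shoes"
MAX_P95_MS = 800
SAMPLES = 20

def test_search_latency_budget():
    timings_ms = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        response = requests.get(URL, timeout=5)
        timings_ms.append((time.perf_counter() - start) * 1000)
        assert response.status_code == 200
    timings_ms.sort()
    p95 = timings_ms[int(SAMPLES * 0.95) - 1]  # crude p95 over a small sample
    assert p95 <= MAX_P95_MS, f"p95 {p95:.0f}ms exceeds the {MAX_P95_MS}ms budget"
```

When a change pushes the page past its budget, the build fails, exactly like a functional regression would.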

Performance tests can become an essential part of your sanity

Well-designed performance tests are another protective layer

As the years go by, performance tests have proven themselves quite essential to the sanity process.
They prevent critical problems in your production environment better than existing functional tests do, simply because they run on an actual environment with simulated traffic.

Better safe than sorry

The sooner issues are detected, the simpler and cheaper the solution will be.
If this is true for functional problems, it is even more significant for performance issues.

As I mentioned before, performance may be affected by many factors; therefore, investigating and defining a performance issue is complicated and takes time.
While you are developing a feature, you are “holding all the cards”, which shortens the investigation and fixing periods.

The ability to get fast performance feedback will shorten the development process and prevent expensive problems in the future.

Make the developers understand the “why” and not just the “how”

A developer needs to fix a functional bug. To fix it, he adds a filter to an Elasticsearch query. The QA engineer validates the fix and it is deployed.
As a result, the cache is invalidated frequently, more resources are required, the page becomes 10% slower and load endurance decreases.

If only the developer had been aware of the performance degradation, he would have researched the issue, tried to resolve it, understood why it happened, and gained a better understanding of how the underlying Elasticsearch works.
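
To make the scenario concrete, here is a hypothetical sketch of such a fix (the field names and values are invented). In Elasticsearch, a range filter on a non-rounded `now` cannot be reused from the query cache, because its value changes on every request; rounding the date restores cacheability:

```python
# Hypothetical queries for illustration; field names and values are made up.
buggy_fix = {
    "bool": {
        "filter": [
            {"term": {"status": "active"}},
            # Problematic: "now-1h" is re-evaluated on every request, so this
            # filter can never be served from Elasticsearch's query cache.
            {"range": {"updated_at": {"gte": "now-1h"}}},
        ]
    }
}

cache_friendly_fix = {
    "bool": {
        "filter": [
            {"term": {"status": "active"}},
            # Rounding to the hour ("/h") keeps the value stable for an hour,
            # so the filter result becomes cacheable again.
            {"range": {"updated_at": {"gte": "now-1h/h"}}},
        ]
    }
}
```

A functional test would pass either way; only a performance test would surface the difference.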

OK, you were tempted by the performance-aholic!
Now you’re convinced and understand why you should include performance testing as part of your development cycle.
In upcoming posts, I’ll describe the challenges we’re facing in order to make this happen:

  1. Writing performance tests in a simple manner similar to writing integration tests.
  2. Running and analyzing results as part of the CI process.
  3. Creating a dependable and stable environment for performance testing.
  4. Involving the Product Managers in defining performance requirements.

Thanks,
Tom Sender