There is often a somewhat contentious relationship between developers and marketeers. But with the growth of cloud computing and the rise of data-driven decision making within firms, the need for these teams to work together is growing rapidly.
But why is that? Suppose your company is investing heavily in a marketing campaign to launch a new product or service. The marketeers' job is to make sure this launch is a success and leads to new customers and an increase in revenue. If the product or service is unavailable due to technical complications, the marketing campaign will not thrive and might even fail.
Marketeers will hold the development team responsible for this situation, even though it might not have been their mistake. Before starting to build the new product or service, the business requirements discussed with the developers should include the expected number of (concurrent) users. Both marketeers and developers should respect these expectations and share as much relevant data as possible.
One of the most unexpected convergence points for these two groups is when and how to do successful load testing.
Making this convergence point work well was precisely the challenge we faced when creating a social TV guide application a few years ago. That is why we decided to use LoadImpact to test the performance of our application.
The First Test
One of the most important things when building a mobile application is scalability: the more users the application can handle, the more profit you can potentially generate. Performance should therefore be maximized in order to get the most out of your app. When our development team started working on this application, they respected most of the general principles for building scalable software. Keeping this in mind, let's have a look at the first load testing results. (At the client's request, the exact numbers are blurred.)
The green dotted line indicates the number of concurrent users (right vertical axis) LoadImpact is sending to our application, while the blue dotted line indicates the average loading time (left vertical axis) those users experience. The average load time stays acceptable while the blue line runs below the green one, until the two first cross at 13:26:30, the point where the average loading time first exceeds the number of concurrent users. After this crossing point, the average loading time increases relatively faster than the number of concurrent users. Then something unexpected happens between 13:28:00 and 13:28:30: the average loading time suddenly drops steeply.
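To make this ramp-up pattern concrete, here is a minimal, self-contained Python sketch that does on a tiny scale what LoadImpact does for real: it ramps up the number of concurrent users against a local stand-in server and records the average loading time at each step. The `Handler` class, the simulated 10 ms of server-side work, and the chosen concurrency levels are all illustrative assumptions, not details of our actual setup.

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Tiny local server standing in for the application under test.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.01)  # simulate some server-side work (assumed value)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def timed_request(_):
    # Measure one full request/response round trip.
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# Ramp up concurrency and record the average load time at each step --
# the same two series LoadImpact plots against each other.
for concurrent_users in (1, 5, 10):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(timed_request, range(concurrent_users * 4)))
    avg = sum(times) / len(times)
    print(f"{concurrent_users:>3} users -> avg load time {avg * 1000:.1f} ms")

server.shutdown()
```

At real scale you would watch for the point where average load time starts climbing faster than the user count, which is exactly the crossing point described above.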
This is when the API crashed and returned an error, resulting in much faster loading times because the application could no longer connect to the API. For our developers, this point should be considered the maximum number of concurrent users our application could handle. However, the marketing team had data showing the impact of loading time on user experience, and considered the crossing point of both lines to be our maximum number of concurrent users. As this number was smaller than the number of users described in the business requirements, both teams decided to take action and realign their expectations. This resulted in a new, higher number of concurrent users the application should be able to handle.
As the application was already built with scalability in mind, the first step we could take was increasing the number of servers to scale the application horizontally. This improved performance, much to the delight of the marketeers. But as servers are costly, the developers decided to look into optimizing the application itself as well. For example, we chose to serve more (static) assets over a CDN, refactor our code, and apply client-side caching (we already had server-side caching in place). After some local tests, we ran a new test on LoadImpact.
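Client-side caching, one of the optimizations mentioned above, comes down to sending the right response headers so the browser can reuse assets instead of re-downloading them. The sketch below is a hypothetical illustration, not code from our application: the `cache_headers` and `handle_request` helpers are invented for this example. It serves an asset with `Cache-Control` and `ETag` headers, then answers a revalidation request with 304 Not Modified so no body is transferred.

```python
import hashlib

def cache_headers(body, max_age=3600):
    """Headers that let browsers cache a static asset client-side."""
    etag = '"%s"' % hashlib.md5(body).hexdigest()
    return {
        "Cache-Control": f"public, max-age={max_age}",
        "ETag": etag,
    }

def handle_request(body, if_none_match=None):
    """Return (status, headers) for a request carrying an optional If-None-Match."""
    headers = cache_headers(body)
    # If the client already holds the current version, answer 304 Not Modified
    # with no body: the asset is served from the browser cache instead.
    if if_none_match == headers["ETag"]:
        return 304, headers
    return 200, headers

asset = b"body { margin: 0 }"
status, headers = handle_request(asset)           # first visit: full response
status2, _ = handle_request(asset, headers["ETag"])  # revalidation: nothing re-sent
print(status, status2)
```

Fewer full asset downloads per user means less load per request on the application servers, which is why this kind of optimization shows up directly in load-test results.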
IMPORTANT: The colors of the lines have changed: the green dotted line now indicates the average loading time, and the blue dotted line shows the number of concurrent users.
Wow! The results stunned the marketeers, as the average loading time of the application had dropped dramatically, especially in comparison to the number of concurrent users. What would have happened if the marketing campaign had run with the performance from the first test?