Adding Scalability Testing to your Jenkins CI/CD pipeline with Stacktical
Stacktical lets you test and analyse the scalability of your front-end, back-end API and other services in your tech stack. It makes it easier than ever before to fix the scalability bottlenecks of your product and handle as many concurrent users as you can squeeze from your infrastructure.
Continuous Scalability Testing lets you fix scalability bottlenecks at the speed of Continuous Delivery. Using Stacktical’s Docker application, you can now automatically meet concurrency objectives for every new release of your product, and easily keep all scalability stakeholders in the loop.
This article focuses on adding scalability testing to your Jenkins-powered CI/CD pipeline, but it also applies to other CI software like CircleCI, Concourse and more.
The Age of Continuous
Ever since the industrial revolution, our world has seen its fair share of automation technologies. It is no surprise that shipping software has ultimately become a matter of automatically moving and validating development on the Continuous Integration and Continuous Delivery “conveyor belt”.
With the rise of infrastructure-as-code technologies like Terraform and container-based ecosystems leveraging Docker and Kubernetes, we are increasingly moving away from human intervention when it comes to integration and delivery.
Even though not everybody is doing Continuous Integration, and (very) early stage startups can probably manage to deliver without a pipeline, it is not too far-fetched to state that the CI/CD pipeline is at the heart of the modern, technology-enabled company that wants to stay competitive.
Still, it is certainly no small challenge to quickly connect the engineering efforts of your teams to your customers while still meeting the reliability standards that keep everybody happy.
Now that you ship a testable and operable software… Will it scale?
Why Scalability Testing
By abstracting hardware resources with virtualization technologies, we’re now able to control our infrastructure capacity using software. In the case of Cloud Computing, we even have a virtually unlimited amount of capacity at our disposal.
We have reached a point where matching user traffic with hosting capacity comes down to configuring that software.
For example, if you attach an AWS Application Load Balancer (ALB) to your Auto Scaling group, you can create auto scaling policies that use the ALB’s ActiveConnectionCount metric to scale your application automatically.
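For instance, a target-tracking scaling policy built on ActiveConnectionCount might look like the configuration below. This is only a sketch: the target value of 1000 connections and the load balancer dimension are placeholders you would derive from your own scalability tests, not prescribed values.

```json
{
  "TargetValue": 1000.0,
  "CustomizedMetricSpecification": {
    "MetricName": "ActiveConnectionCount",
    "Namespace": "AWS/ApplicationELB",
    "Dimensions": [
      { "Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef" }
    ],
    "Statistic": "Average"
  }
}
```

You would save this as policy.json and attach it to your Auto Scaling group with aws autoscaling put-scaling-policy --policy-type TargetTrackingScaling --target-tracking-configuration file://policy.json.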
But how many active connections are we talking about exactly?
Scalability Testing gives you the answer to just that.
Performance is not Scalability
If you want to serve your pages in less than three seconds, you’re going to have a tough time determining how many servers you’ll need to satisfy that requirement:
- Load Testing only gives you raw performance metrics that you need to analyze into scalability assumptions.
- You can’t appreciate how a service scales without drawing a chart, meaning dozens to hundreds of load tests.
- You can’t identify your service’s peak scalability without actually reaching and crossing it during a load test.
So while Performance Testing will help you validate your three-second SLA, it is truly Scalability Testing that will help you turn those three seconds into a scalability insight and configure your hosting capacity.
This is especially true since Stacktical enables you to do all three within minutes, thanks to predictive analytics and AI.
Continuous Scalability Testing
Just like it is the case with Performance Testing, Scalability Testing requires you to provision an environment that is identical to production in terms of software and hardware configuration. This could be your development, testing, staging or any other suitable environment. Anything goes, as long as you are able to test the scalability of your production without directly hitting it.
You are also free to engineer the CI/CD pipeline around the principles that work for your agile development team. At Stacktical, we deploy our release branch to staging whenever our build passes Unit, Integration and E2E tests.
After that, our build undergoes load testing and scalability testing, before possibly moving to the QA stage. With scalability testing, we can now reject builds that don’t validate our capacity requirements (and you should too).
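A build-gating step of this kind can be sketched as a shell script. Note that run_scalability_test and the 500-user requirement below are hypothetical stand-ins for illustration, not Stacktical’s actual interface:

```shell
#!/bin/sh
# Hypothetical build gate for a Jenkins "Execute shell" step.
# run_scalability_test stands in for the real scalability test command;
# it is assumed to print the measured peak concurrency.
run_scalability_test() {
  echo 850   # pretend the test measured a peak of 850 concurrent users
}

REQUIRED_CONCURRENCY=500   # assumed capacity requirement for this service

measured=$(run_scalability_test)
if [ "$measured" -lt "$REQUIRED_CONCURRENCY" ]; then
  # A nonzero exit code marks the Jenkins build as failed
  echo "Build rejected: peak concurrency $measured < required $REQUIRED_CONCURRENCY"
  exit 1
fi
echo "Capacity requirement met: $measured >= $REQUIRED_CONCURRENCY"
```

Because Jenkins fails a shell build step on any nonzero exit code, exiting with 1 here is enough to stop the pipeline before the QA stage.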
The simplest way to continuously test the scalability of your application is to run the stacktical/willitscale:latest Docker image from a Jenkins job.
We’re also exploring multiple ways to help you integrate Stacktical with your systems, such as a Cloud Testing feature and direct access to our API.
Our Scalability Testing Requirements
Software requirements:
- An activated Stacktical account (sign up here if needed)
- A Stacktical Tech Stack representing your project
- A configured Service in your Stacktical Tech Stack
- A running Jenkins 2.x server located in proximity to the target service.
- Docker installed on the Jenkins node
- The ability for the jenkins user to run Docker commands directly or with sudo *
- You have downloaded or are able to pull the Stacktical Bench Docker Application
* On most platforms you can simply add the jenkins user to the docker group in /etc/group, but we suggest you look up the exact steps for your distribution.
Hardware requirements:
- CPU: Intel(R) Xeon(R) CPU @ 2.60GHz or equivalent
- Memory: 2GB RAM
- Bandwidth: A dedicated low latency / high bandwidth connection.
It is important to be as close as possible to the target service to reduce latency and improve the accuracy of your scalability reports.
Lifting System Limits
Stacktical for Docker needs to open multiple connections to be able to simulate hundreds of virtual users during the load test phase of your scalability test. This requires you to modify the number of open file descriptors your system can handle.
First, check your effective (soft) limit with ulimit -a:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
In this output, our limit is only 256, which is way too low.
Then check your maximum limit with cat /proc/sys/fs/file-max (this could vary between operating systems).
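The checks above can be combined into one quick snippet (assuming a bash-compatible shell, and a Linux path for the system-wide maximum):

```shell
# Inspect the current open-file limits before raising them
soft_limit=$(ulimit -Sn)   # effective (soft) limit for this shell
hard_limit=$(ulimit -Hn)   # ceiling the soft limit may be raised to
echo "soft=$soft_limit hard=$hard_limit"

# System-wide maximum (Linux path; this differs on other operating systems)
if [ -r /proc/sys/fs/file-max ]; then
  cat /proc/sys/fs/file-max
fi
```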
Now let’s work on improving your effective limit while never going above your maximum limit.
Raising the number of open file descriptors
You can raise the maximum number of open file descriptors for your current terminal session using ulimit -n {SOFT-FILE-MAX-VALUE}, e.g. ulimit -n 32768. To raise the hard limit as well, run ulimit -Hn {HARD-FILE-MAX-VALUE} first (this usually requires root), e.g. ulimit -Hn 65536.
Again, your SOFT-FILE-MAX-VALUE and HARD-FILE-MAX-VALUE values must never be higher than the maximum number of open file descriptors allowed by your system.
If you want your modification to persist, you can add the following to your /etc/security/limits.conf:
* soft nofile {SOFT-FILE-MAX-VALUE}
root soft nofile {SOFT-FILE-MAX-VALUE}
* hard nofile {HARD-FILE-MAX-VALUE}
root hard nofile {HARD-FILE-MAX-VALUE}
As per our previous example, we recommend 32768 as a soft value and 65536 as a hard value (as long as they’re not above your system limits).
Reboot your machine (or start a new login session) for the persistent changes to take effect.
As easy as running a Jenkins job
Grab your Docker Application parameters
After creating your tech stacks and services on Stacktical, you will have access to the credentials needed to run your test:
- A STACKTICAL_APPID, the identifier of your tech stack
- A STACKTICAL_APIKEY, your authentication token
- A STACKTICAL_SVCID, the identifier of the service you’d like to test
Running the Docker Application from a Jenkins job
Using the Stacktical Docker application is easy as one-two-three:
- Create or select a Jenkins job
- Create or select a build step of the Execute shell type
- Append the following Docker command:
docker run --rm \
-e STACKTICAL_APPID={MY_TECH_STACK_APP_ID} \
-e STACKTICAL_APIKEY={MY_TECH_STACK_API_KEY} \
-e STACKTICAL_SVCID={MY_TESTED_SERVICE_ID} \
--name willitscale stacktical/willitscale:latest
Done!
If your service requires HTTP authentication, you can also provide your Basic Auth credentials using the following option:
-e HTTP_AUTH={HTTP_AUTH_LOGIN}:{HTTP_AUTH_PASSWORD}
Replace everything between {} with your endpoint’s HTTP Basic Auth login and password.
Integrated in the CI pipeline (chained jobs)
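If you use Jenkins 2.x Pipeline instead of chained freestyle jobs, the same shell step can live in a Jenkinsfile stage. The sketch below is an assumption about how you might wire it up; the stage name and the stacktical-api-key credentials ID are hypothetical, not values prescribed by Stacktical:

```groovy
pipeline {
  agent any
  environment {
    // Hypothetical Jenkins credentials ID holding your Stacktical API key
    STACKTICAL_APIKEY = credentials('stacktical-api-key')
  }
  stages {
    stage('Scalability Test') {
      steps {
        // A nonzero exit code from the container fails the build
        sh '''
          docker run --rm \
            -e STACKTICAL_APPID={MY_TECH_STACK_APP_ID} \
            -e STACKTICAL_APIKEY=$STACKTICAL_APIKEY \
            -e STACKTICAL_SVCID={MY_TESTED_SERVICE_ID} \
            --name willitscale stacktical/willitscale:latest
        '''
      }
    }
  }
}
```

Keeping the API key in the Jenkins credentials store, rather than hardcoding it in the job, avoids leaking it in build logs and job configuration exports.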
Once the build has completed you will be able to consult the scalability report of the build.
What to do with that data?
A scalability report is generated with every scalability test, and all reports are stored in the /reports section of your Stacktical account.
If you’re using Slack, also make sure you connect your account to get notified of new reports, directly in the channel of your choice.
You can see a demo scalability report at this address.
Conclusion
By fixing scalability bottlenecks at the speed of Continuous Delivery, Continuous Scalability Testing prevents you from shipping software that is not efficient and reliable at scale.
It also enables you to make the most of the capacity your software, middleware and hardware can offer, while minimizing the risk of incidents in production.
About Stacktical
Stacktical is a hyper-efficient service level management platform on the Ethereum blockchain, that automates the compensation of users during downtimes and other performance events that affect their experience.
Our Token Sale is starting soon!