Benchmarking Your Solution in Go
Let’s benchmark your solution instead of guessing, and see whether there is any real improvement.
After our system is up and running, and after the refactoring sessions that make our code clean, readable, and maintainable, it’s time to improve the system and make it perform better.
Quite often, there are several ways to attain better performance, and most of them are found through experimentation. Questions come up like: if I use this algorithm, will I get better performance? If I use this library instead of that one, will there be any significant difference? These questions can be answered without guessing or manual testing by benchmarking.
With benchmarking, we have evidence for our solution. If it’s not doing any better, we move on to another solution. Once all the candidate solutions can be compared with one another, it becomes easy to choose the best one.
Start with the basics
Let’s start with a simple example. Write the code in a main.go file. We define a simple function that mocks an operation that takes time to complete; in this case, a random duration between 0 and 100 milliseconds. Simple and easy!
Now, let’s move on to the benchmarking code. You can write this in a main_test.go file. Remember that benchmark code lives alongside any other test code.
You can run the test by typing go test -bench=. . But this runs not only the benchmarks; it also runs your test code (if you have any). You can filter so that only the benchmarks run with go test -bench=Bench , because all benchmark function names must start with Benchmark, just as test functions start with Test. If you want to be more specific, you can give the -bench flag the name of your benchmark function, e.g. go test -bench=BenchmarkCalculate .
By default, each benchmark function is run for a minimum of 1 second. If that second has not elapsed when the benchmark function returns, the value of b.N is increased in the sequence 1, 2, 5, 10, 20, 50, … and the function is run again.
You can specify how long the benchmarks run with go test -bench=. -benchtime=20s . This makes each benchmark function run for at least 20 seconds.
Benchmarking a REST API app
We’ll keep using our old Calculate function and use it in a REST API application. Plus, there will be an additional function named CalculateSlow that mocks a heavy operation. Now, let’s change our main.go. This application is pretty simple: it contains one endpoint that uses CalculateSlow. We specify y as a query string, then we get the result.
How about the benchmarking code?
This is certainly longer than the code we had before. I’ll explain it bit by bit.
- You can run the code like before, but now we have 3 benchmark functions. Run them with go test -bench=Bench -benchtime=5s.
- With the flag -benchtime=5s, each benchmark function runs for a minimum of 5 seconds.
- Note that we’ve defined a package-level variable result for storing the result we get from the API. This prevents compiler optimisations from eliminating the function under test, which would artificially lower the benchmark’s run time.
- If you switch the order of the benchmark functions, for example so that BenchmarkCalculateRestAPI1 is defined first, you’ll get the same result. The slight difference between the first benchmark function and the second one is due to the random integer.
- Looking at this result, inputs of one, one hundred, and one million don’t make any significant difference. You’ll see this even more clearly by removing the random integer in Calculate.
Is that it? Can we make our code better? Let’s look for a solution, like using a goroutine.
Making it better
Can we make our code better using the awesome feature Go gives us, the goroutine? Let’s find out.
By running go test -bench=Bench -benchtime=5s, you’ll get a similar result for each input. Because all the calls to CalculateSlow now run concurrently, we get a better result. Notice that every benchmark function takes approximately 2 seconds, which is the duration of the sleep we defined in CalculateSlow. This is definitely a good solution to implement.
We started with a simple example, moved on to a real-world case, and then improved the code with a better solution.
The thing to remember is that performance tweaking must be done after the system is up and running, not before, or else we fall into the trap of premature optimization.
Some solutions may require us to pull a few tricks out of our sleeve that make our code less obvious and less readable. That is the drawback of implementing them. But then, engineering is about knowing that the solution we choose has more advantages than disadvantages.
Thank you for reading and happy coding!