Benchmark your code with Deno bench
Purpose
Deno’s standard library includes useful utility functions for benchmarking any piece of code. The code could live in the same file or, more likely, be imported from another module.
The benchmarking functions can be used inside Deno’s testing function Deno.test, so they also integrate easily with CI (continuous integration). The benchmarking script can sit in the same directory as the other tests, so it runs along with the standard unit tests via deno test.
Like the testing functions, benchmarking functions need to be registered before they are executed. Generally, benchmarks are registered inside the function passed to Deno.test.
It’s useful to know that the benchmarking functions don’t perform any comparisons on their own: they only measure the time taken by a piece of code. The job of comparison is the user’s responsibility.
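Before going into the details, here is a rough sketch of the overall flow that the rest of this article walks through (the benchmark body is just a placeholder):
import { bench, runBenchmarks } from "https://deno.land/std/testing/bench.ts";

// Register a benchmark inside a test case, run all registered benchmarks,
// then inspect the timings yourself (comparison is up to you).
Deno.test("benchmarking tests", async () => {
  bench(function myBench(b) {
    b.start();
    // ...code to benchmark...
    b.stop();
  });
  const result = await runBenchmarks({ silent: true });
  console.log(result.results[0].totalMs);
});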
Usage
To use the benchmarking functions, import the two relevant functions from the bench module:
import { bench, runBenchmarks } from "https://deno.land/std/testing/bench.ts";
The first function, bench, is used to register pieces of code to benchmark; it can be called as many times as needed. The second function, runBenchmarks, executes all the registered pieces of code.
Step 1 — register benchmarks
The first step is to register the pieces of code to benchmark. This is done by calling the bench function, as many times as needed. The bench function takes two types of input:
- Named function: A named function containing the piece of code to benchmark. The function must be named; anonymous functions such as arrow functions (() => {}) aren’t allowed. The named function takes an argument of type BenchmarkTimer, which offers two methods: start and stop. These are called inside the named function to mark the start and end of the measured code. Both start and stop must be called; otherwise, an error is thrown.
bench(function bench1(b) {
  b.start();
  // ...code to benchmark...
  b.stop();
});
- Object (BenchmarkDefinition): This is the other way to provide input to the bench function, and it is more flexible than a bare named function. An object can be supplied that contains a bunch of options:
BenchmarkDefinition {
  func: BenchmarkFunction;
  name: string;
  runs?: number;
}
There are three possible options:
- func: A named function containing the code to benchmark. This is the same kind of named function we saw above.
- name: The name of the benchmark case.
- runs: An optional number specifying how many times the benchmark should run. If unspecified, runs defaults to 1. If runs is greater than 1, the reported result is an average over all the runs.
bench({
  func: function bench1(b) {
    b.start();
    // ...code to benchmark...
    b.stop();
  },
  name: 'bench test 1',
  runs: 10,
});
The above call registers a function bench1 that will execute 10 times; the result will contain all 10 measured times and their average.
Step 2 — run benchmarks
In this second step, the async function runBenchmarks is called to execute all the registered functions. Once the benchmarking is done, it returns a result object that contains the results of each registered benchmark. The runBenchmarks function takes only optional inputs (there is no mandatory input).
The first optional input is runOptions, which contains settings to control what to execute, what to skip, and the level of verbosity.
BenchmarkRunOptions {
  only?: RegExp;
  skip?: RegExp;
  silent?: boolean;
}
- only: Execute only the benchmarks whose names match the regex
- skip: Skip the benchmarks whose names match the regex
- silent: Don’t write anything to the console. This is useful when benchmarks run in CI (as part of some script).
// Run with default options: skip nothing, execute everything, output on console
await runBenchmarks();

// Run only the benchmarks matching bench[0-9], skip any whose name contains
// "slow", and produce no output on the console
await runBenchmarks({ only: /bench\d/, skip: /slow/, silent: true });
In addition to the run options, the other optional argument is a callback function that gets notified about benchmarking progress.
await runBenchmarks({silent: true}, (progress) => {});
The progress callback receives an argument of type BenchmarkRunProgress, which contains details like:
BenchmarkRunProgress {
  queued?: Array<{ name: string; runsCount: number }>;
  running?: { name: string; runsCount: number; measuredRunsMs: number[] };
  state?: ProgressState;
}
We won’t go into the details of the progress events, as they may not be needed for general use. The reasons are: 1) there are a lot of progress events, and 2) handling them increases the benchmarking time (the total time taken by the benchmark tests to run).
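Still, a simple handler needs nothing beyond the fields listed above. Here is a minimal sketch of a progress logger (note that it fires on every progress event, which is exactly the overhead just mentioned):
import { runBenchmarks } from "https://deno.land/std/testing/bench.ts";

// Log the currently running benchmark and the size of the queue.
// Only the optional queued/running fields shown above are used.
await runBenchmarks({ silent: true }, (progress) => {
  if (progress.running) {
    const done = progress.running.measuredRunsMs.length;
    console.log(`${progress.running.name}: ${done}/${progress.running.runsCount} runs measured`);
  }
  if (progress.queued) {
    console.log(`${progress.queued.length} benchmark(s) queued`);
  }
});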
Step 3 — process the output
The last step in benchmarking is to process the output. The output (type: BenchmarkRunResult) comes back as an object that contains details of the executed benchmarks:
BenchmarkResult {
  name: string;
  totalMs: number;
  runsCount: number;
  measuredRunsAvgMs: number;
  measuredRunsMs: number[];
}

BenchmarkRunResult {
  filtered: number;
  results: BenchmarkResult[];
}
In short, the benchmark run result contains an array in which each element holds the result of one benchmark (one registration via the bench function). The output may seem a bit verbose, but it contains useful information such as the total time taken by each benchmark and the number of skipped benchmarks (filtered). We’ll see the output when we go through the examples.
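As a small sketch, processing the run result amounts to walking the results array (only the fields listed above are used):
import { runBenchmarks } from "https://deno.land/std/testing/bench.ts";

// Print one summary line per executed benchmark, then the filtered count.
const result = await runBenchmarks({ silent: true });
for (const r of result.results) {
  console.log(`${r.name}: total ${r.totalMs}ms, avg ${r.measuredRunsAvgMs}ms over ${r.runsCount} run(s)`);
}
console.log(`${result.filtered} benchmark(s) filtered out`);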
The final step could be to compare the measured time with some baseline; otherwise, how would you know whether the result is good or not?
The benchmarking procedure may look complex, but it is quite simple to use. The steps are: 1) register, 2) run, 3) process. That’s all!
Examples
Now that we’ve seen the benchmarking concepts, let’s go over some examples.
First, let’s start with a simple benchmark. This benchmark gets 8, 16, and 32-bit random values (1000 each) using the crypto.getRandomValues function.
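The getRandomValues8/16/32 helpers are imported from a local module whose code isn’t shown in this article; an assumed minimal implementation might look like this:
// deno_some_library.ts — assumed implementation of the helpers used below.
// Each returns `count` random values of the given bit width.
export function getRandomValues8(count: number): Uint8Array {
  return crypto.getRandomValues(new Uint8Array(count));
}

export function getRandomValues16(count: number): Uint16Array {
  return crypto.getRandomValues(new Uint16Array(count));
}

export function getRandomValues32(count: number): Uint32Array {
  return crypto.getRandomValues(new Uint32Array(count));
}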
import { assert } from "https://deno.land/std/testing/asserts.ts";
import { bench, runBenchmarks } from "https://deno.land/std/testing/bench.ts";
import { getRandomValues8, getRandomValues16, getRandomValues32 } from "./deno_some_library.ts";

Deno.test('bench 1000 8,16,32 bit random values', async () => {
  bench(function benchBasic(b) {
    b.start();
    getRandomValues8(1000);
    getRandomValues16(1000);
    getRandomValues32(1000);
    b.stop();
  });
  const result = await runBenchmarks();
  assert(result.results.length === 1);
  assert(result.results[0].measuredRunsMs.length === 1);
});
Here is the output of running the benchmark with deno test:
deno test
running 1 tests
test benchmarking tests ... running 1 benchmark ...
benchmark benchBasic ...
2ms
benchmark result: DONE. 1 measured; 0 filtered
ok (5ms)
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out (5ms)

// Result
{
  filtered: 0,
  results: [
    {
      name: "benchBasic",
      totalMs: 2,
      runsCount: 1,
      measuredRunsAvgMs: 2,
      measuredRunsMs: [ 2 ]
    }
  ]
}
The output is produced on the console. This may be useful when benchmarking is done manually, but when benchmarks run in CI, console output isn’t of much use.
Next, the same benchmark can be run for multiple runs in silent mode:
import { assert } from "https://deno.land/std/testing/asserts.ts";
import { bench, runBenchmarks } from "https://deno.land/std/testing/bench.ts";
import { getRandomValues8, getRandomValues16, getRandomValues32 } from "./deno_some_library.ts";

Deno.test('bench 5000 8, 16, 32 bit random values for 10K runs', async () => {
  bench({
    func: function benchWithOptions(b) {
      b.start();
      getRandomValues8(5000);
      getRandomValues16(5000);
      getRandomValues32(5000);
      b.stop();
    },
    name: 'bench 5000 8, 16, 32 bit random values for 10K runs',
    runs: 10000,
  });
  const result = await runBenchmarks({ silent: true });
  assert(result.results.length === 1);
  assert(result.results[0].measuredRunsMs.length === 10000);
});
Here is the output:
deno test
running 1 tests
test benchmarking tests ... ok (657ms)
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out (658ms)

// Result
{
  filtered: 0,
  results: [
    {
      name: "bench 5000 8, 16, 32 bit random values for 10K runs",
      totalMs: 602,
      runsCount: 10000,
      measuredRunsAvgMs: 0.0602,
      measuredRunsMs: [
        2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        ... 9900 more items
      ]
    }
  ]
}
Next, here is a benchmark that fetches data from https://deno.land in parallel:
import { assert } from "https://deno.land/std/testing/asserts.ts";
import { bench, runBenchmarks } from "https://deno.land/std/testing/bench.ts";

Deno.test('bench fetchData', async () => {
  bench(async function fetchData(b) {
    const urls = new Array(50).fill("https://deno.land/");
    b.start();
    await Promise.all(
      urls.map(async (denoland: string) => {
        const r = await fetch(denoland);
        await r.text();
      }),
    );
    b.stop();
  });
  const result = await runBenchmarks({ silent: true });
  assert(result.results.length === 1);
  assert(result.results[0].measuredRunsMs.length === 1);
});
Here is the output:
deno test --allow-net
running 1 tests
test benchmarking tests ... ok (367ms)
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out (367ms)

// Result
{
  filtered: 0,
  results: [
    {
      name: "fetchData",
      totalMs: 360,
      runsCount: 1,
      measuredRunsAvgMs: 360,
      measuredRunsMs: [ 360 ]
    }
  ]
}
Next, here is an example of comparing the output with a baseline (allowing up to 10% above the baseline value):
import { assert } from "https://deno.land/std/testing/asserts.ts";
import { bench, runBenchmarks } from "https://deno.land/std/testing/bench.ts";
import { getRandomValues8, getRandomValues16, getRandomValues32 } from "./deno_some_library.ts";

const baseline = 1300;

Deno.test('benchmarking tests', async () => {
  bench({
    func: function benchWithOptions(b) {
      b.start();
      getRandomValues8(15000);
      getRandomValues16(15000);
      getRandomValues32(15000);
      b.stop();
    },
    name: 'bench 15000 8, 16, 32 bit random values for 10K runs',
    runs: 10000,
  });
  const result = await runBenchmarks({ silent: true });
  const runResult = result.results[0];
  assert(result.results.length === 1);
  assert(runResult.measuredRunsMs.length === 10000);
  assert(runResult.totalMs < baseline + baseline * 0.1);
});
Finally, here is an example of running multiple benchmarks in a single run:
import { assert } from "https://deno.land/std/testing/asserts.ts";
import { bench, runBenchmarks } from "https://deno.land/std/testing/bench.ts";
import { getRandomValues8, getRandomValues16, getRandomValues32 } from "./deno_some_library.ts";

Deno.test('benchmarking tests', async () => {
  bench(function benchBasic(b) {
    b.start();
    getRandomValues8(1000);
    getRandomValues16(1000);
    getRandomValues32(1000);
    b.stop();
  });

  bench({
    func: function benchWithOptions(b) {
      b.start();
      getRandomValues8(15000);
      getRandomValues16(15000);
      getRandomValues32(15000);
      b.stop();
    },
    name: 'bench 15000 8, 16, 32 bit random values for 10K runs',
    runs: 10000,
  });

  bench(async function fetchData(b) {
    const urls = new Array(50).fill("https://deno.land/");
    b.start();
    await Promise.all(
      urls.map(async (denoland: string) => {
        const r = await fetch(denoland);
        await r.text();
      }),
    );
    b.stop();
  });

  const result = await runBenchmarks({ silent: true });
  assert(result.results.length === 3);
});
Here is the output:
deno test --allow-net
running 1 tests
test benchmarking tests ... ok (1620ms)
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out (1621ms)

// Result
{
  filtered: 0,
  results: [
    {
      name: "benchBasic",
      totalMs: 2,
      runsCount: 1,
      measuredRunsAvgMs: 2,
      measuredRunsMs: [ 2 ]
    },
    {
      name: "bench 15000 8, 16, 32 bit random values for 10K runs",
      totalMs: 1244,
      runsCount: 10000,
      measuredRunsAvgMs: 0.1244,
      measuredRunsMs: [
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0,
        0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 2, 0, 0, 0, 0, 0, 0,
        ... 9900 more items
      ]
    },
    {
      name: "fetchData",
      totalMs: 306,
      runsCount: 1,
      measuredRunsAvgMs: 306,
      measuredRunsMs: [ 306 ]
    }
  ]
}