Welcome to Benchmark Harness’s documentation!
benchmark-harness is designed to make it easy to create simple suites of standalone benchmarks while avoiding some common pitfalls in benchmarking. In particular, benchmarks are always run for a specified duration to avoid reporting anomalies due to background system activity, startup costs, garbage collection or JIT activity, etc.
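Conceptually, a duration-based run repeats the benchmark function until a time budget is used up and records every trial, rather than running a fixed number of iterations. The following is a minimal sketch of that idea only; it is not the library's actual implementation, and the function name and the 5-second budget are made up for illustration:

    import time

    def run_for_duration(func, max_seconds=5.0):
        # Illustrative only: repeat func() until roughly max_seconds of wall
        # time have elapsed, recording how long each individual trial took.
        times = []
        deadline = time.perf_counter() + max_seconds
        while time.perf_counter() < deadline:
            start = time.perf_counter()
            func()
            times.append(time.perf_counter() - start)
        return times

This is also why the trial counts in the sample output below differ between runs: a faster machine, or one with less background load, fits more trials into the same window.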
Quick Start
A simple benchmark looks like this:
    from benchmark_harness import run_benchmark


    def fib(n):
        if n == 0:
            return 0
        elif n == 1:
            return 1
        else:
            return fib(n - 1) + fib(n - 2)


    def benchmark():
        """fib!"""
        fib(20)


    run_benchmark(benchmark, meta={"title": "Everyone loves fib()"})
This script can be run directly:

    $ python benchmarks/fib/benchmark.py
    fib: completed 67 trials
      Min: 0.007
      Max: 0.010
The full result is written to standard output as JSON, so it can be piped to a formatter to see the complete record:

    $ python benchmarks/fib/benchmark.py | python -m json.tool
    {
        "meta": {
            "title": "Everyone loves fib()"
        },
        "times": [
            0.00791311264038086,
            …
        ]
    }
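Because the record is plain JSON, it can be post-processed with ordinary tools. For example, a small sketch that reads a saved record and summarizes the timings; the results.json filename is an assumption for the example, and only the "meta" and "times" keys shown above are relied on:

    import json

    # Load a record previously saved with e.g.
    #   python benchmarks/fib/benchmark.py > results.json
    with open("results.json") as f:
        record = json.load(f)

    times = record["times"]
    print(record["meta"]["title"])
    print(f"trials: {len(times)}")
    print(f"min: {min(times):.3f}  max: {max(times):.3f}")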
benchmark-harness also installs a command-line utility, benchmark-harness, which makes it easy to run many benchmarks at once. Organize them into a directory containing one subdirectory per benchmark, each with its own benchmark.py file (a sample layout is sketched at the end of this section). If the file above were saved to benchmarks/fib/benchmark.py, a sample run would look like this:

    $ benchmark-harness --benchmark-dir=benchmarks/
    fib: completed 59 trials
      Min: 0.008
      Max: 0.010
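For reference, a directory tree the utility can consume would look roughly like this (the second benchmark directory is purely illustrative):

    benchmarks/
        fib/
            benchmark.py
        other_benchmark/
            benchmark.py

Each benchmark.py is a standalone script like the one above, so every benchmark can still be run directly with python as well as through the utility.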