API¶
- benchmark_harness.runners.run_benchmark(*args, **kwargs)[source]¶
  Run a benchmark a few times and report the results.
  Arguments:
  - benchmark: The benchmark callable. run_benchmark will time the execution of this function and report those times back to the harness. However, if benchmark returns a value, that result will be reported instead of the raw timing.
  - setup: A function to be called before running the benchmark function(s).
  - max_time: The number of seconds to run the benchmark function. If not given, and if handle_argv is True, this'll be automatically determined from the --max_time flag.
  - handle_argv: True if the script should handle sys.argv and configure itself from command-line arguments.
  - meta: Key/value pairs to be returned as part of the benchmark results.
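  For illustration, a minimal benchmark script might look like the sketch below. The timed function and the setup body are placeholder workloads, not part of the harness; only run_benchmark and the keyword arguments documented above come from this API:

      from benchmark_harness.runners import run_benchmark

      data = []

      def setup():
          # Called before the benchmark runs; build the input outside
          # the timed region. This global list is a placeholder workload.
          global data
          data = list(range(10_000))

      def benchmark():
          # run_benchmark times this call. If it returned a value, that
          # value would be reported instead of the raw timing.
          sorted(data)

      run_benchmark(
          benchmark,
          setup=setup,
          handle_argv=True,  # pick up --max_time etc. from sys.argv
          meta={'title': 'sort 10k integers'},  # placeholder metadata
      )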
- benchmark_harness.runners.run_comparison_benchmark(*args, **kwargs)[source]¶
  Benchmark the difference between two functions.
  Arguments are as for run_benchmark, except that this takes two benchmark functions, an A and a B, and reports the difference between them.
  For example, you could use this to test the overhead of an ORM query versus a raw SQL query: pass the ORM query as benchmark_a and the raw query as benchmark_b, and this function will report the difference in time between them.
  For best results, the A function should be the more expensive one (otherwise djangobench will report results like "-1.2x slower", which is just confusing).
- benchmark_harness.suite.run_benchmark(benchmark, env=None, max_time=None, python_executable=None, stderr=None)[source]¶
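  This suite-level runner carries no description here. Purely as an assumption read off its signature, a call might look like the sketch below; the benchmark path and every keyword value are guesses for illustration, not documented behaviour:

      import os
      import sys

      from benchmark_harness.suite import run_benchmark

      # Everything below is inferred from the signature alone: the first
      # argument is assumed to be a path identifying a benchmark, and the
      # keywords are filled with plausible values.
      run_benchmark(
          'benchmarks/query_all',            # hypothetical benchmark path
          env=dict(os.environ),              # environment for the run
          max_time=5,                        # assumed seconds, as above
          python_executable=sys.executable,  # interpreter to run it with
          stderr=sys.stderr,                 # assumed diagnostics stream
      )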