Run a benchmark a few times and report the results.
- The benchmark callable. run_benchmark will time the execution of this function and report those times back to the harness. However, if benchmark returns a value, that result will be reported instead of the raw timing.
- A function to be called before running the benchmark function(s).
- The number of seconds to run the benchmark function. If not given, this'll be automatically determined from the command-line arguments.
- True if the script should handle sys.argv and configure itself from command-line arguments.
- Key/value pairs to be returned as part of the benchmark results.
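The behavior described above can be sketched in plain Python. This is a simplified stand-in, not djangobench's actual implementation: the function name, the trials/setup/meta parameters, and the result shape are all assumptions made for illustration.

```python
import time


def run_benchmark_sketch(benchmark, setup=None, trials=5, meta=None):
    """Simplified stand-in for the harness described above (names assumed)."""
    if setup is not None:
        setup()  # one-time setup before any trials run
    results = []
    for _ in range(trials):
        start = time.perf_counter()
        value = benchmark()
        elapsed = time.perf_counter() - start
        # If the benchmark returned a value, report that instead of the raw timing.
        results.append(value if value is not None else elapsed)
    # meta key/value pairs travel along with the results.
    return {"results": results, "meta": dict(meta or {})}
```

For example, `run_benchmark_sketch(lambda: sum(range(1000)), trials=3)` would report three raw timings, while a benchmark that returns a number would have that number reported instead.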
Benchmark the difference between two functions.
Arguments are as for run_benchmark, except that this takes 2 benchmark functions, an A and a B, and reports the difference between them.
For example, you could use this to test the overhead of an ORM query versus a raw SQL query – pass the ORM query as benchmark_a and the raw query as benchmark_b, and this function will report the difference in time between them.
For best results, the A function should be the more expensive one (otherwise djangobench will report results like “-1.2x slower”, which is just confusing).
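The A-versus-B comparison above can be sketched in plain Python. This is an illustrative simplification, not djangobench's implementation: the helper names and the best-of-N timing strategy are assumptions.

```python
import time


def _best_time(fn, trials):
    """Best-of-N wall-clock time for fn()."""
    times = []
    for _ in range(trials):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)


def compare_benchmarks(benchmark_a, benchmark_b, trials=5):
    """Hypothetical comparison helper: how much slower A is than B."""
    a = _best_time(benchmark_a, trials)
    b = _best_time(benchmark_b, trials)
    # A ratio of 2.0 means A took twice as long as B, i.e. "2.0x slower".
    return a / b
```

With the more expensive function passed as A, the ratio comes out above 1.0, matching the advice above about avoiding confusing "-1.2x slower" readings.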
run_benchmark(benchmark, env=None, max_time=None, python_executable=None, stderr=None)
run_benchmarks(benchmarks, max_time=None, output_dir=None, includes=None, excludes=None, continue_on_error=False, python_executable=None, env=None)
Allow functions to return a normal Python data structure, which will be reported in place of the raw timings.
If stdout is a tty, basic stats and a human-meaningful result will be displayed. If not, JSON will be returned for a script to process.
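The tty-versus-JSON output rule can be sketched as follows. This is a minimal illustration of the described behavior, not the harness's actual reporting code; the function name and output shape are assumptions.

```python
import json
import sys


def report(times):
    """Print human-readable stats on a tty, JSON otherwise (illustrative)."""
    if sys.stdout.isatty():
        # Interactive terminal: show basic, human-meaningful stats.
        print("min %.6fs  max %.6fs  trials %d" % (min(times), max(times), len(times)))
    else:
        # Piped/redirected output: emit JSON for a script to process.
        print(json.dumps({"times": times}))
```

When the output is piped (`python bench.py | jq .`), `sys.stdout.isatty()` is false and the JSON branch runs.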