NAME

bench.pl - Compare the performance of perl code snippets across multiple perls.

SYNOPSIS

    # Basic: run the tests in t/perf/benchmarks against two or
    # more perls

    bench.pl [options] perlA[=labelA] perlB[=labelB] ...

    # run the tests against the same perl twice, with varying options

    bench.pl [options] perlA=bigint --args='-Mbigint' perlA=plain

    # Run bench on blead, saving results to file; then modify the blead
    # binary, and benchmark again, comparing against the saved results

    bench.pl [options] --write=blead.time ./perl=blead
    # ... hack hack hack, updating ./perl ...
    bench.pl --read=blead.time ./perl=hacked

    # You can also combine --read with --write and new benchmark runs

    bench.pl --read=blead.time --write=last.time -- ./perl=hacked

DESCRIPTION

By default, bench.pl will run code snippets found in t/perf/benchmarks (or similar) under cachegrind, in order to calculate how many instruction reads, data writes, branches, cache misses, etc. one execution of the snippet uses. Usually it will run them against two or more perl executables and show how much each test has gotten better or worse.
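
Each entry in the benchmarks file pairs a test name with a small hash describing one snippet. As a rough sketch (the exact keys are defined by the benchmarks file itself; the field names below are illustrative):

    'call::sub::empty' => {
        desc  => 'call an empty subroutine',
        setup => 'sub f { }',
        code  => 'f()',
    },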

It is modelled on the perlbench tool, but since it measures instruction reads etc., rather than timings, it is much more precise and reproducible. It is also considerably faster, and is capable of running tests in parallel (with -j). Rather than displaying a single relative percentage per test/perl combination, it displays values for 13 different measurements, such as instruction reads, conditional branch misses etc.
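
For example, a parallel comparison of two builds might look like this (the perl paths, labels, and job count are illustrative):

    # run the benchmark suite under cachegrind, 8 tests at a time
    bench.pl -j 8 ./perl-old=old ./perl-new=new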

There are options to write the raw data to a file, and to read it back. This means that you can view the same run data in different views with different selection and sort options. You can also use this mechanism to save the results of timing one perl, and then read it back while timing a modification, so that you don't have to rerun the same tests on the same perl over and over, or have two perl executables built at the same time.
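
A minimal sketch of that workflow, using only the --write and --read switches shown in the SYNOPSIS:

    # benchmark once, saving the raw results
    bench.pl --write=base.time ./perl=base

    # later: redisplay the saved data (possibly with different selection
    # and sort options) without rerunning the benchmarks
    bench.pl --read=base.time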

The optional =label after each perl executable is used in the display output. If you are doing a two-step benchmark then you should provide a label for at least the "base" perl. If a label isn't specified, it defaults to the name of the perl executable. Labels must be unique across all current executables, plus any previous ones obtained via --read.
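
For instance, when benchmarking the same binary twice with different options, at least one run needs an explicit label so that the labels stay unique:

    bench.pl ./perl=bigint --args='-Mbigint' ./perl=plain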

In its most general form, the specification of a perl executable is:

    path/perl=+mylabel --args='-foo -bar' --args='-baz' \
                       --env='A=a' --env='B=b'

This defines how to run the executable path/perl. It has a label which, due to the +, is appended to the binary name to give a label of path/perl=+mylabel (without the +, the label would be just mylabel).

It can optionally be followed by one or more --args or --env switches, which specify extra command line arguments or environment variables to use when invoking that executable. Each --env switch should be of the form --env=VARIABLE=value. Any --args values are concatenated onto the eventual command line, along with the global --perlargs value if any. The above would cause a system() call looking something like:

    PERL_HASH_SEED=0 A=a B=b valgrind --tool=cachegrind \
        path/perl -foo -bar -baz ....
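
Putting the pieces together, an invocation that combines a global --perlargs value with per-executable --args and --env switches might look like this (paths, labels, and values are illustrative):

    bench.pl --perlargs='-Ilib' \
        ./perl-base=base \
        ./perl-new=new --args='-Mstrict' --env='PERL_FOO=1'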

OPTIONS

General options

Test selection options

Input options

Benchmarking options

Benchmarks will be run for all perls specified on the command line. These options can be used to modify the benchmarking behavior:

Output options

Any results accumulated via --read or by running benchmarks can be output in any or all of these three ways:
