NAME

bench.pl - Compare the performance of perl code snippets across multiple perls.

SYNOPSIS

    # Basic: run the tests in t/perf/benchmarks against two or
    # more perls

    bench.pl [options] -- perlA[=labelA] perlB[=labelB] ...

    # run the tests against the same perlA twice, with and without
    # extra options

    bench.pl [options] -- perlA=fast perlA=slow -Mstrict -Dpsltoc

    # Run bench.pl's own built-in sanity tests

    bench.pl --action=selftest

    # Run bench on blead, which is then modified and timed again

    bench.pl [options] --write=blead.time -- ./perl=blead
    # hack hack hack
    bench.pl --read=blead.time -- ./perl=hacked

    # You can also combine --read with --write
    bench.pl --read=blead.time --write=last.time -- ./perl=hacked

DESCRIPTION

By default, bench.pl will run code snippets found in t/perf/benchmarks (or similar) under cachegrind, in order to calculate how many instruction reads, data writes, branches, cache misses, etc. one execution of the snippet uses. It will run them against two or more perl executables and show how much each test has improved or regressed.
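For reference, the benchmarks file is Perl source returning an array of test-name/spec pairs. A minimal sketch of one entry follows (the desc, setup and code field names follow the benchmark file format; the specific test shown is illustrative):

    # One benchmark entry: a hierarchical test name mapped to a hash
    # holding a description, setup code run once beforehand, and the
    # snippet whose execution is actually measured.
    [
        'call::sub::empty' => {
            desc  => 'function call with no args or body',
            setup => 'sub f { }',
            code  => 'f()',
        },
    ];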

It is modelled on the perlbench tool, but since it measures instruction reads etc. rather than timings, it is much more precise and reproducible. It is also considerably faster, and is capable of running tests in parallel (with -j). Rather than displaying a single relative percentage per test/perl combination, it displays values for 13 different measurements, such as instruction reads, conditional branch misses, etc.
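For example, a run that uses parallelism and narrows the display to two of the cachegrind-derived fields might look like this (a sketch assuming the -j (jobs) and --fields options listed under OPTIONS; Ir is instruction reads, Dw is data writes):

    # Run the test jobs 8 at a time, and show only the instruction
    # read and data write counts for each perl.
    bench.pl -j 8 --fields=Ir,Dw -- perlA=before perlB=after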

There are options to write the raw data to a file, and to read it back. This means that you can view the same run data in different views with different selection and sort options. You can also use this mechanism to save the results of timing one perl, and then read it back while timing a modification, so that you don't have to rerun the same tests on the same perl over and over, or have two perls built at the same time.
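As a sketch of that workflow (assuming the --sort=field:label display option listed under OPTIONS):

    # Save the raw counts once...
    bench.pl --write=results.time -- ./perl-base=base ./perl-new=new

    # ...then redisplay the same data later with a different view,
    # here sorted by the 'base' perl's instruction-read counts,
    # without re-running any tests.
    bench.pl --read=results.time --sort=Ir:base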

The optional =label after each perl executable is used in the display output. If you are doing a two-step benchmark then you should provide a label for at least the "base" perl.

OPTIONS
