NAME

bench.pl - Compare the performance of perl code snippets across multiple perls.

SYNOPSIS


    # Basic: run the tests in t/perf/benchmarks against two or
    # more perls

    bench.pl [options] -- perlA[=labelA] perlB[=labelB] ...

    # run the tests against the same perlA twice, with and without
    # extra options

    bench.pl [options] -- perlA=fast PerlA=slow -Mstrict -Dpsltoc

    # Run bench.pl's own built-in sanity tests

    bench.pl --action=selftest

    # Run bench on blead, which is then modified and timed again

    bench.pl [options] --write=blead.time -- ./perl=blead
    # hack hack hack
    bench.pl --read=blead.time -- ./perl=hacked

    # You can also combine --read with --write

    bench.pl --read=blead.time --write=last.time -- ./perl=hacked


DESCRIPTION

By default, bench.pl will run the code snippets found in t/perf/benchmarks (or similar) under cachegrind, in order to calculate how many instruction reads, data writes, branches, cache misses, etc. one execution of the snippet uses. It will run them against two or more perl executables and show how much each test has improved or regressed.
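For orientation, each entry in the benchmarks file pairs a test name with a hash of setup and test code. The entry below is only an illustrative sketch in that style; the name, description, and snippets are invented, not taken from the real t/perf/benchmarks file:

```perl
# Illustrative sketch of a benchmarks-file entry; the name and
# snippets are invented examples, not the real file contents.
[
    'call::sub::empty' => {
        desc  => 'call an empty function with no arguments',
        setup => 'sub f { }',   # run once, before measurement
        code  => 'f()',         # the snippet that cachegrind measures
    },
];
```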

It is modelled on the perlbench tool, but since it measures instruction reads etc., rather than timings, it is much more precise and reproducible. It is also considerably faster, and is capable of running tests in parallel (with -j). Rather than displaying a single relative percentage per test/perl combination, it displays values for 13 different measurements, such as instruction reads, conditional branch misses etc.
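As a concrete sketch of such a run (the build paths, labels, and job count below are invented for illustration), a parallel comparison might be launched as follows; the command is only echoed here rather than executed:

```shell
# Sketch only: a parallel bench.pl comparison of two builds.
# -j runs the cachegrind jobs in parallel, as noted above; the
# binaries, labels, and job count are illustrative.
echo "bench.pl -j 8 -- ./perl.base=base ./perl.patched=patched"
```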

There are options to write the raw data to a file and to read it back. This means that you can view the data from the same run in different views, with different selection and sort options. You can also use this mechanism to save the results of timing one perl, and then read it back while timing a modification, so that you don't have to rerun the same tests on the same perl over and over, or have two perls built at the same time.

The optional =label after each perl executable is used in the display output. If you are doing a two-step benchmark, then you should provide a label for at least the "base" perl.

