=pod

=head1 NAME

Test::FAQ - Frequently Asked Questions about testing with Perl

=head1 DESCRIPTION

Frequently Asked Questions about testing in general and specific
issues with Perl.

=head2 Is there any tutorial on testing?

L<Test::Tutorial>

=head2 Are there any modules for testing?

A whole bunch.  Start with L<Test::Simple>, then move on to L<Test::More>.

Then go to L<http://search.cpan.org> and search for "Test".

=head2 Are there any modules for testing web pages/CGI programs?

L<Test::WWW::Mechanize>, L<Test::WWW::Selenium>
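
For example, a minimal sketch with L<Test::WWW::Mechanize> (the URL and
the expected text are placeholders):

    use Test::More tests => 2;
    use Test::WWW::Mechanize;

    my $mech = Test::WWW::Mechanize->new;

    # get_ok() fetches the page and passes if the request succeeded
    $mech->get_ok( "http://example.com/", "fetched the page" );

    # content_contains() checks the body of the last response
    $mech->content_contains( "Example Domain", "found the expected text" );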

=head2 Are there any modules for testing external programs?

L<Test::Cmd>

=head2 Can you do xUnit/JUnit style testing in Perl?

Yes, L<Test::Class> allows you to write test methods while continuing to
use all the usual CPAN testing modules.  It is the best and most
perlish way to do xUnit style testing.
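
For example, a minimal sketch of a L<Test::Class> test (the C<Foo> class
is hypothetical):

    package TestFoo;
    use base 'Test::Class';
    use Test::More;
    use Foo;    # the hypothetical class under test

    # each method with a :Test attribute runs as a group of tests
    sub constructor : Test(2) {
        my $foo = Foo->new;
        ok defined $foo, "new() returned something";
        isa_ok $foo, "Foo";
    }

    package main;
    Test::Class->runtests;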

L<Test::Unit> is a more direct port of xUnit to Perl, but it does not use
the Perl conventions and does not play well with other CPAN testing
modules.  As of this writing, it is abandoned.  B<Do not use>.

L<Test::Inline> (aka L<Pod::Tests>) is worth mentioning, as it allows you
to put tests into the POD in the same file as the code.


=head2 How do I test my module is backwards/forwards compatible?

First, install a bunch of perls of commonly used versions.  At the
moment, you could try these:

    5.7.2
    5.6.1
    5.005_03
    5.004_05

If you're feeling brave, you might also want to have these on hand:

    bleadperl
    5.6.0
    5.004_04
    5.004

Going back beyond 5.003 is probably beyond the call of duty.

You can then add something like this to your F<Makefile.PL>.  It
overrides the L<ExtUtils::MakeMaker> C<test_via_harness()> method to run the tests
against several different versions of Perl.

    # If PERL_TEST_ALL is set, run "make test" against 
    # other perls as well as the current perl.
    {
        package MY;

        sub test_via_harness {
            my($self, $orig_perl, $tests) = @_;

            # names of your other perl binaries.
            my @other_perls = qw(perl5.004_05 perl5.005_03 perl5.7.2);

            my @perls = ($orig_perl);
            push @perls, @other_perls if $ENV{PERL_TEST_ALL};

            my $out;
            foreach my $perl (@perls) {
                $out .= $self->SUPER::test_via_harness($perl, $tests);
            }

            return $out;
        }
    }

Then re-run your F<Makefile.PL> with the C<PERL_TEST_ALL> environment
variable set:

    PERL_TEST_ALL=1 perl Makefile.PL

Now C<make test> will run the tests against each of your other perls.


=head2 If I'm testing Foo::Bar, where do I put tests for Foo::Bar::Baz?

=head2 How do I know when my tests are good enough?

Use tools for measuring the code coverage of your tests, e.g. how many of
your source code lines/subs/expressions/paths are executed (aka covered)
by the test suite.  The more, the better, of course, although you may not
be able to achieve 100%.  Whatever your test suite does not cover is,
basically, untested, which means it may work in surprising ways (e.g. not
behave as intended or documented), have bugs (e.g. return wrong results),
or not work at all.

=head2 How do I measure the coverage of my test suite?

L<Devel::Cover>
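
A typical run, as described in the L<Devel::Cover> documentation, deletes
any old coverage data, runs the test suite with coverage enabled, and
then generates a report:

    cover -delete
    HARNESS_PERL_SWITCHES=-MDevel::Cover make test
    cover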

=head2 How do I get tests to run in a certain order?

Tests run in alphabetical order, so simply name your test files in the order
you want them to run.  Numbering your test files works, too.

    t/00_compile.t
    t/01_config.t
    t/zz_teardown.t

0 runs first, z runs last.

To achieve a specific order, try L<Test::Manifest>.

Typically you do B<not> want your tests to require being run in a
certain order, but it can be useful to run a compile check first or to
test a very basic module before everything else.  This gives you early
warning when a failure in a basic module is about to bring everything
else down.

Another use is if you have a suite-wide setup/teardown, such as
creating and deleting a large test database, which may be too expensive
to do for every test.

We recommend B<against> numbering every test file.  For most files
this ordering will be arbitrary and the leading number obscures the
real name of the file.  See L<What should I name my test files?> for
more information.


=head2 What should I name my tests?

=head2 What should I name my test files?

A test filename serves three purposes:

Most importantly, it serves to identify what is being tested.  Each
test file should test a clear piece of functionality.  This could be
a single class, a single method, or even a single bug.

The order in which tests are run is usually dictated by the filename.
See L<How do I get tests to run in a certain order?> for details.

Finally, the grouping of tests into common bits of functionality can
be achieved by directory and filenames.  For example, all the tests
for L<Test::Builder> are in the F<t/Builder/> directory.

As an example, F<t/Builder/reset.t> contains the tests for
C<< Test::Builder->reset >>.  F<t/00compile.t> checks that everything
compiles, and it will run first.  F<t/dont_overwrite_die_handler.t>
checks that we don't overwrite the C<< $SIG{__DIE__} >> handler.


=head2 How do I deal with tests that sometimes pass and sometimes fail?

=head2 How do I test with a database/network/server that the user may or may not have?

=head2 What's a good way to test lists?

C<is_deeply()> from L<Test::More> as well as L<Test::Deep>.
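
For example, comparing two nested lists with C<is_deeply()>:

    use Test::More tests => 1;

    my @have = (1, 2, [3, 4]);
    my @want = (1, 2, [3, 4]);

    # compares element by element, descending into references
    is_deeply \@have, \@want, "lists match";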

=head2 Is there such a thing as untestable code?

There are always compile/export checks.

Beyond that, code must be written with testability in mind: separate
form from functionality, so that each can be exercised on its own.

=head2 What do I do when I can't make the code do the same thing twice?

Force it to do the same thing twice.

Even a random number generator can be tested.
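
For example, a random number generator can be pinned down by seeding it:

    use Test::More tests => 1;

    # with the same seed, rand() produces the same sequence
    srand(42);
    my @first  = map { rand() } 1 .. 5;

    srand(42);
    my @second = map { rand() } 1 .. 5;

    is_deeply \@first, \@second, "rand() repeats itself given a fixed seed";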

=head2 How do I test a GUI?

=head2 How do I test an image generator?

=head2 How do I test that my code handles failures gracefully?

=head2 How do I check the right warnings are issued?

L<Test::Warn>
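
For example, a minimal sketch with L<Test::Warn> (the C<$obj> and its
C<old_method()> are placeholders):

    use Test::More tests => 1;
    use Test::Warn;

    # passes if the code warns and the warning matches the pattern
    warning_like { $obj->old_method }
        qr/deprecated/, "deprecation warning issued";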

=head2 How do I test code that prints?

L<Test::Output>
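
For example, a minimal sketch with L<Test::Output>:

    use Test::More tests => 1;
    use Test::Output;

    # passes if the block prints exactly this to STDOUT
    stdout_is { print "hello\n" } "hello\n", "prints hello";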

=head2 I want to test that my code dies when I do X

L<Test::Exception>
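
For example, a minimal sketch with L<Test::Exception> (C<risky()> is a
placeholder for the code under test):

    use Test::More tests => 2;
    use Test::Exception;

    # passes if the code dies at all
    dies_ok { risky() } "risky() dies";

    # passes if the code dies with a message matching the pattern
    throws_ok { risky() } qr/out of range/, "dies with the right message";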

=head2 I want to print out more diagnostic info on failure.

C<ok(...) || diag "...";>
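
For example (C<$result> here is a placeholder for whatever you are
checking):

    ok( $result == 42, "computation result" )
        || diag "got: $result";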

=head2 How can I simulate failures to make sure that my code does the Right Thing in the face of them?


=head2 Why use an ok() function?

On Tue, Aug 28, 2001 at 02:12:46PM +0100, Robin Houston wrote:
> Michael Schwern wrote:
> > Ah HA!  I've been wondering why nobody ever thinks to write a simple
> > ok() function for their tests!  perlhack has bad testing advice.
> 
> Could you explain the advantage of having a "simple ok() function"?

Because writing:

    print "not " unless some thing worked;
    print "ok $test\n";  $test++;

gets rapidly annoying.  This is why we made up subroutines in the
first place.  It also looks like hell and obscures the real purpose.

Besides, that will cause problems on VMS.


> As somebody who has spent many painful hours debugging test failures,
> I'm intimately familiar with the _disadvantages_. When you run the
> test, you know that "test 113 failed". That's all you know, in general.

The second advantage is that you can easily upgrade the C<ok()>
function to fix this, either by slapping this line in:

        printf "# Failed test at line %d\n", (caller)[2];

or simply junking the whole thing and switching to L<Test::Simple> or
L<Test::More>, which do all sorts of nice diagnostics-on-failure for
you.  Their C<ok()> function is backwards compatible with the above.

There are some issues with using L<Test::Simple> to test really basic
Perl functionality; you have to choose on a per-test basis.  Since
L<Test::Simple> doesn't use C<pack()>, it's safe for F<t/op/pack.t> to
use L<Test::Simple>.  I just didn't want to make the perlhack patching
example too complicated.


=head2 Dummy Mode

> One compromise would be to use a test-generating script, which allows
> the tests to be structured simply and _generates_ the actual test
> code. One could then grep the generated test script to locate the
> failing code.

This is a very interesting, and very common, response to the problem.
I'm going to make some observations about reactions to testing,
they're not specific to you.

If you've ever read the Bastard Operator From Hell series, you'll
recall the Dummy Mode.

    The words "power surging" and "drivers" have got her.  People hear
    words like that and go into Dummy Mode and do ANYTHING you say.  I
    could tell her to run naked across campus with a powercord rammed
    up her backside and she'd probably do it...  Hmmm...

There seems to be a Dummy Mode with respect to testing.  An otherwise competent
person goes to write a test and they suddenly forget all basic
programming practice.


The reasons for using an C<ok()> function above are the same reasons to
use functions in general; we should all know them.  We'd laugh our
heads off at code that repeated itself as much as your average test
does.  These are newbie mistakes.

And the normal 'can do' flair seems to disappear.  I know Robin.  I
*know* that in any other situation he would have come up with the
C<caller()> trick in about 15 seconds flat.  Instead, weird, elaborate,
inelegant hacks are thought up to solve the simplest problems.


I guess there are certain programming idioms that are foreign enough
to throw your brain into reverse if you're not ready for them.  Like
trying to think in Lisp, for example.  Or being presented with OO for
the first time.  I guess writing tests is one of those.


=head2 How do I use Test::More without depending on it?

Install L<Test::More> into F<t/lib> under your source directory.  Then in your tests
say C<use lib 't/lib'>.
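
Each test file then starts with something like:

    use lib 't/lib';
    use Test::More tests => 3;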

=head2 How do I deal with threads and forking?

=head2 Why do I need more than ok?

Since every test can be reduced to checking if a statement is true,
C<ok()> can test everything.  But C<ok()> doesn't tell you why the test
failed.  For that you need to tell the test more... which is why 
you need L<Test::More>.

    ok $person->name eq "Roberts", "person's name";

    not ok 1 - person's name
    # Failed test at pirates.t line 23.

If the above fails, you don't know what C<< $person->name >> returned.
You have to go in and add a C<diag> call.  This is time consuming.  If
it's a heisenbug, it might not fail again!  If it's a user reporting a
test failure, they might not be bothered to hack the tests to give you
more information.

    is $person->name, "Roberts", "person's name";

    not ok 1 - person's name
    # Failed test at pirates.t line 23.
    #        got: 'Wesley'
    #   expected: 'Roberts'

Using C<is> from L<Test::More> you now know what value you got and
what value you expected.

The most useful functions in L<Test::More> are C<is()>, C<like()> and C<is_deeply()>.
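
C<like()> is similar to C<is()>, but compares against a regular
expression:

    like $person->name, qr/^Rob/, "name looks right";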


=head2 What's wrong with C<print $test ? "ok" : "not ok">?

=head2 How do I check for an infinite loop?

On Mon, Mar 18, 2002 at 03:57:55AM -0500, Mark-Jason Dominus wrote:
> 
> Michael The Schwern <schwern@pobox.com> says:
> > Use alarm and skip the test if $Config{d_alarm} is false (see
> > t/op/alarm.t for an example).  If you think the infinite loop is due
> > to a programming glitch, as opposed to a cross-platform issue, this
> > will be enough.
> 
> Thanks very much!
> 
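
A minimal sketch of that advice (C<might_loop()> is a placeholder for
the code under test):

    use Test::More tests => 1;
    use Config;

    SKIP: {
        skip "alarm() is not available on this platform", 1
            unless $Config{d_alarm};

        my $finished = 0;
        eval {
            local $SIG{ALRM} = sub { die "timed out\n" };
            alarm 5;
            might_loop();
            alarm 0;
            $finished = 1;
        };
        ok $finished, "might_loop() returned within 5 seconds";
    }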

=head2 How can I check that flock works?

=head2 How do I use the comparison functions of a testing module without it being a test?

Any testing function based on L<Test::Builder> (and most are) can be
quieted so it does not do any testing.  It simply returns true or
false.  Use the following code...

    use Test::More;     # or any testing module
    
    use Test::Builder;
    use File::Spec;

    # Get the internal Test::Builder object
    my $tb = Test::Builder->new;

    $tb->plan("no_plan");
    
    # Keep Test::Builder from displaying anything
    $tb->no_diag(1);
    $tb->no_ending(1);
    $tb->no_header(1);
    $tb->output( File::Spec->devnull );

    # Now you can use the testing function.
    print is_deeply( "foo", "bar" ) ? "Yes" : "No";