
NAME ^

Test::Ranger - Test with data tables, capturing, templates

VERSION ^

This document describes Test::Ranger version 0.0.4

TODO: THIS IS A DUMMY, NONFUNCTIONAL RELEASE.

SYNOPSIS ^

    # Object-oriented usage
    use Test::Ranger;

    my $group    = Test::Ranger->new([
        {
            -coderef    => \&Acme::Teddy::_egg,
            -basename   => 'teddy-egg',
        },
        
        {
            -name       => '4*7',
            -given      => [ 4, 7 ],
            -return     => {
                -is         => 42,
            },
            -stdout     => {
                -like       => [ qw(hello world) ],
                -matches    => 2,
                -lines      => 1,
            },
        },
        
        {
            -name       => '9*9',
            -given      => [ 9, 9 ],
            -return     => {
                -is         => 81,
            },
        },
        
        {
            -name       => 'string',
            -given      => [ 'monkey' ],
            -warn       => {
                -like       => 'dummy',
            },
        },
        
    ]); ## end new

    $group->test();
    
    __END__

DESCRIPTION ^

The computer should be doing the hard work. That's what it's paid to do, after all. -- Larry Wall

This is a comprehensive testing module compatible with Test::More and friends within TAP::Harness. Helper scripts and templates are included to make test-driven development quick, easy, and reliable. Test data structure is open; choose from object-oriented methods or procedural/functional calls.

Tests themselves are formally untestable. All code conceals bugs. Do you want to spend your time debugging tests or writing production code? The Test::Ranger philosophy is to reduce the amount of code in a test script and let test data (given inputs and wanted outputs) dominate.

Many hand-rolled test scripts examine only selected outputs to see if they match expectations. Test::Ranger traps fatal exceptions cleanly and makes it easy to subtest every execution for both expected and unexpected output.

Approach

Our overall approach is to declare all the conditions for a series of tests in an Arrayref-of-Hashrefs. We execute the tests, supplying inputs to code under test and capturing outputs within the same AoH. Then we compare each execution's actual outputs with what we expected.

Each test is represented by a hashref in which each key is a literal string; the values may be thought of as attributes of the test. The literal keys are part of our published interface; accessor methods are not required. Hashrefs and their keys may be nested DWIMmishly.

Much of the merit of our approach lies in sticky declaration. Once you declare, say, a coderef, you don't need to declare it again for every set of givens. Or, you can declare a given list of arguments once and pass them to several subroutines. See "-sticky", "-clear".
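For illustration only (this is a dummy release, so nothing here actually runs), a sticky "-coderef" might be declared once and reused across the declarations that follow, using the Acme::Teddy helper from the synopsis:

```perl
use Test::Ranger;

# -coderef, declared in the first hashref, sticks to the
# following declarations until replaced or cleared.
my $group = Test::Ranger->new([
    { -coderef  => \&Acme::Teddy::_egg  },              # sticky
    { -given    => [ 2, 3 ], -return => { -is =>  6 } },
    { -given    => [ 4, 7 ], -return => { -is => 42 } },
]);
$group->test();
```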

Test::Ranger does not lock you in to a single specific approach. You can declare your entire test series as an object and simply "test()" it, letting TR handle the details. You can read your data from somewhere and just use TR to capture a single execution, then examine the results on your own. You can mix TR methods and function calls; you can add other Test::More-ish checks. The door is open.

Templates

To further speed things along, please note that a number of templates are shipped with TR. These may be copied, modified, and extended as you please, of course. Consider them a sort of cookbook and an appendix to this documentation.

GLOSSARY ^

You are in a maze of twisty little tests, all different.

The word test is so heavily overloaded that documentation may be unclear. In TR docs, I will use the following terms:

manager

E.g., prove, make test; program that runs a "suite" through a "harness"

harness

E.g., Test::Harness or TAP::Harness; summarizes "framework" results

framework

E.g., Test::Simple or Test::More; sends results to "harness"

suite

Folder or set of test scripts.

script

File containing Perl code meant to be run by a "harness"; filename usually ends in .t

list

Array or series of (several sequential) test declarations

declaration

The data required to execute a test, including given "inputs" and expected "outputs"; also, the phase in which this data is constructed

execution

The action of running a test "declaration" and capturing actual "outputs"; also, the phase in which this is done

checking

The action of comparing actual and expected values for some execution; also, the phase in which this is done

subtest

A single comparison of actual and expected results for some output.

Note that a Test::More::subtest(), used internally by Test::Ranger, counts as a single 'test' passed to harness. In these docs, a 'subtest' is any one check within a call to subtest().

inputs

Besides arguments passed to SUT, any state that it might read, such as @ARGV and %ENV.

Inputs are given, perhaps generated.

outputs

Besides the conventional return value, anything else SUT might write, particularly STDOUT and STDERR; also includes exceptions thrown by SUT.

Outputs may be actual ("-got") results or expected ("-want").

CUT, SUT, MUT

code under test, subroutine..., module...; the thing being tested

INTERFACE ^

The primary interface to TR is the test data structure you normally supply as the argument to "new()". There are also a number of methods you can use. In Perl, methods can be called as functions; Test::Ranger's methods are written with this in mind.

$test

You can call this anything you like, of course. This is the football you pass to various methods.

The Test::Ranger object represents a single test "declaration". Besides the data that you provide, it contains test outputs and some housekeeping information. See "new()".

All data is kept in essentially the same structure you pass to the constructor. You should use the following literal hash keys to access any element or object attribute. Generally, values are interpreted according to the rule: If the value is a simple scalar, it is considered the intended value of the corresponding key. If the value is an arrayref, it is dereferenced and the resulting array considered a list of intended values. If the value is a hashref, then it is considered to introduce more keys from this interface.

It's not necessary to supply values for all these keys. The only essential key is "-coderef". If nothing else is declared, &{ $test->{-coderef} }() will be executed, with no arguments. One subtest will pass if the execution's return value is TRUE. STDOUT and STDERR are expected to be empty. Any exception will be trapped and reported as a subtest failure.
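A minimal declaration, then, might look like this (a sketch only; the module is nonfunctional in this release):

```perl
use Test::Ranger;

# Only -coderef is essential; defaults cover everything else:
# TRUE return, empty STDOUT, empty STDERR, no exception.
my $test = Test::Ranger->new({
    -coderef => \&Acme::Teddy::_egg,
});
$test->test();
```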

See defaults.

Inputs

These are the values passed into a test execution.

-given

    {-given => [2, 'foo']}              # default == ()

Values passed as arguments to CUT. Your code might well be passed a hashref; so, if you supply this here, TR will not look deeper for more keys.

-argv

    {-argv => ['--fleegle' => 'one']}   # default: untouched

Before each execution, @ARGV will be set to this list. Existing @ARGV will be replaced.

-env

    {-env => [PERL5LIB => '../blib']}   # default: untouched

Before each execution, this list will be added to %ENV. Existing %ENV key/value pairs will be untouched.

-infile

    {-infile => 'my/data/file.txt'}

The supplied file will be opened for reading and one record passed per execution. This key can be declared as the value of some other input key; if it's found at the top level of a declaration, the record will be passed in as a single string "-given".

-input

    {-input => {-foo => 'baz'}}

These values will not be processed directly when the declaration is executed. This feature is intended to allow you to supply additional test inputs. It's up to you to pick them out during execution and use them as you wish.

You may, if you like, collect other inputs here; it's not required. -inputs is a synonym.

Expectations

These are the values you want to find after a test execution.

-return

    {-return => 1}                      # default: any TRUE value

The value normally returned by the execution. Your code might well return a hashref; so, if you supply this here, TR will not look deeper for more keys.

TODO: wantarray?

-stdout

    {-stdout => [qw(foo bar baz)]}      # default eq q{}

STDOUT is always captured. If a scalar or arrayref is supplied for this key (as the example), it will be treated as though you had declared:

    {
        -stdout => {
            -like       => [qw(foo bar baz)],
            -matches    => 1,
        }
    }

You may want more control over the comparison. If you supply a hashref, you can declare various subkeys for this purpose.
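For instance, combining subkeys described under "Comparison control" (a sketch, not a tested recipe):

```perl
# Finer control over the STDOUT comparison via subkeys.
{
    -stdout => {
        -like       => [qw(foo bar baz)],
        -matches    => 2,   # require at least two matches
        -lines      => 1,   # about one line of output
    },
}
```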

-warn

STDERR is always captured. This feature is parallel to "-stdout". -stderr is a synonym.

-fatal

Exceptions are always trapped. You might want the execution to fatal out; if so, supply a value for this key, which will be subtested as are "-stdout" and "-warn".

-want

    {-want => {-foo => 'baz'}}

Extra values will not be processed directly when the declaration is executed. This feature is intended to allow you to supply additional test expectations. It's up to you to pick them out during comparison and use them as you wish.

You may, if you like, collect other wants here; it's not required. -wants is a synonym.

Results

Actual results from execution and comparison are stored in the same object you declared in the constructor. You may wish to perform additional subtests on them. You might like to dump stored results to screen or disk.

-got

    {
        -got => {
            -return => 'foo',
            -stdout     => {
                -string     => 'Hello, world!',
            },
            -stderr     => {
                -string     => 'Foo caught at line 17.',
                -matches    => 3,
            },
            -fatal      => undef,
        },
    }

All captured results are stored as subkeys of -got. To see in detail how TR stores these results, you might like to use the convenience method "dump()".

Execution control

TODO: Explain these. See also crossjoin() and friends.

-expand

-sticky

-clear

-bailout

-skip

-done

Comparison control

TODO: For any script, comparisons may be done for each declaration immediately after its execution; or all at once after all executions have completed.

Between any pair of actual and expected outputs, one or more subtests can be made. These are declared, generally, with a subkey similar to the corresponding framework comparison function. If, for any "-want", an expected value is given directly, it will be compared using its fallback.

FIXME: fallback table

    -want                       fallback method
    -----                       ---------------
    -return                     scalar:     Test::More::is()
                                not scalar: Test::More::is_deeply()
    -stdout, -stderr, -fatal

-is

    {-is => 'Hello, world!'}

String eq. A synonym is -string.

-number

    {-number => 42}

Numeric ==. An additional subtest will fail if the actual result raises an "isn't numeric" warning in the comparison. This warning will not be captured into {-got}{-warn}. See "Category Hierarchy" in perllexwarn.

-min

    {-min => 3}

Subtest passes if actual value is at least this.

-max

    {-max => 5}

Subtest passes if actual value is not more than this.

-like

    {-like => [qw(foo bar baz)]}

A regex will be constructed from the provided list, e.g.:

    $test->{-got} =~ m/foo|bar|baz/

This subtest will pass if there is at least one match. But see "-matches".
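Independent of Test::Ranger, the alternation match can be sketched in plain, runnable Perl (counting all matches, as "-matches" checks):

```perl
# Build an alternation from a word list and count matches,
# as a plain-Perl sketch of what -like/-matches check.
my @words   = qw(hello world);
my $regex   = join q{|}, map { quotemeta } @words;
my $got     = "hello, world!\n";            # e.g., captured STDOUT
my $matches = () = $got =~ m/$regex/g;
print "$matches\n";                         # 2
```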

-regex

    {-regex => qr/$my_big_fat_regex/}

Like "-like", but allows you to use full regex syntax.

-matches

    {
        -like       => [qw(foo bar baz)],
        -matches    => 2,
    }

Only useful with "-like" or "-regex". Checks to see that at least a required number of matches were found. But see "-max".

-lines

    {-lines => 7}

Checks to see that at least a required number of lines were captured. As a sanity check, this subtest will also fail if the actual number of lines is much greater than expected. TODO: how many more is too much? To override this and get finer control, see "-number", "-min", "-max". You can say:

    {-lines => {-min => 5, -max => 9} }         

Class Methods

TODO: Explain these stuffs

new()

Takes a hashref or arrayref as a required argument.

If you pass a hashref to the constructor Test::Ranger::new(), it will return an object blessed into class Test::Ranger; if you pass an arrayref of hashrefs, it will bless each hashref into the base class, wrap the arrayref in a top-level hashref, and bless the mess into Test::Ranger::List.
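Schematically (a sketch; the class names are as documented above):

```perl
use Test::Ranger;

# Hashref: a single declaration.
my $single = Test::Ranger->new({ -coderef => \&Acme::Teddy::_egg });
# ref($single) would be 'Test::Ranger'

# Arrayref of hashrefs: a list of declarations.
my $list = Test::Ranger->new([
    { -coderef => \&Acme::Teddy::_egg },
    { -given   => [ 4, 7 ] },
]);
# ref($list) would be 'Test::Ranger::List'
```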

TR objects are hashref-based, and their keys are part of our public interface. So, you're free to poke around as you like.

Returns $self.

Object Methods

TODO: Explain these stuffs

TR object methods may generally be called as fully-qualified functions. Internally, the argument supplied to the function will be passed to "new()".

execute()

Takes a hashref or arrayref as a required argument when called as a function. Takes no argument when called as a method.

Perform the execution of the declared data and code, capturing outputs. If the class is Test::Ranger::List, then the list will be looped through sequentially and each Test::Ranger subobject executed.

Returns $self.

check()

Takes a hashref or arrayref as a required argument when called as a function. Takes no argument when called as a method.

Perform a series of subtests comparing expected and actual results. Each subtest writes to STDOUT/STDERR for consumption by a harness.

Returns $self.

test()

Takes a hashref or arrayref as an optional argument when called as a function. Takes no argument when called as a method.

Performs both the execution and comparison phase on a Test::Ranger object. If invoked on a Test::Ranger::List, then each subobject will be executed and then compared before going to the next. You may prefer this to the two-step execute(); check(); approach, especially if you want to "-bailout" of testing on failure.

Returns $self.

append()

Takes a TR object as a required argument. Not usable as a function.

Pushes its argument into $self.

Returns $self.

crossjoin()

Takes a TR object as a required argument. Not usable as a function.

Builds a two-dimensional matrix from $self and its argument, then flattens this into a new list. Useful for running every possible combination of two lists of test declarations.
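As a hypothetical sketch (the helper &Acme::Teddy::_ham is invented for illustration):

```perl
use Test::Ranger;

# Cross two short lists: 2 coderefs x 3 given-lists
# should flatten into 6 declarations.
my $subs = Test::Ranger->new([
    { -coderef => \&Acme::Teddy::_egg },
    { -coderef => \&Acme::Teddy::_ham },    # hypothetical helper
]);
my $givens = Test::Ranger->new([
    { -given => [ 0    ] },
    { -given => [ 1    ] },
    { -given => [ 2, 3 ] },
]);
$subs->crossjoin($givens)->test();          # 6 executions
```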

Returns $self.

shuffle()

Takes any scalar value as an argument, but interprets it as a boolean. A Test::Ranger::List object (only) method.

Pseudo-randomly rearranges a list of TR subobjects, so they won't execute in sequence. Useful for uncovering unexpected state retention. Best used after "expand()".

Returns $self.

expand()

Takes no argument. Not usable as a function.

Expands the declarations in an object to the most-fully-qualified extent. This removes all dependency on "-sticky" declaration and inserts defaults for all values not supplied. Useful if you're not sure TR is really DWYM.

Returns $self.

dump()

Takes a keyword as an optional argument; see below. Not usable as a function.

    print $test->dump('form');

Convenience method dumps an entire object so you can see what's in it. Keyword 'form' uses Perl6::Form to try to get a compact dump. Default is Data::Dumper.

Returns a big long string.

DIAGNOSTICS ^

Error message here, perhaps with %s placeholders

[Description of error here]

Another error message here

[Description of error here]

[Et cetera, et cetera]

CONFIGURATION AND ENVIRONMENT ^

Test::Ranger requires no configuration files or environment variables.

DEPENDENCIES ^

None.

INCOMPATIBILITIES ^

None reported.

BUGS AND LIMITATIONS ^

Please report any bugs or feature requests to bug-test-ranger@rt.cpan.org, or through the web interface at http://rt.cpan.org.

AUTHOR ^

Xiong Changnian <xiong@cpan.org>

LICENSE ^

Copyright (C) 2010 Xiong Changnian <xiong@cpan.org>

This library and its contents are released under Artistic License 2.0:

http://www.opensource.org/licenses/artistic-license-2.0.php
