
NAME

Statistics::Test::WilcoxonRankSum - perform the Wilcoxon (aka Mann-Whitney) rank sum test on two sets of numeric data.

VERSION

This document describes Statistics::Test::WilcoxonRankSum version 0.0.1

SYNOPSIS

    use Statistics::Test::WilcoxonRankSum;

    my $wilcox_test = Statistics::Test::WilcoxonRankSum->new();

    my @dataset_1 = (4.6, 4.7, 4.9, 5.1, 5.2, 5.5, 5.8, 6.1, 6.5, 6.5, 7.2);
    my @dataset_2 = (5.2, 5.3, 5.4, 5.6, 6.2, 6.3, 6.8, 7.7, 8.0, 8.1);

    $wilcox_test->load_data(\@dataset_1, \@dataset_2);
    my $prob = $wilcox_test->probability();

    my $pf = sprintf '%f', $prob; # $pf is now "0.091022"

    print $wilcox_test->probability_status();

    # prints something like:
    # Probability:   0.002797, exact
    # or
    # Probability:   0.511020, normal approx w. mean: 104.000000, std deviation:  41.840969, z:   0.657251

    my $pstatus = $wilcox_test->probability_status();
    # $pstatus is like the strings above

    $wilcox_test->summary();

    # prints something like:

    # ----------------------------------------------------------------
    # dataset |    n      | rank sum: observed / expected 
    # ----------------------------------------------------------------
    #   1    |     10    |               533      /    300
    # ----------------------------------------------------------------
    #   2    |     50    |              1296      /   1500
    # ----------------------------------------------------------------
    # N (size of both datasets):      60
    # Probability:   0.000006, normal approx w. mean: 305.000000, std deviation:  50.414945, z:   4.522468
    # Significant (at 0.05 level)
    # Ranks of dataset 1 are higher than expected

DESCRIPTION

In statistics, the Mann-Whitney U test (also called the Mann-Whitney-Wilcoxon (MWW), Wilcoxon rank-sum test, or Wilcoxon-Mann-Whitney test) is a non-parametric test for assessing whether two samples of observations come from the same distribution. The null hypothesis is that the two samples are drawn from a single population, and therefore that their probability distributions are equal. See, for example, the Wikipedia entry http://en.wikipedia.org/wiki/Mann-Whitney_U or a statistics textbook for further details.

When the sample sizes are small, the probability can be computed directly. For larger samples a normal approximation is usually used.

The Mechanics

Input to the test are two lists of numbers. The values of both lists are ranked from smallest to largest, while remembering which list each item came from. Tied values receive the average of the ranks they span. For each of the two samples, we compute the rank sum. Under the assumption that the two samples come from the same population, the rank sum of the first set should be close to n1 * (n1 + n2 + 1)/2, where n1 and n2 are the sample sizes. The test computes the (exact or approximated) probability of the actual rank sum against this expected value. So, when the computed probability is below 0.05, we can reject the null hypothesis at the 0.05 level and conclude that the two samples are significantly different.
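The ranking step described above can be sketched in a few lines of plain Perl. This is an illustration of the mechanics only, not the module's internal code; the tiny datasets are made up for the example:

```perl
use strict;
use warnings;
use List::Util qw(sum);

# Two small samples, with one tie (4.9 appears in both).
my @ds1 = (4.6, 4.7, 4.9, 5.1);
my @ds2 = (5.2, 5.3, 4.9, 5.6);

# Pool the values, remembering which dataset each came from.
my @pooled = ( (map { [$_, 'ds1'] } @ds1), (map { [$_, 'ds2'] } @ds2) );
@pooled = sort { $a->[0] <=> $b->[0] } @pooled;

# Assign ranks, averaging over runs of tied values.
my @ranks;
my $i = 0;
while ($i < @pooled) {
    my $j = $i;
    $j++ while $j < @pooled && $pooled[$j][0] == $pooled[$i][0];
    my $avg = sum($i + 1 .. $j) / ($j - $i);   # average of ranks i+1 .. j
    push @ranks, [$avg, $pooled[$_][1]] for $i .. $j - 1;
    $i = $j;
}

# Rank sum of the first sample vs. its expectation n1 * (n1 + n2 + 1) / 2.
my $rank_sum_1 = sum map { $_->[0] } grep { $_->[1] eq 'ds1' } @ranks;
my ($n1, $n2)  = (scalar @ds1, scalar @ds2);
my $expected_1 = $n1 * ($n1 + $n2 + 1) / 2;

printf "rank sum: %g, expected: %g\n", $rank_sum_1, $expected_1;
```

Here the tied 4.9 values share the average rank 3.5, giving a rank sum of 11.5 for the first sample against an expected 18.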

Implementation

The implementation follows the mechanics described above. The exact probability is computed for sample sizes below 20; this threshold can be changed via new(). For larger samples the probability is computed by normal approximation.

INTERFACE

Constructor

new()

Builds a new Statistics::Test::WilcoxonRankSum object.

When called like this:

 Statistics::Test::WilcoxonRankSum->new( { exact_upto => 30 } );

the exact probability will be computed for sample sizes lower than 30 (instead of 20, which is the default).

Providing the Data

load_data(\@dataset_1, \@dataset_2)
set_dataset1(\@dataset_1)
set_dataset2(\@dataset_2)

When calling these methods, all previously computed rank sums and probabilities are reset.

Computations

Ranks

compute_ranks()

The two datasets are pooled and ranked (taking care of ties). The method returns a reference to a hash keyed on the data values, looking like this:

                      '3' => {
                              'tied' => 2,
                              'in_dataset' => {
                                               'ds2' => 2
                                              },
                              'rank' => '1.5'
                             },
                      '24' => {
                               'tied' => 1,
                               'in_dataset' => {
                                                'ds1' => 1
                                               },
                               'rank' => '7'
                              },

compute_rank_array

Returns the ranks computed above in a different form (depending on the context, a list of array references or a reference to an array of array references):

 [ [ '1.5', 'ds2' ], [ '1.5', 'ds2' ], [ '3', 'ds1' ], ...]

The first item of each inner array is the rank; the second marks the dataset the ranked item came from: ds1 --> first dataset, ds2 --> second dataset.

In scalar context, returns the number of elements (i.e. the combined size of the two samples).

rank_sum_for

Computes the rank sum for the dataset given as argument. If the argument matches 1, this is dataset 1, otherwise dataset 2.

get_smaller_rank_sum

Checks which of the two rank sums is the smaller one.

smaller_rank_sums_count

For the set with the smaller rank sum, counts the number of partitions (of the ranks) giving a smaller rank sum than the observed one. Needed to compute the exact probability.

rank_sums_other_than_expected_counts

For the set with the smaller rank sum, counts the number of partitions (of the ranks) giving a rank sum other than the observed one (for example, if the rank sum is larger than expected, counts the number of partitions giving a rank sum larger than the observed one). Needed to compute the exact probability.
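The counting idea behind these methods can be illustrated with a self-contained sketch (the module itself uses Set::Partition; this is not its implementation). We enumerate every way of assigning n1 of the N ranks to the first dataset and count how many assignments give a rank sum at least as extreme as the observed one; the numbers are a made-up toy example:

```perl
use strict;
use warnings;

# Toy example: N = 6 ranks, n1 = 3 of them belong to dataset 1,
# and the observed rank sum for dataset 1 is 6 (the smallest possible).
my ($n1, $N, $observed) = (3, 6, 6);

my ($total, $as_extreme) = (0, 0);

# Recursively choose $need more ranks starting at $next, tracking $sum.
my $choose;
$choose = sub {
    my ($next, $need, $sum) = @_;
    if ($need == 0) {                      # one complete partition
        $total++;
        $as_extreme++ if $sum <= $observed;
        return;
    }
    # Pick the next rank, leaving room for the remaining $need - 1 picks.
    for my $r ($next .. $N - $need + 1) {
        $choose->($r + 1, $need - 1, $sum + $r);
    }
};
$choose->(1, $n1, 0);

printf "%d of %d partitions are as extreme: p = %.3f\n",
    $as_extreme, $total, $as_extreme / $total;
```

Of the C(6,3) = 20 equally likely partitions, only {1,2,3} has a rank sum as small as 6, giving a one-sided probability of 0.05.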

Probabilities

probability

Computes (and returns) the probability of the given outcome under the assumption that the two data samples come from the same population. When the size of the two samples taken together is less than exact_upto, "probability_exact" is called, else "probability_normal_approx". The parameter exact_upto can be passed to "new" as argument and defaults to 20.

When the size of the two samples taken together is less than 5, it does not make much sense to compute the probability. Currently, only the "summary" method issues a warning.

This method is also called whenever an object of this class needs to be coerced to a number.

probability_exact

Compute the probability by counting.

probability_normal_approx

Compute the probability by normal approximation.
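The approximation can be sketched in plain Perl (the module itself delegates the tail lookup to Statistics::Distributions; this is an illustration, not its code). The observed rank sum is standardised and the two-sided tail of the standard normal is taken, here via the Abramowitz and Stegun rational approximation for erfc; the figures match the summary example above (n1 = 10, n2 = 50, observed rank sum 533):

```perl
use strict;
use warnings;

my ($n1, $n2, $w) = (10, 50, 533);
my $N    = $n1 + $n2;
my $mean = $n1 * ($N + 1) / 2;                 # 305
my $sd   = sqrt($n1 * $n2 * ($N + 1) / 12);    # ~50.414945
my $z    = ($w - $mean) / $sd;                 # ~4.522468

# Two-sided p-value: erfc(|z| / sqrt(2)), using Abramowitz & Stegun
# approximation 7.1.26 for erfc (absolute error below 1.5e-7).
sub erfc_approx {
    my $x = shift;
    my $t = 1 / (1 + 0.3275911 * $x);
    return $t * (0.254829592 + $t * (-0.284496736 + $t * (1.421413741
         + $t * (-1.453152027 + $t * 1.061405429)))) * exp(-$x * $x);
}
my $p = erfc_approx(abs($z) / sqrt(2));

printf "mean %.6f, sd %.6f, z %.6f, p %.6f\n", $mean, $sd, $z, $p;
```

With these inputs the mean is 305, the standard deviation about 50.414945 and z about 4.522468, as in the summary output above.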

Display and Notification

probability_status

Tells whether the probability has been or can be computed. If it has been computed, shows the value and how it was computed (by the direct method or by normal approximation).

summary

Prints or returns a string with diagnostics like this:

    # ----------------------------------------------------------------
    # dataset |    n      | rank sum: observed / expected 
    # ----------------------------------------------------------------
    #   1    |     10    |               533      /    300
    # ----------------------------------------------------------------
    #   2    |     50    |              1296      /   1500
    # ----------------------------------------------------------------
    # N (size of both datasets):      60
    # Probability:   0.000006, normal approx w. mean: 305.000000, std deviation:  50.414945, z:   4.522468
    # Significant (at 0.05 level)
    # Ranks of dataset 1 are higher than expected

This method also issues a warning when the size of the two samples taken together is less than 5.

summary is called whenever an object of this class needs to be coerced to a string.

as_hash

Returns a hash reference with the gathered data, needed to compute the probabilities, with the following keys:

dataset_1

The first dataset (array ref)

dataset_2

The second dataset (also array ref)

n1

size of first dataset

n2

size of second dataset

N

n1 + n2

rank_array

the array returned by "compute_rank_array", see there.

rank_sum_1, rank_sum_2

rank sum of first and second dataset respectively.

rank_sum_1_expected rank_sum_2_expected

the expected rank sums, if the two samples came from the same population. For the first dataset this is:

  n1 * (N+1) / 2

probability

the computed probability

probability_normal_approx

data used for computing the probability by normal approximation, when the sample size is too large. A hash reference with the following keys: mean, std deviation, z.

Getter

The following methods are provided by the Class::Std :get facility and return the corresponding object data:

get_dataset1, get_dataset2
get_n1
get_n2
get_N
get_max_rank_sum
get_rank_array
get_rankSum_dataset1, get_rankSum_dataset2
expected_rank_sum_dataset1, expected_rank_sum_dataset2

DIAGNOSTICS

Need array ref to dataset
Datasets must be passed as array references

When a "Providing the Data" method is called without enough arguments, or when the arguments are not array references.

dataset has no element greater 0

It makes no sense to compute the probability when all the items are 0.

Please set/load datasets before computing ranks

Maybe you called "compute_ranks" without having handed in both datasets?

Argument must match `1' or `2' (meaning dataset 1 or 2)

The method "rank_sum_for" must know which dataset to compute the rank sum for: dataset 1 if the argument matches 1, dataset 2 if the argument matches 2.

Rank sum bound %i is bigger than the maximum possible rank sum %i
Sum of %i and %i must be equal to number of ranks: %i

Plausibility checks before doing the rank sum counts ("smaller_rank_sums_count"). Something's terribly broken when this occurs.

CONFIGURATION AND ENVIRONMENT

Statistics::Test::WilcoxonRankSum requires no configuration files or environment variables.

DEPENDENCIES

Carp
Carp::Assert
Class::Std
Contextual::Return
Set::Partition
List::Util qw(sum)
Math::BigFloat
Math::Counting
Statistics::Distributions

INCOMPATIBILITIES

None reported.

BUGS AND LIMITATIONS

No bugs have been reported.

Please report any bugs or feature requests to bug-statistics-test-wilcoxonranksum@rt.cpan.org, or through the web interface at http://rt.cpan.org.

TO DO

a sort function as argument (maybe at construction time)

such that float data within a given interval can be considered equal

a more obvious warning when the sample sizes are definitely too small

AUTHOR

Ingrid Falk <ingrid dot falk at loria dot fr>

LICENCE AND COPYRIGHT

Copyright (c) 2008, Ingrid Falk <ingrid dot falk at loria dot fr>. All rights reserved.

This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See perlartistic.

DISCLAIMER OF WARRANTY

BECAUSE THIS SOFTWARE IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE SOFTWARE, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE SOFTWARE "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE SOFTWARE IS WITH YOU. SHOULD THE SOFTWARE PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR, OR CORRECTION.

IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE SOFTWARE AS PERMITTED BY THE ABOVE LICENCE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE SOFTWARE (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE SOFTWARE TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.