
NAME

    AI::MXNet::Metric - Online evaluation metric module.

DESCRIPTION

    Base class of all evaluation metrics.

get_config

    Saves the configuration of the metric. The metric can be recreated
    from this config with mx->metric->create(%{ $config }).
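
    For example (a minimal sketch, assuming $config is the hash ref
    returned by get_config as described above):

    my $acc    = mx->metric->create('acc');
    my $config = $acc->get_config;
    # rebuild an equivalent metric from the saved configuration
    my $acc2   = mx->metric->create(%{ $config });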

NAME

    AI::MXNet::Perplexity

DESCRIPTION

    Calculates perplexity.

    Parameters
    ----------
    ignore_label : int or undef
        Index of the invalid label to ignore when
        counting; usually -1. If undef, all
        entries are included.
    axis : int (default -1)
        The axis from prediction that was used to
        compute softmax. By default uses the last
        axis.
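
    For example (a minimal sketch; the softmax outputs and labels
    below are illustrative only):

    my $perplexity = AI::MXNet::Perplexity->new(ignore_label => undef);
    my $predicts = [mx->nd->array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])];
    my $labels   = [mx->nd->array([0, 1, 1])];
    $perplexity->update($labels, $predicts);
    # get returns a (name, value) pair, as in the example below for
    # AI::MXNet::PearsonCorrelation
    my ($name, $value) = $perplexity->get;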

NAME

    AI::MXNet::PearsonCorrelation

DESCRIPTION

    Computes Pearson correlation.

    Parameters
    ----------
    name : str
        Name of this metric instance for display.

    Examples
    --------
    >>> $predicts = [mx->nd->array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
    >>> $labels   = [mx->nd->array([[1, 0], [0, 1], [0, 1]])]
    >>> $pr = mx->metric->PearsonCorrelation()
    >>> $pr->update($labels, $predicts)
    >>> print $pr->get()
    ('pearson-correlation', '0.421637061887229')

NAME

    AI::MXNet::Loss

DESCRIPTION

    Dummy metric for directly printing loss.

    Parameters
    ----------
    name : str
        Name of this metric instance for display.
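
    For example (a minimal sketch, assuming, as in the Python API,
    that the labels argument is ignored and the values passed as
    predictions are simply averaged):

    my $loss_metric = AI::MXNet::Loss->new;
    # placeholder labels; this metric only looks at the "prediction"
    # argument, which here carries a batch of loss values
    $loss_metric->update([mx->nd->array([0, 0, 0])], [mx->nd->array([0.25, 0.17, 0.31])]);
    my ($name, $value) = $loss_metric->get;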

NAME

    AI::MXNet::Confidence

DESCRIPTION

    Accuracy by confidence buckets.

    Parameters
    ----------
    name : str
        Name of this metric instance for display.
    num_classes : Int
        Number of classes.
    confidence_thresholds : ArrayRef[Num]
        Confidence bucket thresholds.

    Example
    -------
    my $composite_metric = AI::MXNet::CompositeEvalMetric->new;
    $composite_metric->add(mx->metric->create('acc'));
    $composite_metric->add(
        AI::MXNet::Confidence->new(
            num_classes => 2,
            confidence_thresholds => [ 0.5, 0.7, 0.8, 0.9 ],
        )
    );

NAME

    AI::MXNet::CustomMetric

DESCRIPTION

    Custom evaluation metric that takes a sub ref.

    Parameters
    ----------
    eval_function : sub ref
        Customized evaluation function.
    name : str, optional
        The name of the metric.
    allow_extra_outputs : bool
        If true, the predictions can contain extra outputs.
        This is useful for RNNs, where the states are also
        produced as outputs for forwarding.
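
    For example (a minimal sketch; the mean-squared-error sub is only
    an illustration and assumes the label and prediction arguments
    support PDL-style element-wise arithmetic and avg()):

    my $mse_metric = AI::MXNet::CustomMetric->new(
        name          => 'mse',
        eval_function => sub {
            my ($label, $pred) = @_;
            # return a single number for the batch: mean squared error
            # (assumes PDL-style ops on the arguments)
            return (($label - $pred) ** 2)->avg;
        },
    );

    The resulting object is then updated and queried with update and
    get like any other metric.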

create

    Create an evaluation metric.

    Parameters
    ----------
    metric : str or sub ref
        The name of the metric, or a sub ref that computes
        the metric statistics given prediction and label NDArrays.
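
    For example (a minimal sketch; the sub ref form assumes, as for
    AI::MXNet::CustomMetric above, that its arguments support
    PDL-style arithmetic):

    my $acc = mx->metric->create('acc');

    # a sub ref is used as a custom evaluation function
    my $batch_mse = mx->metric->create(sub {
        my ($label, $pred) = @_;
        return (($label - $pred) ** 2)->avg;
    });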