Lucy::Analysis::RegexTokenizer (Lucy 0.4.0, Module Version 0.004000, by Marvin Humphrey)

NAME

Lucy::Analysis::RegexTokenizer - Split a string into tokens.

SYNOPSIS

    my $whitespace_tokenizer
        = Lucy::Analysis::RegexTokenizer->new( pattern => '\S+' );

    # or...
    my $word_char_tokenizer
        = Lucy::Analysis::RegexTokenizer->new( pattern => '\w+' );

    # or...
    my $apostrophising_tokenizer = Lucy::Analysis::RegexTokenizer->new;

    # Then... once you have a tokenizer, put it into a PolyAnalyzer:
    my $polyanalyzer = Lucy::Analysis::PolyAnalyzer->new(
        analyzers => [ $word_char_tokenizer, $normalizer, $stemmer ], );

DESCRIPTION

Generically, "tokenizing" is the process of breaking up a string into an array of "tokens". For instance, the string "three blind mice" might be tokenized into "three", "blind", and "mice".

Lucy::Analysis::RegexTokenizer decides where to break up the text using a regular expression compiled from the supplied pattern, which should match a single token. If our source string is...

    "Eats, Shoots and Leaves."

... then a "whitespace tokenizer" with a pattern of "\S+" produces...

    Eats,
    Shoots
    and
    Leaves.

... while a "word character tokenizer" with a pattern of "\w+" produces...

    Eats
    Shoots
    and
    Leaves

... the difference being that the word character tokenizer skips over punctuation as well as whitespace when determining token boundaries.
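The same splitting behavior can be demonstrated with plain Perl regexes, independent of the Lucy API; this sketch simply applies the two patterns above to the sample string:

```perl
use strict;
use warnings;

my $text = "Eats, Shoots and Leaves.";

# Whitespace tokenizer analogue: each token is a maximal run of
# non-whitespace characters, so punctuation stays attached to words.
my @ws_tokens = $text =~ /(\S+)/g;

# Word-character tokenizer analogue: each token is a maximal run of
# \w characters, so punctuation is skipped along with whitespace.
my @wc_tokens = $text =~ /(\w+)/g;

print "@ws_tokens\n";    # Eats, Shoots and Leaves.
print "@wc_tokens\n";    # Eats Shoots and Leaves
```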

CONSTRUCTORS

new( [labeled params] )

    my $word_char_tokenizer = Lucy::Analysis::RegexTokenizer->new(
        pattern => '\w+',    # required
    );
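Any pattern that matches one token will do. As a sketch of the kind of custom pattern one might pass to new(), the plain-Perl example below keeps internal apostrophes so that contractions survive as single tokens; the specific pattern is an assumption chosen for illustration, not necessarily Lucy's built-in default:

```perl
use strict;
use warnings;

# Assumed pattern for illustration: a word, optionally followed by
# apostrophe-joined continuations ("Don't", "it's").
my $pattern = qr/\w+(?:'\w+)*/;

# Applied directly as a Perl regex rather than through the Lucy API,
# so this runs without Lucy installed.
my @tokens = "Don't panic, it's fine." =~ /($pattern)/g;
print "@tokens\n";    # Don't panic it's fine
```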

INHERITANCE

Lucy::Analysis::RegexTokenizer isa Lucy::Analysis::Analyzer isa Clownfish::Obj.
