KinoSearch::Analysis::Tokenizer - Split a string into tokens.
The KinoSearch code base has been assimilated by the Apache Lucy project. The "KinoSearch" namespace has been deprecated, but development continues under our new name at our new home: http://lucy.apache.org/
    my $whitespace_tokenizer
        = KinoSearch::Analysis::Tokenizer->new( pattern => '\S+' );

    # or...
    my $word_char_tokenizer
        = KinoSearch::Analysis::Tokenizer->new( pattern => '\w+' );

    # or...
    my $apostrophising_tokenizer = KinoSearch::Analysis::Tokenizer->new;

    # Then... once you have a tokenizer, put it into a PolyAnalyzer:
    my $polyanalyzer = KinoSearch::Analysis::PolyAnalyzer->new(
        analyzers => [ $case_folder, $word_char_tokenizer, $stemmer ],
    );
Generically, "tokenizing" is a process of breaking up a string into an array of "tokens". For instance, the string "three blind mice" might be tokenized into "three", "blind", "mice".
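As a rough plain-Perl sketch of the idea (independent of the KinoSearch API), a global regex match in list context performs the same kind of splitting:

    # Illustration only: extract word tokens from a string with a
    # plain Perl regex, mimicking what a tokenizer does.
    my @tokens = ( "three blind mice" =~ /\w+/g );
    # @tokens now holds ( "three", "blind", "mice" )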
KinoSearch::Analysis::Tokenizer decides where it should break up the text based on a regular expression compiled from a supplied pattern matching one token. If our source string is...
"Eats, Shoots and Leaves."
... then a "whitespace tokenizer" with a pattern
of "\\S+"
produces...
Eats, Shoots and Leaves.
... while a "word character tokenizer" with a pattern
of "\\w+"
produces...
Eats Shoots and Leaves
... the difference being that the word character tokenizer skips over punctuation as well as whitespace when determining token boundaries.
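To make the contrast concrete, here is a hedged plain-Perl approximation (KinoSearch compiles the supplied pattern itself; this just applies the two patterns directly):

    # Illustration only: apply the two patterns with plain Perl.
    my $text = "Eats, Shoots and Leaves.";
    my @whitespace_tokens = ( $text =~ /\S+/g );
    # ( "Eats,", "Shoots", "and", "Leaves." )
    my @word_char_tokens = ( $text =~ /\w+/g );
    # ( "Eats", "Shoots", "and", "Leaves" )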
    my $word_char_tokenizer = KinoSearch::Analysis::Tokenizer->new(
        pattern => '\w+',    # required
    );
The default pattern is "\w+(?:[\x{2019}']\w+)*", which matches "it's" as well as "it" and "O'Henry's" as well as "Henry".

KinoSearch::Analysis::Tokenizer isa KinoSearch::Analysis::Analyzer isa KinoSearch::Object::Obj.
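For illustration, the default pattern above can be exercised with plain Perl (a hedged sketch, not the KinoSearch API) to see which tokens it yields:

    # Illustration only: apply the documented default pattern directly.
    my $default_pattern = qr/\w+(?:[\x{2019}']\w+)*/;
    my @tokens = ( "It's O'Henry's story" =~ /$default_pattern/g );
    # @tokens is ( "It's", "O'Henry's", "story" )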
Copyright 2005-2011 Marvin Humphrey
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.