Elastic::Model - A NoSQL document store with full text search for Moose objects using Elasticsearch as a backend.
version 0.27
    package MyApp;

    use Elastic::Model;

    has_namespace 'myapp' => {
        user => 'MyApp::User',
        post => 'MyApp::Post'
    };

    has_typemap 'MyApp::TypeMap';

    # Setup custom analyzers

    has_filter 'edge_ngrams' => (
        type     => 'edge_ngram',
        min_gram => 2,
        max_gram => 10
    );

    has_analyzer 'edge_ngrams' => (
        tokenizer => 'standard',
        filter    => [ 'standard', 'lowercase', 'edge_ngrams' ]
    );

    no Elastic::Model;
Elastic::Model is a framework for storing your Moose objects, using Elasticsearch as a NoSQL document store and flexible search engine.
It is designed to make it easy to start using Elasticsearch with minimal extra code, but allows you full access to the rich feature set available in Elasticsearch as soon as you are ready to use it.
If you are not familiar with Elastic::Model, you should start by reading Elastic::Manual::Intro.
The rest of the documentation on this page explains how to use the Elastic::Model module itself.
This version of Elastic::Model has been updated to work with Elasticsearch::Compat, a compatibility layer over the new official Elasticsearch Perl client: Elasticsearch.
In the near future, I will be releasing a version of Elastic::Model that works with Elasticsearch directly, without the need for the compatibility layer. In the meantime, you should be able to continue using Elastic::Model as normal by simply changing:
    use ElasticSearch;
    my $es = ElasticSearch->new(...);
to:
    use Elasticsearch::Compat;
    my $es = Elasticsearch::Compat->new(...);
Your application needs a model class to handle the relationship between your object classes and the Elasticsearch cluster.
Your model class is most easily defined as follows:
    package MyApp;

    use Elastic::Model;

    has_namespace 'myapp' => {
        user => 'MyApp::User',
        post => 'MyApp::Post'
    };

    no Elastic::Model;
This applies Elastic::Model::Role::Model to your MyApp class, Elastic::Model::Meta::Class::Model to MyApp's metaclass and exports functions which help you to configure your model.
Your model must define at least one namespace, which tells Elastic::Model which type (like a table in a DB) should be handled by which of your classes. So the above declaration says:
"For all indices which belong to namespace myapp, objects of class MyApp::User will be stored under the type user in Elasticsearch."
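For instance, once the model above is defined, it might be used along these lines (a sketch only, assuming an Elasticsearch cluster reachable on localhost:9200 and a MyApp::User doc class as in the synopsis):

    use MyApp;

    my $model = MyApp->new;                 # connects to localhost:9200 by default
    my $ns    = $model->namespace('myapp');
    $ns->index->create;                     # create the index for this namespace

    my $domain = $model->domain('myapp');
    my $user   = $domain->create(           # create and save a MyApp::User doc
        user => { name => 'Clinton' }
    );

See Elastic::Manual::Intro for the full walk-through of this workflow.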
Elastic::Model uses a TypeMap to figure out how to inflate and deflate your objects, and how to configure them in Elasticsearch.
You can specify your own TypeMap using:
    has_typemap 'MyApp::TypeMap';
See Elastic::Model::TypeMap::Base for instructions on how to define your own type-map classes.
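As a rough sketch of the shape such a class takes (the DateTime handling below is purely illustrative, and DateTime::Format::ISO8601 is an assumed helper module):

    package MyApp::TypeMap;

    use Elastic::Model::TypeMap::Base qw(:all);

    # Teach the typemap how to deflate/inflate a type and how to
    # map it in Elasticsearch:
    has_type 'DateTime',
        deflate_via { sub { $_[0]->iso8601 } },
        inflate_via { sub { DateTime::Format::ISO8601->parse_datetime( $_[0] ) } },
        map_via     { type => 'date' };

    1;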
If you have attributes whose values are unique, then you can customize the index where these unique values are stored.
    has_unique_index 'myapp_unique';
The default value is unique_key.
Analysis is the process of converting full text into terms or tokens and is one of the things that gives full text search its power. When storing text in the Elasticsearch index, the text is first analyzed into terms/tokens. Then, when searching, search keywords go through the same analysis process to produce the terms/tokens which are then searched for in the index.
Choosing the right analyzer for each field gives you enormous control over how your data can be queried.
There are a large number of built-in analyzers available, but frequently you will want to define custom analyzers, which consist of:
zero or more character filters
a tokenizer
zero or more token filters
Elastic::Model provides sugar to make it easy to specify custom analyzers:
Character filters can change the text before it gets tokenized, for instance:
    has_char_filter 'my_mapping' => (
        type     => 'mapping',
        mappings => [ 'ph=>f', 'qu=>q' ]
    );
See "Default character filters" in Elastic::Model::Meta::Class::Model for a list of the built-in character filters.
A tokenizer breaks up the text into individual tokens or terms. For instance, the pattern tokenizer could be used to split text using a regex:
    has_tokenizer 'my_word_tokenizer' => (
        type    => 'pattern',
        pattern => '\W+',       # splits on non-word chars
    );
See "Default tokenizers" in Elastic::Model::Meta::Class::Model for a list of the built-in tokenizers.
Any terms/tokens produced by the "has_tokenizer" can then be passed through multiple token filters. For instance, each term could be broken down into "edge ngrams" (eg 'foo' => 'f','fo','foo') for partial matching.
    has_filter 'my_ngrams' => (
        type     => 'edge_ngram',
        min_gram => 1,
        max_gram => 10,
    );
See "Default token filters" in Elastic::Model::Meta::Class::Model for a list of the built-in token filters.
Custom analyzers can be defined by combining character filters, a tokenizer and token filters, some of which could be built-in, and some defined by the keywords above.
For instance:
    has_analyzer 'partial_word_analyzer' => (
        type        => 'custom',
        char_filter => ['my_mapping'],
        tokenizer   => ['my_word_tokenizer'],
        filter      => [ 'lowercase', 'stop', 'my_ngrams' ]
    );
See "Default analyzers" in Elastic::Model::Meta::Class::Model for a list of the built-in analyzers.
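Once declared in your model, a custom analyzer can be applied to an attribute in a doc class by name. A minimal sketch, assuming the partial_word_analyzer declared above (see Elastic::Manual::Attributes for the full list of attribute options):

    package MyApp::User;

    use Elastic::Doc;

    has 'name' => (
        is       => 'rw',
        isa      => 'Str',
        analyzer => 'partial_word_analyzer',   # must match a has_analyzer name
    );

    no Elastic::Doc;
    1;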
If you would like to override any of the core classes used by Elastic::Model, then you can do so as follows:
    override_classes (
        domain => 'MyApp::Domain',
        store  => 'MyApp::Store'
    );
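An override class is typically a Moose subclass of the corresponding default class. A hypothetical sketch (the search method name is an assumption; check Elastic::Model::Store for the methods you actually want to wrap):

    package MyApp::Store;

    use Moose;
    extends 'Elastic::Model::Store';

    # Hypothetical tweak: log every search request before passing it on
    around 'search' => sub {
        my ( $orig, $self, @args ) = @_;
        warn "Searching Elasticsearch\n";
        return $self->$orig(@args);
    };

    no Moose;
    1;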
The defaults are:
    namespace         Elastic::Model::Namespace
    domain            Elastic::Model::Domain
    store             Elastic::Model::Store
    view              Elastic::Model::View
    scope             Elastic::Model::Scope
    results           Elastic::Model::Results
    cached_results    Elastic::Model::Results::Cached
    scrolled_results  Elastic::Model::Results::Scrolled
    result            Elastic::Model::Result
    bulk              Elastic::Model::Bulk
Elastic::Model::Role::Model
Elastic::Manual
Elastic::Doc
Clinton Gormley <drtech@cpan.org>
This software is copyright (c) 2013 by Clinton Gormley.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
To install Elastic::Model, copy and paste the appropriate command in to your terminal.
cpanm
    cpanm Elastic::Model
CPAN shell
    perl -MCPAN -e shell
    install Elastic::Model
For more information on module installation, please visit the detailed CPAN module installation guide.