NAME
    Elastic::Model - A NoSQL document store with full text search for Moose
    objects using Elasticsearch as a backend.

VERSION
    version 0.27

SYNOPSIS
        package MyApp;

        use Elastic::Model;

        has_namespace 'myapp' => {
            user => 'MyApp::User',
            post => 'MyApp::Post'
        };

        has_typemap 'MyApp::TypeMap';

        # Setup custom analyzers

        has_filter 'edge_ngrams' => (
            type     => 'edge_ngram',
            min_gram => 2,
            max_gram => 10
        );

        has_analyzer 'edge_ngrams' => (
            tokenizer => 'standard',
            filter    => [ 'standard', 'lowercase', 'edge_ngrams' ]
        );

        no Elastic::Model;

DESCRIPTION
    Elastic::Model is a framework for storing your Moose objects, using
    Elasticsearch as a NoSQL document store and flexible search engine.

    It is designed to make it easy to start using Elasticsearch with minimal
    extra code, but allows you full access to the rich feature set available
    in Elasticsearch as soon as you are ready to use it.

INTRODUCTION TO Elastic::Model
    If you are not familiar with Elastic::Model, you should start by reading
    Elastic::Manual::Intro.

    The rest of the documentation on this page explains how to use the
    Elastic::Model module itself.

IMPORTANT
    This version of Elastic::Model has been updated to work with
    Elasticsearch::Compat, a compatibility layer over Elasticsearch, the
    new official Perl client for Elasticsearch.

    In the near future, I will be releasing a version of Elastic::Model that
    works with Elasticsearch directly, without need for the compatibility
    layer. In the meantime, you should be able to continue using
    Elastic::Model as normal by simply changing:

        use ElasticSearch;
        my $es = ElasticSearch->new(...);

    to:

        use Elasticsearch::Compat;
        my $es = Elasticsearch::Compat->new(...);

USING ELASTIC::MODEL
    Your application needs a "model" class to handle the relationship
    between your object classes and the Elasticsearch cluster.

    Your model class is most easily defined as follows:

        package MyApp;

        use Elastic::Model;

        has_namespace 'myapp' => {
            user => 'MyApp::User',
            post => 'MyApp::Post'
        };

        no Elastic::Model;

    This applies Elastic::Model::Role::Model to your "MyApp" class,
    Elastic::Model::Meta::Class::Model to "MyApp"'s metaclass and exports
    functions which help you to configure your model.

    Your model must define at least one namespace, which tells
    Elastic::Model which type (like a table in a DB) should be handled by
    which of your classes. So the above declaration says:

    *"For all indices which belong to namespace "myapp", objects of class
    "MyApp::User" will be stored under the type "user" in Elasticsearch."*

  Custom TypeMap
    Elastic::Model uses a TypeMap to figure out how to inflate and deflate
    your objects, and how to configure them in Elasticsearch.

    You can specify your own TypeMap using:

        has_typemap 'MyApp::TypeMap';

    See Elastic::Model::TypeMap::Base for instructions on how to define your
    own type-map classes.
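
    As a rough sketch, a custom typemap extends the default typemap and
    adds handlers for your own types. The "MyCustomType" class and its
    methods below are hypothetical; see Elastic::Model::TypeMap::Base for
    the exact import syntax and the full details.

        package MyApp::TypeMap;

        use Elastic::Model::TypeMap::Base qw(
            Elastic::Model::TypeMap::Default
        );

        # How to deflate, inflate and map attributes of type MyCustomType
        has_type 'MyCustomType',
            deflate_via { sub { shift->to_string } },
            inflate_via { sub { MyCustomType->new_from_string(shift) } },
            map_via     { type => 'string' };

        1;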

  Custom unique key index
    If you have attributes whose values are unique, then you can customize
    the index where these unique values are stored.

        has_unique_index 'myapp_unique';

    The default value is "unique_key".
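
    The unique constraints themselves are declared on individual attributes
    in your doc classes via "unique_key". A brief sketch (the "MyApp::User"
    attribute shown here is an assumption; see Elastic::Manual::Attributes
    for details):

        package MyApp::User;

        use Elastic::Doc;

        # No two users may share an email address; the uniqueness claim
        # is stored in the index configured with has_unique_index
        has 'email' => (
            is         => 'ro',
            isa        => 'Str',
            unique_key => 'user_email',
        );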

  Custom analyzers
    Analysis is the process of converting full text into "terms" or "tokens"
    and is one of the things that gives full text search its power. When
    storing text in the Elasticsearch index, the text is first analyzed into
    terms/tokens. Then, when searching, search keywords go through the same
    analysis process to produce the terms/tokens which are then searched for
    in the index.

    Choosing the right analyzer for each field gives you enormous control
    over how your data can be queried.
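
    For instance, the "edge_ngrams" analyzer from the SYNOPSIS would turn
    the text "Brown Fox" into the terms "br", "bro", "brow", "brown", "fo"
    and "fox". You can inspect this with the client's analyze() method, as
    sketched below. This assumes the analyzer has already been created on
    an index named "myapp"; analyze() belongs to the client API, not to
    Elastic::Model itself.

        # Assumes $model from above and an existing 'myapp' index which
        # includes the 'edge_ngrams' analyzer
        my $tokens = $model->es->analyze(
            index    => 'myapp',
            analyzer => 'edge_ngrams',
            text     => 'Brown Fox'
        );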

    There are a large number of built-in analyzers available, but frequently
    you will want to define custom analyzers, which consist of:

    *   zero or more character filters

    *   a tokenizer

    *   zero or more token filters

    Elastic::Model provides sugar to make it easy to specify custom
    analyzers:

   has_char_filter
    Character filters can change the text before it gets tokenized, for
    instance:

        has_char_filter 'my_mapping' => (
            type        => 'mapping',
            mappings    => ['ph=>f','qu=>q']
        );

    See "Default character filters" in Elastic::Model::Meta::Class::Model
    for a list of the built-in character filters.

   has_tokenizer
    A tokenizer breaks up the text into individual tokens or terms. For
    instance, the "pattern" tokenizer could be used to split text using a
    regex:

        has_tokenizer 'my_word_tokenizer' => (
            type        => 'pattern',
            pattern     => '\W+',          # splits on non-word chars
        );

    See "Default tokenizers" in Elastic::Model::Meta::Class::Model for a
    list of the built-in tokenizers.

   has_filter
    Any terms/tokens produced by the "has_tokenizer" can then be passed
    through multiple token filters. For instance, each term could be broken
    down into "edge ngrams" (eg 'foo' => 'f','fo','foo') for partial
    matching.

        has_filter 'my_ngrams' => (
            type        => 'edge_ngram',
            min_gram    => 1,
            max_gram    => 10,
        );

    See "Default token filters" in Elastic::Model::Meta::Class::Model for a
    list of the built-in token filters.

   has_analyzer
    Custom analyzers can be defined by combining character filters, a
    tokenizer and token filters, some of which could be built-in, and some
    defined by the keywords above.

    For instance:

        has_analyzer 'partial_word_analyzer' => (
            type        => 'custom',
            char_filter => ['my_mapping'],
            tokenizer   => 'my_word_tokenizer',
            filter      => ['lowercase','stop','my_ngrams']
        );

    See "Default analyzers" in Elastic::Model::Meta::Class::Model for a list
    of the built-in analyzers.
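
    Once defined, analyzers are referred to by name when mapping the
    attributes of your doc classes. A brief sketch (the "MyApp::Post"
    attribute below is an assumption; see Elastic::Manual::Attributes for
    the available mapping options):

        package MyApp::Post;

        use Elastic::Doc;

        # Index the title as partial-word ngrams, but analyze search
        # keywords with the standard analyzer
        has 'title' => (
            is              => 'ro',
            isa             => 'Str',
            index_analyzer  => 'partial_word_analyzer',
            search_analyzer => 'standard',
        );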

  Overriding Core Classes
    If you would like to override any of the core classes used by
    Elastic::Model, then you can do so as follows:

        override_classes (
            domain  => 'MyApp::Domain',
            store   => 'MyApp::Store'
        );

    The defaults are:

    *   "namespace" "-----------" Elastic::Model::Namespace

    *   "domain" "--------------" Elastic::Model::Domain

    *   "store" "---------------" Elastic::Model::Store

    *   "view" "----------------" Elastic::Model::View

    *   "scope" "---------------" Elastic::Model::Scope

    *   "results" "-------------" Elastic::Model::Results

    *   "cached_results" "------" Elastic::Model::Results::Cached

    *   "scrolled_results" "----" Elastic::Model::Results::Scrolled

    *   "result" "--------------" Elastic::Model::Result

    *   "bulk" "----------------" Elastic::Model::Bulk

SEE ALSO
    *   Elastic::Model::Role::Model

    *   Elastic::Manual

    *   Elastic::Doc

AUTHOR
    Clinton Gormley <drtech@cpan.org>

COPYRIGHT AND LICENSE
    This software is copyright (c) 2013 by Clinton Gormley.

    This is free software; you can redistribute it and/or modify it under
    the same terms as the Perl 5 programming language system itself.