NAME

Locale::Maketext::Gettext - Joins the gettext and Maketext frameworks

SYNOPSIS

In your localization class:

  package MyPackage::L10N;
  use base qw(Locale::Maketext::Gettext);
  return 1;

In your application:

  use MyPackage::L10N;
  $LH = MyPackage::L10N->get_handle or die "What language?";
  $LH->bindtextdomain("mypackage", "/home/user/locale");
  $LH->textdomain("mypackage");
  $LH->maketext("Hello, world!!");

If you want more control over the details:

  # Change the output encoding
  $LH->encoding("UTF-8");
  # Stick with the Maketext behavior on lookup failures
  $LH->die_for_lookup_failures(1);
  # Flush the MO file cache and re-read your updated MO files
  $LH->reload_text;
  # Set the encoding of your maketext keys, if not in English
  $LH->key_encoding("Big5");
  # Set the action when encode fails
  $LH->encode_failure(Encode::FB_HTMLCREF);

Use Locale::Maketext::Gettext to read and parse the MO file:

  use Locale::Maketext::Gettext;
  %Lexicon = read_mo($MOfile);

DESCRIPTION

Locale::Maketext::Gettext joins the GNU gettext and Maketext frameworks. It is a subclass of Locale::Maketext(3) that follows the way GNU gettext works. It works seamlessly in the sense of both GNU gettext and Maketext. As a result, you enjoy the advantages of both, and get rid of the problems of both, too.

You start as in a usual GNU gettext localization project: work on PO files with the help of translators, reviewers and Emacs. Turn them into MO files with msgfmt. Copy them into the appropriate locale directory, such as /usr/share/locale/de/LC_MESSAGES/myapp.mo.

Then, build your Maketext localization class, with your base class changed from Locale::Maketext(3) to Locale::Maketext::Gettext. That's all. ^_*'

METHODS

$LH->bindtextdomain(DOMAIN, LOCALEDIR)

Register a text domain with a locale directory. Returns LOCALEDIR itself. If LOCALEDIR is omitted, the registered locale directory of DOMAIN is returned. This method always succeeds.

$LH->textdomain(DOMAIN)

Set the current text domain. Returns DOMAIN itself. If DOMAIN is omitted, the current text domain is returned. This method always succeeds.
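
For example, a minimal sketch of binding and selecting a text domain. The domain name and locale directory here are hypothetical, not part of your project:

  # "myapp" and the locale directory below are hypothetical examples
  $LH->bindtextdomain("myapp", "/usr/local/share/locale");
  # With LOCALEDIR omitted, the registered directory is returned
  my $dir = $LH->bindtextdomain("myapp");
  # Switch the current text domain to "myapp"
  $LH->textdomain("myapp");
  # With DOMAIN omitted, the current text domain is returned
  my $domain = $LH->textdomain;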

$text = $LH->maketext($key, @param...)

Look up the $key in the current lexicon and return a translated message in the user's language. This is the same method as in Locale::Maketext(3), with a wrapper that returns the text string encoded according to the current encoding. Refer to Locale::Maketext(3) for the maketext plural notation.
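
For example, with standard Maketext bracket notation in the key. The variables $name and $n_deleted below are assumptions for illustration:

  # [_1] and [quant,_1,...] are standard Maketext bracket notation
  $text = $LH->maketext("Hello, [_1]!!", $name);
  $text = $LH->maketext("[quant,_1,file,files] deleted.", $n_deleted);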

$LH->language_tag

Retrieve the language tag. This is the same method as in Locale::Maketext(3). It is read-only.

$LH->encoding(ENCODING)

Set or retrieve the output encoding. The default is the same encoding as the gettext MO file. You should not override this method, in contrast to the current practice of Locale::Maketext(3).
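
A minimal sketch of switching the output encoding at run time:

  # Output UTF-8 regardless of the encoding of the MO file
  $LH->encoding("UTF-8");
  print $LH->maketext("Hello, world!!");
  # With ENCODING omitted, the current output encoding is returned
  my $enc = $LH->encoding;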

$LH->key_encoding(ENCODING)

Specify the encoding used in your original text. The maketext method itself isn't multibyte-safe with respect to the _AUTO lexicon. If you are using your native, non-English language as your original text and you are running into errors like:

Unterminated bracket group, in:

then set key_encoding to the encoding of your original text. Returns the current setting.
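
For example, if your maketext keys themselves are written in Big5. The variable $big5_key below is a hypothetical Big5 string, used only for illustration:

  # Declare that the maketext keys are in Big5 before looking them up
  $LH->key_encoding("Big5");
  # $big5_key is a hypothetical key written in Big5
  $text = $LH->maketext($big5_key);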

$LH->encode_failure(CHECK)

Set the action to take when encoding fails. This happens when the output text is outside the scope of your output encoding, for example, outputting Chinese as US-ASCII. Refer to Encode(3) for the possible values of CHECK. The default is FB_DEFAULT, which is a safe choice that never fails, but part of your text may be lost, since that is what FB_DEFAULT does. Returns the current setting.
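
For example, using the CHECK constants from Encode(3):

  use Encode ();
  # Escape characters outside the output encoding as HTML character
  # references instead of replacing them as FB_DEFAULT does
  $LH->encode_failure(Encode::FB_HTMLCREF);
  # Or die when a character cannot be represented in the output encoding
  $LH->encode_failure(Encode::FB_CROAK);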

$LH->die_for_lookup_failures(SHOULD_I_DIE)

Maketext dies on lookup failures, but GNU gettext never fails. By default Locale::Maketext::Gettext follows the GNU gettext behavior. But if you prefer the Maketext style, or if you need better control over failures (like me :p), set this to 1. Returns the current setting.
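
For example, to catch and handle lookup failures yourself. The key and the fallback below are assumptions for illustration:

  $LH->die_for_lookup_failures(1);
  my $text = eval { $LH->maketext("Some untranslated key") };
  if ($@) {
      # Handle the lookup failure yourself; here we just fall back
      # to the original key
      $text = "Some untranslated key";
  }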

$LH->reload_text

Purge the MO text cache. It purges the MO text cache kept in the base class Locale::Maketext::Gettext. The next time maketext is called, the MO file will be read and parsed from disk again. This is useful when your MO file is updated but you cannot shut down and restart the application: for example, when you are a co-hoster on a mod_perl-enabled Apache, when your mod_perl-enabled Apache is too vital to be restarted for every update of your MO file, or when you are running a vital daemon, such as an X display server.
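
A minimal sketch of a long-running process picking up updated MO files. The SIGHUP handler here is just one assumed way to trigger it:

  # A hypothetical SIGHUP handler that flushes the MO file cache
  $SIG{HUP} = sub { $LH->reload_text };
  # Later calls re-read and re-parse the updated MO files from disk
  print $LH->maketext("Hello, world!!");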

FUNCTIONS

%Lexicon = read_mo($MOfile);

Read and parse the MO file. Returns the read %Lexicon. The returned lexicon is in its original encoding.

If you need the meta information of your MO file, parse the entry $Lexicon{""}. For example:

  $Lexicon{""} =~ /^Content-Type: text\/plain; charset=(.*)$/im;
  $encoding = $1;

read_mo() is exported by default, but you need to use Locale::Maketext::Gettext in order to use it. It is not exported from your localization class, but from the Locale::Maketext::Gettext package.

($encoding, %Lexicon) = readmo($MOfile);

(deprecated) Read and parse the MO file. Returns a suggested default encoding and %Lexicon. The suggested encoding is the encoding of the MO file itself. The %Lexicon is returned in perl's internal encoding.

This function is deprecated and will be removed in the future. Use read_mo() instead. There is far too much meta information to return besides the encoding, and it is not possible to change the API for each new requirement. See read_mo() above for how to parse the meta information yourself.

NOTES

WARNING: Don't try to put any lexicon in your language subclass. When the textdomain method is called, the current lexicon will be replaced, not appended to. This is to accommodate the way textdomain works: messages from the previous text domain should not stay in the current text domain.

An essential benefit of Locale::Maketext::Gettext over the original Locale::Maketext(3) is this: GNU gettext is multibyte-safe, but Perl source isn't. GNU gettext is safe for Big5 characters like \xa5\x5c (Gong1). But if you follow the current Locale::Maketext(3) documentation and put your lexicon as a hash in the source of a localization subclass, you have to escape bytes like \x5c, \x40, \x5b, etc., in the middle of some perfectly natural multibyte characters. This breaks these characters in half. Your non-technical translators and reviewers will be presented with an unreadable mess, "Luan4Ma3". Sorry to say this, but it is weird for a localization framework not to be multibyte-safe. But, well, here comes Locale::Maketext::Gettext to the rescue. With Locale::Maketext::Gettext, you can sit back and relax now, leaving all this mess to the excellent GNU gettext framework. ^_*'

The idea of Locale::Maketext::Gettext came from Locale::Maketext::Lexicon(3), a great work by Autrijus. But it had several problems at that time (version 0.16). At first I tried to write a wrapper to fix it, but finally I dropped it and decided to make a solution based on Locale::Maketext(3) itself. Locale::Maketext::Lexicon(3) should be fine now if you obtain a version newer than 0.16.

Locale::Maketext::Gettext also solves Locale::Maketext(3)'s lack of encoding handling. I implemented this since it is what GNU gettext does. When %Lexicon is read from MO files by read_mo(), the encoding tagged in the gettext MO file is used to decode the text into Perl's internal encoding. Then, when extracted by maketext, it is encoded according to the current encoding value. The encoding can be set at run time, so that you can run a daemon and output in different encodings according to the language settings of individual users, without having to restart the application. This is an improvement over Locale::Maketext(3), and is essential for daemons and mod_perl applications.
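
A hedged sketch of a daemon serving users in different languages and encodings. The @users structure and its "lang" and "charset" fields are assumptions for illustration; the domain and locale directory follow the SYNOPSIS above:

  # @users is a hypothetical list of hashes with "lang" and "charset" keys
  for my $user (@users) {
      my $lh = MyPackage::L10N->get_handle($user->{lang})
          or die "What language?";
      $lh->bindtextdomain("mypackage", "/home/user/locale");
      $lh->textdomain("mypackage");
      # Output in whatever encoding this particular user expects
      $lh->encoding($user->{charset});
      print $lh->maketext("Hello, world!!");
  }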

You should trust the encoding of your gettext MO file. GNU gettext msgfmt checks for illegal characters for you when you compile your MO file from your PO file. The encoding from your MO files is always good. If you try to output to a wrong encoding, part of your text may be lost, as FB_DEFAULT does. If you don't like this FB_DEFAULT behavior, change the failure behavior with the encode_failure method.

If you need the behavior of automatic Traditional Chinese/Simplified Chinese conversion, as GNU gettext smartly does, do it yourself with Encode::HanExtra(3). There may be a solution for this in the future, but not now.

If you set textdomain to a domain that has not yet been bound to a specific locale directory with bindtextdomain, it will search the system locale directories. The current system locale directory search order is: /usr/share/locale, /usr/lib/locale, /usr/local/share/locale, /usr/local/lib/locale. Suggestions for this search order are welcome.

NOTICE: MyPackage::L10N::en->maketext(...) is not available anymore, as the maketext method is no longer static. That is an inevitable result, as %Lexicon is imported from foreign sources dynamically, not statically hardcoded in Perl sources. But the documentation of Locale::Maketext(3) does not say that you can use it as a static method anyway. Maybe you were practicing this before; you had better check your existing code for it. If you try to invoke it statically, it returns undef.
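
In other words, call maketext on a language handle instance instead of on a language subclass. A minimal sketch, following the SYNOPSIS above:

  # Not available anymore -- this returns undef:
  #   $text = MyPackage::L10N::en->maketext("Hello, world!!");
  # Obtain a language handle and call maketext on the instance instead:
  my $lh = MyPackage::L10N->get_handle("en") or die "What language?";
  $lh->bindtextdomain("mypackage", "/home/user/locale");
  $lh->textdomain("mypackage");
  $text = $lh->maketext("Hello, world!!");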

dgettext and dcgettext in GNU gettext are not implemented. It's not possible to temporarily change the current text domain in the current design of Locale::Maketext::Gettext. Besides, it's meaningless. Locale::Maketext is object-oriented. You can always raise a new language handle for another text domain. This is different from the situation of GNU gettext. Also, the category is always LC_MESSAGES. Of course it is. We are gettext and Maketext. ^_*'

Avoid creating different language handles with different text domains on the same localization subclass. This currently works, but it violates the basic design of Locale::Maketext(3). In Locale::Maketext(3), %Lexicon is saved as a class variable, in order for the lexicon inheritance system to work. So, multiple language handles to the same localization subclass share the same lexicon space, and their lexicon spaces clash. I tried to avoid this problem by saving a copy of the current lexicon as an instance variable, and replacing the class lexicon with the current instance lexicon whenever it is changed by another language handle instance. But this involves large-scale memory copying, which seriously affects performance. This is discouraged. You are advised to use a single text domain for a single localization class.
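
A minimal sketch of the suggested arrangement, with one localization subclass per text domain. The package and domain names below are assumptions for illustration:

  # One localization subclass per text domain
  package MyApp::L10N;
  use base qw(Locale::Maketext::Gettext);

  package MyLib::L10N;
  use base qw(Locale::Maketext::Gettext);

  package main;
  my $app_lh = MyApp::L10N->get_handle or die "What language?";
  $app_lh->textdomain("myapp");
  my $lib_lh = MyLib::L10N->get_handle or die "What language?";
  $lib_lh->textdomain("mylib");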

The key_encoding is a workaround, not a solution. There is no solution to this problem yet. You should avoid using a non-English language as your original text. You'll get yourself into trouble if you mix several original text encodings, for example, by joining several pieces of code from programmers all around the world, with their messages written in their own languages and encodings. Solution suggestions are welcome.

BUGS

GNU gettext never fails. I try to achieve that as far as possible. The only reason that maketext may die unexpectedly now is "Unterminated bracket group". I cannot find a better solution to it currently. Suggestions are welcome.

You are welcome to fix my English. I have done my best to this documentation, but I'm not a native English speaker after all. ^^;

SEE ALSO

Locale::Maketext(3), Locale::Maketext::TPJ13(3), Locale::Maketext::Lexicon(3), Encode(3), bindtextdomain(3), textdomain(3). Also, please refer to the official GNU gettext manual at http://www.gnu.org/manual/gettext/.

AUTHOR

imacat <imacat@mail.imacat.idv.tw>

COPYRIGHT

Copyright (c) 2003 imacat. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.