Edgewall Software

Adding i18n/l10n to Trac plugins (Trac ≥ 0.12)

Intro and Motivation

Do you simply use Trac to get your work done, nothing more? Then you may ignore this page and go on reading another subpage of CookBook.

Professional coders/translators should skip ahead to the actual cookbook content in 'Required workflow'; the sections before it will hold no news for them.

If you want to learn how to translate a plugin that already provides one or more message catalogs, the section 'Do translators work' and the parts following it are for you.

Finally, all plugin maintainers and developers who face requests, and are willing to meet the growing demand, for their plugin to speak the same (foreign) language(s) as Trac ≥ 0.12 should just read on.

i18n, l10n, … help!

In short, i18n stands for internationalization (count 18 characters between the i and the n) and is defined as designing software so that programs support translation. Localization, abbreviated as l10n, can be seen as a follow-up process that provides the data for one or more locales. It also takes care of differences between the original/default (which is English in most cases, including Trac) and a given locale, such as sentence structure and punctuation, as well as the formatting of numbers, date/time strings, and currencies. Once you have done the groundwork at the source level (i18n), what remains is the actual translation work (l10n): putting more or less effort into preserving the sense of the original while sounding as native to the locale as possible.1

NLS (National Language Support or Native Language Support) denotes the sum of both. There are more related terms, which we can safely skip for now.1, 2

Background and concept of i18n/l10n support for Trac plugins

It all began with the addition of Babel to Trac. Babel is a very powerful translation framework. On one hand, it is a message extraction tool: it can extract messages from source code files (in our case, Python and Javascript) as well as from Genshi templates, and create catalog templates (.pot files). It can also create and update the message catalogs (.po files), and compile those catalogs (.mo files). On the other hand, as a Python library used within Trac, it provides the implementation of the message retrieval functions (gettext and related). And there's even more to it than that; if you're interested, please visit the Babel project (also here on edgewall.org).

Now back to the plugins…

Some plugin maintainers created their own translation module inside each plugin separately. The growing amount of code redundancy, and the potential for errors in imperfect copies and variants of such a translation module, was certainly not a desirable situation. So the Trac core maintainers took responsibility and added functions dedicated to i18n/l10n support for Trac plugins.

The evolution of these functions has been documented in ticket 7497. The final implementation mentioned there in comment 12 was introduced to Trac trunk in changeset r7705 and completed with changeset r7714.

Adding the needed i18n/l10n helper functions is now done by importing a set of functions from trac/util/translation.py and providing the necessary extra information (the domain) for storing and fetching the messages of the plugin code in plugin-specific message catalogs. During plugin initialization, the dedicated translation domain is created and the corresponding catalog files holding translated messages are loaded into memory. If everything is set up correctly, whenever a translatable text is encountered at runtime inside the plugin's code, the i18n/l10n helper functions will try to fetch the corresponding translation from a message catalog of the plugin's domain.

The message catalog selection is done according to the locale setting. Valid settings are a combination of language and country code, optionally extended by the character encoding used, e.g. 'de_DE.UTF-8'. Trac uses UTF-8 encoding internally, so there is not much more to say about that part. 'C' is a special locale code: it disables all translations, and programs use English texts, as required by the POSIX standard.3
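To make the anatomy of such a locale code concrete, here is a small illustrative snippet (plain Python, not part of any plugin code):

```python
# Split a locale code like 'de_DE.UTF-8' into its parts:
# language, territory (country code) and character encoding.
locale_code = 'de_DE.UTF-8'
lang_territory, sep, encoding = locale_code.partition('.')
language, sep, territory = lang_territory.partition('_')
print(language, territory, encoding)  # de DE UTF-8
```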

Required workflow

You need to:

  • specify in your plugin's setup.py file which files the Babel commands will have to operate on
  • create a setup.cfg file for adding options to the Babel commands
  • in your Python source code:
    • define specializations of the translation functions for your specific domain (there's a helper function for doing that easily)
    • in the "root" Component of your plugin (one you're sure is always enabled), initialize the translation domain in its __init__ method
    • use your translation functions appropriately
  • in your Genshi templates:
    • be sure to have the necessary namespace declaration and domain directive in place
    • use the i18n: directive as appropriate
  • in your Javascript code:
    • be sure to load your catalog and define your domain specific translation functions
    • use the translation functions as appropriate

Now, a detailed walk-through…

Enable Babel support for your plugin

Add Babel commands to the setup (setup.py)

By default, Babel only extracts messages from Python scripts. To extract messages from Genshi templates as well, you'll have to declare the needed extractors in setup.py:

  • setup.py

    diff --git a/setup.py b/setup.py
    --- a/setup.py
    +++ b/setup.py
    @@ -34,6 +35,25 @@
     
     from setuptools import find_packages, setup
     
    +extra = {}
    +try:
    +    from trac.util.dist import get_l10n_cmdclass
    +    cmdclass = get_l10n_cmdclass()
    +    if cmdclass: # Yay, Babel is there, we've got something to do!
    +        extra['cmdclass'] = cmdclass
    +        extractors = [
    +            ('**.py',                'python', None),
    +            ('**/templates/**.html', 'genshi', None),
    +            ('**/templates/**.txt',  'genshi', {
    +                'template_class': 'genshi.template:TextTemplate',
    +            }),
    +        ]
    +        extra['message_extractors'] = {
    +            'foo': extractors,
    +        }
    +except ImportError:
    +    pass
    +
     setup(
         name = 'foo',
         version = '0.12',
    @@ -53,6 +69,7 @@
                 'templates/*.txt',
                 'htdocs/*.*',
                 'htdocs/css/*.*',
    +            'locale/*/LC_MESSAGES/*.mo',
             ]
         },
         install_requires = [
    @@ -96,4 +113,5 @@
             ]
         },
         test_suite = '<path>.tests',
    +    **extra
     )

Preset configuration for Babel commands (setup.cfg)

Add some lines to setup.cfg or, if it doesn't exist yet, create it with the following content:

[extract_messages]
add_comments = TRANSLATOR:
msgid_bugs_address =
output_file = <path>/locale/messages.pot
# Note: specify as 'keywords' the functions for which the messages
#       should be extracted. This should match the list of functions
#       that you've listed in the `domain_functions()` call above.
keywords = _ N_ tag_
# Other example:
#keywords = _ ngettext:1,2 N_ tag_
width = 72

[init_catalog]
input_file = <path>/locale/messages.pot
output_dir = <path>/locale
domain = foo

[compile_catalog]
directory = <path>/locale
domain = foo

[update_catalog]
input_file = <path>/locale/messages.pot
output_dir = <path>/locale
domain = foo

Replace <path> as appropriate (i.e. the relative path to the folder containing the locale directory, for example mytracplugin).

This will tell Babel where to look for and store message catalog files.

In the extract_messages section there is just one more line you may want to change: msgid_bugs_address. To allow for direct feedback on your i18n work, add there a valid e-mail address or a mailing list dedicated to translation issues.

The add_comments line simply lists the tags of the comments surrounding calls to the translation functions in the source code that have to be propagated to the catalogs (see extract_messages in Babel's documentation). So you will want to leave that one untouched.

Register message catalog files for packaging

To include the translated messages into the packaged plugin you need to add the path to the catalog files to package_data in the call for function setup() in setup.py:

  • setup.py

    diff -u a/setup.py b/setup.py
    --- a/setup.py
    +++ b/setup.py
    @@ -39,6 +39,7 @@
         package_data = {
             <path>: [
                 'htdocs/css/*.css',
    +            'locale/*/LC_MESSAGES/*.mo',
             ],
         },
         entry_points = {

Make the Python code translation-aware

Prepare domain-specific translation helper functions

Pick a reasonably unique name for the domain, e.g. 'foo' (if your plugin is named 'foo', that is).

This will be the basename for the various translation catalog files (e.g. foo/locale/fr/LC_MESSAGES/foo.po for the French catalog).

At run-time, the translation functions (typically _(...)) have to know in which catalog the translation will be found. Specifying the 'foo' domain in every such call would be tedious, that's why there's a facility for creating partially instantiated domain-aware translation functions: domain_functions.

This helper function should be called at module load time, like this:

from trac.util.translation import domain_functions

_, tag_, N_, add_domain = \
    domain_functions('foo', ('_', 'tag_', 'N_', 'add_domain'))

The translation functions which can be bound to a domain are:

  • '_': extract and translate
  • 'ngettext': extract and translate (singular, plural, num)
  • 'tgettext', 'tag_': same as '_' but for Markup
  • 'tngettext', 'tagn_': same as 'ngettext' but for Markup
  • 'gettext': translate only, don't extract
  • 'N_': extract only, don't translate
  • 'add_domain': register the catalog file for the bound domain

Note: N_ and gettext() are usually used in tandem. For example, when you have a global dict containing strings that need to be extracted, you want to mark those strings for extraction but you don't want to put their translation in the dict: use N_("the string"). When you later use that dict and want to retrieve the translation for the string corresponding to some key, you don't want to mark anything: use gettext(mydict.get(key)).
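A minimal, self-contained sketch of this tandem pattern (with identity stand-ins for N_ and gettext; in real plugin code these are the domain-bound functions obtained from domain_functions):

```python
# Identity stand-ins so the sketch runs outside of Trac; in a plugin,
# these come from domain_functions('foo', ('N_', 'gettext', ...)).
N_ = lambda string: string       # mark for extraction only
gettext = lambda string: string  # translate at runtime (identity here)

# Module-level dict: the strings are marked so they land in the
# message catalog, but they are stored untranslated.
MESSAGES = {
    'ok': N_("Operation succeeded"),
    'fail': N_("Operation failed"),
}

def status_text(key):
    # Translate only at the point of use, under the active locale.
    return gettext(MESSAGES.get(key))
```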

To inform Trac about where the plugin's message catalogs can be found, you'll have to call the add_domain function obtained via domain_functions as shown above. One place to do this is in the __init__ function of your plugin's main component, like this:

    def __init__(self):
        import pkg_resources  # here or with the other imports
        # bind the 'foo' catalog to the specified locale directory
        try:
            locale_dir = pkg_resources.resource_filename(__name__, 'locale')
        except KeyError:
            pass  # no locale directory in plugin if Babel is not installed
        else:
            add_domain(self.env.path, locale_dir)

assuming that the locale folder resides in the same folder as the file containing the code above, referred to as <path> below (as can be seen inside the Python egg after packaging).

The i18n/l10n helper functions are now available inside the plugin. But if the plugin code consists of several Python source files and you encounter translatable text in another one of them, you need to import the functions there from the main module, say api.py:

from api import _, tag_, N_

Mark text for extraction

In Python source files you'll have to wrap translatable text with the translation function _() to get it handled by the translation helper programs.

  • <path>/api.py

    --- a/<path>/api.py
    +++ b/<path>/api.py
    @@ -1,1 +1,1 @@
    -msg = 'This is a msg text.'
    +msg = _("This is a msg text.")

Note that quoting of (i18n) message texts should really be done with double quotes; single quotes are reserved for string-like constants (see the commit note for r9751).

This is a somewhat time-consuming task, depending on the size of the plugin's code. If you initially fail to find all the desired texts, you'll notice them missing from the message catalog later and can come back to this step. If the plugin maintainer is unaware of your i18n work, or unwilling to support it, and adds more messages without the translation function calls, remember that you'll have to wrap these new texts too.
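To illustrate the marking patterns outside of a diff, here is a self-contained sketch (the stand-in for _ replaces the real domain-bound function from domain_functions; its substitution behavior mimics that of Trac's translation functions):

```python
# Stand-in for the domain-bound translation function, so the sketch
# runs outside of Trac; real plugin code imports `_` as shown above.
def _(string, **kwargs):
    return string % kwargs if kwargs else string

# Plain message:
msg = _("This is a msg text.")

# Parameterized message: keep the placeholder inside the marked
# string rather than concatenating, so translators can reorder it.
def greet(name):
    return _("Hello, %(name)s!", name=name)

print(greet("world"))  # Hello, world!
```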

Make the Genshi templates translation-aware

First, keep an eye on the Genshi documentation on this topic, Internationalization and Localization.

Text extraction from Python code and Genshi templates

Message extraction from Genshi templates should happen auto-magically. However, the i18n:msg directive is available to ensure extraction even from less common tags. For a real-world example, have a look at Trac SVN changeset r9542, which marks previously undetected text in templates.
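For example, marking a paragraph with the i18n:msg directive extracts it as a single message with a named parameter instead of several fragments (hypothetical snippet; see the Genshi i18n documentation for the exact semantics):

```
<p i18n:msg="name">Hello, <b>${name}</b>! Welcome back.</p>
```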

Runtime support

Extraction is auto-magical; message retrieval at runtime, however, is not. You have to make sure you've specified the appropriate domain in your template, by adding an i18n:domain directive. Usually you would put it in the top-level element, next to the mandatory xmlns:i18n namespace declaration.

For example:

<!DOCTYPE html
    PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:xi="http://www.w3.org/2001/XInclude"
      xmlns:py="http://genshi.edgewall.org/"
      xmlns:i18n="http://genshi.edgewall.org/i18n" i18n:domain="foo">
...
</html>

Make the Javascript code translation-aware

Text extraction from Javascript code

Adding support for translating the marked strings in the Javascript code is a bit more involved, but if you made it to this point, that shouldn't scare you away…

We currently support only statically deployed Javascript files, which means they can't be translated on the server like template files; the translation has to happen dynamically on the client side. To this end, we want to send an additional .js file containing a dictionary of the messages that have to be translated, and only those. In order to clearly identify which strings have to be present in this dictionary, we'll extract the messages marked for translation (in the usual _(...) way) from the Javascript code into a dedicated catalog template, and from there, we'll create dedicated catalogs for each locale. In the end, the translations present in each compiled catalog will be extracted and placed into a .js file containing the messages dictionary and some setup code.

The first change is to use get_l10n_js_cmdclass in lieu of get_l10n_cmdclass. The former adds a few more setup commands for extracting message strings from Javascript .js files and <script type="text/javascript"> snippets in .html files, for initializing and updating the dedicated catalog files, and finally for compiling those catalogs and creating a .js file containing the dictionary of strings, ready to be used by the babel.js support code already present in Trac pages.

The change to setup.py looks like this:

  • setup.py

    diff -u a/setup.py b/setup.py
    --- a/setup.py
    +++ b/setup.py
    @@ -5,7 +5,7 @@ from setuptools import setup
     extra = {}
     
    -from trac.util.dist import get_l10n_cmdclass
    -cmdclass = get_l10n_cmdclass()
    +from trac.util.dist import get_l10n_js_cmdclass
    +cmdclass = get_l10n_js_cmdclass()
     if cmdclass:
         extra['cmdclass'] = cmdclass
         extra['message_extractors'] = {

That was the easiest part.

Now, as you need to actually send that .js file containing the messages dictionary, call add_script() as appropriate in the process_request() method from your module:

  • <path>/api.py

    --- a/<path>/api.py
    +++ b/<path>/api.py
    @@ -243,6 +243,8 @@ class FooTracPlugin(Component):
             builds = self._extract_builds(self._get_info(start, stop))
             data = {'builds': list(builds)}
             add_script(req, '<path>/hudsontrac.js')
    +        if req.locale is not None:
    +            add_script(req, '<path>/foo/%s.js' % req.locale)
             add_stylesheet(req, '<path>/foo.css')
             return 'foo-build.html', data, None

Now you need to expand the setup.cfg file with the configuration that the new setup commands dedicated to Javascript translation need. Their names all end with an _js suffix.

[extract_messages_js]
add_comments = TRANSLATOR:
copyright_holder = <Your Name>
msgid_bugs_address = <Your E-Mail>
output_file = <path>/locale/messages-js.pot
keywords = _ ngettext:1,2 N_
mapping_file = messages-js.cfg

[init_catalog_js]
domain = foo-js
input_file = <path>/locale/messages-js.pot
output_dir = <path>/locale

[compile_catalog_js]
domain = foo-js
directory = <path>/locale

[update_catalog_js]
domain = foo-js
input_file = <path>/locale/messages-js.pot
output_dir = <path>/locale

[generate_messages_js]
domain = foo-js
input_dir = <path>/locale
output_dir = <path>/htdocs/foo

As before, replace <path> with what's appropriate for your plugin. Note that the domain name is now foo-js, not just foo as before. This is necessary because we want only the strings actually needed by the Javascript code to be stored in the .js file containing the messages dictionary.

We're nearly done… noticed the mapping_file = messages-js.cfg line? We need to configure separately how the extraction for this messages-js.pot catalog template is done. The messages-js.cfg file has the following content:

# mapping file for extracting messages from javascript files into
# <path>/locale/messages-js.pot (see setup.cfg)
[javascript: **.js]

[extractors]
javascript_script = trac.util.dist:extract_javascript_script

[javascript_script: **.html]

Bonus points for anyone who will manage to simplify a bit this procedure ;-)

Announce new plugin version

The plugin will not work with any Trac version before 0.12, since the import of the translation helper functions introduced in 0.12 will fail. It is possible to wrap the import in a 'try:' block and define dummy functions in a corresponding 'except ImportError:' block to allow the plugin to work with older versions of Trac, but since there are often separate plugin versions for 0.11 and 0.12 anyway, this is not required in most cases. If it is strictly required for your plugin, have a look at the setup.py of the Mercurial plugin provided with Trac.

In all other cases, just add a line like the following as another argument to the setup() function in the plugin's setup.py:

install_requires = ['Trac >= 0.12'],

All the work you've done so far will go unnoticed, at least with regard to package naming. To help identify the new revision you should bump the plugin's version. This is done by changing the version/revision, typically in setup.cfg or setup.py. You may also wish to leave a note about your i18n work alongside the copyright notices.

Summing it up

Here's an example of the changes required to add i18n support to the HudsonTrac plugin (trac-0.12 branch).

You'll find another example attached to this page. That is Sub-Tickets plugin v0.1.0 and a diff containing all i18n/l10n related work to produce a German translation based on that source.

Do translators work

General advice from TracL10N on making good translation for Trac applies here too.

For example, it's desirable to maintain consistent wording across Trac and Trac plugins. Since this goes beyond the scope of the aforementioned TracL10N, there might be a need for more coordination. Consider joining the Trac plugin l10n project, which utilizes Transifex for uniform access to the message catalogs of multiple plugins, backed by a dedicated (Mercurial) message catalog repository at Bitbucket.org. Trac has some language teams at Transifex as well, so this is a good chance for tight translator cooperation.

If you've read the previous parts, you'll notice that we now switch from talking about i18n to l10n: no more source code mangling. The code below is not meant to become part of the plugin source, but to be typed on the command line.

Switch to the root directory of the plugin's source, e.g.:

cd /usr/src/trac_plugins/foo

Extract the messages that were marked for translation before, or, in the case of Genshi templates, are exposed by other means:

python ./setup.py extract_messages

The attentive reader will notice that the argument to setup.py has the same wording as a section in setup.cfg; that is not incidental. The same applies to the following command lines as well.

If you want to improve existing message catalogs, update the one for your desired language:

python ./setup.py update_catalog -l de_DE

If you omit the language selection argument -l and its identifier string, the existing catalogs of all languages will be updated, which is acceptably fast (a matter of seconds) on current hardware.

But if you have just done all the i18n work yourself, you know there is nothing to update right now. In that case it's time to create the message catalog for your desired language:

python ./setup.py init_catalog -l de_DE

As you may guess, not much can be done if the helper programs don't know which language you'd like to work on, so the language selection argument -l and its identifier string are mandatory here.

Now fire up the editor of your choice. There are dedicated message catalog (.po) file editors that ensure quick results for beginners, and that make working on large message catalogs with few remaining untranslated or 'fuzzy' texts much more convenient. See the dedicated resources for details on choosing an editor program and for help on editing .po files.4, 5

If it isn't already taken care of by your (PO) editor, the place to announce yourself as the last translator is after the default TRANSLATOR: label at the top of the message catalog file.
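For orientation, an entry in such a .po catalog looks like the following (the file path, line number and German translation are purely illustrative):

```
#. TRANSLATOR: comment propagated from the source code
#: mytracplugin/api.py:42
msgid "This is a msg text."
msgstr "Dies ist ein Nachrichtentext."
```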

Compile and use it

Compile the messages.po catalog file with your translations into a machine-readable messages.mo file:

python ./setup.py compile_catalog -f -l de_DE

The argument -f is needed to include even the msgids marked 'fuzzy'. If you have prepared only one translated catalog, the final language selection argument -l and its identifier string are superfluous. But as soon as there are other translations that you don't care about, it will help to select just your work for compilation.

Now you've used all four configuration sections in setup.cfg that are dedicated to the i18n/l10n helper programs. You can finish your work by packaging the plugin.

Build the Python egg as usual:

python ./setup.py bdist_egg

Install the new egg and restart your web server, after making sure to purge any former version of that plugin (without your latest work).

Note that if the plugin's setup.py has installed the proper extra commands (extra['cmdclass'] = cmdclass like in the above), then bdist_egg will automatically take care of the compile_catalog command, as well as the commands related to Javascript i18n if needed.

Advanced stuff

Translating Option* documentation

Trac 1.0 added support for a special kind of N_ marker, cleandoc_, which can be used to reformat multiline messages in a compact form. There's also support to apply this "cleandoc" transformation to the documentation of instances of trac.config.Option and its subclasses. However, this support is coming from a special Python extractor which has to be used instead of the default Python extractor from Babel.

The additional change is:

  • setup.py

     
    --- a/setup.py
    +++ b/setup.py
    @@ -24,7 +24,7 @@
        if cmdclass:
            extra['cmdclass'] = cmdclass
            extractors = [
    -            ('**.py',                'python', None),
    +            ('**.py',                'trac.dist:extract_python', None),
                ('**/templates/**.html', 'genshi', None)
            ]
            extra['message_extractors'] = {

However, the Option subclasses covered by the default cleandoc handling are not automatically extraction keywords. The keywords option in the [extract_messages] section of the setup.cfg file should therefore contain either the cleandoc_ token, or the various Option classes together with the position of the doc argument in each class. As an example, see the following excerpt from the SpamFilter plugin's setup.cfg file:

[extract_messages]
add_comments = TRANSLATOR:
msgid_bugs_address = [...]
output_file = tracspamfilter/locale/messages.pot
keywords = _ ngettext:1,2 N_ tag_ Option:4 BoolOption:4 IntOption:4 ListOption:6 ExtensionOption:5
width = 72

This makes it possible for the extractor to get the doc strings from those options automatically. For example, in adapters.py:

class AttachmentFilterAdapter(Component):
    """Interface to check attachment uploads for spam.
    """
    implements(IAttachmentManipulator)

    sample_size = IntOption('spam-filter', 'attachment_sample_size', 16384,
        """The number of bytes from an attachment to pass through the spam
        filters.""", doc_domain='tracspamfilter')

As you can see, it's also necessary to specify the doc domain, otherwise the lookup of the translated message at runtime (typically within the [[TracIni]] macro) will fail.

About 'true' l10n

A human translator will (or should) do better than an automatic translation, since good l10n is more of a transcription than a pure word-by-word translation. It's encouraging to see the rise of native words for terms like changeset, ticket and repository in different languages. This will help Trac not only fulfill its promise of helping project teams focus on their work, but even extend its use to project management in general, where use of the native language is much more common, or even required, in contrast to traditional software development.

Details on that matter tend to become religious, so let's stop here.

Related resources

See TracL10N and more specifically TracL10N#ForDevelopers, which contains general tips that are also valid for plugin translation.

1 http://en.wikipedia.org/wiki/Internationalization_and_localization - Internationalization and localization
2 http://en.wikipedia.org/w/index.php?title=Multilingualism&section=18 - Multilingualism in computing
3 http://www.gnu.org/software/gettext/manual/gettext.html#Locale-Names - GNU 'gettext' utilities: Locale Names
4 http://www.gnu.org/software/gettext/manual/gettext.html#Editing - GNU 'gettext' utilities: Editing PO Files
5 http://techbase.kde.org/Localization/Concepts/PO_Odyssey - PO Odyssey in 'Localization/Concepts' section of KDE TechBase

Last modified on Mar 20, 2014
