How Would It Translate?
There is a movement, spearheaded by Aspiration <www.aspirationtech.org>, to develop translation as a collaborative practice, dubbed “open translation.” There are abundant examples of community translation; however, the tools required are primitive or simply don't exist, and the opportunity for the practice to gather more momentum is therefore stunted.
Ethan Zuckerman has commented on the need for a ‘polyglot Internet’ and for collaborative translation:
“The polyglot Internet demands that we explore the possibility and power of distributed human translation. Hundreds of millions of Internet users speak multiple languages; some percentage of these users are capable of translating between these. These users could be the backbone of a powerful, distributed peer production system able to tackle the audacious task of translating the Internet.
We are at the very early stages of the emergence of a new model for translation of online content—“peer production” models of translation. Yochai Benkler uses the term “peer production” to describe new ways of organizing collaborative projects beyond such conventional arrangements as corporate firms. Individuals have a variety of motives for participation in translation projects, sometimes motivated by an explicit interest in building intercultural bridges, sometimes by fiscal reward or personal pride. In the same way that open source software is built by programmers fueled both by personal passion and by support from multinational corporations, we need a model for peer-produced translation that enables multiple actors and motivations.
To translate the Internet, we need both tools and communities. Open source translation memories will allow translators to share work with collaborators around the world; translation marketplaces will let translators and readers find each other through a system like Mechanical Turk, enhanced with reputation metrics; browser tools will let readers seamlessly translate pages into the highest-quality version available and request future human translations. Making these tools useful requires building large, passionate communities committed to bridging a polyglot web, preserving smaller languages, and making tools and knowledge accessible to a global audience.”
—Ethan Zuckerman, 2009
The gaps in the tools and practices for collaborative translation have been documented in the Open Translation Tools book <en.flossmanuals.net/OpenTranslationTools>, the result of a Book Sprint coordinated by FLOSS Manuals and Aspiration. The content below comes from the chapter ‘The Current State’, which identifies the tools and processes required to catalyze this emergent field.
Though a number of ‘Open Translation Tools’ provide limited support for translation workflow processes, there is currently no tool or platform with rich and general support for managing and tracking a broad range of translation tasks and workflows. The Internet has made possible a plethora of different collaborative models to support translation processes. But there are few FLOSS tools to manage those processes: tracking assets and state, roles and assignments, progress and issues. While tools like Transifex provide support for specific workflows in specific communities, generalized translation workflow tools are still few in number. An ideal Open Translation tool would understand the range of roles played in translation projects, and provide appropriate features and views for users in each role. As of this writing, most Open Translation tools at best provide workflow support for the single type of user that the tool targets.
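To make the gap concrete, the kind of asset, state, and role tracking described above might look like the following minimal sketch. The states, roles, and transition rules here are hypothetical illustrations of the general idea, not drawn from Transifex or any other existing tool.

```python
from enum import Enum

# Hypothetical workflow states for a translatable asset. Real tools
# would need many more (e.g. "needs update" when the source changes).
class State(Enum):
    UNTRANSLATED = "untranslated"
    IN_TRANSLATION = "in translation"
    NEEDS_REVIEW = "needs review"
    PUBLISHED = "published"

# Allowed transitions, keyed by (role, current state). Encoding roles
# explicitly is what lets a tool offer each user an appropriate view.
TRANSITIONS = {
    ("translator", State.UNTRANSLATED): State.IN_TRANSLATION,
    ("translator", State.IN_TRANSLATION): State.NEEDS_REVIEW,
    ("reviewer", State.NEEDS_REVIEW): State.PUBLISHED,
}

def advance(role, state):
    """Move an asset to its next state, enforcing role permissions."""
    try:
        return TRANSITIONS[(role, state)]
    except KeyError:
        raise PermissionError(f"{role} cannot advance from {state.value}")

state = advance("translator", State.UNTRANSLATED)
print(state)  # State.IN_TRANSLATION
```

A general-purpose workflow tool would, in effect, make this transition table configurable per project rather than hard-coded.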
Distributed translation with memory aggregation
As translation and localization evolve to more online-centric models, there is still a dearth of tools which leverage the distributed nature of the Internet and offer remote translators the ability to contribute translations to sites of their choosing. As of this writing, Worldwide Lexicon is the most advanced platform in this regard, providing the ability for blogs and other open content sites to integrate distributed translation features into their interfaces. In addition, there needs to be a richer and more pervasive capture model for content translated through such distributed models, in order to aggregate comprehensive translation memories in a range of language pairs.
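The aggregation idea above can be sketched as a simple data structure: translations captured from distributed contributors are pooled per language pair so that later lookups can reuse prior human work. All names in this sketch are illustrative and do not correspond to the API of Worldwide Lexicon or any other existing platform.

```python
from collections import defaultdict

class TranslationMemory:
    """Toy aggregated translation memory, keyed by language pair."""

    def __init__(self):
        # (source_lang, target_lang) -> {source segment: translated segment}
        self._segments = defaultdict(dict)

    def capture(self, source_lang, target_lang, source, translation):
        """Record a completed translation, e.g. from a distributed contributor."""
        self._segments[(source_lang, target_lang)][source] = translation

    def lookup(self, source_lang, target_lang, source):
        """Return a previously captured translation, or None on a miss."""
        return self._segments[(source_lang, target_lang)].get(source)

tm = TranslationMemory()
tm.capture("en", "es", "open translation", "traducción abierta")
print(tm.lookup("en", "es", "open translation"))  # traducción abierta
```

A production system would add fuzzy (partial) matching and provenance metadata, but even exact-match reuse across sites is what a pervasive capture model would enable.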
The lack of integration and interoperability between tools means both frustration for users and feature duplication by developers. Different communities have their own toolkits, but it is difficult for a translation project to make coherent use of a complete tool set. Among the interoperability issues which require further attention in the Open Translation tools ecology:
- Common programming interfaces for tools to connect, share data and requests, and collect translation memories and other valuable data.
- Plugins for content management systems to export content into PO-files (a standardized file format for storing translated phrases), so that content can be translated by the wealth of tools that offer PO support.
- Better integration between different projects, including shared glossaries, common user interfaces and subsystems, and rich file import/export.
- Generic code libraries for common feature requirements. “gettext” stands out as one of the most ubiquitous programming interfaces in the Open Translation arena, but many more interfaces and services could be defined and adopted to maximize interoperability of both code and data.
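The PO format and gettext mentioned in the list above are closely related: PO files are the plain-text catalogs that gettext-based tools consume. The following sketch shows the shape of a PO file and a deliberately minimal parser; a real implementation (GNU gettext's tooling, or the third-party polib library) also handles plural forms, multi-line strings, escaping, and comments.

```python
# A tiny PO catalog: each entry pairs a source phrase (msgid)
# with its translation (msgstr).
SAMPLE_PO = '''
msgid "Hello, world"
msgstr "Bonjour, le monde"

msgid "Translate this page"
msgstr "Traduire cette page"
'''

def parse_po(text):
    """Return a dict mapping msgid to msgstr for simple single-line entries.

    Illustrative only: ignores plurals, comments, and multi-line strings.
    """
    catalog = {}
    msgid = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith('msgid '):
            msgid = line[len('msgid '):].strip('"')
        elif line.startswith('msgstr ') and msgid is not None:
            catalog[msgid] = line[len('msgstr '):].strip('"')
            msgid = None
    return catalog

catalog = parse_po(SAMPLE_PO)
print(catalog["Hello, world"])  # Bonjour, le monde
```

Because the format is this simple and this widely supported, a CMS plugin that exports content as PO entries immediately makes that content translatable in the whole ecosystem of PO-aware tools.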
Tools for content review are also lacking; quality-review features should focus on distributed processes and community-based translation. As such reviews can be a delicate matter, the ideal communication model when there are quality problems is to contact the translator, but timing can be an issue. In systems with live posts and rapid translation turnaround, quick review is important, and it may not be possible to reconnect with the content translator in a timely fashion.