For a long time, machine translation's relationship with human translators amounted to little more than post-editing badly translated text, a process most translators find to be a tiresome chore. With the advent of Neural Machine Translation (NMT), however, Machine Translation (MT) is no longer just something that creates more tedious work for translators. It is now a partner to them, making them faster and their output more accurate.
While improvements to MT typically mean gains in its usual applications (like post-editing or automatic translation), the real winners with NMT are translators. This is particularly true when a translator can use it in real time as they translate, rather than post-editing MT output. When the translator actively works with an NMT engine to create a translation, the two learn from each other: the engine offers up a translation the human may not have considered, while the human serves as a moderator, and in so doing, a teacher of the engine.
For example, during the translation process, when the translator corrects the beginning of a sentence, it improves the system's chances of getting the rest of the translation right. Often all it takes is a nudge at the beginning of a sentence to fix the rest, and the snowball of mistakes never forms.
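To make that concrete, here is a toy sketch of prefix-constrained greedy decoding, the mechanism behind this kind of interactive workflow. This is not a real NMT model: the vocabulary and all probabilities below are invented for illustration, but it shows how forcing a corrected prefix steers everything the decoder produces afterwards.

```python
# Toy illustration (not a real NMT decoder): greedy decoding over a
# hand-written conditional probability table. All words and probabilities
# are invented for demonstration purposes.

# P(next_word | previous_word) for a tiny hypothetical decoder.
NEXT = {
    "<s>":      [("the", 0.6), ("a", 0.4)],
    "the":      [("bank", 0.7), ("shore", 0.3)],
    "a":        [("bank", 0.5), ("shore", 0.5)],
    "bank":     [("approved", 0.8), ("</s>", 0.2)],
    "shore":    [("eroded", 0.9), ("</s>", 0.1)],
    "approved": [("</s>", 1.0)],
    "eroded":   [("</s>", 1.0)],
}

def greedy_decode(forced_prefix=()):
    """Decode greedily, but honour any words the translator has typed."""
    out, prev = [], "<s>"
    for i in range(10):                       # hard cap on output length
        if i < len(forced_prefix):
            word = forced_prefix[i]           # the translator's correction wins
        else:
            word = max(NEXT[prev], key=lambda wp: wp[1])[0]
        if word == "</s>":
            break
        out.append(word)
        prev = word
    return " ".join(out)

print(greedy_decode())                                # -> "the bank approved"
print(greedy_decode(forced_prefix=("the", "shore")))  # -> "the shore eroded"
```

Changing a single early word flips every word downstream, which is exactly the "nudge at the beginning" effect described above.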
Meanwhile, NMT's characteristic improvements in grammar and coherence mean that when it reaches a correct translation, the translator spends less time fixing grammar, improving on raw MT output and skipping post-editing altogether. When they have the opportunity to work together, translators and their NMT engines quite literally finish each other's sentences. Besides speeding up the process, and here I'm speaking as a translator, it's honestly a rewarding experience.
A bit of history
Prior to neural machine translation, there were two main paradigms in the history of the field. The first was rule-based machine translation (RBMT), and the second, dominant until very recently, was phrase-based statistical machine translation (SMT).
When building rule-based machine translation systems, linguists and computer scientists joined forces to write thousands of rules for translating text from one language to another. This was good enough for monolingual reviewers to get the general idea of important documents in an otherwise unmanageable body of content in a language they couldn't read. But for the purposes of actually producing good translations, this approach has obvious flaws: it's time consuming and, naturally, results in low-quality translations.
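For a sense of what those rules looked like, here is a deliberately tiny caricature in Python. The lexicon, the part-of-speech set, and the single reordering rule are all invented for illustration; real systems encoded thousands of such rules.

```python
# A tiny caricature of RBMT: a bilingual dictionary plus one hand-written
# reordering rule (Spanish noun-adjective -> English adjective-noun).
# The lexicon and rule here are invented for illustration only.

LEXICON = {"el": "the", "gato": "cat", "negro": "black", "duerme": "sleeps"}
ADJECTIVES = {"negro"}   # toy part-of-speech tag list

def rbmt_translate(sentence):
    words = sentence.lower().split()
    # Rule 1: swap noun + adjective into English adjective + noun order.
    i = 0
    while i < len(words) - 1:
        if words[i + 1] in ADJECTIVES:
            words[i], words[i + 1] = words[i + 1], words[i]
            i += 2
        else:
            i += 1
    # Rule 2: word-for-word lexical substitution.
    return " ".join(LEXICON.get(w, w) for w in words)

print(rbmt_translate("el gato negro duerme"))  # -> "the black cat sleeps"
```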
Phrase-based SMT, on the other hand, looks at a large body of bilingual text and builds a statistical model of probable translations. The trouble with SMT is its reliance on a pipeline of separate subsystems. For instance, it is unable to associate synonyms or derivatives of a single word on its own, requiring a supplemental system responsible for morphology. It also requires a language model to ensure fluency, but one limited to a given word's immediate surroundings. SMT is therefore prone to grammatical errors, and relatively inflexible when it encounters phrases that differ from those in its training data.
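The following sketch caricatures that pipeline: a hypothetical phrase table proposes candidate translations, and a toy bigram language model, which by construction only sees each word's immediate neighbour, rescores them. Every phrase and probability below is made up for illustration.

```python
import math

# A minimal caricature of phrase-based SMT: a phrase table of candidate
# translations with probabilities, rescored by a bigram language model.
# All phrases and probabilities are invented for illustration.

PHRASE_TABLE = {
    "la casa":   [("the house", 0.7), ("the home", 0.3)],
    "es grande": [("is big", 0.6), ("is large", 0.4)],
}

BIGRAM_LM = {  # P(word | previous word), toy values
    ("<s>", "the"): 0.5, ("the", "house"): 0.4, ("the", "home"): 0.1,
    ("house", "is"): 0.3, ("home", "is"): 0.3,
    ("is", "big"): 0.2, ("is", "large"): 0.1,
}

def lm_score(words):
    """Sum of log bigram probabilities; unseen bigrams get a tiny floor."""
    score, prev = 0.0, "<s>"
    for w in words:
        score += math.log(BIGRAM_LM.get((prev, w), 1e-6))
        prev = w
    return score

def translate(source_phrases):
    """For each source phrase, pick the candidate that maximises
    log P(translation) + log P(output so far | bigram LM)."""
    output = []
    for phrase in source_phrases:
        best = max(
            PHRASE_TABLE[phrase],
            key=lambda c: math.log(c[1]) + lm_score(output + c[0].split()),
        )
        output.extend(best[0].split())
    return " ".join(output)

print(translate(["la casa", "es grande"]))  # -> "the house is big"
```

Note that the bigram model can only judge fluency one word back, which is exactly why long-range grammatical agreement tends to slip through.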
Finally, here we are at the advent of neural machine translation. Virtually all NMT systems use what is known as an "attentional encoder-decoder" architecture. The system has two main neural networks: the first (the encoder) receives a sentence and transforms it into a series of coordinates, or "vectors". A decoder network then gets to work transforming those vectors back into text in another language, with an attention mechanism sitting in between, helping the decoder focus on the important parts of the encoder's output.
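To make the architecture concrete, here is a minimal sketch in PyTorch. The dimensions, vocabulary sizes, and the simple dot-product attention are illustrative choices of mine, not the design of any particular production system.

```python
import torch
import torch.nn as nn

# A skeletal attentional encoder-decoder, just to make the shape of the
# architecture concrete. Vocab sizes, dimensions, and dot-product attention
# are illustrative placeholders, not any specific system's design.

class TinyNMT(nn.Module):
    def __init__(self, src_vocab=1000, tgt_vocab=1000, dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim * 2, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encoder: turn the source sentence into a sequence of vectors.
        enc_states, hidden = self.encoder(self.src_emb(src_ids))

        logits = []
        for t in range(tgt_ids.size(1)):
            tgt_vec = self.tgt_emb(tgt_ids[:, t : t + 1])      # (B, 1, dim)
            # Attention: score each encoder state against the decoder's
            # hidden state, then take a weighted average; this is the
            # "focus on the important parts" step.
            query = hidden.transpose(0, 1)                     # (B, 1, dim)
            scores = torch.bmm(query, enc_states.transpose(1, 2))
            weights = torch.softmax(scores, dim=-1)            # (B, 1, src_len)
            context = torch.bmm(weights, enc_states)           # (B, 1, dim)
            # Decoder: one step, fed the target embedding plus the context.
            step_in = torch.cat([tgt_vec, context], dim=-1)
            dec_out, hidden = self.decoder(step_in, hidden)
            logits.append(self.out(dec_out))
        return torch.cat(logits, dim=1)  # (B, tgt_len, tgt_vocab)

model = TinyNMT()
src = torch.randint(0, 1000, (2, 7))   # batch of 2 source sentences, length 7
tgt = torch.randint(0, 1000, (2, 5))   # shifted target tokens, length 5
print(model(src, tgt).shape)           # -> torch.Size([2, 5, 1000])
```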
What’s next?
As the quality and accessibility of NMT continue to improve, it will gradually become an indispensable part of a translator's toolbox, just as CAT tools and translation memory already have.
A lot of current research focuses on obtaining better data, and on building systems that need less data. Both of these areas will continue to improve MT quality and broaden its usefulness to translators. Hopefully, this usefulness will also reach more languages, especially those with less data available for training. Once that happens, translators working in those languages could get through more and more text, gradually improving the availability of quality text both for the public and for further MT training, in turn freeing those translators, having already laid the groundwork, to move on to bigger challenges.