Abstract
Translation memory tools now allow translators to insert post-edited machine translation segments where no match is found in their databases. The Google Translator Toolkit does this by default, advising in its Settings window: "Most users should not modify this". Post-editing of no matches appears to work with engines trained on specific bilingual data and on source texts written under controlled-language constraints. Would it, however, work for any type of task, as Google's advice implies? We have tested this by carrying out experiments with English-Chinese trainee translators, who used the Toolkit to translate from the source text (the control group) and to post-edit (the experimental group). Results show that the productivity gains from post-editing are marginal. With regard to quality, however, post-editing produces statistically significantly better results than translating manually. These quality gains are observed independently of language direction, text difficulty or the translator's level of performance. In light of these findings, we discuss whether translators should consider post-editing a viable alternative to conventional translation. © 2011 Springer Science+Business Media B.V.
Original language | English |
---|---|
Pages (from-to) | 217-237 |
Number of pages | 21 |
Journal | Machine Translation |
Volume | 25 |
Issue number | 3 |
DOIs | |
Publication status | Published - 2011 |
Keywords
- control groups
- information theory
- machine translation
- memory
- source text
- training
- translation (languages)
- translator toolkit