Adapting machine translation models to new genres

{"value":"Neural machine translation systems are often optimized to perform well for specific text genres or domains, such as newspaper articles, user manuals, or customer support chats. \n\n![image.png](https://dev-media.amazoncloud.cn/d404af4c10494ccc999e267d656059dc_image.png)\n\nMultidomain adaptation is the adaptation of an existing neural machine translation model to new domains, while maintaining translation quality in the original domain.\n\nIn industrial settings with hundreds of language pairs to serve, however, a single translation system per language pair, which performs well across different text domains, is more efficient to deploy and maintain. Additionally, service providers may not know in advance which domains customers will be interested in.\n\nAt this year’s Conference on Empirical Methods in Natural Language Processing (++[EMNLP](https://www.amazon.science/conferences-and-events/emnlp-2021)++), we are ++[presenting](https://www.amazon.science/publications/improving-the-quality-trade-off-for-neural-machine-translation-multi-domain-adaptation)++ a new approach to multidomain adaptation for neural translation models, or adapting an existing model to new domains while maintaining translation quality in the original domain. Our approach provides a better trade-off between performance on old and new tasks than its predecessors do.\n\nWe combine two domain-adaptation techniques known as ++[elastic weight consolidation](https://arxiv.org/abs/1612.00796)++ and ++[data mixing](https://aclanthology.org/P17-2061/)++, and our paper draws a theoretical connection between them that explains why they work well together.\n\nBoth are techniques for preventing catastrophic forgetting, where a model forgets the task it had originally learned when trying to learn a new task. Elastic weight consolidation (EWC) constrains the way the model’s parameters are updated, while data mixing is a data-driven strategy that exposes the translation system to old and new data at the same time.\n\nWe show in our experiments that EWC combined with data mixing yields strong improvements on the original task, according to BLEU score, a common machine translation quality metric based on word overlap with a reference translation. Relative to EWC on its own, our system improves performance on existing tasks by 2 BLEU points for a German-to-English translation system and 0.8 BLEU points for English to French, while maintaining comparable performance on the new tasks. On the other hand, combination with EWC improves over data mixing on its own by providing a parameter to control the performance on original versus new tasks.\n\n### **A more intuitive loss function**\n\nImagine we have a translation system that has learned to translate news articles, political debates, and user manuals, and we want to adapt it to handle customer support chats and medical reports. So we expose our already trained system to chat and medical-translation examples. \n\nDuring adaptation, we do not want the model to forget how to translate news articles, political debates, and user manuals. Elastic weight consolidation encourages updates to the model parameters in such a way that existing knowledge stored in the parameters will be preserved. How much knowledge will be preserved versus how much new information will be taken in is controlled by a hyperparameter, λ. \n\nFor data mixing, the translation examples from chats and medical reports will be complemented by a sample from the existing news, political-debate, and user manuals data. 
For data mixing, the translation examples from chats and medical reports are complemented by a sample from the existing news, political-debate, and user-manual data. The mixing ratio is typically 1:1 but can be varied to shift the balance between old and new tasks.
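As a rough illustration of such a mixed data stream, the sketch below draws old- and new-domain examples at a configurable ratio; the function and its parameters are hypothetical and only meant to show the idea.

```python
import random

def mixed_batches(new_examples, old_examples, old_to_new=1.0, batch_size=32):
    """Yield batches that mix old- and new-domain training examples.

    `old_to_new` is the ratio of old-domain to new-domain examples per batch;
    1.0 corresponds to the typical 1:1 mix described above, while larger values
    (e.g. 10.0 or 100.0) tilt the batches toward the old domains.
    """
    n_old = round(batch_size * old_to_new / (1.0 + old_to_new))
    n_new = batch_size - n_old
    while True:
        batch = random.sample(old_examples, n_old) + random.sample(new_examples, n_new)
        random.shuffle(batch)
        yield batch
```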
#### **Amazon at EMNLP**

Read more about [Amazon's presence at EMNLP](https://www.amazon.science/conferences-and-events/emnlp-2021), including papers, membership in the organizing committee, and workshop and tutorial involvement.

Our work provides a theoretical analysis that shows how these two very different strategies are connected. A machine learning algorithm learns based on a loss function — an equation that states the goal of the learning process. We can derive a loss function for the combination of EWC and data mixing that has a better intuitive motivation than the original loss function for EWC.

The details are in the paper, but the basic idea is that the standard EWC loss function assumes that the tasks being learned are conditionally independent — that good performance on one task is unrelated to good performance on the other.

With translation, this is unlikely to be the case: representations useful for one task may very well be useful for the other. So we relax the assumption of conditional independence, assuming instead that there is some subset of the training data for one task that captures general information about the problem space useful for the second task. Then we derive a loss function that incorporates our new assumption.

The term we add to the loss function is equivalent to mixing a sample of the existing data into the new data. So our analysis provides a theoretical foundation for our intuition that combining EWC and data mixing should work better than using either on its own.
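The exact objective is derived in the paper; purely as an illustration of how the two strategies can be combined in a single training step, one might compute the cross-entropy loss on a mixed old-and-new batch and add the EWC penalty on top, reusing the hypothetical helpers from the sketches above.

```python
def adaptation_step(model, optimizer, mixed_batch, old_params, fisher_diag, lam):
    """One hypothetical update combining data mixing and EWC.

    `mixed_batch` already contains both old- and new-domain examples (see the
    sampling sketch above); `cross_entropy` stands in for the usual sequence-level
    translation loss and is not a real helper from the paper or any library.
    """
    optimizer.zero_grad()
    loss = cross_entropy(model, mixed_batch) + ewc_penalty(model, old_params, fisher_diag, lam)
    loss.backward()
    optimizer.step()
    return loss.item()
```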
#### **Which learning strategy to choose? Both!**

We experimented with EWC, data mixing, and their combination on publicly available data sets for German-to-English (DE→EN) and English-to-French (EN→FR) translation systems. The figures below show our results as measured by mean BLEU scores on news articles (representing the old tasks) and mean BLEU scores on three new domains per language pair. The baseline score represents our current translation system, and “adapt” represents a system that is updated naively, without consideration for performance on the old tasks.

Our results show that although EWC succeeds in mitigating catastrophic forgetting, as seen by the reduced drop in BLEU on news articles, this comes at a considerable cost in terms of new-domain quality.

![image.png](https://dev-media.amazoncloud.cn/fe2a33246f58409f97243b67e1240607_image.png)

Model adaptation results (DE→EN on top, EN→FR on bottom), varying λ for EWC (left to right, from 10⁻¹ to 10⁻⁵) and the old/new data ratio (100:1, 10:1, and 1:1) for data mixing. For EWC + data mixing, the old/new data ratio is 1:1. The x-axis shows BLEU scores on the three new datasets, the y-axis BLEU scores on the original dataset.

In comparison, data mixing with a 1:1 ratio of old and new data allows for high quality on the new domains while retaining substantially higher performance on the old tasks (rightmost point on the data-mixing curve). However, even increasing the ratio of old to new data to 100:1 doesn’t recover the baseline performance on the old task (translation of news articles). Thanks to the strength parameter λ, the combination of EWC and data mixing yields the overall best performance.
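For readers who want to reproduce this kind of trade-off curve, corpus-level BLEU can be computed with the open-source sacrebleu toolkit; the snippet below is a minimal sketch, and the surrounding evaluation setup is assumed rather than taken from the paper.

```python
import sacrebleu

def corpus_bleu_score(hypotheses, references):
    """Corpus-level BLEU; `hypotheses` and `references` are parallel lists of strings."""
    return sacrebleu.corpus_bleu(hypotheses, [references]).score

# For each adapted model (one per λ or mixing-ratio setting), score the old-domain
# (news) test set and the three new-domain test sets, then plot mean new-domain BLEU
# against old-domain BLEU to obtain one point per configuration, as in the figure above.
```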
Multidomain adaptation is a relevant area of research for [Amazon Translate](https://aws.amazon.com/cn/translate/?trk=cndc-detail), the real-time machine translation service from Amazon Web Services that supports translation between hundreds of languages for a growing and diverse set of customer use cases and domains. This paper complements our [previously published work](https://www.amazon.science/publications/distilling-multiple-domains-for-neural-machine-translation) that introduced a multidomain adaptation strategy with model distillation.

ABOUT THE AUTHOR

#### **[Eva Hasler](https://www.amazon.science/author/eva-hasler)**

Eva Hasler is a machine learning scientist in the Alexa AI organization.