Ensuring that new language-processing models don't backslide

The models behind [machine learning (ML) services](https://aws.amazon.com/machine-learning/) are continuously being updated, and the new models are usually more accurate than the old ones. But an overall improvement in accuracy can still be accompanied by regression — a loss of accuracy — in particular cases.

This can be frustrating for users, especially if a given regression has downstream consequences. A virtual conversational agent, for example, might regress on a user request early in a dialogue, which disrupts the ensuing conversation.

![image.png](https://dev-media.amazoncloud.cn/a1643e444a5b40ee8119fad39dc522d2_image.png)

After an update, even if the overall accuracy improves, an agent may start to make incorrect predictions (red) early in a conversation, leading to a frustrating user experience.

In a [paper](https://www.amazon.science/publications/regression-bugs-are-in-your-model-measuring-reducing-and-analyzing-regressions-in-nlp-model-updates) we’re presenting at this year’s meeting of the Association for Computational Linguistics ([ACL](https://www.amazon.science/conferences-and-events/acl-ijcnlp-2021)), we describe a new approach to regression-free model updating in natural-language processing (NLP), which enables us to build new deep neural models that are not only more accurate but also consistently preserve the legacy models’ correct classifications.

The paper has two parts: a study of model update regression and a proposal for mitigating it. In the study, we use public benchmark models based on the BERT language model and train them on the seven different NLP tasks of the General Language Understanding Evaluation (GLUE) framework. Then we train updated models using either different model parameters or a more powerful BERT model.
We find that regression occurs on 1.9% to 7.6% of input cases, even though overall performance improves after retraining.

To mitigate regression, we formulate the problem of matching past performance as a constrained optimization problem, then relax the problem so that it can be approximated via [knowledge distillation](https://www.amazon.science/tag/knowledge-distillation), which encourages the new model to imitate the old one in the right contexts.

Our research is part of Amazon Web Services’ (AWS’s) recent work on “[graceful AI](https://www.amazon.science/latest-news/graceful-ai)”: machine learning systems that are not just accurate but also more transparent, more interpretable, and more compatible with their predecessors. We believe that regression-minimized model updating is a critical building block for successful ML services that continuously improve and evolve gracefully.

#### **Regression bugs are in your NLP model!**

In our study, we measure model update regression by negative flip rate (NFR): the percentage of cases in which the old classifier predicts correctly but the new classifier predicts incorrectly. For services with tens of millions of users, the NFRs we measure would translate into poor experiences for hundreds of thousands of users. When regression occurs at that scale, it often requires extensive, time-consuming error analysis and model patching.

![image.png](https://dev-media.amazoncloud.cn/b25fe14d08ec4784b9b263b4d2b71666_image.png)

Updating from old to new models may correct mistakes (positive-flip cases, blue) but introduce new ones as well (negative-flip cases, red). Examples are from a paraphrase classification task.
CREDIT: GLYNIS CONDON

Our study showed that in updated models, NFRs are often much higher than the total accuracy gains, from two to eight times as high.
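As a minimal illustration of the metric, NFR can be computed from parallel predictions of the old and new models on a shared evaluation set; the labels and predictions below are hypothetical toy data, not from the paper:

```python
def negative_flip_rate(labels, old_preds, new_preds):
    """Fraction of examples the old model got right but the new model gets wrong."""
    flips = sum(
        1
        for y, old, new in zip(labels, old_preds, new_preds)
        if old == y and new != y
    )
    return flips / len(labels)

# Hypothetical predictions on a 10-example evaluation set.
labels    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
old_preds = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]  # 8/10 correct
new_preds = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1]  # also 8/10 correct, but two negative flips

print(negative_flip_rate(labels, old_preds, new_preds))  # → 0.2
```

Note that both models here score 80% accuracy, yet two previously correct cases now fail: aggregate accuracy alone hides the regression that NFR makes visible.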
This implies that simply aiming for greater accuracy in updated models will not ensure a decrease in regression; i.e., improving accuracy and minimizing regression are related but separate learning targets.

Finally, we also found that minor changes, such as using different random seeds (constants that introduce randomness into the training process), can cause significant variation in regression rate, a consideration that any mitigation strategy will need to account for.

#### **How to mitigate regressions**

Regression-free model updating requires a model both to learn the target task and to comply with conditions posed by the old model, making it a constrained optimization problem. We relax the hard constraint into a soft inequality condition and propose a proxy to replace NFR: a continuous measure that applies Kullback-Leibler divergence — a standard similarity measure — to the prediction logits, or unnormalized outputs, of both the old and new models. We can thus approximate the constrained optimization problem by optimizing a joint objective that combines a classification loss with a knowledge distillation penalty.

In evaluating our approach, we used two baselines. One was a model updated in the traditional way, without any attempt to control regression. The other was an ensemble that included both the original model and the updated model; the ensemble’s final classification was a combination of both models’ outputs.

Our results show that when updating involved changing language models — switching from BERT-base to BERT-large, for instance — our knowledge distillation approach was the most effective, cutting average NFR to 2.91%, versus 3.63% for the ensemble model and 4.57% for a conventional update.
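The joint objective from the mitigation section above can be sketched in pure Python for a single example. This is a toy illustration, not the paper's implementation: the mixing weight `alpha`, the softmax temperature, the direction of the KL term, and the logit values are all assumptions, and a real system would compute this over framework tensors in training batches.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert logits to a probability distribution (numerically stable form).
    m = max(logits)
    exps = [math.exp((z - m) / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the new distribution q diverges from the old one p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def joint_loss(new_logits, old_logits, label, alpha=0.5, temperature=2.0):
    """Classification loss plus a distillation penalty tying the new model to the old."""
    new_probs = softmax(new_logits)
    ce_loss = -math.log(new_probs[label])      # standard cross-entropy on the true label
    distill = kl_divergence(
        softmax(old_logits, temperature),      # soft targets from the old model
        softmax(new_logits, temperature),
    )
    return ce_loss + alpha * distill

# Toy 3-class logits for one example (hypothetical values).
old_logits = [2.0, 0.5, -1.0]
new_logits = [1.5, 1.0, -0.5]
print(joint_loss(new_logits, old_logits, label=0))
```

The distillation term is zero exactly when the new model reproduces the old model's output distribution, so minimizing the joint loss pulls the update toward correct labels while penalizing gratuitous departures from the old model's behavior.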
At the same time, our model was slightly more accurate than both baselines.

We also evaluated our models using the [CheckList](https://www.aclweb.org/anthology/2020.acl-main.442/) protocol, which assesses an NLP model’s performance using different classes of input data designed to elicit different types of behavior. We found that distillation can effectively reduce regressions across almost all types of behavioral tests, implying that our distillation method actually aligns the new model’s behavior with the old model’s, rather than exploiting shortcuts in a few special cases.

When updating involved different random seeds, without a change of language model, the ensemble method worked better than ours, which was a surprise. This is possibly because ensembles naturally reduce output variance, making them less prone to overfitting, which could reduce regressions.

Given the results of our initial study, we hypothesized that single-model variance could be a function of the choice of random seeds. So we designed a simple model selection procedure in which we train 20 different models using 20 random seeds and pick the one that offers the greatest NFR reduction. We found that in cases in which updates preserve the same language model, this approach reduces regression as well as ensemble methods do, without the added operational overhead of running two models in parallel.

At AWS AI, we are committed to continuing to explore innovative solutions to this problem and to ensuring that customers can always enjoy state-of-the-art technologies without painful transitions. We hope our work will inspire the AI community to develop more advanced methods and build easily maintainable, ever-improving systems.
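The seed-based selection procedure described above can be sketched as follows. Everything here is a toy stand-in: `toy_train` simulates training a candidate model per seed, and a real pipeline would substitute an actual training routine and evaluation set.

```python
import random

def negative_flip_rate(labels, old_preds, new_preds):
    # Fraction of examples the old model got right but the new model gets wrong.
    flips = sum(1 for y, o, n in zip(labels, old_preds, new_preds) if o == y and n != y)
    return flips / len(labels)

def select_update_by_nfr(train_model, old_preds, labels, eval_inputs, num_seeds=20):
    """Train one candidate per seed; keep the one with the lowest NFR vs. the old model."""
    best_model, best_nfr = None, float("inf")
    for seed in range(num_seeds):
        model = train_model(seed)                    # hypothetical training routine
        new_preds = [model(x) for x in eval_inputs]  # candidate's predictions
        nfr = negative_flip_rate(labels, old_preds, new_preds)
        if nfr < best_nfr:
            best_model, best_nfr = model, nfr
    return best_model, best_nfr

# Toy demonstration: each "model" mispredicts a seed-dependent subset of inputs.
labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
old_preds = list(labels)  # a perfect old model, so every new error is a negative flip

def toy_train(seed):
    rng = random.Random(seed)
    noise = [rng.random() < 0.2 for _ in labels]
    return lambda i: (1 - labels[i]) if noise[i] else labels[i]

model, nfr = select_update_by_nfr(toy_train, old_preds, labels, range(len(labels)))
print(nfr)
```

Because the procedure only adds offline training runs, the selected single model carries none of the serving-time cost of an ensemble, which is the operational advantage the study highlights.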