Question answering as a "lingua franca" for transfer learning

{"value":"Few-shot learning is a technique in which we attempt to learn a general machine learning model for a set of related tasks and then customize it to new tasks with only a handful of training examples. This sharing of knowledge across tasks is called transfer learning.\n\nIn a paper [we presented](https://www.amazon.science/publications/language-model-is-all-you-need-natural-language-understanding-as-question-answering) at the International Conference on Acoustics, Speech, and Signal Processing ([ICASSP](https://www.amazon.science/conferences-and-events/icassp-2021)), we show how we can use question-answering as a base task and achieve effective transfer learning between natural-language understanding (NLU) tasks by treating them as if they were question-answering tasks.\n\nFor instance, consider the task of intent classification, which is a mainstay of voice agents such as Alexa. If an Alexa customer says, “Alexa, play the album Innervisions”, the intent is play_music (as opposed to, say, check_weather or set_timer). The task of intent classification can be recast as the answering of a question, such as “Is the intent play_music?”\n\n![image.png](https://dev-media.amazoncloud.cn/dd40f16530bc47a3a57c3507271d3fe7_image.png)\n\nBy treating natural-language-understanding tasks as if they were question-answering tasks, the researchers' new method (QANLU) enables a model to be successively fine-tuned on multiple datasets, which can dramatically improve performance.\nCREDIT: GLYNIS CONDON\n\nIn our paper, we show that if a model has been trained to do question answering (QA), this kind of task recasting lets it transfer knowledge to other NLU tasks much more efficiently than it would otherwise. We call this method QANLU.\n\nAcross numerous experiments involving two different NLU tasks (intent classification and slot tagging), two different baseline models, and several different strategies for sampling few-shot training examples, our model consistently delivered the best performance, with relative improvements of at least 20% in several cases and 65% in one case.\n\nWe also found that sequentially fine-tuning a model on multiple tasks could improve its performance on each. In the graph below, for instance, the orange plot indicates the performance of the baseline model in our experiments; the blue plot indicates the performance of a question-answering model fine-tuned, using our method, on a restaurant-domain NLU dataset; and the grey plot indicates the performance of the question-answering model fine-tuned, using our method, first on an airline-travel NLU dataset (ATIS — Airline Travel Information Systems) and then on the restaurant-domain dataset.\n\nWith ten examples for fine-tuning, that is, our method confers a 21% improvement over baseline when the question-answering model is fine-tuned directly on the restaurant dataset. But when it’s first fine-tuned on the ATIS dataset, the improvement leaps to 63%.\n\nThis demonstrates that the advantages of our approach could compound as the model is fine-tuned on more and more tasks.\n\n![image.png](https://dev-media.amazoncloud.cn/1ca6f309a979475294573fff722d56a5_image.png)\n\n#### **Transference**\n\nMapping NLU problems to question answering has been studied in the literature; members of our research group have [published on the topic in the past](https://www.amazon.science/blog/turning-dialogue-tracking-into-a-reading-comprehension-problem). 
![image.png](https://dev-media.amazoncloud.cn/dd40f16530bc47a3a57c3507271d3fe7_image.png)

By treating natural-language-understanding tasks as if they were question-answering tasks, the researchers’ new method (QANLU) enables a model to be successively fine-tuned on multiple datasets, which can dramatically improve performance.
CREDIT: GLYNIS CONDON

In our paper, we show that if a model has been trained to do question answering (QA), this kind of task recasting lets it transfer knowledge to other NLU tasks much more efficiently than it would otherwise. We call this method QANLU.

Across numerous experiments involving two different NLU tasks (intent classification and slot tagging), two different baseline models, and several different strategies for sampling few-shot training examples, our model consistently delivered the best performance, with relative improvements of at least 20% in several cases and 65% in one case.

We also found that sequentially fine-tuning a model on multiple tasks could improve its performance on each. In the graph below, for instance, the orange plot indicates the performance of the baseline model in our experiments; the blue plot indicates the performance of a question-answering model fine-tuned, using our method, on a restaurant-domain NLU dataset; and the grey plot indicates the performance of the question-answering model fine-tuned, using our method, first on an airline-travel NLU dataset (ATIS — Airline Travel Information Systems) and then on the restaurant-domain dataset.

With only ten examples for fine-tuning, for instance, our method confers a 21% improvement over the baseline when the question-answering model is fine-tuned directly on the restaurant dataset. But when it’s first fine-tuned on the ATIS dataset, the improvement leaps to 63%.

This demonstrates that the advantages of our approach could compound as the model is fine-tuned on more and more tasks.

![image.png](https://dev-media.amazoncloud.cn/1ca6f309a979475294573fff722d56a5_image.png)

#### **Transference**

Mapping NLU problems to question answering has been studied in the literature; members of our research group have [published on the topic in the past](https://www.amazon.science/blog/turning-dialogue-tracking-into-a-reading-comprehension-problem). The novelty of this work is to study the power of this approach for transfer learning.

Today, most NLU systems are built atop Transformer-based models pretrained on huge corpora of text, so they encode statistics about word sequences across entire languages. Extra layers are added to one of these networks, and the complete model is retrained on the target NLU task.

This is the paradigm we consider in our work. In our experiments, we used two different types of pretrained Transformer models, DistilBERT and ALBERT.

In addition to evaluating the effectiveness of QANLU for intent classification, we also evaluate it on the related task of slot tagging. In the example above — “Alexa, play the album Innervisions” — “Innervisions” is the value of a slot labeled album_name. There, the question corresponding to the slot-tagging task would be “What album name was mentioned?”
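Slot questions map even more directly onto extractive question answering, since the slot value is literally a span of the utterance. The minimal sketch below uses a generic SQuAD-trained model from Hugging Face Transformers; the model name and question wording are assumptions for illustration, and the paper fine-tunes DistilBERT and ALBERT question-answering models on recast NLU data rather than using them off the shelf.

```python
# Minimal sketch: slot tagging recast as extractive QA. The slot value is
# the span the model extracts from the utterance; model choice is illustrative.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

utterance = "Alexa, play the album Innervisions"
result = qa(question="What album name was mentioned?", context=utterance)

# result["answer"] is the extracted span (ideally "Innervisions"), which
# fills the album_name slot; result["start"] and result["end"] give the
# character offsets of that span in the utterance.
print(result["answer"], result["start"], result["end"])
```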
One interesting side effect of QANLU is that training on the questions and answers created for NLU tasks could improve model performance on the native question-answering task as well. If that’s the case, it opens the further possibility of using mappings between NLU and question answering for data augmentation.
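To make that possibility concrete, here is a hypothetical helper that recasts a single annotated NLU example into SQuAD-style question-answer items (context, question, answer text, answer start offset). The function name, question templates, and the yes/no convention for intent questions are illustrative assumptions rather than the paper’s released data format.

```python
# Hypothetical helper: recast one annotated NLU example into SQuAD-style
# QA items. Question templates, field names, and the "yes. no." convention
# for intent questions are illustrative assumptions.
def recast_nlu_example(utterance, intent, slots, all_intents):
    """slots maps a slot name (e.g. "album_name") to its value in the utterance."""
    items = []

    # Intent questions: one yes/no question per candidate intent, with the
    # candidate answers prepended so the correct answer is always a span.
    context = "yes. no. " + utterance
    for candidate in all_intents:
        answer = "yes" if candidate == intent else "no"
        items.append({
            "context": context,
            "question": f"Is the intent {candidate}?",
            "answer_text": answer,
            "answer_start": context.index(answer),
        })

    # Slot questions: the answer is the slot value's span in the utterance.
    for slot_name, value in slots.items():
        items.append({
            "context": utterance,
            "question": f"What {slot_name.replace('_', ' ')} was mentioned?",
            "answer_text": value,
            "answer_start": utterance.index(value),
        })
    return items


examples = recast_nlu_example(
    utterance="Alexa, play the album Innervisions",
    intent="play_music",
    slots={"album_name": "Innervisions"},
    all_intents=["play_music", "check_weather", "set_timer"],
)
```

Items in this shape could be mixed into question-answering training data, which is the sense in which the NLU-to-QA mapping might double as a data-augmentation strategy.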