Controlling language generation models without training data

{"value":"![image.png](https://dev-media.amazoncloud.cn/8bac86fc61774516955a7d3e738de7c4_image.png)\n\nCONVERSATIONAL AI / NATURAL-LANGUAGE PROCESSING\n\nMany popular AI applications — such as machine translation, conversational AI, and question answering — depend on natural-language generation, or producing new sequences of syntactically and semantically coherent text.\n\nSometimes it’s useful to modulate the output of a language generator: on different occasions, for instance, a machine translation model might need to produce translations that are more formal or more idiomatic; a conversational-AI model might focus more on delivering information or on eliciting responses from a human interlocutor.\n\nTypically, building natural-language-generation (NLG) models that provide this kind of control means re-training them on the appropriate type of annotated data — formal or informal diction, informative or interrogative utterances. But researchers in the Alexa AI organization have invented a method for modulating a language generator’s output without the need for re-training.\n\nInstead, they’ve added what they describe as three “control knobs” to an NLG model, which can vary the model output. They describe their approach in a paper called “[Zero-shot controlled generation with encoder-decoder transformers](https://arxiv.org/pdf/2106.06411.pdf)”, which they’ve posted to the arXiv. The paper’s corresponding author, senior applied scientist Mahdi Namazifar, answered [3 questions](https://www.amazon.science/tag/3-questions) about his team’s work for Amazon Science.\n\n#### **Q. What does it mean to add a “control knob” to an NLG model?**\n\nThe general belief in this area is that once you have a trained model, if you go into it and make changes manually, that causes it to degenerate. Contrary to this belief, one of the things that we do in this work is exactly that: we take a trained model and manually manipulate the weights of the model, the parameters of the model. And we show that not only does it not cause degeneration of the model — we see that we can maintain the quality of the generations — but you achieve control by doing this, if you do it in a systematic way and an intuitive way. \n\n![image.png](https://dev-media.amazoncloud.cn/3153d8135d5c4c368621dc021ee8e459_image.png)\n\nA diagram of the researchers’ attention-biasing “control knob”. The attention distribution learned by a trained model (lower group of blue bars) is re-weighted (bt) and normalized to produce a new distribution.\n\nA good example of that is attention biasing. The attention mechanism makes a decision that at this point I need to pay attention, in this distribution, to the input. We show that we can go into the attention modules and force the trained model to pay more attention to certain parts of the context than it usually would.\n\nFor instance, if you have a dialogue model, and we want the next response to the user to be more informative, we can actually force the model to pay more attention to the knowledge snippet that we’ve provided. And again, the expectation would be this would throw off the model completely, but we saw — very surprisingly — that that doesn't happen, and in fact it achieves what we had in mind, and it follows the intuition. \n\n#### **Q. What are the other two knobs?**\n\nAnother knob we introduce here is decoder mixing. Imagine you have two different models that have two different decoders, and these decoders have learned different skills. 
#### **Q. What are the other two knobs?**

Another knob we introduce here is decoder mixing. Imagine you have two different models that have two different decoders, and these decoders have learned different skills. So, for instance, in a dialogue system, the decoder has learned, given the dialogue history, how to respond. And imagine a different decoder with a completely different task — for instance, an autoencoder, which, given an encoding of the input, is able to reconstruct the input. So, these two decoders have learned different skills. We show that by mixing them, we can mix the skills that they have learned. This one can respond; this one can copy from the input. If you have, for instance, some knowledge within the input, then combining these allows us to have a response that is more informative.
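As an illustration of the idea rather than the paper's exact mixing mechanism, one simple way to picture combining two decoders is to interpolate their next-token distributions at each generation step. The mixing weight and the toy distributions below are made up.

```python
# Illustrative simplification of decoder mixing: blend the next-token
# distributions of a dialogue-response decoder and a copying (autoencoder)
# decoder so both "skills" influence the generated token.

import numpy as np

def mix_decoders(p_dialogue: np.ndarray, p_copy: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Both inputs are next-token distributions over the same vocabulary;
    alpha controls how much of each decoder's skill shows up in the output."""
    mixed = alpha * p_dialogue + (1.0 - alpha) * p_copy
    return mixed / mixed.sum()               # guard against rounding drift

# Toy distributions over a five-token vocabulary (values are made up).
p_respond = np.array([0.40, 0.30, 0.15, 0.10, 0.05])   # decoder that learned how to respond
p_copy_in = np.array([0.05, 0.10, 0.15, 0.30, 0.40])   # autoencoder decoder that copies the input
print(mix_decoders(p_respond, p_copy_in, alpha=0.6))
```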
The third knob is another interesting one. To get to certain desired controls, what we propose here is to augment the input with certain additional input — which, again, is intuitively designed. For instance, if you want the generated language to be a question, we show that if we get a bunch of questions, and we encode them in a certain way, and we augment our input with this encoding of the questions, the model is able to generate more questions; whatever the model wanted to generate, it generates it in a question manner. Or if you wanted to generate according to a certain topic, you would give it control phrases in that topic, and it would push the model to generate according to that topic, or with a certain sentiment, and so on.

This is kind of similar to the concept of priming of language models that is out there in the literature, but priming was never shown to work for smaller language models. It was shown to work for language models with hundreds of billions of parameters, which are very computationally expensive to run. But we show that this version of “priming” could allow much, much smaller models — three orders of magnitude smaller, even — to use this notion of priming in a different way. And again, this knob and the other two need absolutely no additional training or annotated data.
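A rough sketch of this input-augmentation knob, under the assumption that “augmenting the input with an encoding of the questions” means concatenating exemplar encodings to the encoder states the decoder attends over. The encode function, exemplar questions, and dimensions below are placeholders, not the paper's implementation.

```python
# Sketch of the input-augmentation ("priming") knob. encode() is a hypothetical
# stand-in for a real encoder; the exemplar questions are placeholders. The idea:
# prepend encodings of exemplars to the encoding of the real input, so the
# decoder's cross-attention can also "see" question-style exemplars.

import numpy as np

def encode(text: str, dim: int = 8) -> np.ndarray:
    """Hypothetical encoder: returns one vector per (whitespace) token."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=(len(text.split()), dim))

def augmented_encoder_states(user_input: str, exemplars: list[str]) -> np.ndarray:
    """Concatenate exemplar encodings with the input encoding along the sequence axis."""
    pieces = [encode(q) for q in exemplars] + [encode(user_input)]
    return np.concatenate(pieces, axis=0)

# Placeholder exemplars intended to steer generation toward question-style outputs.
exemplar_questions = ["Where is it held?", "Who wrote the book?", "When does it start?"]
memory = augmented_encoder_states("Tell me about the jazz festival.", exemplar_questions)
print(memory.shape)  # (total exemplar tokens + input tokens, hidden dimension)
```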
#### **Q. Did you experiment with any other types of control knobs?**

Encoder-decoder Transformer-based NLG models have two sets of attention. One is called self-attention, and one is called cross-attention. Self-attention kicks in as the model is generating, and it attends to what it has generated up to this point — “what was the word that came out of my mouth two seconds ago?” Cross-attention pays attention to the context — everything that was said last turn, or some sort of knowledge about the topic of the conversation, and so on. We saw that applying attention biasing to cross-attention worked very well, as is discussed in the paper, but when we applied attention biasing to self-attention, we saw basically what we were expecting from the get-go, which was the model starting to generate gibberish, or starting to degenerate.

![image.png](https://dev-media.amazoncloud.cn/069789ace872487eaae250f248ea5c5f_image.png)

A diagram of the researchers’ model, including the three control knobs that worked (A – C) and one that didn’t (self-attention biasing, D).

After a lot of digging into this, we propose — basically, as a hypothesis — that in these models, the self-attention module is responsible for the fluency of the generated language, and that’s probably its main function. So why is this important? We show that if we have another model that is fluent, and we replace this part of the model with the corresponding modules of that fluent model, we still get good generations. What that tells us is that maybe we don’t need to focus on training these when we’re training a model for whatever task we have. If we have a model that generates fluently, we can just reuse those weights and modules.

The benefit of doing that is basically savings in computation. We see that in some cases, we can train the model with 44% fewer trainable parameters and still get pretty competitive numbers, which is very important, because training these models is very expensive. Training time would be reduced significantly, and we can use smaller machines for training the same model, which also reduces the carbon footprint.

That’s a secondary kind of contribution of this work: focusing on a case where a knob didn’t work. When we dug into why it didn’t work, we came to some new findings.
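A sketch of how that reuse might look in PyTorch, assuming a BART-style layout where each decoder layer exposes a self_attn module: copy the self-attention weights from an already-fluent model and freeze them, so only the remaining parameters are trained. The attribute names are assumptions, not the paper's code.

```python
# Sketch: reuse the decoder self-attention of an already-fluent model and freeze
# it, training only the remaining parameters. `decoder.layers[i].self_attn`
# mirrors common encoder-decoder layouts; adjust for the model at hand.

import torch

def reuse_self_attention(task_model: torch.nn.Module, fluent_model: torch.nn.Module) -> None:
    """Copy decoder self-attention weights from a fluent model and freeze them."""
    for task_layer, fluent_layer in zip(task_model.decoder.layers, fluent_model.decoder.layers):
        # Copy the fluent model's self-attention weights into the task model...
        task_layer.self_attn.load_state_dict(fluent_layer.self_attn.state_dict())
        # ...and exclude them from training, shrinking the set of trainable parameters.
        for param in task_layer.self_attn.parameters():
            param.requires_grad = False

# The optimizer then only sees the parameters that are still trainable, e.g.:
# optimizer = torch.optim.AdamW(p for p in task_model.parameters() if p.requires_grad)
```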
ABOUT THE AUTHOR

#### **[Larry Hardesty](https://www.amazon.science/author/larry-hardesty)**

Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.