Learning new language-understanding tasks from just a few examples

{"value":"One of the first things a voice agent like Alexa does when receiving a new instruction is to classify its intent — playing music, getting the weather, turning on a smart-home device, and the like.\n\nAlexa adds new intents all the time, as new skills are developed or old ones extended. Often, because the new intents correspond to newly envisioned use cases, training data is sparse. In such cases, it would be nice to be able to leverage Alexa’s existing capacity for intent classification to learn new intents from just a few examples — maybe five or 10.\n\nMachine learning from limited examples is known as few-shot learning. At this year’s ++[Spoken Language Technology](https://www.amazon.science/conferences-and-events/slt-2020)++ Workshop, my colleagues and I ++[presented](https://www.amazon.science/publications/protoda-efficient-transfer-learning-for-few-shot-intent-classification)++ a new approach to few-shot learning for intent classification that combines two techniques: prototypical networks, or ProtoNets, which have been widely used in image classification; and neural data augmentation, or using a neural network to generate new, synthetic training examples from the small number available in the few-shot-learning scenario.\n\n![image.png](https://dev-media.amazoncloud.cn/92ff189523ed46b0832887607b0e6331_image.png)\n\nIn our experiments, we pretrained (top) a ProtoNet (P), whose inputs were embeddings from a sentence encoder. Then we experimented with two different placements of our synthetic-data generator (G): between the encoder and the ProtoNet (center) and between the ProtoNet and the classifier (bottom).\n\nIn experiments, we first compared our ProtoNet, without data augmentation, to a neural network that used conventional transfer learning to adapt to new tasks. According to F1 score, which factors in both false-positive and false-negative rate, the ProtoNet outperformed the baseline by about 1% in the five-shot case and 5% in the 10-shot case.\n\nThen we added neural data augmentation to the ProtoNet and compared its performance to that of a ProtoNet in which we augmented data by the standard technique, adding noise to the real samples. Both augmented-data models outperformed the basic ProtoNet, but our model returned 8.4% fewer F1 errors in the five-shot case and 12.4% fewer in the 10-shot case.\n\n\n#### **ProtoNets**\n\n\nProtoNets are used to do meta-learning, or learning how to learn. With ProtoNets, a machine learning model is trained to embed inputs, or represent them as points in a high-dimensional space. The goal of training is to learn an embedding that maximizes the distance between points representing instances of different classes and minimizes the distance between points representing instances of the same classes. In our case, the classes are different intents, but they might be different types of objects, or different types of sounds, or the like.\n\nProtoNets are trained in batches, such that each batch contains multiple instances of several different classes. After each batch, stochastic gradient descent adjusts the parameters of the model to optimize the distances between embeddings. \n\nIt’s not necessary that each batch include instances of all the classes the model will see. This makes ProtoNets very flexible, in terms of both the number of classes they’re trained on and the number of instances per class. \n\nDoing few-shot learning with a trained ProtoNet is a matter of simply using it to embed, say, five or ten examples of each new class. 
#### **Data augmentation**

To this general procedure, we add data augmentation (DA), to enable better separation between prototypes. (Hence the name of our model: ProtoDA.) During few-shot learning, the embedded samples for each new class pass to a neural-network-based generator, which produces additional embedded samples, labeled as belonging to the same classes as the input samples.

We train the sample generator using the same loss function we use to train the ProtoNet. That is, the generator learns to generate new samples that, when combined with the real samples, maximize the separation between instances of different classes and minimize the separation between instances of the same class.
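As a rough sketch of how such a generator can be wired in (the small feed-forward network and its Gaussian noise input are illustrative assumptions, not the exact design from our paper):

```python
import torch
import torch.nn as nn


class EmbeddingGenerator(nn.Module):
    """Illustrative generator: maps a real embedding plus Gaussian noise to a
    synthetic embedding that inherits the real example's class label."""

    def __init__(self, dim: int, noise_dim: int = 32):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(dim + noise_dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, real: torch.Tensor) -> torch.Tensor:
        noise = torch.randn(real.size(0), self.noise_dim, device=real.device)
        return self.net(torch.cat([real, noise], dim=1))


# Synthetic embeddings join the real ones, with duplicated labels, before
# prototypes are computed; backpropagating protonet_loss (sketched above)
# through the combined set updates the ProtoNet and the generator together.
gen = EmbeddingGenerator(dim=128)
support = torch.randn(10, 128)          # e.g., five shots each for two classes
labels = torch.tensor([0] * 5 + [1] * 5)
support_aug = torch.cat([support, gen(support)])
labels_aug = torch.cat([labels, labels])
```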
#### **Location, location, location**

In our experiments, we positioned the sample generator at two different locations in our network (see diagram above). Before passing to the ProtoNet, textual inputs run through an encoder that performs an initial embedding. This embedding is a fixed-length representation of variable-length sentences, and it leverages bidirectional long short-term memory (LSTM) networks to capture contextual information about the inputs.

The output of the sentence encoder is a 768-dimensional embedding, in which spatial relationships represent semantic relationships. This passes to the ProtoNet, whose output is a 128-dimensional embedding, in which spatial relationships represent membership in different classes.

In one experiment, we positioned the sample generator between the sentence encoder and the ProtoNet, and in another, we positioned the generator between the ProtoNet and our model’s classification layer.
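Schematically, the two placements differ only in where the augmentation step sits in the pipeline. The sketch below reuses the hypothetical `EmbeddingGenerator` from above; the ProtoNet's internal layers are stand-ins, and only the 768- and 128-dimensional interfaces come from the description here.

```python
import torch
import torch.nn as nn

# Stand-in ProtoNet; only its 768-d input and 128-d output match the article.
protonet = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 128))

sent_emb = torch.randn(10, 768)          # sentence-encoder embeddings, 10 utterances
labels = torch.tensor([0] * 5 + [1] * 5)


def augment(gen: nn.Module, emb: torch.Tensor, labels: torch.Tensor):
    """Concatenate real embeddings with generated ones, duplicating labels."""
    return torch.cat([emb, gen(emb)]), torch.cat([labels, labels])


# Placement 1: generator between the sentence encoder and the ProtoNet.
emb_aug, lab_aug = augment(EmbeddingGenerator(dim=768), sent_emb, labels)
metric_emb = protonet(emb_aug)           # augmented samples mapped to 128-d space

# Placement 2: generator between the ProtoNet and the classifier.
metric_emb, lab_aug = augment(EmbeddingGenerator(dim=128), protonet(sent_emb), labels)
```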
We found that adding a neural sample generator to our model worked best when its inputs were the embeddings produced by the ProtoNet. That’s the model that reduced F1 errors by 8.4% and 12.4% relative to the model that produced synthetic samples by adding noise.

We believe that the lower dimensionality of the ProtoNet space (128 features instead of 768) and its proximity to the training objective function (the ProtoNet loss) contribute to the difference in performance.

ABOUT THE AUTHOR

#### **[Manoj Kumar](https://www.amazon.science/author/manoj-kumar)**

Manoj Kumar is an applied scientist in Alexa AI’s Natural Understanding group.