Reducing unnecessary clarification questions from voice agents

{"value":"If two people are talking in a noisy environment, and one doesn’t hear the other clearly or doesn’t quite understand what the other person meant, the natural reaction is to ask for clarification. The same is true with voice agents like Alexa. Rather than taking a potentially wrong action based on inaccurate or incomplete understanding, Alexa will ask a follow-up question, such as whether a requested timer should be set for fifteen or fifty minutes.\n\nTypically, the decision to ask such questions is based on the confidence of a machine learning model. If the model predicts multiple competing hypotheses with high confidence, a clarifying question can decide among them.\n\nOur analysis of Alexa data, however, suggests that 77% of the time, the model’s top-ranked prediction is the right one, even if alternative hypotheses also get high confidence scores. In those cases, we'd like to reduce the number of clarifying questions we ask.\n\nLast week, at the IEEE Automatic Speech Recognition and Understanding Workshop (++[ASRU](https://www.amazon.science/conferences-and-events/asru-2021)++), we presented work in which we attempt to ++[reduce unnecessary follow-up questions](https://www.amazon.science/publications/deciding-whether-to-ask-clarifying-questions-in-large-scale-spoken-language-understanding)++ by training a machine learning model to determine when clarification is really necessary.\n\nIn experiments, we compared our approach to one in which the decision to ask follow-up questions was based on confidence score thresholds and other similar heuristics. We found that our model improved the F1 score of clarification questions by 81%. (The F1 score factors in both false positives — here, questions that didn’t need to be asked — and false negatives — here, questions that should have been asked but weren’t.)\n\n#### **HypRank model**\n\nWith most voice agents, the acoustic signal of a customer utterance first passes to an automatic-speech-recognition (ASR) model, which generates multiple hypotheses about what the customer said. The top-ranked hypotheses then pass to a natural-language-understanding (NLU) model, which identifies the customer’s intent — the action the customer wants performed, such as PlayVideo — and the utterance slots — the entities on which the intent should act, such as VideoTitle, which might take the value “Harry Potter”.\n\nIn the setting we consider in our paper, hypotheses generated by our ASR and NLU models pass to a third model, called ++[HypRank](https://www.amazon.science/blog/hyprank-how-alexa-determines-what-skill-can-best-meet-a-customers-need)++, for hypothesis ranker. HypRank combines the predictions and confidence scores of ASR, intent classification, and slot-filling with contextual signals, such as which skills a given customer has enabled, to produce an overall ranking of the different hypotheses.\n\n![image.png](https://dev-media.amazoncloud.cn/b80bc496afef43b6be6cadb1426348e9_image.png)\n\nAn example of HypRank’s ranking of hypotheses for the utterance “Harry Potter”. There are three potential sources of ambiguity: ASR, intent confidence, and hypothesis confidence.\n\nWith this approach, there are three possible sources of ambiguity: similarity of ASR score, similarity of intent classification score, and similarity of overall HypRank score. 
#### **HypRank model**

With most voice agents, the acoustic signal of a customer utterance first passes to an automatic-speech-recognition (ASR) model, which generates multiple hypotheses about what the customer said. The top-ranked hypotheses then pass to a natural-language-understanding (NLU) model, which identifies the customer’s intent — the action the customer wants performed, such as PlayVideo — and the utterance slots — the entities on which the intent should act, such as VideoTitle, which might take the value “Harry Potter”.

In the setting we consider in our paper, hypotheses generated by our ASR and NLU models pass to a third model, called [HypRank](https://www.amazon.science/blog/hyprank-how-alexa-determines-what-skill-can-best-meet-a-customers-need), for hypothesis ranker. HypRank combines the predictions and confidence scores of ASR, intent classification, and slot-filling with contextual signals, such as which skills a given customer has enabled, to produce an overall ranking of the different hypotheses.

![image.png](https://dev-media.amazoncloud.cn/b80bc496afef43b6be6cadb1426348e9_image.png)

An example of HypRank’s ranking of hypotheses for the utterance “Harry Potter”. There are three potential sources of ambiguity: ASR, intent confidence, and hypothesis confidence.

With this approach, there are three possible sources of ambiguity: similarity of ASR score, similarity of intent classification score, and similarity of overall HypRank score. In a traditional scheme, a small enough difference in any of these scores would automatically trigger a clarification question.
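For contrast with the learned approach described next, here is a minimal sketch of such a threshold heuristic, assuming each hypothesis carries ASR, intent, and HypRank confidence scores; the `Hypothesis` fields and the 0.1 threshold are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str             # ASR transcription
    asr_score: float      # ASR confidence
    intent_score: float   # intent classification confidence
    hyprank_score: float  # overall HypRank confidence

def heuristic_should_clarify(top: Hypothesis, runner_up: Hypothesis,
                             threshold: float = 0.1) -> bool:
    """Threshold-style baseline: ask a clarification question whenever the
    top two hypotheses are too close on any of the three scores.
    The 0.1 threshold is a placeholder, not a value from the paper."""
    return (
        abs(top.asr_score - runner_up.asr_score) < threshold
        or abs(top.intent_score - runner_up.intent_score) < threshold
        or abs(top.hyprank_score - runner_up.hyprank_score) < threshold
    )
```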
#### **To clarify or not to clarify**

Instead, in our method, we train yet another machine learning model to decide whether a clarification question is in order. In addition to similarity of ASR, NLU, or HypRank score, the model considers two other sources of ambiguity: signal-to-noise ratio (SNR) and truncated utterances. A truncated utterance is one that ends with an article (“an”, “the”, etc.), one of several possessives (such as “my”), or a preposition. For instance, “Alexa, [play ‘Hello’ by](https://www.amazon.science/blog/how-alexa-can-use-song-playback-duration-to-learn-customers-preferences)” is a truncated utterance.
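A minimal sketch of a truncated-utterance check along these lines follows; only “an”, “the”, and “my” are named above, so the remaining word lists are illustrative guesses rather than the production vocabulary.

```python
# Words whose appearance at the end of an utterance suggests truncation.
# Only "an", "the", and "my" are named in the text above; the rest of
# these lists are illustrative, not the production vocabulary.
ARTICLES = {"a", "an", "the"}
POSSESSIVES = {"my", "your", "our"}
PREPOSITIONS = {"by", "to", "for", "with", "on", "in", "of"}

def is_truncated(utterance: str) -> bool:
    """Return True if the utterance ends with an article, a possessive,
    or a preposition, e.g. "play hello by"."""
    tokens = utterance.lower().strip().split()
    if not tokens:
        return False
    return tokens[-1] in ARTICLES | POSSESSIVES | PREPOSITIONS

print(is_truncated("play hello by"))  # True
```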
As input, the model receives the top-ranked HypRank hypothesis; any other hypotheses with similar enough scores on any of the three measures; the SNR; a binary value indicating whether the request is a repetition (an indication that it wasn’t satisfactorily fulfilled the first time); and binary values indicating which of the five sources of ambiguity pertain.

The number of input hypotheses can vary, depending on how many types of ambiguity pertain. So the vector representations of all hypotheses other than the top-ranked hypothesis are combined to form a summary vector, which is then concatenated with vector representations of the other inputs. The concatenated vector passes to a classifier, which decides whether to issue a clarification question.

![image.png](https://dev-media.amazoncloud.cn/4054cc0d39804ff5a1e87132ac27f0d4_image.png)

The architecture of our model.
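The sketch below is one way to render that flow in PyTorch, assuming fixed-size hypothesis embeddings, mean pooling as the way the competing hypotheses are combined into the summary vector, and a small feed-forward classifier; the dimensions and pooling choice are our assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class ClarificationClassifier(nn.Module):
    """Decides whether to ask a clarification question.

    Inputs: the top hypothesis embedding, a variable number of competing
    hypothesis embeddings, the SNR, a repetition flag, and five binary
    ambiguity indicators. The hidden size and mean pooling over the
    competing hypotheses are illustrative choices.
    """
    def __init__(self, hyp_dim: int = 128, hidden: int = 64):
        super().__init__()
        # top hypothesis + summary vector + SNR + repetition flag + 5 ambiguity flags
        self.classifier = nn.Sequential(
            nn.Linear(2 * hyp_dim + 1 + 1 + 5, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, top_hyp, other_hyps, snr, is_repeat, ambiguity_flags):
        # other_hyps: (num_hyps, hyp_dim); collapse to a single summary vector
        summary = other_hyps.mean(dim=0)
        features = torch.cat([top_hyp, summary, snr, is_repeat, ambiguity_flags])
        return torch.sigmoid(self.classifier(features))  # probability of asking

model = ClarificationClassifier()
prob = model(torch.randn(128), torch.randn(3, 128),
             torch.tensor([12.0]), torch.tensor([0.0]), torch.zeros(5))
```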
#### **Experiments**

To our knowledge, there are no existing datasets that feature multiple ASR and NLU hypotheses labeled according to accuracy. So to train our model, we used data that had been automatically annotated by a [model that my Amazon colleagues presented](https://www.amazon.science/publications/large-scale-hybrid-approach-for-predicting-user-satisfaction-with-conversational-agents) last year at the [NeurIPS](https://www.amazon.science/tag/neurips) Workshop on Human-in-the-Loop Dialogue Systems.

Their model was trained on a combination of hand-annotated data and data labeled according to feedback from customers who were specifically asked, after Alexa interactions, whether they were satisfied with their results. We used the model to label additional utterances, with no human involvement.

Since all the samples in the dataset featured at least one type of ambiguity, our baseline was asking clarification questions in every case. That approach has a false-negative rate of zero — it never fails to ask a clarification question when necessary — but it could have a high false-positive rate. Our approach may increase the false-negative rate, but the increase in F1 score means that it strikes a much better balance between false negatives and false positives.

ABOUT THE AUTHOR

#### **[Joo-Kyung Kim](https://www.amazon.science/author/joo-kyung-kim)**

Joo-Kyung (J. K.) Kim is an applied scientist in the Alexa AI group.