Why ambient computing needs self-learning

机器学习
自然语言处理
海外精选
海外精选的内容汇集了全球优质的亚马逊云科技相关技术内容。同时,内容中提到的“AWS” 是 “Amazon Web Services” 的缩写,在此网站不作为商标展示。
0
0
{"value":"Today at the annual meeting of the ACM Special Interest Group on Information Retrieval (++[SIGIR](https://www.amazon.science/conferences-and-events/sigir-2022)++), Ruhi Sarikaya, the director of applied science for Alexa AI, delivered a keynote address titled “Intelligent Conversational Agents for Ambient Computing”. This is an edited version of that talk.\n\nFor decades, the paradigm of personal computing was a desktop machine. Then came the laptop, and finally mobile devices so small we can hold them in our hands and carry them in our pockets, which felt revolutionary.\n\n.++[@Ruhi_Sarikaya](https://twitter.com/Ruhi_Sarikaya?ref_src=twsrc%5Etfw)++ (++[@amazon](https://twitter.com/amazon?ref_src=twsrc%5Etfw)++ Alexa) is now giving the 3rd #SIGIR2022 keynote.\n\nFirst time Ruhi is attending SIGIR, surely not the last one! [pic.twitter.com/KL0SnPq2CS](https://t.co/KL0SnPq2CS)\n\n— SIGIR 2022 😷 (@SIGIRConf) [July 14, 2022](https://twitter.com/SIGIRConf/status/1547561854636986368?ref_src=twsrc%5Etfw)\n\nAll these devices, however, tether you to a screen. For the most part, you need to physically touch them to use them, which does not seem natural or convenient in a number of situations.\n\nSo what comes next?\n\nThe most likely answer is the Internet of things (IOT) and other intelligent, connected systems and services. What will the interface with the IOT be? Will you need a separate app on your phone for each connected device? Or when you walk into a room, will you simply speak to the device you want to reconfigure?\n\nAt Alexa, we’re betting that conversational AI will be the interface for the IOT. And this will mean a shift in our understanding of what conversational AI is.\n\nIn particular, the IOT creates new forms of context for conversational-AI models. By “context”, we mean the set of circumstances and facts that surround a particular event, situation, or entity, which an AI model can exploit to improve its performance.\n\nFor instance, context can help resolve ambiguities. Here are some examples of what we mean by context:\n\n- Device state: If the oven is on, then the question “What is the temperature?” is more likely to refer to oven temperature than it is in other contexts.\n- Device types: If the device has a screen, it’s more likely that “play Hunger Games” refers to the movie than if the device has no screen.\n- Physical/digital activity: If a customer listens only to jazz, “Play music” should elicit a different response than if the customer listens only to hard rock; if the customer always makes coffee after the alarm goes off, that should influence the interpretation of a command like “start brewing”. \n\nThe same type of reasoning applies to other contextual signals, such as time of day, device and user location, environmental changes as measured by sensors, and so on.\n\nTraining a conversational agent to factor in so many contextual signals is much more complicated than training it to recognize, say, song titles. Ideally, we would have a substantial number of training examples for every combination of customer, device, and context, but that’s obviously not practical. So how do we scale the training of contextually aware conversational agents?\n\n\n#### **Self-learning**\n\nThe answer is ++[self-learning](https://www.amazon.science/tag/self-learning)++. By self-learning, we mean a framework that enables an autonomous agent to learn from customer-system interactions, system signals, and predictive models.\n\nCustomer-system interactions can provide both implicit feedback and explicit feedback. Alexa already handles both. If a customer interrupts Alexa’s response to a request — a “barge-in”, as we call it — or rephrases the request, that’s implicit feedback. Aggregated across multiple customers, barge-ins and rephrases indicate requests that aren’t being processed correctly.\n\nCustomers can also explicitly teach Alexa how to handle particular requests. This can be customer-initiated, as when customers use Alexa’s ++[interactive-teaching](https://www.amazon.science/blog/new-alexa-features-interactive-teaching-by-customers)++ capability, or Alexa-initiated, as when Alexa asks, “Did I answer your question?”\n\nThe great advantages of self-learning are that it doesn’t require data annotation, so it scales better while protecting customer privacy; it minimizes the time and cost of updating models; and it relies on high-value training data, because customers know best what they mean and want.\n\nWe have a few programs targeting different applications of self-learning, including automated generation of ground truth annotations, defect reduction, teachable AI, and determining root causes of failure.\n\n\n#### **Automated ground truth generation**\n\nAt Alexa, we have launched a multiyear initiative to shift Alexa’s ML model development from manual-annotation-based to primarily self-learning-based. The challenge we face is to convert customer feedback, which is often binary or low dimensional (yes/no, defect/non-defect), into high-dimensional synthetic labels such as transcriptions and named-entity annotations.\n\nOur approach has two major components: (1) an exploration module and (2) a feedback collection and label generation module. Here’s the architecture of the label generation model:\n\n![下载 13.jpg](https://dev-media.amazoncloud.cn/d48bb796b96d461e91f8ff060f9fd564_%E4%B8%8B%E8%BD%BD%20%2813%29.jpg)\n\nThe ground truth generation model converts customer feedback, which is often binary or low dimensional, into high-dimensional synthetic labels.\n\nThe input features include the dialogue context (user utterance, Alexa response, previous turns, next turns), categorical features (domain, intent, dialogue status), numerical features (number of tokens, speech recognition and natural-language-understanding confidence scores), and raw audio data. The model consists of a turn-level encoder and a dialogue-level Transformer-based encoder. The turn-level textual encoder is a pretrained RoBERTa model.\n\nWe pretrain the model in a self-supervised way, using synthetic contrastive data. For instance, we randomly swap answers from different dialogues as defect samples. After pretraining, the model is trained in a supervised fashion on multiple tasks, using explicit and implicit user feedback.\n\nWe evaluate the label generation model on several tasks. Two of these are goal segmentation, or determining which utterances in a dialogue are relevant to the accomplishment of a particular task, and goal evaluation, or determining whether the goal was successfully achieved.\n\nAs a baseline for these tasks, we used a set of annotations each of which was produced in a single pass by a single annotator. Our ground truth, for both the model and the baseline, was a set of annotations each of which had been corroborated by three different human annotators.\n\nOur model’s outputs on both tasks were comparable to the human annotators’: our model was slightly more accurate but had a slightly lower F1 score. We can set a higher threshold, exceeding human performance significantly, and still achieve much larger annotation throughput than manual labeling does.\n\nIn addition to the goal-related labels, our model also labels utterances according to intent (the action the customer wants performed, such as playing music), slots (the data types the intent operates on, such as song names), and slot-values (the particular values of the slots, such as “Purple Haze”).\n\nAs a baseline for slot and intent labeling, we used a RoBERTa-based model that didn’t incorporate contextual information, and we found that our model outperformed it across the board.\n\n#### **Self-learning-based defect reduction**\n\nThree years ago, we deployed a self-learning mechanism that automatically corrects defects in Alexa’s interpretation of customer utterances based purely on implicit signals.\n\nThis mechanism — unlike the ground truth generation module — doesn’t involve retraining Alexa’s natural-language-understanding models. Instead, it overwrites those models’ outputs, to improve their accuracy.\n\nThere are two ways to provide rewrites:\n- **Precomputed rewriting** produces request-rewrite pairs offline and loads them at run time. This process has no latency constraints, so it can use complex models, and during training, it can take advantage of rich offline signals such as user follow-up turns, user rephrases, Alexa responses, and video click-through rate. Its drawback is that at run time, it can’t take advantage of contextual information.\n- **Online rewriting** leverages contextual information (e.g., previous dialogue turns, dialogue location, times) at run time to produce rewrites. It enables rewriting of long-tail-defect queries, but it must meet latency constraints, and its training can’t take advantage of offline information.\n\n#### **Precomputed rewriting**\n\nWe’ve experimented with two different approaches to precomputing rewrite pairs, one that uses pretrained BERT models and one that uses absorbing Markov chains.\n\nThis slide illustrates the BERT-based approach:\n![下载 14.jpg](https://dev-media.amazoncloud.cn/988f623803534e5ba0f4d8a34aac7adc_%E4%B8%8B%E8%BD%BD%20%2814%29.jpg)\n\nThe contextual rephrase detection model casts rephrase detection as a span prediction problem, predicting the probability that each token is the start or end of a span.\n\nAt left is a sample dialogue in which an Alexa customer rephrases a query twice. The second rephrase elicits the correct response, so it’s a good candidate for a rewrite of the initial query. The final query is not a rephrase, and the rephrase extraction model must learn to differentiate rephrases from unrelated queries.\n\nWe cast rephrase detection as a span prediction problem, where we predict the probability that each token is the start or end of a span, using the embedding output of the final BERT layer. We also use timestamping to threshold the number of subsequent customer requests that count as rephrase candidates.\n\nWe ++[use absorbing Markov chains](https://www.amazon.science/blog/how-we-taught-alexa-to-correct-her-own-defects)++ to extract rewrite pairs from rephrase candidates that recur across a wide range of interactions.\n![下载 3.jpg](https://dev-media.amazoncloud.cn/90f1f51d9dfa43f881a4799a3f6b82e2_%E4%B8%8B%E8%BD%BD%20%283%29.jpg)\n\nThe probabilities of sequences of rephrases across customer interactions can be encoded in absorbing Markov chains.\n\nA Markov chain models a dynamic system as a sequence of states, each of which has a certain probability of transitioning to any of several other states. An absorbing Markov chain is one that has a final state, with zero probability of transitioning to any other, which is accessible from any other system state.\n\nWe use absorbing Markov chains to encode the probabilities that any given rephrase of the same query will follow any other across a range of interactions. Solving the Markov chain gives us the rewrite for any given request that is most likely to be successful.\n\n\n#### **Online rewriting**\n\nInstead of relying on customers’ own rephrasings, the online rewriting mechanism uses retrieval and ranking models to generate rewrites.\n\nRewrites are based on customers’ habitual usage patterns with the agent. In the example below, for instance, based on the customer’s interaction history, we rewrite the query “What’s the weather in Wilkerson?” as “What’s the weather in Wilkerson, California?” — even though “What’s the weather in Wilkerson, Washington?” is the more common query across interactions.\n\nThe model does, however, include a global layer as well as a personal layer, to prevent overindexing on personalized cases (for instance, inferring that a customer who likes the Selena Gomez song “We Don’t Talk Anymore” will also like the song from Encanto “We Don’t Talk about Bruno”) and to enable the model to provide rewrites when the customer’s interaction history provides little or no guidance.\n![下载 4.jpg](https://dev-media.amazoncloud.cn/b76c16b87736453e88ffcd7a7386f1e6_%E4%B8%8B%E8%BD%BD%20%284%29.jpg)\n\nThe online rewriting model’s personal layer factors in customer context, while the global prevents overindexing on personalized cases.\n\nThe personalized workstream and the global workstream include both retrieval and ranking models:\n\n- The retrieval model uses a dense-passage-retrieval (DPR) model, which maps texts into a low-dimensional, continuous space, to extract embeddings for both the index and the query. Then it uses some similarity measurement to decide the rewrite score.\n- The ranking model combines fuzzy match (e.g., through a single-encoder structure) with various metadata to make a reranking decision.\n\nWe’ve deployed all three of these self-learning approaches — BERT- and Markov-chain-based offline rewriting and online rewriting — and all have made a significant difference in the quality of Alexa customers’ experience.\n\nIn experiments, we compared the BERT-based offline approach to four baseline models on six machine-annotated and two human-annotated datasets, and it outperformed all baselines across the board, with improvements of as much as 16% to 17% on some of the machine-annotated datasets, while almost doubling the improvement on the human-annotated ones.\n\nThe offline approach that uses absorbing Markov chains has rewritten tens of millions of outputs from Alexa’s automatic-speech-recognition models, and it has a win-loss ratio of 8.5:1, meaning that for every one incorrect rewrite, it has 8.5 correct ones.\n\nAnd finally, in a series of A/B tests of the online rewrite engine, we found that the global rewrite alone reduced the defect rate by 13%, while the addition of the personal rewrite model reduced defects by a further 4%.\n\n#### **Teachable AI**\n\nQuery rewrites depend on implicit signals from customers, but customers can also explicitly teach Alexa their personal preferences, such as “I’m a Warriors fan” or “I like Italian restaurants.”\n\nAlexa’s teachable-AI mechanism can be either customer-initiated or Alexa-initiated. Alexa proactively senses teachable moments — as when, for instance, a customer repeats the same request multiple times or declares Alexa’s response unsatisfactory. And a customer can initiate a guided Q&A with Alexa with a simple cue like “Alexa, learn my preferences.”\n\nIn either case, Alexa can use the customer’s preferences to guide the very next customer interaction.\n\n#### **Failure point isolation**\n\nBesides recovering from defects through query rewriting, we also want to understand the root cause of failures for defects.\n\nDialogue assistants like Alexa depend on multiple models that process customer requests in stages. First, a voice trigger (or “wake word”) model determines whether the user is speaking to the assistant. Then an automatic-speech-recognition (ASR) module converts the audio stream into text. This text passes to a natural-language-understanding (NLU) component that determines the user request. An entity recognition model recognizes and resolves entities, and the assistant generates the best possible response using several subsystems. Finally, the text-to-speech (TTS) model renders the response into human-like speech.\n\nFor Alexa, part of self-learning is automatically determining, when a failure occurs, which component has failed. An error in an upstream component can propagate through the pipeline, in which case multiple components may fail. Thus, we focus on the first component that fails in a way that is irrecoverable, which we call the “failure point”.\n\nIn our initial work on failure point isolation, we recognize five error points as well as a “correct” class (meaning no component failed). The possible failure points are false wake (errors in voice trigger); ASR errors; NLU errors (for example, incorrectly routing “play Harry Potter” to video instead of audiobook); entity resolution and recognition errors; and result errors (for example, playing the wrong Harry Potter movie).\n\nTo better illustrate failure point problem, let's examine a multiturn dialogue:\n\n![下载 2.jpg](https://dev-media.amazoncloud.cn/b677523c841e4665ab873c2906bd5d3c_%E4%B8%8B%E8%BD%BD%20%282%29.jpg)\n\nFailure point isolation identifies the earliest point in the processing pipeline at which a failure occurs, and errors that the conversational agent recovers from are not classified as failures.\nIn the first turn, the customer is trying to open a garage door, and the conversational assistant recognizes the speech incorrectly. The entity resolution model doesn't recover from this error and also fails. Finally, the dialogue assistant fails to perform the correct action. In this case, ASR is the failure point, despite the other models’ subsequent failure.\n\nOn the second turn, the customer repeats the request. ASR makes a small error by not recognizing the article \"the\" in the speech, but the dialogue assistant takes the correct action. We would mark this turn as correct, as the ASR error didn't lead to downstream failure.\n\nThe last turn highlights one of the limitations of our method. The user is asking the dialogue assistant to make a sandwich, which dialogue assistants cannot do — yet. All models have worked correctly, but the user is not satisfied. In our work, we do not consider such turns defective.\n\nOn average, our best failure point isolation model achieves close to human performance across different categories (>92% vs human). This model uses extended dialogue context, features derived from logs of the assistants (e.g., ASR confidence), and traces of decision-making components (e.g., NLU modules). We outperform humans in result and correct-class detection. ASR, entity resolution, and NLU are in the 90-95% range.\n\nThe day when computing fades into the environment, and we walk from room to room casually instructing embedded computing devices how we want them to behave, may still lie in the future. But at Alexa AI, we’re already a long way down that path. And we’re moving farther forward every day.\n\nABOUT THE AUTHOR\n#### **[Ruhi Sarikaya](https://www.amazon.science/author/ruhi-sarikaya)**\nRuhi Sarikaya is director of applied science, Alexa AI.\n\n","render":"<p>Today at the annual meeting of the ACM Special Interest Group on Information Retrieval (<ins><a href=\"https://www.amazon.science/conferences-and-events/sigir-2022\" target=\"_blank\">SIGIR</a></ins>), Ruhi Sarikaya, the director of applied science for Alexa AI, delivered a keynote address titled “Intelligent Conversational Agents for Ambient Computing”. This is an edited version of that talk.</p>\n<p>For decades, the paradigm of personal computing was a desktop machine. Then came the laptop, and finally mobile devices so small we can hold them in our hands and carry them in our pockets, which felt revolutionary.</p>\n<p>.<ins><a href=\"https://twitter.com/Ruhi_Sarikaya?ref_src=twsrc%5Etfw\" target=\"_blank\">@Ruhi_Sarikaya</a></ins> (<ins><a href=\"https://twitter.com/amazon?ref_src=twsrc%5Etfw\" target=\"_blank\">@amazon</a></ins> Alexa) is now giving the 3rd #SIGIR2022 keynote.</p>\n<p>First time Ruhi is attending SIGIR, surely not the last one! <a href=\"https://t.co/KL0SnPq2CS\" target=\"_blank\">pic.twitter.com/KL0SnPq2CS</a></p>\n<p>— SIGIR 2022 😷 (@SIGIRConf) <a href=\"https://twitter.com/SIGIRConf/status/1547561854636986368?ref_src=twsrc%5Etfw\" target=\"_blank\">July 14, 2022</a></p>\n<p>All these devices, however, tether you to a screen. For the most part, you need to physically touch them to use them, which does not seem natural or convenient in a number of situations.</p>\n<p>So what comes next?</p>\n<p>The most likely answer is the Internet of things (IOT) and other intelligent, connected systems and services. What will the interface with the IOT be? Will you need a separate app on your phone for each connected device? Or when you walk into a room, will you simply speak to the device you want to reconfigure?</p>\n<p>At Alexa, we’re betting that conversational AI will be the interface for the IOT. And this will mean a shift in our understanding of what conversational AI is.</p>\n<p>In particular, the IOT creates new forms of context for conversational-AI models. By “context”, we mean the set of circumstances and facts that surround a particular event, situation, or entity, which an AI model can exploit to improve its performance.</p>\n<p>For instance, context can help resolve ambiguities. Here are some examples of what we mean by context:</p>\n<ul>\n<li>Device state: If the oven is on, then the question “What is the temperature?” is more likely to refer to oven temperature than it is in other contexts.</li>\n<li>Device types: If the device has a screen, it’s more likely that “play Hunger Games” refers to the movie than if the device has no screen.</li>\n<li>Physical/digital activity: If a customer listens only to jazz, “Play music” should elicit a different response than if the customer listens only to hard rock; if the customer always makes coffee after the alarm goes off, that should influence the interpretation of a command like “start brewing”.</li>\n</ul>\n<p>The same type of reasoning applies to other contextual signals, such as time of day, device and user location, environmental changes as measured by sensors, and so on.</p>\n<p>Training a conversational agent to factor in so many contextual signals is much more complicated than training it to recognize, say, song titles. Ideally, we would have a substantial number of training examples for every combination of customer, device, and context, but that’s obviously not practical. So how do we scale the training of contextually aware conversational agents?</p>\n<h4><a id=\"Selflearning_31\"></a><strong>Self-learning</strong></h4>\n<p>The answer is <ins><a href=\"https://www.amazon.science/tag/self-learning\" target=\"_blank\">self-learning</a></ins>. By self-learning, we mean a framework that enables an autonomous agent to learn from customer-system interactions, system signals, and predictive models.</p>\n<p>Customer-system interactions can provide both implicit feedback and explicit feedback. Alexa already handles both. If a customer interrupts Alexa’s response to a request — a “barge-in”, as we call it — or rephrases the request, that’s implicit feedback. Aggregated across multiple customers, barge-ins and rephrases indicate requests that aren’t being processed correctly.</p>\n<p>Customers can also explicitly teach Alexa how to handle particular requests. This can be customer-initiated, as when customers use Alexa’s <ins><a href=\"https://www.amazon.science/blog/new-alexa-features-interactive-teaching-by-customers\" target=\"_blank\">interactive-teaching</a></ins> capability, or Alexa-initiated, as when Alexa asks, “Did I answer your question?”</p>\n<p>The great advantages of self-learning are that it doesn’t require data annotation, so it scales better while protecting customer privacy; it minimizes the time and cost of updating models; and it relies on high-value training data, because customers know best what they mean and want.</p>\n<p>We have a few programs targeting different applications of self-learning, including automated generation of ground truth annotations, defect reduction, teachable AI, and determining root causes of failure.</p>\n<h4><a id=\"Automated_ground_truth_generation_44\"></a><strong>Automated ground truth generation</strong></h4>\n<p>At Alexa, we have launched a multiyear initiative to shift Alexa’s ML model development from manual-annotation-based to primarily self-learning-based. The challenge we face is to convert customer feedback, which is often binary or low dimensional (yes/no, defect/non-defect), into high-dimensional synthetic labels such as transcriptions and named-entity annotations.</p>\n<p>Our approach has two major components: (1) an exploration module and (2) a feedback collection and label generation module. Here’s the architecture of the label generation model:</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/d48bb796b96d461e91f8ff060f9fd564_%E4%B8%8B%E8%BD%BD%20%2813%29.jpg\" alt=\"下载 13.jpg\" /></p>\n<p>The ground truth generation model converts customer feedback, which is often binary or low dimensional, into high-dimensional synthetic labels.</p>\n<p>The input features include the dialogue context (user utterance, Alexa response, previous turns, next turns), categorical features (domain, intent, dialogue status), numerical features (number of tokens, speech recognition and natural-language-understanding confidence scores), and raw audio data. The model consists of a turn-level encoder and a dialogue-level Transformer-based encoder. The turn-level textual encoder is a pretrained RoBERTa model.</p>\n<p>We pretrain the model in a self-supervised way, using synthetic contrastive data. For instance, we randomly swap answers from different dialogues as defect samples. After pretraining, the model is trained in a supervised fashion on multiple tasks, using explicit and implicit user feedback.</p>\n<p>We evaluate the label generation model on several tasks. Two of these are goal segmentation, or determining which utterances in a dialogue are relevant to the accomplishment of a particular task, and goal evaluation, or determining whether the goal was successfully achieved.</p>\n<p>As a baseline for these tasks, we used a set of annotations each of which was produced in a single pass by a single annotator. Our ground truth, for both the model and the baseline, was a set of annotations each of which had been corroborated by three different human annotators.</p>\n<p>Our model’s outputs on both tasks were comparable to the human annotators’: our model was slightly more accurate but had a slightly lower F1 score. We can set a higher threshold, exceeding human performance significantly, and still achieve much larger annotation throughput than manual labeling does.</p>\n<p>In addition to the goal-related labels, our model also labels utterances according to intent (the action the customer wants performed, such as playing music), slots (the data types the intent operates on, such as song names), and slot-values (the particular values of the slots, such as “Purple Haze”).</p>\n<p>As a baseline for slot and intent labeling, we used a RoBERTa-based model that didn’t incorporate contextual information, and we found that our model outperformed it across the board.</p>\n<h4><a id=\"Selflearningbased_defect_reduction_68\"></a><strong>Self-learning-based defect reduction</strong></h4>\n<p>Three years ago, we deployed a self-learning mechanism that automatically corrects defects in Alexa’s interpretation of customer utterances based purely on implicit signals.</p>\n<p>This mechanism — unlike the ground truth generation module — doesn’t involve retraining Alexa’s natural-language-understanding models. Instead, it overwrites those models’ outputs, to improve their accuracy.</p>\n<p>There are two ways to provide rewrites:</p>\n<ul>\n<li><strong>Precomputed rewriting</strong> produces request-rewrite pairs offline and loads them at run time. This process has no latency constraints, so it can use complex models, and during training, it can take advantage of rich offline signals such as user follow-up turns, user rephrases, Alexa responses, and video click-through rate. Its drawback is that at run time, it can’t take advantage of contextual information.</li>\n<li><strong>Online rewriting</strong> leverages contextual information (e.g., previous dialogue turns, dialogue location, times) at run time to produce rewrites. It enables rewriting of long-tail-defect queries, but it must meet latency constraints, and its training can’t take advantage of offline information.</li>\n</ul>\n<h4><a id=\"Precomputed_rewriting_78\"></a><strong>Precomputed rewriting</strong></h4>\n<p>We’ve experimented with two different approaches to precomputing rewrite pairs, one that uses pretrained BERT models and one that uses absorbing Markov chains.</p>\n<p>This slide illustrates the BERT-based approach:<br />\n<img src=\"https://dev-media.amazoncloud.cn/988f623803534e5ba0f4d8a34aac7adc_%E4%B8%8B%E8%BD%BD%20%2814%29.jpg\" alt=\"下载 14.jpg\" /></p>\n<p>The contextual rephrase detection model casts rephrase detection as a span prediction problem, predicting the probability that each token is the start or end of a span.</p>\n<p>At left is a sample dialogue in which an Alexa customer rephrases a query twice. The second rephrase elicits the correct response, so it’s a good candidate for a rewrite of the initial query. The final query is not a rephrase, and the rephrase extraction model must learn to differentiate rephrases from unrelated queries.</p>\n<p>We cast rephrase detection as a span prediction problem, where we predict the probability that each token is the start or end of a span, using the embedding output of the final BERT layer. We also use timestamping to threshold the number of subsequent customer requests that count as rephrase candidates.</p>\n<p>We <ins><a href=\"https://www.amazon.science/blog/how-we-taught-alexa-to-correct-her-own-defects\" target=\"_blank\">use absorbing Markov chains</a></ins> to extract rewrite pairs from rephrase candidates that recur across a wide range of interactions.<br />\n<img src=\"https://dev-media.amazoncloud.cn/90f1f51d9dfa43f881a4799a3f6b82e2_%E4%B8%8B%E8%BD%BD%20%283%29.jpg\" alt=\"下载 3.jpg\" /></p>\n<p>The probabilities of sequences of rephrases across customer interactions can be encoded in absorbing Markov chains.</p>\n<p>A Markov chain models a dynamic system as a sequence of states, each of which has a certain probability of transitioning to any of several other states. An absorbing Markov chain is one that has a final state, with zero probability of transitioning to any other, which is accessible from any other system state.</p>\n<p>We use absorbing Markov chains to encode the probabilities that any given rephrase of the same query will follow any other across a range of interactions. Solving the Markov chain gives us the rewrite for any given request that is most likely to be successful.</p>\n<h4><a id=\"Online_rewriting_101\"></a><strong>Online rewriting</strong></h4>\n<p>Instead of relying on customers’ own rephrasings, the online rewriting mechanism uses retrieval and ranking models to generate rewrites.</p>\n<p>Rewrites are based on customers’ habitual usage patterns with the agent. In the example below, for instance, based on the customer’s interaction history, we rewrite the query “What’s the weather in Wilkerson?” as “What’s the weather in Wilkerson, California?” — even though “What’s the weather in Wilkerson, Washington?” is the more common query across interactions.</p>\n<p>The model does, however, include a global layer as well as a personal layer, to prevent overindexing on personalized cases (for instance, inferring that a customer who likes the Selena Gomez song “We Don’t Talk Anymore” will also like the song from Encanto “We Don’t Talk about Bruno”) and to enable the model to provide rewrites when the customer’s interaction history provides little or no guidance.<br />\n<img src=\"https://dev-media.amazoncloud.cn/b76c16b87736453e88ffcd7a7386f1e6_%E4%B8%8B%E8%BD%BD%20%284%29.jpg\" alt=\"下载 4.jpg\" /></p>\n<p>The online rewriting model’s personal layer factors in customer context, while the global prevents overindexing on personalized cases.</p>\n<p>The personalized workstream and the global workstream include both retrieval and ranking models:</p>\n<ul>\n<li>The retrieval model uses a dense-passage-retrieval (DPR) model, which maps texts into a low-dimensional, continuous space, to extract embeddings for both the index and the query. Then it uses some similarity measurement to decide the rewrite score.</li>\n<li>The ranking model combines fuzzy match (e.g., through a single-encoder structure) with various metadata to make a reranking decision.</li>\n</ul>\n<p>We’ve deployed all three of these self-learning approaches — BERT- and Markov-chain-based offline rewriting and online rewriting — and all have made a significant difference in the quality of Alexa customers’ experience.</p>\n<p>In experiments, we compared the BERT-based offline approach to four baseline models on six machine-annotated and two human-annotated datasets, and it outperformed all baselines across the board, with improvements of as much as 16% to 17% on some of the machine-annotated datasets, while almost doubling the improvement on the human-annotated ones.</p>\n<p>The offline approach that uses absorbing Markov chains has rewritten tens of millions of outputs from Alexa’s automatic-speech-recognition models, and it has a win-loss ratio of 8.5:1, meaning that for every one incorrect rewrite, it has 8.5 correct ones.</p>\n<p>And finally, in a series of A/B tests of the online rewrite engine, we found that the global rewrite alone reduced the defect rate by 13%, while the addition of the personal rewrite model reduced defects by a further 4%.</p>\n<h4><a id=\"Teachable_AI_125\"></a><strong>Teachable AI</strong></h4>\n<p>Query rewrites depend on implicit signals from customers, but customers can also explicitly teach Alexa their personal preferences, such as “I’m a Warriors fan” or “I like Italian restaurants.”</p>\n<p>Alexa’s teachable-AI mechanism can be either customer-initiated or Alexa-initiated. Alexa proactively senses teachable moments — as when, for instance, a customer repeats the same request multiple times or declares Alexa’s response unsatisfactory. And a customer can initiate a guided Q&amp;A with Alexa with a simple cue like “Alexa, learn my preferences.”</p>\n<p>In either case, Alexa can use the customer’s preferences to guide the very next customer interaction.</p>\n<h4><a id=\"Failure_point_isolation_133\"></a><strong>Failure point isolation</strong></h4>\n<p>Besides recovering from defects through query rewriting, we also want to understand the root cause of failures for defects.</p>\n<p>Dialogue assistants like Alexa depend on multiple models that process customer requests in stages. First, a voice trigger (or “wake word”) model determines whether the user is speaking to the assistant. Then an automatic-speech-recognition (ASR) module converts the audio stream into text. This text passes to a natural-language-understanding (NLU) component that determines the user request. An entity recognition model recognizes and resolves entities, and the assistant generates the best possible response using several subsystems. Finally, the text-to-speech (TTS) model renders the response into human-like speech.</p>\n<p>For Alexa, part of self-learning is automatically determining, when a failure occurs, which component has failed. An error in an upstream component can propagate through the pipeline, in which case multiple components may fail. Thus, we focus on the first component that fails in a way that is irrecoverable, which we call the “failure point”.</p>\n<p>In our initial work on failure point isolation, we recognize five error points as well as a “correct” class (meaning no component failed). The possible failure points are false wake (errors in voice trigger); ASR errors; NLU errors (for example, incorrectly routing “play Harry Potter” to video instead of audiobook); entity resolution and recognition errors; and result errors (for example, playing the wrong Harry Potter movie).</p>\n<p>To better illustrate failure point problem, let’s examine a multiturn dialogue:</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/b677523c841e4665ab873c2906bd5d3c_%E4%B8%8B%E8%BD%BD%20%282%29.jpg\" alt=\"下载 2.jpg\" /></p>\n<p>Failure point isolation identifies the earliest point in the processing pipeline at which a failure occurs, and errors that the conversational agent recovers from are not classified as failures.<br />\nIn the first turn, the customer is trying to open a garage door, and the conversational assistant recognizes the speech incorrectly. The entity resolution model doesn’t recover from this error and also fails. Finally, the dialogue assistant fails to perform the correct action. In this case, ASR is the failure point, despite the other models’ subsequent failure.</p>\n<p>On the second turn, the customer repeats the request. ASR makes a small error by not recognizing the article “the” in the speech, but the dialogue assistant takes the correct action. We would mark this turn as correct, as the ASR error didn’t lead to downstream failure.</p>\n<p>The last turn highlights one of the limitations of our method. The user is asking the dialogue assistant to make a sandwich, which dialogue assistants cannot do — yet. All models have worked correctly, but the user is not satisfied. In our work, we do not consider such turns defective.</p>\n<p>On average, our best failure point isolation model achieves close to human performance across different categories (&gt;92% vs human). This model uses extended dialogue context, features derived from logs of the assistants (e.g., ASR confidence), and traces of decision-making components (e.g., NLU modules). We outperform humans in result and correct-class detection. ASR, entity resolution, and NLU are in the 90-95% range.</p>\n<p>The day when computing fades into the environment, and we walk from room to room casually instructing embedded computing devices how we want them to behave, may still lie in the future. But at Alexa AI, we’re already a long way down that path. And we’re moving farther forward every day.</p>\n<p>ABOUT THE AUTHOR</p>\n<h4><a id=\"Ruhi_Sarikayahttpswwwamazonscienceauthorruhisarikaya_159\"></a><strong><a href=\"https://www.amazon.science/author/ruhi-sarikaya\" target=\"_blank\">Ruhi Sarikaya</a></strong></h4>\n<p>Ruhi Sarikaya is director of applied science, Alexa AI.</p>\n"}
目录
亚马逊云科技解决方案 基于行业客户应用场景及技术领域的解决方案
联系亚马逊云科技专家
亚马逊云科技解决方案
基于行业客户应用场景及技术领域的解决方案
联系专家
0
目录
关闭