Alexa gets better at predicting customers’ goals

Amazon’s goal for Alexa is that customers should find interacting with her as natural as interacting with another human being. Toward that end, in September, we announced [natural turn-taking](https://www.amazon.science/blog/change-to-alexa-wake-word-process-adds-natural-turn-taking), or conversing with Alexa without repetition of the wake word, and in July we began the public beta of [Alexa Conversations](https://www.amazon.science/blog/science-innovations-power-alexa-conversations-dialogue-management), which makes it easier for developers to integrate sophisticated conversational experiences into their Alexa skills.

Now, we’re taking another step toward natural interaction with a capability that lets Alexa infer customers’ latent goals — goals that are implicit in customer requests but not directly expressed. For instance, if a customer asks, “How long does it take to steep tea?”, the latent goal could be setting a timer for steeping a cup of tea.

With the new capability, Alexa might answer that question, “Five minutes is a good place to start”, then follow up by asking, “Would you like me to set a timer for five minutes?”

![image.png](https://dev-media.amazoncloud.cn/3dd9fc1683214e819d3ef807ea3e86be_image.png)

In this interaction, Alexa infers that a customer who asks about the weather at the beach may be interested in other information that could be useful for planning a beach trip.

CREDIT: GLYNIS CONDON

Transitions like this appear simple, but under the hood a number of sophisticated algorithms are running to detect latent goals, formulate them into actions that frequently span different skills, and surface them to customers in a way that doesn’t feel disruptive.


#### **The trigger model**


The first step is to decide whether to anticipate a latent goal at all. Our early experiments showed that not all dialogue contexts are well suited to latent-goal discovery.
When a customer asked for “recipes for chicken”, for instance, one of our initial prototypes would incorrectly follow up by asking, “Do you want me to play chicken sounds?”

To determine whether to suggest a latent goal, we use a deep-learning-based trigger model that factors in several aspects of the dialogue context, such as the text of the customer’s current session with Alexa and whether the customer has engaged with Alexa’s multi-skill suggestions in the past.

If the trigger model finds the context suitable, the system suggests a skill to service the latent goal. Those suggestions are based on relationships learned by the latent-goal discovery model. For instance, the model may have discovered that customers who ask how long tea should steep frequently follow up by asking Alexa to set a timer for that amount of time.


#### **Latent-goal discovery**


The latent-goal discovery model analyzes multiple features of customer utterances, including pointwise mutual information, which measures the likelihood of an interaction pattern [in a given context](https://www.amazon.science/blog/how-we-taught-alexa-to-correct-her-own-defects) relative to its likelihood across all Alexa traffic. Deep-learning-based sub-modules assess additional features, such as whether the customer was trying to rephrase a prior command or issue a new command, or whether the direct goal and the latent goal share common entities or values (such as the time value required to steep tea).

Over time, the discovery model improves its predictions through active learning, which identifies sample interactions that would be particularly informative during future fine-tuning.

Next, the [semantic-role labeling](https://en.wikipedia.org/wiki/Semantic_role_labeling) model looks for named entities and other arguments from the current conversation, including Alexa’s own responses.
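Of the signals described in this section, the pointwise-mutual-information feature is the simplest to sketch. The counts, names, and threshold below are hypothetical illustrations, not Alexa’s actual implementation:

```python
import math

def pmi(joint_count, context_count, pattern_count, total_count):
    """Pointwise mutual information between a dialogue context and a
    follow-up pattern, estimated from raw interaction counts."""
    p_joint = joint_count / total_count        # P(context, follow-up)
    p_context = context_count / total_count    # P(context)
    p_pattern = pattern_count / total_count    # P(follow-up)
    return math.log2(p_joint / (p_context * p_pattern))

# Hypothetical counts: of 1,000,000 interactions, "how long to steep tea"
# occurred 500 times, "set a timer" followed some request 20,000 times,
# and the two co-occurred 300 times.
score = pmi(joint_count=300, context_count=500,
            pattern_count=20_000, total_count=1_000_000)

# A strongly positive score means the follow-up is far more likely in
# this context than across traffic overall; a score near zero means the
# pattern is no more common here than anywhere else.
```

In a system like the one described, a score well above zero would be one input, alongside the learned sub-module features, to deciding whether the follow-up pattern reflects a genuine latent goal.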
Our [context carryover models](https://www.amazon.science/blog/teaching-alexa-to-follow-conversations) transform those entities into a [structured format](https://www.amazon.science/blog/how-alexa-is-learning-to-converse-more-naturally) that the follow-on skill can consume, even if it is a third-party skill that uses its own ontology, or concept hierarchy.

Lastly, through [bandit learning](https://www.amazon.science/blog/a-general-approach-to-solving-bandit-problems), in which machine learning models track whether recommendations are helping or not, underperforming experiences are automatically suppressed.

This capability is already available to Alexa customers in English in the United States. It requires no additional effort from skill developers to activate. However, skill developers can make their skills more visible to the discovery model by using the [Name-Free Interaction Toolkit](https://developer.amazon.com/en-US/blogs/alexa/alexa-skills-kit/2020/07/add-new-signals-to-your-skill-that-alexa-can-consider-for-name-free-requests), which provides natural hooks for interactions between skills. While results vary from skill to skill, our early metrics show that latent-goal discovery has increased customer engagement with some developers’ skills.

We are thrilled about this invention, as it aids discovery of Alexa’s skills and provides increased utility to our customers.

ABOUT THE AUTHORS


#### **[Anjishnu Kumar](https://www.amazon.science/author/anjishnu-kumar)**


Anjishnu Kumar is a senior applied scientist in the Alexa AI organization.


#### **[Anand Rathi](https://www.amazon.science/author/anand-rathi)**


Anand Rathi is a director of software development in the Alexa AI organization.