New Alexa feature enables natural, multiparty interactions

{"value":"Alexa’s Conversation Mode — which we ++[announced last year](https://www.amazon.science/blog/change-to-alexa-wake-word-process-adds-natural-turn-taking)++ and are launching today — represents a major milestone in voice AI. Conversation Mode will let customers with the Echo Show 10 interact with Alexa more naturally, without the need to repeat the wake word.\n\nUsing a combination of visual and acoustic cues, the feature’s AI will recognize when customer speech is directed at the device and whether a reply is expected. A customer can invoke Conversation Mode by saying, “Alexa, join the conversation” and exit by saying, “Leave the conversation”. Alternatively, Alexa will exit the mode if there is no interaction for a short period of time.\n\n![image.png](https://dev-media.amazoncloud.cn/061f5aef0dfd47888dd007269ab99491_image.png)\n\nConversation Mode measures visual device directedness by estimating the head orientation of each person in the device’s field of view.\n\nConversation Mode enables one or more customers to engage with Alexa simultaneously. This makes detecting device directedness even harder, since a question like ‘How about a comedy?’ could be directed at Alexa or at another customer.\n\nThe feature also needs to have a low latency, to accurately detect the start of a device-directed utterance; otherwise, Alexa might not capture the full utterance. This is easier in wake-word-based interactions, as the detection of the wake word provides a defined starting point for processing an utterance.\n\nEnabling wake-word-free interactions for Conversation Mode required innovations in several areas, including visual device directedness detection, audio-based voice activity detection, and audiovisual feature fusion.\n\n#### **Visual device directedness detection (CVDD)**\n\nIn human communication, one cue for determining whom an utterance is directed to is the speaker’s physical orientation. Similarly, we developed a method for measuring visual device directedness by estimating the head orientation of each person in the device’s field of view.\n\n#### **Learn more**\n\nRead more about Alexa's new Conversation Mode on ++[About Amazon](https://www.aboutamazon.com/news/devices/conversation-mode-helps-interactions-with-alexa-feel-more-natural)++.\n\nThe standard approach to this problem is to detect a coarse set (typically five) of facial landmarks and then estimate face orientation from them using a geometry-based technique called perspective-n-point (PnP). This method is fast but has low accuracy in real-world scenarios. An alternative is to directly train a model that classifies each image region as device directed or not and apply it to the output of a face detector. But this requires a large, annotated dataset that is expensive to collect.\n\nInstead, we represent each head as a linear combination of template 3-D heads with different attributes. We trained a deep-neural-network model to infer the coefficients of the templates for a given input image and to determine the orientation of the head in the image. Then we quantized the weights of the model, to reduce its size and execution time.\n\nIn our experiments, this approach reduced the false-rejection rate (FRR) for visual device directedness detection by almost 80% relative to the PnP approach.\n\n#### **Audio-based device voice activity detection (DVAD)**\n\nIn addition to visual directedness, Conversation Mode leverages audio cues to determine when speech is directed at the device. 
Instead, we represent each head as a linear combination of template 3-D heads with different attributes. We trained a deep-neural-network model to infer the coefficients of the templates for a given input image and to determine the orientation of the head in the image. Then we quantized the weights of the model to reduce its size and execution time.

In our experiments, this approach reduced the false-rejection rate (FRR) for visual device directedness detection by almost 80% relative to the PnP approach.
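The post does not spell out the network, so the following is only a hypothetical PyTorch sketch of the idea: a small backbone regresses both the template coefficients and the head orientation, and post-training dynamic quantization stands in for the weight quantization mentioned above. The backbone, input size, and number of templates are all invented for illustration.

```python
import torch
import torch.nn as nn

NUM_TEMPLATES = 16  # hypothetical number of 3-D head templates

class HeadPoseNet(nn.Module):
    """Toy model: predicts template coefficients and head orientation."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Coefficients of the linear combination of template heads.
        self.coeffs = nn.Linear(32, NUM_TEMPLATES)
        # Head orientation as three angles (e.g., yaw/pitch/roll).
        self.orientation = nn.Linear(32, 3)

    def forward(self, face_crop):
        feats = self.backbone(face_crop)
        return self.coeffs(feats), self.orientation(feats)

model = HeadPoseNet().eval()

# Post-training dynamic quantization of the linear layers, shrinking the
# model and speeding up inference (a stand-in for the weight quantization
# described in the post).
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

coeffs, angles = quantized(torch.randn(1, 3, 64, 64))
print(coeffs.shape, angles.shape)  # torch.Size([1, 16]) torch.Size([1, 3])
```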
#### **Audio-based device voice activity detection (DVAD)**

In addition to visual directedness, Conversation Mode leverages audio cues to determine when speech is directed at the device. To process the audio signal, we use a type of model known as a separable convolutional neural network (CNN). A standard CNN model works by sliding fixed-size filters across the input, looking for telltale patterns wherever they occur. In a separable CNN, the matrices that encode the filters are decomposed into smaller matrices, which are multiplied together to approximate the source matrix, reducing the computational burden.
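As an illustration of the factorization (not the production architecture), a k×k filter bank can be replaced by a k×1 convolution followed by a 1×k convolution; the two small filter matrices multiply together to approximate the full filter, cutting parameters and multiply-adds. The channel counts and kernel size below are arbitrary example values.

```python
import torch
import torch.nn as nn

in_ch, out_ch, k = 32, 64, 5

# Standard convolution: out_ch filters of size in_ch x k x k.
standard = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

# Spatially separable approximation: a (k x 1) convolution followed by a
# (1 x k) convolution; their composition approximates the full k x k filter.
separable = nn.Sequential(
    nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1), padding=(k // 2, 0)),
    nn.Conv2d(out_ch, out_ch, kernel_size=(1, k), padding=(0, k // 2)),
)

def n_params(m):
    return sum(p.numel() for p in m.parameters())

x = torch.randn(1, in_ch, 40, 100)  # e.g., a 40-bin spectrogram snippet
print(standard(x).shape, separable(x).shape)  # identical output shapes
print(n_params(standard), n_params(separable))  # ~40% fewer parameters here
```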
We conducted experiments to fine-tune the architecture and optimize the filter size and the matrix decomposition to minimize latency.

![image.png](https://dev-media.amazoncloud.cn/c04656b01e9040d180cd99f38fc479ff_image.png)

The DVAD model architecture, using separable convolutional neural networks. The inputs to the model are snapshots of the audio frequency spectrum known as log frequency filterbank energies (LFBEs).
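For reference, LFBE-style features can be computed roughly as follows; this sketch uses librosa with mel-spaced filters as a stand-in for the exact filterbank, and the sample rate, frame, and hop sizes are typical values rather than the on-device configuration.

```python
import numpy as np
import librosa

def lfbe_features(wav_path, n_mels=64, frame_ms=25, hop_ms=10):
    """Compute log filterbank energies from an audio file.

    Mel-spaced filters stand in for the exact filterbank; frame and hop
    sizes are common defaults, not necessarily the on-device ones.
    """
    y, sr = librosa.load(wav_path, sr=16000)  # resample to 16 kHz
    n_fft = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels, power=2.0
    )
    return np.log(mel + 1e-6)  # shape: (n_mels, n_frames)

# Each column is one frame's snapshot of the frequency spectrum; a sliding
# window of such frames forms the input to the DVAD model.
```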
In experiments, the addition of the DVAD model reduced the FRR by 83% relative to a model that used visual data only. The DVAD model is especially effective in reducing false wakes triggered by ambient noise or by Alexa’s own responses when the customer is looking at the device but not speaking. Relative to the visual-only model, the addition of DVAD achieved an 80% reduction in false wakes due to ambient noise and a 42% reduction in false wakes triggered by Alexa’s own responses, all without increasing latency.

We are excited to launch Conversation Mode to our customers and look forward to their feedback. We are continuing to work on multiple improvements, such as “anaphoric barge-ins,” which would allow customers to interrupt a list of options with an exclamation like “That one!” We hope to delight our customers with updates to the feature, while breaking new scientific ground to enable them.

#### **Learn more**

Read more about Alexa’s new Conversation Mode on [About Amazon](https://www.aboutamazon.com/news/devices/conversation-mode-helps-interactions-with-alexa-feel-more-natural).

ABOUT THE AUTHOR

#### **The Alexa AI team**