Amazon Alexa’s new wake word research at Interspeech

{"value":"Every interaction with Alexa begins with the wake word: usually “Alexa”, but sometimes “Amazon”, “Echo”, or “Computer” — or, now, “++[Hey Samuel](https://www.amazon.com/Samuel-L-Jackson-celebrity-voice/dp/B089NGHR7K)++”. Only after positively identifying the wake word does an Alexa-enabled device send your request to the cloud for further processing.\n\nSix years after the announcement of the first Amazon Echo, the Alexa science team continues to innovate new approaches to wake word recognition, improving Alexa’s responsiveness and accuracy.\n\nAt this year’s Interspeech, for instance, Alexa researchers presented ++[five different papers](https://www.amazon.science/tag/interspeech)++ about new techniques for wake word recognition. One of these — “++[Building a robust word-level wakeword verification network](https://www.amazon.science/publications/building-a-robust-word-level-wakeword-verification-network)++” — describes models that run in the cloud to confirm on-device wake word detections.\n\n![image.png](https://dev-media.amazoncloud.cn/2321c6be0bee4c1abc186ac094ee1a30_image.png)\n\nBecause audio signals can be represented as two-dimensional mappings of frequency (y-axis) against time (x-axis), convolutional neural networks apply naturally to them.\n\nFROM \"ACCURATE DETECTION OF WAKE WORD START AND END USING A CNN\"\n\nAnother paper, “++[Metadata-aware end-to-end keyword spotting](https://www.amazon.science/publications/metadata-aware-end-to-end-keyword-spotting)++”, describes a new system that uses metadata about the state of the Alexa-enabled device — such as the type of device and whether it’s playing music or sounding an alarm — to improve the accuracy of the on-device wake word detector.\n\nThe wake word detectors reported in both papers rely, at least in part, on convolutional neural networks. Originally developed for image processing, convolutional neural nets, or CNNs, repeatedly apply the same “filter” to small chunks of input data. For object recognition, for instance, a CNN might step through an image file in eight-by-eight blocks of pixels, inspecting each block for patterns associated with particular objects. \n\nSince audio signals can be represented as two-dimensional mappings of frequency against time, CNNs apply naturally to them as well. Each of the filters applied to a CNN’s inputs defines a ++[channel](https://www.amazon.science/blog/sizing-neural-networks-to-the-available-hardware)++ through the first layer of the CNN, and usually, the number of channels increases with every layer.\n\n\n#### **Varying norms**\n\n\n“++[Metadata-aware end-to-end keyword spotting](https://www.amazon.science/publications/metadata-aware-end-to-end-keyword-spotting)++” is motivated by the observation that if a device is emitting sound — music, synthesized speech, or an alarm sound — it causes a marked shift in the input signal’s log filter bank energies, or LFBEs. 
#### **Varying norms**

“++[Metadata-aware end-to-end keyword spotting](https://www.amazon.science/publications/metadata-aware-end-to-end-keyword-spotting)++” is motivated by the observation that if a device is emitting sound — music, synthesized speech, or an alarm sound — it causes a marked shift in the input signal’s log filter bank energies, or LFBEs. The log filter banks are a set of differently sized frequency bands chosen to emphasize the frequencies at which human hearing is most acute.

![image.png](https://dev-media.amazoncloud.cn/7bdeb3b91dd541a9938aaf8eb4c52f27_image.png)

Average values of acoustic properties — log filter-bank energies — of wake word signals as measured on-device when the device is emitting sound (orange) and when it’s not (blue).

FROM “METADATA-AWARE END-TO-END KEYWORD SPOTTING”

To address this problem, applied scientists Hongyi Liu and Apurva Abhyankar and their colleagues include device metadata as an input to their wake word model. The model embeds the metadata, or represents it as points in a multidimensional space, such that location in the space conveys information useful to the model. The model uses the embeddings in two different ways.

One is as an additional input to the last few layers of the network, which decide whether the acoustic input signal includes the wake word. The final outputs of the convolutional layers are flattened, or strung together into a single long vector. The metadata embedding vector is fed into a fully connected layer — a layer all of whose processing nodes pass their outputs to all of the nodes of the next layer — and the output is concatenated to the flattened audio feature vector.

This fused vector passes to a final fully connected layer, which issues a judgment about whether the input signal contains the wake word or not.

The other use of the metadata embedding is to modulate the outputs of the convolutional layers while they’re processing the input signal. The filters that a CNN applies to inputs are learned during training, and they can vary greatly in size. Consequently, the magnitude of the values passing through the network’s various channels can vary as well.

With CNNs, it’s common practice to normalize the channels’ outputs between layers, so that they’re all on a similar scale, and no one channel swamps the others. But Liu, Abhyankar, and their colleagues train their model to vary the normalization parameters depending on the metadata vector, which improves the network’s ability to generalize to heterogeneous data sets.

The researchers believe that this model better captures the characteristics of the input audio signal when the Alexa-enabled device is emitting sound. In their paper, they report experiments showing that, on average, a model trained with metadata information achieves a 14.6% improvement in false-reject rate relative to a baseline CNN model.
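A rough sketch of how those two uses of a metadata embedding might look in code is below, again assuming PyTorch. The embedding of a device-type ID, the conditional-normalization formulation, and all layer sizes are illustrative assumptions, not the paper’s exact design.

```python
# Illustrative sketch: device metadata (1) modulates normalization between
# convolutional layers and (2) is concatenated with the flattened audio
# features before the final fully connected layer. Not the published model.
import torch
import torch.nn as nn

class MetadataConditionedNorm(nn.Module):
    """Normalize channel outputs, then scale and shift them with
    parameters predicted from the device-metadata embedding."""
    def __init__(self, channels: int, meta_dim: int):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)
        self.to_scale_shift = nn.Linear(meta_dim, 2 * channels)

    def forward(self, x, meta_emb):
        gamma, beta = self.to_scale_shift(meta_emb).chunk(2, dim=-1)
        # reshape to (batch, channels, 1, 1) so they broadcast over freq/time
        gamma = gamma[:, :, None, None]
        beta = beta[:, :, None, None]
        return self.norm(x) * (1 + gamma) + beta

class MetadataAwareKWS(nn.Module):
    def __init__(self, num_device_states: int = 10, meta_dim: int = 16):
        super().__init__()
        self.meta_embedding = nn.Embedding(num_device_states, meta_dim)
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.cond_norm = MetadataConditionedNorm(16, meta_dim)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d((4, 4))
        self.meta_fc = nn.Linear(meta_dim, 16)
        self.classifier = nn.Linear(32 * 4 * 4 + 16, 2)

    def forward(self, lfbe, device_state):
        m = self.meta_embedding(device_state)          # metadata embedding
        x = torch.relu(self.conv1(lfbe))
        x = self.cond_norm(x, m)                       # metadata-conditioned norm
        x = torch.relu(self.conv2(x))
        audio_vec = self.pool(x).flatten(start_dim=1)  # flattened audio features
        fused = torch.cat([audio_vec, self.meta_fc(m)], dim=-1)  # concatenation
        return self.classifier(fused)

logits = MetadataAwareKWS()(torch.randn(8, 1, 64, 100),
                            torch.randint(0, 10, (8,)))
```

In this sketch, the same embedding both rescales the normalized channel outputs (the “varying norms” idea) and is fused with the flattened audio feature vector before the final fully connected layer.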
#### **Paying attention**

The metadata-aware wake word detector runs on-device, but the next two papers describe models that run in the cloud. On-device models must have small memory footprints, which means that they sacrifice some processing power. If an on-device model thinks it has detected a wake word, it sends a short snippet of audio to the cloud for confirmation by a larger, more-powerful model.

The on-device model tries to identify the start of the wake word, but sometimes it misses slightly. To ensure that the cloud-based model receives the whole wake word, the snippet sent by the device includes the half-second of audio preceding the device’s estimate of the wake word’s start.

![image.png](https://dev-media.amazoncloud.cn/a227a51cbabf40aeb8b4d82f9ac9bc1a_image.png)

Wake word signals sent to the cloud for verification vary in the quality of their alignment. Sometimes, in trying to identify the start of the wake word, the device misses by a fraction of a second, which can cause difficulty for cloud models trained on well-aligned data.

FROM “BUILDING A ROBUST WORD-LEVEL WAKEWORD VERIFICATION NETWORK”

When CNNs are trained on well-aligned data, convolutional-layer outputs that focus on particular regions of the input can become biased toward finding wake word features in those regions. This can result in weaker performance when the alignment is noisy.

In “++[Building a robust word-level wakeword verification network](https://www.amazon.science/publications/building-a-robust-word-level-wakeword-verification-network)++”, applied scientist Rajath Kumar and his colleagues address this problem by adding recurrent layers to their network, to process the outputs of the convolutional layers. Recurrent layers can capture information as time sequences. Instead of learning where the wake word occurs in the input, the recurrent layers learn how the sequence changes over time when the wake word is present.

This allows the researchers to train their network on well-aligned data without suffering much performance drop-off on noisy data. To further improve performance, the researchers also use an attention layer to process and re-weight the sequential outputs of the recurrent layers, emphasizing the outputs most relevant to wake word verification. The model is thus a convolutional-recurrent-attention (CRA) model.

![image.png](https://dev-media.amazoncloud.cn/e10c4c47180741e280c46ab7c4af72d2_image.png)

These diagrams indicate the differences between a conventional CNN architecture (top) and a convolutional-recurrent-attention (CRA) architecture (bottom).

FROM “BUILDING A ROBUST WORD-LEVEL WAKEWORD VERIFICATION NETWORK”
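A minimal convolutional-recurrent-attention sketch, again in PyTorch, is shown below: convolutional layers extract local time-frequency features, a bidirectional GRU models how those features evolve across the frames of the snippet, and a small attention layer re-weights the recurrent outputs before the final decision. The GRU, the attention formulation, and all dimensions are assumptions for illustration, not the published model.

```python
# Illustrative CRA sketch: convolution -> recurrence -> attention pooling.
# All sizes are assumptions; only the overall structure follows the paper's idea.
import torch
import torch.nn as nn

class CRAVerifier(nn.Module):
    def __init__(self, num_bands: int = 64, hidden: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),              # pool over frequency only,
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),              # keeping the time axis intact
        )
        feat_dim = 32 * (num_bands // 4)
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # one attention score per frame
        self.classifier = nn.Linear(2 * hidden, 2)

    def forward(self, lfbe):
        # lfbe: (batch, 1, frequency_bands, time_frames), e.g. 195 frames
        x = self.conv(lfbe)                               # (B, 32, bands/4, T)
        b, c, f, t = x.shape
        seq = x.permute(0, 3, 1, 2).reshape(b, t, c * f)  # (B, T, feat_dim)
        out, _ = self.gru(seq)                            # (B, T, 2*hidden)
        weights = torch.softmax(self.attn(out), dim=1)    # (B, T, 1)
        pooled = (weights * out).sum(dim=1)               # attention pooling
        return self.classifier(pooled)

logits = CRAVerifier()(torch.randn(4, 1, 64, 195))
```

Note that the pooling layers in this sketch shrink only the frequency axis, so the recurrent layer still receives one feature vector per input frame.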
To evaluate their CRA model, the researchers compared its performance to that of several CNN-only models. Each example in the training data included 195 input frames, or sequential snapshots of the frequency spectrum. Within that 195-frame span, two of the CNN models looked at sliding windows of 76 frames or 100 frames. A third CNN model, and the CRA model, looked at all 195 frames. The models’ performance was assessed relative to a baseline wake word detector that combines a deep neural network with a hidden Markov model, an architecture that was the industry standard for some time.

On accurately aligned inputs, the CRA model offered only a slight improvement over the 195-frame CNN model. Compared to the baseline, the CNN model reduced the false-acceptance rate by 53%, while the CRA reduced it by 55%. On the same task, the 100-frame CNN model achieved only a 35% reduction.

Table showing the percentage decrease in FAR relative to the 2-stage DNN-HMM baseline.

On noisily aligned inputs, the CRA model offered a much more dramatic improvement. Relative to baseline, it reduced the false-acceptance rate by 60%. The 195-frame CNN model managed only 31%, and the 100-frame model 44%.

![image.png](https://dev-media.amazoncloud.cn/2c586f5de7014b40b507405efeaadf57_image.png)

ABOUT THE AUTHOR

#### **[Larry Hardesty](https://www.amazon.science/author/larry-hardesty)**

Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.