{"value":"Automatic speech recognition (++[ASR](https://www.amazon.science/tag/asr)++) is the conversion of acoustic speech to text, and with Alexa, the core ASR model for any given language is the same across customers.\n\nBut one of the ways the Alexa AI team improves ASR accuracy is by adapting models, on the fly, to customer context. For instance, Alexa can use acoustic properties of the speaker’s voice during the utterance of the wake word “Alexa” to filter out background voices when processing the customer’s request.\n\n![image.png](https://dev-media.amazoncloud.cn/0f77440ca4954cd687a84b5534e6a2bf_image.png)\n\nAlexa's automatic speech recognition uses customer and device context to improve performance.\n\nAlexa can also use the *device* context to improve performance. For instance, a device with a screen might display a list of possible responses to a query, and Alexa can bias the ASR model toward the list entries when processing subsequent instructions.\n\nRecently, Alexa also introduced a context embedding service, which uses a large neural network trained on a variety of tasks to produce a running sequence of vector representations — or embeddings — of the past several rounds of dialogue, both the customer’s utterances and Alexa’s responses.\n\nThe context embeddings are an on-tap resource for any Alexa machine learning model, and the service can be expanded to include other types of contextual information, such as device type, customers’ skill and content preferences, and the like.\n\n\n#### **Theory into practice**\n\n\nAt Amazon Science, we report regularly on the machine learning models — including those that ++[use context](asru-integrating-speech-recognition-and-language-understanding)++ — that enable improvements to Alexa’s ++[speech recognizer](https://www.amazon.science/tag/asr)++. But rarely do we discuss the engineering effort required to bring those models into production.\n\n[![image.png](https://dev-media.amazoncloud.cn/96ee792e3f2e4c35bb21c53d90ce3a42_image.png)](https://www.amazon.science/blog/on-device-speech-processing-makes-alexa-faster-lower-bandwidth)\n\n**Related content**\n\n[On-device speech processing makes Alexa faster, lower-bandwidth](https://www.amazon.science/blog/on-device-speech-processing-makes-alexa-faster-lower-bandwidth)\n\nTo get a sense for the scale of that effort, consider just one of Alexa’s deployed context-aware ASR models, which uses conversational context to improve accuracy when Alexa asks follow-up questions to confirm its understanding of commands. For instance:\n\nCustomer: “Alexa, call Meg.”\nAlexa: “Do you mean Meg Jones or Meg Bauer?”\nCustomer: “Bauer.”\n\nWhen Alexa hears “Bauer” in the second dialogue turn, it favors the interpretation “Bauer” over the more common “power” based on the context of the previous turn. On its initial deployment, conversational-context awareness reduced the ASR error rate during such interactions by almost 26%.\n\nThe underlying machine learning model factors in the current customer utterance, the text of the previous dialogue turn (both the customer’s utterance and Alexa’s response), and relevant context information from the Alexa services invoked by the utterance. 
But once the model has been trained, the engineers’ work is just beginning.


#### **Problems of scale**


The first engineering problem is that there’s no way to know in advance which interactions with Alexa will require follow-up questions and responses. Embedding context information is a computationally intensive process. It would be a waste of resources to subject all customer utterances to that process when only a fraction of them might lead to multiturn interactions.

Instead, Alexa temporarily stores relevant context information on a server; utterances are time stamped and are automatically deleted after a fixed span of time. Only utterances that elicit follow-up questions from Alexa pass to the context embedding model.

[![image.png](https://dev-media.amazoncloud.cn/7c7bdba6fd474f4d8cc8640bbb7d7c9e_image.png)](https://www.amazon.science/blog/reducing-unnecessary-clarification-questions-from-voice-agents)

**Related content**

[Reducing unnecessary clarification questions from voice agents](https://www.amazon.science/blog/reducing-unnecessary-clarification-questions-from-voice-agents)

For storage, the Alexa engineers are currently using AWS’s [DynamoDB](https://aws.amazon.com/dynamodb/) service. Like all of AWS’s storage options, DynamoDB encrypts the data it stores, so updating an entry in a DynamoDB table requires decrypting it first.

The engineering team wanted to track multiple dialogue events using only a single table entry; that way, it would be possible to decide whether or when to begin computing a contextual embedding with a single read operation.

If the contextual data were stored in the same entry, however, it would have to be decrypted and re-encrypted with every update about the interaction. Repeated for every customer utterance and Alexa reply every day, that would begin to add up, hogging system resources and causing delays.

![image.png](https://dev-media.amazoncloud.cn/1dc50aa557ed492dbf9cebd860786179_image.png)

Senior software development engineer Kyle Goehner.

Instead, the Alexa engineers use a two-table system to store contextual information. One table records the system-level events associated with a particular Alexa interaction, such as the instruction to transcribe the customer’s utterance and the instruction to synthesize Alexa’s reply. Each of these events is represented by a single short text string, in a single table entry.

The entry also contains references to a second table, which stores the encrypted texts of the customer utterance, Alexa’s reply, and any other contextual data. Each of those data items has its own entry, so once it’s written, it doesn’t need to be decrypted until Alexa has decided to create a context vector for the associated transaction.

“We have tried to keep the database design simple and flexible,” says Kyle Goehner, who led the engineering effort behind the follow-up contextual feature. “Even at the scale of Alexa, science is constantly evolving and our systems need to be easy to understand and adapt.”
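As a concrete, if simplified, picture of that two-table design, the sketch below uses the DynamoDB API (via boto3) to append one short event record to a hypothetical events table and park the already-encrypted payload in a second table, with a time-to-live attribute standing in for the automatic deletion described above. The table names, attribute names, event labels, and TTL setup are assumptions for illustration, not the production schema:

```python
import time
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
events_table = dynamodb.Table("InteractionEvents")   # hypothetical table names,
payloads_table = dynamodb.Table("ContextPayloads")   # not the production schema


def record_event(interaction_id, event_type, encrypted_payload, ttl_seconds=600):
    """Append one short event record and store its encrypted payload in a
    separate table, so the payload is never re-encrypted on later updates."""
    payload_id = str(uuid.uuid4())

    # The bulky, encrypted context data gets its own item.
    payloads_table.put_item(Item={
        "payload_id": payload_id,
        "ciphertext": encrypted_payload,
        # Assumes TTL is enabled on this attribute, so stale items expire.
        "expires_at": int(time.time()) + ttl_seconds,
    })

    # The single per-interaction entry only accumulates short event records
    # plus references to the payload items.
    events_table.update_item(
        Key={"interaction_id": interaction_id},
        UpdateExpression=(
            "SET events = list_append(if_not_exists(events, :empty), :event), "
            "expires_at = :ttl"
        ),
        ExpressionAttributeValues={
            ":event": [{"type": event_type, "payload_ref": payload_id}],
            ":empty": [],
            ":ttl": int(time.time()) + ttl_seconds,
        },
    )
```

With this layout, a single read of the events item is enough to see everything that has happened in the interaction so far, without touching, or decrypting, any of the stored payloads.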

#### **Computation window**


Delaying the creation of the context vector until it is actually needed poses a challenge, however, as it requires the execution of a complex computation in the middle of a customer’s interaction with Alexa. The engineers’ solution was to hide the computation time under Alexa’s reply to the customer’s request.

All Alexa interactions are initiated by a customer utterance, and almost all customer utterances elicit replies from Alexa. The event that triggers the creation of the context vector is the re-opening of the microphone to listen for the customer’s reply.

The texts of Alexa’s replies are available to the context model before Alexa actually speaks them, and the instruction to reopen the microphone follows immediately upon the instruction to begin the reply. This gives Alexa a narrow window of opportunity in which to produce the context vector.

![image.png](https://dev-media.amazoncloud.cn/1eeb2b52ab1544f5972ac2e801a1f3d1_image.png)

Because the instruction to re-open the microphone (expect-speech directive) follows immediately upon the instruction to begin executing Alexa’s reply (speak directive), the reply itself buys the context model enough time to produce a context vector.

If the context model fails to generate a context vector in the available time, the ASR model simply operates as it normally would, without contextual information. As Goehner puts it, the contextual-ASR model is a “best-effort” model. “We’re trying to introduce accuracy improvement without introducing possible points of failure,” he says.


#### **Consistent reads**


To ensure that contextual ASR can work in real time, the Alexa engineers also took advantage of some of DynamoDB’s special features.

Like all good database systems, DynamoDB uses redundancy to ensure data availability; any data written to a DynamoDB server is copied multiple times. Under heavy demand, however, there can be a delay in updating the copies when new data is written. Consequently, a read request that gets routed to one of the copies may sometimes retrieve data that’s out of date.

To guard against this, every time Alexa writes new information to the contextual-ASR data table, it simultaneously requests the updated version of the entry recording the status of the interaction, ensuring that it never gets stale information. If the entry includes a record of the all-important instruction to re-open the microphone, Alexa initiates the creation of the context vector; if it doesn’t, Alexa simply discards the data.
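The post does not name the exact DynamoDB feature involved, but one way to get an up-to-date view of the interaction in the same call that writes new data is `update_item` with `ReturnValues="ALL_NEW"`, which returns the item as it stands after the write. The sketch below reuses the hypothetical events table from the earlier sketch, and the `expect_speech` event label is likewise an assumption:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
events_table = dynamodb.Table("InteractionEvents")  # hypothetical, as above


def record_event_and_check(interaction_id, event):
    """Write a new event and, in the same call, read back the entry's
    post-write state to see whether the expect-speech directive has arrived."""
    response = events_table.update_item(
        Key={"interaction_id": interaction_id},
        UpdateExpression=(
            "SET events = list_append(if_not_exists(events, :empty), :event)"
        ),
        ExpressionAttributeValues={":event": [event], ":empty": []},
        ReturnValues="ALL_NEW",   # the entry as it looks after this write
    )
    events = response["Attributes"].get("events", [])
    # Only the microphone re-opening should trigger the context-vector
    # computation; on any other event the caller simply moves on.
    return any(e.get("type") == "expect_speech" for e in events)
```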
[![image.png](https://dev-media.amazoncloud.cn/83ccf60f02d54d30ab91e6d6810473c2_image.png)](https://www.amazon.science/latest-news/how-alexa-learned-arabic)

**Related content**

[How Alexa learned Arabic](https://www.amazon.science/latest-news/how-alexa-learned-arabic)

“This work is the culmination of very close collaboration between scientists and engineers to design contextual machine learning to operate at Alexa scale,” says Debprakash Patnaik, a software development manager who leads the engineering teams behind the new system.

“We launched this service for US English and saw promising improvements in speech recognition errors,” says Rumit Sehlot, a software development manager at Amazon. “We also made it very easy to experiment with other contextual signals offline to see whether the new context is relevant. One recent success story has been adding the context of local information — for example, when a customer asks about nearby coffee shops and later requests driving directions to one of them.”

“We recognize that after we’ve built and tested our models, the work of bringing those models to our customers has just begun,” adds Ivan Bulyko, an applied-science manager for Alexa Speech. “It takes sound design to make these services work at scale, and that’s something the Alexa engineering team reliably provides.”

ABOUT THE AUTHOR


#### **[Larry Hardesty](https://www.amazon.science/author/larry-hardesty)**


Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.