On-device speech processing makes Alexa faster, lower-bandwidth

{"value":"At Amazon, we always look to invent new technology for improving customer experience. One technology we have been working on at Alexa is on-device speech processing, which has multiple benefits: a reduction in latency, or the time it takes Alexa to respond to queries; lowered bandwidth consumption, which is important on portable devices; and increased availability in in-car units and other applications where Internet connectivity is intermittent. On-device processing also enables the fusion of the speech signal with other modalities, like vision, for features such as Alexa’s ++[natural turn-taking](https://www.amazon.science/blog/change-to-alexa-wake-word-process-adds-natural-turn-taking)++.\n\nIn the last year, we’ve continued to build upon Alexa’s on-device speech-processing capabilities. As a result of these inventions, we are launching a new setting that gives customers the option of having the audio of their Alexa voice requests processed locally, without being sent to the cloud.\n\nIn the cloud, storage space and computational capacity are effectively unconstrained. To ensure accuracy, our cloud models can be large and computationally demanding. Executing the same functions on-device means compressing our models into less than 1% as much space — with minimal loss in accuracy.\n\nMoreover, in the cloud, the separate components of Alexa’s speech-processing stack — automatic speech recognition (ASR), whisper detection, and speaker identification — run on separate server nodes with their own powerful processors. On-device, those functions have to share hardware not only with each other but with Alexa’s other core device functions, such as music playback.\n\nRe-creating Alexa’s speech-processing stack on-device was a massive undertaking. New methods for training small-footprint ASR models were part of the solution, but so were innovations in system design and hardware-software codesign. It was a joint effort across science and engineering teams over a span of years. Here’s a quick overview of how it works.\n\n#### **System architecture**\n\nOur on-device ASR model takes in an acoustic speech signal and outputs a set of hypotheses about what the speaker said, ranked according to probability. We represent those hypotheses as a lattice — a ++[graph](https://en.wikipedia.org/wiki/Graph_(discrete_mathematics))++ whose edges represent recognized words and the probability that a given word follows from the previous one.\n\n![image.png](https://dev-media.amazoncloud.cn/d6de927cba5a40aba90d7eb9304c34ab_image.png)\n\nAn example of a lattice representing ASR hypotheses.\n\nWith cloud-based ASR, encrypted audio streams to the cloud in small snippets called “frames”. With on-device ASR, only the lattice is sent to the cloud, where a large and powerful neural language model ++[reranks the hypotheses](https://www.amazon.science/publications/scalable-multi-corpora-neural-language-models-for-asr)++. The lattice can’t be sent until the customer has finished speaking, as words later in a sequence can dramatically change the overall probability of a hypothesis.\n\nThe model that determines when the customer has finished speaking is called an ++[end-pointer](https://www.amazon.science/blog/alexa-scientists-address-challenges-of-end-pointing)++. 
The model that determines when the customer has finished speaking is called an [end-pointer](https://www.amazon.science/blog/alexa-scientists-address-challenges-of-end-pointing). End-pointers offer a natural trade-off between accuracy and latency: an aggressive end-pointer will initiate speech processing earlier, but it might cut the speaker off prematurely, resulting in a poor customer experience.

On the device, we in fact run two end-pointers: one is a speculative end-pointer that we have tuned to be about 200 milliseconds faster than the final end-pointer, so we can initiate downstream processing — such as natural-language understanding (NLU) — ahead of the final end-pointed ASR result. In exchange for speed, however, we trade off a little accuracy.

The final end-pointer takes longer to make a decision but is more accurate. In cases in which the first end-pointer cuts speech off too early, the final end-pointer sends a revised lattice and the instruction to reset downstream processing. In the large majority of cases, however, the aggressive end-pointer is correct, which reduces user-perceived latency, since downstream tasks are initiated earlier.
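The sketch below shows one way the two end-pointers could be coordinated: downstream work starts as soon as the speculative end-pointer fires and is reset only if the final end-pointer disagrees. The `fires`, `lattice`, `start_downstream`, and `reset_downstream` names are hypothetical placeholders, not Alexa APIs.

```python
# Hypothetical coordination of a speculative and a final end-pointer.
# start_downstream / reset_downstream stand in for kicking off NLU and
# other processing; they are illustrative placeholders only.
def process_stream(frames, speculative_ep, final_ep,
                   start_downstream, reset_downstream):
    speculative_lattice = None
    for frame in frames:
        if speculative_lattice is None and speculative_ep.fires(frame):
            # Aggressive end-pointer: begin downstream work early.
            speculative_lattice = speculative_ep.lattice()
            start_downstream(speculative_lattice)
        if final_ep.fires(frame):
            final_lattice = final_ep.lattice()
            if speculative_lattice is not None and final_lattice != speculative_lattice:
                # The speculative end-pointer cut the speaker off too early:
                # send the revised lattice and restart downstream processing.
                reset_downstream(final_lattice)
            elif speculative_lattice is None:
                start_downstream(final_lattice)
            return final_lattice
    return None
```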
Another aspect of ASR that had to move on-device is context awareness. When computing the probabilities in a lattice, the ASR model should, for instance, give added weight to otherwise uncommon names that happen to be in the customer’s address book or the names the customer has assigned to household devices.

![image.png](https://dev-media.amazoncloud.cn/01b85d5a49794a82b5718d2022bd0ec2_image.png)

A diagram of the on-device ASR network, with a closeup of the biasing mechanism that allows the network to ingest dynamic content. (Based on figures in “[Context-aware Transformer transducer for speech recognition](https://www.amazon.science/publications/context-aware-transformer-transducer-for-speech-recognition)”)

Context awareness can’t wait for the cloud because the lattice, though it encodes multiple hypotheses, doesn’t come close to encoding all possible hypotheses. When constructing the lattice, the ASR system has to prune a lot of low-probability hypotheses. If context awareness isn’t built into the on-device model, names of contacts or linked skills might end up getting pruned.

![image.png](https://dev-media.amazoncloud.cn/2bc3e430b1de4772a26149ed5b00e5c5_image.png)

This attention map indicates that the trained network is attending to the correct entry in a list of Alexa-linked home appliances. (From “[Context-aware Transformer transducer for speech recognition](https://www.amazon.science/publications/context-aware-transformer-transducer-for-speech-recognition)”)

Initially, we use a so-called [shallow-fusion model](https://www.amazon.science/publications/personalization-strategies-for-end-to-end-speech-recognition-systems) to add context and personalize content on-device. When the system is building the lattice, it boosts the probabilities of contextually relevant words such as contact or appliance names.

The probability boosts are heuristic, however — they’re not learned jointly with the core ASR model. To achieve even better accuracy on personalized and long-tail content, we have developed a [multihead attention-based context-biasing mechanism](https://www.amazon.science/publications/context-aware-transformer-transducer-for-speech-recognition) that is jointly trained with the rest of the ASR subnetworks.
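As a rough illustration of the shallow-fusion idea, the sketch below adds a fixed log-probability bonus to hypotheses containing contextually relevant words, so that a rare contact name can survive pruning. The boost value, contact list, and scores are made up for the example and are not the learned biasing mechanism described above.

```python
import math

def rescore_with_shallow_fusion(hypotheses, context_words, boost=3.0):
    """Add a fixed log-probability bonus for each contextually relevant word
    in a hypothesis, so personalized names survive lattice pruning.
    The boost value is illustrative, not a production setting."""
    rescored = []
    for words, log_prob in hypotheses:
        bonus = sum(math.log(boost) for w in words if w in context_words)
        rescored.append((words, log_prob + bonus))
    # Keep the highest-scoring hypotheses, as a beam-search pruner would.
    return sorted(rescored, key=lambda h: h[1], reverse=True)

# Contact names from the customer's address book (example data).
contacts = {"taraneh"}
hyps = [
    (["call", "taraneh"], math.log(0.02)),    # rare name, low ASR score
    (["call", "to", "run"], math.log(0.05)),  # acoustically similar phrase
]
print(rescore_with_shallow_fusion(hyps, contacts)[0][0])  # ['call', 'taraneh']
```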
#### **Model training**

On-device ASR required us to build a new model from the ground up, an end-to-end recurrent neural network-transducer (RNN-T) model that directly maps the input speech signal to an output sequence of words. Using a single neural network results in a significantly reduced memory footprint. But we had to develop new techniques, both for inference and for training, to achieve the degree of accuracy and compression that would let this technology handle utterances on-device.

Previously on Amazon Science, we’ve discussed some of the techniques we used to increase the accuracy of small-footprint end-to-end ASR models. With teacher-student training, for instance, we teach a small, lean model to match the outputs of a more-powerful but slower model. We developed a training methodology that made it possible to do teacher-student training efficiently with [a million hours of unannotated speech](https://www.amazon.science/blog/new-speech-recognition-experiments-demonstrate-how-machine-learning-can-scale).
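For readers unfamiliar with teacher-student training, here is a minimal sketch of the general idea: the small student model is trained to match the softened output distribution of the large teacher model, using a standard knowledge-distillation (KL-divergence) loss. This is an assumption-laden illustration of the technique in general, not the exact objective used for Alexa’s RNN-T training.

```python
import numpy as np

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions,
    the standard knowledge-distillation loss (illustrative only)."""
    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z / temperature)
        return e / e.sum(axis=-1, keepdims=True)

    p_teacher = softmax(teacher_logits)
    p_student = softmax(student_logits)
    # KL(teacher || student), averaged over frames in the batch.
    return float(np.mean(np.sum(
        p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)))

# One batch of per-frame output scores over a toy 4-symbol vocabulary.
teacher = np.array([[4.0, 1.0, 0.5, 0.2], [0.1, 3.5, 0.3, 0.2]])
student = np.array([[2.0, 1.5, 0.5, 0.3], [0.2, 2.0, 0.4, 0.1]])
print(distillation_loss(student, teacher))  # value the student minimizes
```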
![image.png](https://dev-media.amazoncloud.cn/a5e23415ccfd4b4580dbd046c00d3418_image.png)

During the training of a context-aware ASR model, a long-short-term-memory (LSTM) encoder encodes both unlabeled and labeled segments of the audio stream, so the model can use the entire input audio to improve ASR accuracy. (From “[Improving RNN-T ASR accuracy using context audio](https://www.amazon.science/publications/improving-rnn-t-asr-accuracy-using-context-audio)”)

To further boost the accuracy of on-device RNN-T ASR, we developed techniques that allow the [neural network to learn and exploit audio context](https://www.amazon.science/publications/improving-rnn-t-asr-accuracy-using-context-audio) within a stream. For example, for a stream comprising two utterances, “Alexa” and “Play a song”, the audio context from the keyword segment (“Alexa”) helps the model focus on the foreground speech and speaker. Separately, we implemented a novel [discriminative-loss and training algorithm](https://www.amazon.science/publications/efficient-minimum-word-error-rate-training-of-rnn-transducer-for-end-to-end-speech-recognition) that aims at directly minimizing the word error rate (WER) of RNN-T ASR.

On top of these innovations, however, we still had to develop some new compression techniques to get the RNN-T to run efficiently on-device. A neural network consists of simple processing nodes, each of which is connected to several others. The connections between nodes have associated weights, which determine how much one node’s output contributes to the computation performed by the next node.

One way to shrink a neural network’s memory footprint is to quantize its weights — to divide the total range of weights into a small set of intervals and use a single value to represent all the weights in each interval. So, for instance, the weights 0.70, 0.76, and 0.79 might all get quantized to the single value 0.75. Specifying an interval requires fewer bits than specifying several different floating-point values.

If quantization is done after a network has been trained, performance can suffer. We developed [a method of *quantization-aware* training](https://www.amazon.science/publications/quantization-aware-training-with-absolute-cosine-regularization-for-automatic-speech-recognition) that imposes a probability distribution on the network weights during training, so that they can be easily quantized with little effect on performance. Unlike previous quantization-aware training methods, which mostly take quantization into account in the forward pass, ours accounts for quantization in the backward direction, during weight updates, through network loss regularization. And it does that efficiently.
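To make the interval idea concrete, here is a small sketch of plain uniform weight quantization into low-bit integer codes, followed by reconstruction. The bit width and weight values are illustrative; this simple post-hoc scheme is what quantization-aware training is designed to improve on.

```python
import numpy as np

def quantize_uniform(weights, num_bits=8):
    """Map float weights onto 2**num_bits evenly spaced levels.
    Returns integer codes plus the (scale, offset) needed to decode.
    Illustrative uniform quantization, not Alexa's exact scheme."""
    lo, hi = float(weights.min()), float(weights.max())
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((weights - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize(codes, scale, lo):
    return codes.astype(np.float32) * scale + lo

w = np.array([0.70, 0.76, 0.79, -0.31, 0.02], dtype=np.float32)
codes, scale, lo = quantize_uniform(w, num_bits=4)  # only 16 levels
print(codes)                          # small integer codes, cheap to store
print(dequantize(codes, scale, lo))   # close to the original weights
```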
A way to make neural networks run more efficiently — also a vital concern on resource-constrained devices — is to reduce low weights to zero. Computations involving zero weights can be discarded, reducing the computational burden.

![image.png](https://dev-media.amazoncloud.cn/1bfd1842e21f4bfa9b4f4c932f16f690_image.png)

Over successive training epochs, sparsification gradually drops low weights in a weight matrix.

But again, doing that reduction after the network is trained can compromise performance. We developed a [*sparsification* method](https://www.amazon.science/publications/sparsification-via-compressed-sensing-for-automatic-speech-recognition) that enables the gradual reduction of low-value weights during training, so the network learns a model amenable to weight pruning.

Neural networks are typically trained on multiple passes through the same set of training data, or epochs. During each epoch, we force the network weights to diverge more and more, so that at the end of the final epoch, a fixed number of weights — say, half — are effectively zero. They can be safely discarded.
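The following sketch shows the general shape of gradual pruning across epochs: at each epoch a growing fraction of the smallest-magnitude weights is zeroed, so that half are gone by the final epoch. It illustrates the idea only; the compressed-sensing method in the paper linked above is more sophisticated than simple magnitude pruning.

```python
import numpy as np

def prune_smallest(weights, fraction):
    """Zero out the `fraction` of weights with the smallest magnitudes."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))          # a toy weight matrix
num_epochs, final_sparsity = 5, 0.5  # end with half the weights at zero

for epoch in range(1, num_epochs + 1):
    # ... one epoch of normal training would update W here ...
    target = final_sparsity * epoch / num_epochs  # ramp up the sparsity
    W = prune_smallest(W, target)
    print(f"epoch {epoch}: sparsity = {np.mean(W == 0):.2f}")
```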
![下载 2.gif](https://dev-media.amazoncloud.cn/a4d6107f4eb24dd4b49cf2257355bbba_%E4%B8%8B%E8%BD%BD%20%282%29.gif)

A demonstration of the branching encoder network.

To improve on-device efficiency, we also developed a branching encoder network that uses two different neural networks to convert speech inputs into numeric representations suitable for speech classification. One network is complex, one simple, and the ASR model decides on the fly whether it can get away with passing an input frame to the simple model, saving computational cost and time. We described this work in more detail in an earlier [Amazon Science blog post](https://www.amazon.science/blog/how-to-make-on-device-speech-recognition-practical).
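A conceptual sketch of that branching behavior is shown below: a cheap per-frame gate decides whether the simple encoder’s output will do or whether the complex encoder is needed. The gating rule, encoders, and threshold are invented stand-ins for illustration, not the trained arbitrator described in the linked post.

```python
import numpy as np

def branching_encode(frames, simple_encoder, complex_encoder,
                     confidence, threshold=0.6):
    """Encode each frame with the cheap encoder when a confidence score says
    that is good enough; otherwise fall back to the expensive encoder.
    The confidence function and threshold are illustrative placeholders."""
    encodings, cheap_frames = [], 0
    for frame in frames:
        if confidence(frame) >= threshold:
            encodings.append(simple_encoder(frame))
            cheap_frames += 1
        else:
            encodings.append(complex_encoder(frame))
    print(f"{cheap_frames}/{len(frames)} frames handled by the simple encoder")
    return np.stack(encodings)

# Toy stand-ins: a small linear projection vs. a deeper transformation.
rng = np.random.default_rng(1)
W_small = rng.normal(size=(40, 16))
W_big1, W_big2 = rng.normal(size=(40, 64)), rng.normal(size=(64, 16))

frames = rng.normal(size=(100, 40))  # 100 frames of 40-dim features
branching_encode(
    frames,
    simple_encoder=lambda f: f @ W_small,
    complex_encoder=lambda f: np.tanh(f @ W_big1) @ W_big2,
    confidence=lambda f: 1.0 / (1.0 + np.linalg.norm(f) / 10.0),  # toy gate
)
```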
#### **Hardware-software codesign**

Quantization and sparsification make no difference to performance if the underlying hardware can’t take advantage of them. Another key to getting ASR to run on-device was the design of Amazon’s [AZ family of neural edge processors](https://press.aboutamazon.com/news-releases/news-release-details/introducing-all-new-echo-family-reimagined-inside-and-out), which are optimized for our specific approach to compression.

For one thing, where a typical processor might represent data using 16 or 32 bits, for certain core operations, the AZ processors accelerate computation by using an 8-bit or even lower-bit representation, because that’s all we need to handle quantized values.

The weights of a neural network are typically represented using a matrix — a big grid of numbers. A matrix half of whose values are zeroes takes up as much space as a matrix that’s all nonzero.

On computer chips, transferring data tends to be much more time consuming than executing computations. So when we load our matrix into memory, we use a compression scheme that takes advantage of low-bit quantization and zero values. The circuitry for decoding the compressed representation is built into the chip.

In the neural processor’s memory, the matrix is reconstituted: the zeroes are filled back in. But the processor’s circuitry is designed to recognize zero values and discard computations involving them. So the time savings from sparsification are realized in the hardware itself.
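To see why storing only nonzero, low-bit values helps, here is a software sketch of packing a sparse, quantized weight matrix into a compact form and skipping multiplications by zero during a matrix-vector product. On the AZ processors the decoding and zero-skipping happen in dedicated circuitry; the Python below only mirrors the idea.

```python
import numpy as np

def compress(matrix):
    """Store only the nonzero 8-bit codes and their positions.
    A rough software analogue of a hardware weight-compression scheme."""
    rows, cols = np.nonzero(matrix)
    return {
        "shape": matrix.shape,
        "rows": rows.astype(np.int32),
        "cols": cols.astype(np.int32),
        "codes": matrix[rows, cols].astype(np.int8),  # quantized weights
    }

def matvec_skipping_zeros(compressed, x, scale):
    """Multiply by the compressed matrix, touching only nonzero weights."""
    y = np.zeros(compressed["shape"][0], dtype=np.float32)
    for r, c, code in zip(compressed["rows"], compressed["cols"], compressed["codes"]):
        y[r] += (code * scale) * x[c]  # zero weights were never stored
    return y

# A toy quantized weight matrix in which half the entries are zero.
W_codes = np.array([[0, 12, 0, -7],
                    [3, 0, 0, 9],
                    [0, 0, 5, 0]], dtype=np.int8)
scale = 0.01                      # dequantization step size
x = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)

packed = compress(W_codes)
print(matvec_skipping_zeros(packed, x, scale))   # sparse result
print((W_codes.astype(np.float32) * scale) @ x)  # dense check, same values
```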
Moving speech recognition on device entails a number of innovations in other areas, such as [reduction in the bandwidth required](https://www.amazon.science/blog/how-to-make-on-device-speech-recognition-practical) for model updates and [compression of NLU models](https://www.amazon.science/blog/new-method-for-compressing-neural-networks-better-preserves-accuracy), to ensure basic functionality on devices with intermittent Internet connectivity. And we’re also hard at work on [multilingual on-device ASR models](https://www.amazon.science/publications/joint-asr-and-language-identification-using-rnn-t-an-efficent-approach-to-dynamic-language-switching) for dynamic language switching, or automatically recognizing which of two languages a customer is speaking and responding in kind.

The launch of on-device speech processing is a huge step in bringing the benefits of “processing on the edge” to our customers, and we will continue to invent on their behalf in this area.

ABOUT THE AUTHORS

#### **[Ariya Rastrow](https://www.amazon.science/author/ariya-rastrow)**

Ariya Rastrow is a senior principal scientist in the Alexa AI organization.

#### **Shehzad Mevawalla**

Shehzad Mevawalla is the director of ASR in the Alexa Speech organization.