How to make on-device speech recognition practical

{"value":"Historically, Alexa’s automatic-speech-recognition models, which convert speech to text, have run in the cloud. But in recent years, we’ve been working to move more of Alexa’s computational capacity to the edge of the network — to Alexa-enabled devices themselves.\n\nThe move to the edge promises faster response times, since data doesn’t have to travel to and from the cloud; lower consumption of Internet bandwidth, which is important in some applications; and availability on devices with inconsistent Internet connections, such as Alexa-enabled in-car sound systems.\n\nAt this year’s Interspeech, we and our colleagues presented two papers describing some of the innovations we’re introducing to make it practical to run Alexa at the edge.\n\nIn one paper, “++[Amortized neural networks for low-latency speech recognition](https://www.amazon.science/publications/amortized-neural-networks-for-low-latency-speech-recognition)++”, we show how to reduce the computational cost of neural-network-based automatic speech recognition (ASR) by 45% with no loss in accuracy. Our method also has lower latencies than similar methods for reducing computation, meaning that it enables Alexa to respond more quickly to customer requests.\n\nIn the other paper, “++[Learning a neural diff for speech models](https://www.amazon.science/publications/learning-a-neural-diff-for-speech-models)++”, we show how to dramatically reduce the bandwidth required to update neural models on the edge. Instead of transmitting a complete model, we transmit a set of updates for some select parameters. In our experiments, this reduced the size of the update by as much as 98% with negligible effect on model accuracy.\n\n#### **Amortized neural networks**\n\nNeural ASR models are usually encoder-decoder models. The input to the encoder is a sequence of short speech snippets called frames, which the encoder converts into a representation that’s useful for decoding. The decoder translates that representation into text.\n\nNeural encoders can be massive, requiring millions of computations for each input. But much of a speech signal is uninformative, consisting of pauses between syllables or redundant sounds. Passing uninformative frames through a huge encoder is just wasted computation.\n\nOur approach is to use multiple encoders, of differing complexity, and decide on the fly which should handle a given frame of speech. That decision is made by a small neural network called an arbitrator, which must process every input frame before it’s encoded. The arbitrator adds some computational overhead to the process, but the time savings from using a leaner encoder is more than enough to offset it.\n\nResearchers have tried similar approaches in domains other than speech, but when they trained their models, they minimized the average complexity of the frame-encoding process. That leaves open the possibility that the last few frames of the signal may pass to the more complex encoder, causing delays (increasing latency).\n\n![image.png](https://dev-media.amazoncloud.cn/435e9b4255604f37b3887417a3f5656b_image.png)\n\nBoth processing flows above (a and b) distribute the same number of frames to the fast and slow (F and S) encoders, respectively, resulting in the same average computational cost. But the top flow incurs a significantly higher latency.\n\nIn our paper, we propose a new loss function that adds a penalty (Lamr in the figure above) for routing frames to the fast encoder when we don’t have a significant audio backlog. 
Neural encoders can be massive, requiring millions of computations for each input. But much of a speech signal is uninformative, consisting of pauses between syllables or redundant sounds. Passing uninformative frames through a huge encoder is just wasted computation.

Our approach is to use multiple encoders of differing complexity and to decide on the fly which one should handle a given frame of speech. That decision is made by a small neural network called an arbitrator, which must process every input frame before it’s encoded. The arbitrator adds some computational overhead to the process, but the time savings from using a leaner encoder are more than enough to offset it.
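To make the routing idea concrete, here is a minimal sketch of a branched encoder with an arbitrator, written in plain NumPy. The layer shapes, the single-layer stand-in "encoders", and the 0.5 routing threshold are illustrative assumptions, not the architecture described in the paper.

```python
import numpy as np

class BranchedEncoder:
    """Sketch: per-frame routing between a lean (fast) and a complex (slow) encoder."""

    def __init__(self, frame_dim=80, fast_dim=128, slow_dim=512, seed=0):
        rng = np.random.default_rng(seed)
        # Tiny arbitrator: one linear layer producing a routing score per frame.
        self.w_arb = rng.normal(scale=0.01, size=(frame_dim, 1))
        # Stand-ins for the lean and complex encoders (single layers for brevity).
        self.w_fast = rng.normal(scale=0.01, size=(frame_dim, fast_dim))
        self.w_slow = rng.normal(scale=0.01, size=(frame_dim, slow_dim))
        # Project fast-encoder outputs into the slow encoder's output space.
        self.p_fast = rng.normal(scale=0.01, size=(fast_dim, slow_dim))

    def encode(self, frames):
        """frames: (T, frame_dim) array of features. Returns (T, slow_dim) encodings."""
        outputs = []
        for frame in frames:
            # Arbitrator scores the frame; sigmoid gives P(route to slow encoder).
            p_slow = 1.0 / (1.0 + np.exp(-(frame @ self.w_arb)[0]))
            if p_slow > 0.5:   # informative frame: spend the big encoder on it
                outputs.append(np.tanh(frame @ self.w_slow))
            else:              # uninformative frame: the cheap encoder is good enough
                outputs.append(np.tanh(frame @ self.w_fast) @ self.p_fast)
        return np.stack(outputs)

# Usage: encode 100 frames of 80-dimensional audio features.
print(BranchedEncoder().encode(np.random.randn(100, 80)).shape)  # (100, 512)
```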
Researchers have tried similar approaches in domains other than speech, but when they trained their models, they minimized the average complexity of the frame-encoding process. That leaves open the possibility that the last few frames of the signal may pass to the more complex encoder, causing delays (increasing latency).

![image.png](https://dev-media.amazoncloud.cn/435e9b4255604f37b3887417a3f5656b_image.png)

Both processing flows above (a and b) distribute the same number of frames to the fast and slow (F and S) encoders, respectively, resulting in the same average computational cost. But the top flow incurs a significantly higher latency.

In our paper, we propose a new loss function that adds a penalty (L_amr in the figure above) for routing frames to the fast encoder when we don’t have a significant audio backlog. Without the penalty term, our branched-encoder model reduces latency to 29 to 234 milliseconds, versus thousands of milliseconds for models with a single encoder. But adding the penalty term cuts latency even further, to the 2-to-9-millisecond range.

![下载.gif](https://dev-media.amazoncloud.cn/78440f490f4d4dc6a2d5c198378221e5_%E4%B8%8B%E8%BD%BD.gif)

The audio backlog is one of the factors that the arbitrator considers when deciding which encoder should receive a given frame of audio.
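The sketch below shows one way such a penalty could be folded into the training objective: penalize the probability of choosing the fast encoder on frames that arrive when the backlog is small. The weight `lambda_amr`, the backlog threshold, and the backlog measure itself are assumptions for illustration; the paper defines the actual penalty.

```python
import numpy as np

def routing_loss(asr_loss, fast_probs, backlog, lambda_amr=0.1, backlog_threshold=5):
    """Combine the ASR training loss with a latency-aware routing penalty.

    asr_loss:   scalar transcription loss (e.g., cross-entropy) for the utterance.
    fast_probs: (T,) probability of sending each frame to the fast encoder.
    backlog:    (T,) number of frames queued and waiting when each frame arrives.
    The penalty discourages using the fast encoder while the backlog is small,
    saving cheap passes for moments when queued audio needs to be cleared.
    """
    no_backlog = (backlog < backlog_threshold).astype(float)
    latency_penalty = np.mean(no_backlog * fast_probs)
    return asr_loss + lambda_amr * latency_penalty

# Toy usage: 10 frames whose backlog grows over the course of the utterance.
probs = np.random.rand(10)
queue = np.arange(10)
print(routing_loss(asr_loss=2.3, fast_probs=probs, backlog=queue))
```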
In our experiments, we used two encoders, one complex and one lean, although in principle, our approach could generalize to larger numbers of encoders.

We train the arbitrator and both encoders together, end to end. During training, the same input passes through both encoders, and based on the accuracy of the resulting speech transcription, the arbitrator learns a probability distribution, which describes how often it should route frames with certain characteristics to the slow or fast encoder.

Over multiple epochs — multiple passes through the training data — we turn up the “temperature” on the arbitrator, skewing the distribution it learns more dramatically. In the first epoch, the split for a certain type of frame might be 70%-30% toward one encoder or the other. After three or four epochs, however, all of the splits are more like 99.99%-0.01% — essentially binary classifications.
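The sharpening described above can be illustrated with a temperature-scaled softmax over the two routes: as the split is skewed harder over epochs, a soft 60/40 preference becomes an essentially binary decision. The logits and the schedule below are made-up values, and in the standard softmax convention the sharpening corresponds to shrinking the temperature scalar.

```python
import numpy as np

def route_distribution(logits, temperature):
    """Softmax over the {slow, fast} routes; a smaller temperature gives a sharper split."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()          # for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = np.array([0.8, 0.4])       # arbitrator's raw preference for [slow, fast]
for epoch, temp in enumerate([1.0, 0.5, 0.1, 0.02], start=1):
    p_slow, p_fast = route_distribution(logits, temp)
    print(f"epoch {epoch}: slow={p_slow:.4f}, fast={p_fast:.4f}")
# The same logits drift from a soft 60%-40% split toward an essentially binary split.
```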
We used three baselines in our experiments, all of which were single-encoder models. One was the full-parameter model, and the other two were compressed versions of the same model: one compressed through sparsification (pruning of nonessential network weights), the other through matrix factorization (decomposing the model’s weight matrix into two smaller matrices that are multiplied together).

Against the baselines, we compared two versions of our model, compressed through the same two methods. We ran all the models on a single-threaded processor at 650 million FLOPs per second.

Our sparse model had the lowest latency — two milliseconds, compared to 3,410 to 6,154 milliseconds for the baselines — and our matrix factorization model required the fewest floating-point operations per frame — 23 million, versus 30 million to 43 million for the baselines. Our accuracy remained comparable, however — a word error rate of 8.6% to 8.7%, versus 8.5% to 8.7% for the baselines.

#### **Neural diffs**

The ASR models that power Alexa are constantly being updated. During the Olympics, for instance, we anticipated a large spike in requests that used words like “Ledecky” and “Kalisz” and updated our models accordingly.

With cloud-based ASR, when we’ve updated a model, we simply send copies of it to a handful of servers in a data center. But with edge ASR, we may ultimately need to send updates to millions of devices simultaneously. So one of our research goals is to minimize the bandwidth requirements for edge updates.

In our other Interspeech paper, we borrow an idea from software engineering — that of the diff, a file that charts the differences between the previous version of a codebase and the current one.

Our idea was that, if we could develop the equivalent of a diff for neural networks, we could use it to update on-device ASR models, rather than having to transmit all the parameters of a complete network with every update.

We experimented with two different approaches to creating a diff: matrix sparsification and hashing. With matrix sparsification, we begin with two matrices of the same size, one that represents the weights of the connections in the existing ASR model and one that’s all zeroes.

Then, when we retrain the ASR model on new data, we update not the parameters of the old model but the entries in the second matrix — the diff. The updated model is a linear combination of the original weights and the values in the diff.

![image.png](https://dev-media.amazoncloud.cn/d785def445e04f369a5d867e72731747_image.png)

Over successive training epochs, we prune the entries of matrices with too many non-zeroes, gradually sparsifying the diff.

When training the diff, we use an iterative procedure that prunes matrices with too many non-zero entries. As we did when training the arbitrator in the branched-encoder network, we turn up the temperature over successive epochs to make the diff sparser and sparser.
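Here is a minimal sketch of the sparsified-diff idea, assuming the updated weights are simply the original weights plus the diff and that pruning keeps only the largest-magnitude diff entries. The schedule and matrix sizes are illustrative; the paper’s training and pruning procedure differ in the details.

```python
import numpy as np

def prune_diff(diff, keep_fraction):
    """Zero out all but the largest-magnitude entries of the diff matrix."""
    k = max(1, int(diff.size * keep_fraction))
    threshold = np.sort(np.abs(diff), axis=None)[-k]
    return np.where(np.abs(diff) >= threshold, diff, 0.0)

def apply_diff(base_weights, diff):
    """Updated model = linear combination of the original weights and the diff."""
    return base_weights + diff

rng = np.random.default_rng(0)
base = rng.normal(size=(512, 512))                  # weights already on the device
diff = rng.normal(scale=0.01, size=(512, 512))      # stands in for a trained diff

# Gradually sparsify over "epochs": each pass keeps a smaller fraction of entries.
for keep in [0.5, 0.25, 0.10]:
    diff = prune_diff(diff, keep)

updated = apply_diff(base, diff)
print(f"entries to transmit: {np.count_nonzero(diff) / diff.size:.0%} of the matrix")
```

Only the surviving nonzero entries (their values plus indices) would need to be shipped to the device, which is where the bandwidth savings come from.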
Our other approach to creating diffs was to use a hash function, a function that maps a large number of mathematical objects to a much smaller number of storage locations, or “buckets”. Hash functions are designed to distribute objects evenly across buckets, regardless of the objects’ values.

With this approach, we hash the locations in the diff matrix to buckets, and then, during training, we update the values in the buckets, rather than the values in the matrices. Since each bucket corresponds to multiple locations in the diff matrix, this reduces the amount of data we need to transfer to update a model.

![image.png](https://dev-media.amazoncloud.cn/49209bd1a1bf420bae8f9c93462ab22c_image.png)

With hash diffing, a small number of weights (in the hash buckets at bottom) are used across a matrix with a larger number of entries.

CREDIT: GLYNIS CONDON
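The bucket-sharing idea can be sketched as follows. The hash function, the bucket count, and the way buckets are expanded back into a full-sized diff are assumptions made for illustration; during training, every location that maps to a bucket would contribute its gradient to that bucket’s single shared value.

```python
import numpy as np

def bucket_index(row, col, num_buckets):
    """Map a matrix location to a shared bucket (an illustrative hash, not the paper's)."""
    return (row * 2654435761 + col * 40503) % num_buckets

def expand_hash_diff(buckets, shape):
    """Reconstruct a full-sized diff matrix from the shared bucket values."""
    diff = np.empty(shape)
    for r in range(shape[0]):
        for c in range(shape[1]):
            diff[r, c] = buckets[bucket_index(r, c, len(buckets))]
    return diff

# Toy example: a 512x512 diff (262,144 entries) represented by 4,096 bucket values.
shape = (512, 512)
buckets = np.random.default_rng(0).normal(scale=0.01, size=4096)

base_weights = np.zeros(shape)           # stands in for the on-device weight matrix
updated = base_weights + expand_hash_diff(buckets, shape)
print(f"values to transmit: {buckets.size:,} instead of {updated.size:,}")
```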
One of the advantages of our approach, relative to other approaches to compression, such as matrix factorization, is that with each update, our diffs can target a different set of model weights. By contrast, traditional compression methods will typically lock you into modifying the same set of high-importance weights with each update.

![下载 1.gif](https://dev-media.amazoncloud.cn/3e56f07de349437b973f3f095c719470_%E4%B8%8B%E8%BD%BD%20%281%29.gif)

An advantage of our diffing approach is that we can target a different set of weights with each model update, which gives us more flexibility in adapting to a changing data landscape.

In our experiments, we investigated the effects of three to five consecutive model updates, using different diffs for each. Hash diffing sometimes worked better for the first few updates, but over repeated iterations, models updated through hash diffing diverged more from the full-parameter models. With sparsification diffing, the word error rate of a model updated five times in a row was less than 1% away from that of the full-parameter model, with diffs whose size was set at 10% of the full model’s.

ABOUT THE AUTHOR

#### **[Grant Strimel](https://www.amazon.science/author/grant-p-strimel)**

Grant Strimel is a senior applied scientist with Alexa AI.