RescoreBERT: Using BERT models to improve ASR rescoring

{"value":"When someone speaks to a voice agent like Alexa, an automatic speech recognition ([ASR](https://www.amazon.science/tag/asr)) model converts the speech to text. Typically, the core ASR model is trained on limited data, which means that it can struggle with rare words and phrases. So the ASR model’s hypotheses usually pass to a language model — a model that encodes the probabilities of sequences of words — trained on a much larger body of texts. The language model reranks the hypotheses, with the goal of improving ASR accuracy.\n\nIn [natural-language processing](https://www.amazon.science/tag/nlp) , one of the most widely used language models is BERT (bidirectional encoder representations from Transformers). To use BERT as a rescoring model, one typically masks each input token and computes its log-likelihood from the rest of the input, then sums those scores to produce a total score called PLL (pseudo log-likelihood). However, this computation is very expensive, which makes it impractical for real-time ASR. For rescoring, most ASR models use more efficient long-short-term-memory (LSTM) language models.\n\nAt this year’s International Conference on Acoustics, Speech, and Signal Processing ([ICASSP](https://www.amazon.science/conferences-and-events/icassp-2022)), we presented a paper in which we propose [a new model, namely RescoreBERT](https://www.amazon.science/publications/rescorebert-discriminative-speech-recognition-rescoring-with-bert), that leverages BERT’s power for second-pass rescoring.\n\nIn our experiments, RescoreBERT reduced an ASR model’s error rate by up to 13% relative to a traditional LSTM-based rescoring model. At the same time, thanks to a combination of knowledge distillation and discriminative training, it remains efficient enough for commercial deployment. In fact, we recently partnered with the Alexa team working on the [Alexa Teacher Model](https://www.amazon.science/blog/ambient-intelligence-will-accelerate-advancements-in-general-ai) — a large, pretrained, multilingual model with billions of parameters that encodes language as well as salient patterns of interactions with Alexa — and deployed RescoreBERT to production to delight Alexa customers.\n\n\n#### **Rescoring**\n\n\nTo get a sense for the value of rescoring, suppose that an ASR model outputs these hypotheses, from more to less likely: (a) “is fishing the opposite of fusion”, (b) “is fission the opposite of fusion”, and (c) “is fission the opposite of fashion”. Without second-pass rescoring, ASR would give an incorrect output: “is fishing the opposite of fusion”. If the second-pass language model does its job well, it should give priority to the hypothesis “is fission the opposite of fusion” and correctly rerank the hypotheses. A language model trained from scratch on a limited set of data will often struggle with rare words such as “fission”.\n\n![下载.jpg](https://dev-media.amazoncloud.cn/1301b85a21a944049125332d3f413d2a_%E4%B8%8B%E8%BD%BD.jpg)\n\nThe RescoreBERT model. Each ASR hypothesis is demarcated by a CLS (classification) token and then encoded by BERT. The encoding of the CLS token itself is a representation of the entire sentence, which passes to a feed-forward neural network. 
At this year’s International Conference on Acoustics, Speech, and Signal Processing ([ICASSP](https://www.amazon.science/conferences-and-events/icassp-2022)), we presented a paper proposing [a new model, RescoreBERT](https://www.amazon.science/publications/rescorebert-discriminative-speech-recognition-rescoring-with-bert), which leverages BERT’s power for second-pass rescoring.

In our experiments, RescoreBERT reduced an ASR model’s error rate by up to 13% relative to a traditional LSTM-based rescoring model. At the same time, thanks to a combination of knowledge distillation and discriminative training, it remains efficient enough for commercial deployment. In fact, we recently partnered with the Alexa team working on the [Alexa Teacher Model](https://www.amazon.science/blog/ambient-intelligence-will-accelerate-advancements-in-general-ai) — a large, pretrained, multilingual model with billions of parameters that encodes language as well as salient patterns of interactions with Alexa — and deployed RescoreBERT to production to delight Alexa customers.

#### **Rescoring**

To get a sense of the value of rescoring, suppose that an ASR model outputs these hypotheses, from most to least likely: (a) “is fishing the opposite of fusion”, (b) “is fission the opposite of fusion”, and (c) “is fission the opposite of fashion”. A language model trained from scratch on a limited set of data will often struggle with rare words such as “fission”, so without second-pass rescoring, ASR would give the incorrect output “is fishing the opposite of fusion”. If the second-pass language model does its job well, it should give priority to “is fission the opposite of fusion” and correctly rerank the hypotheses.

![The RescoreBERT model](https://dev-media.amazoncloud.cn/1301b85a21a944049125332d3f413d2a_%E4%B8%8B%E8%BD%BD.jpg)

The RescoreBERT model. Each ASR hypothesis is demarcated by a CLS (classification) token and then encoded by BERT. The encoding of the CLS token itself is a representation of the entire sentence, which passes to a feed-forward neural network. That network computes sentence-level second-pass scores, which are then interpolated with the first-pass scores for reranking.

#### **Distillation**

To reduce the computational expense of computing PLL scores, we adapt [previous work](https://www.amazon.science/publications/masked-language-model-scoring) from Amazon and pass the BERT model’s output through a neural network trained to mimic the PLL scores assigned by a larger, “teacher” model. We name this method MLM (masked language model) distillation, because the distilled model is trained to match the teacher’s predictions for masked inputs.

The score output by the distilled model is interpolated with the first-pass score to produce a final score. Distilling PLL scores from a large BERT model into a much smaller BERT model is what reduces latency.
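The sketch below shows one way the pieces described above could fit together: a small BERT encoder whose CLS embedding feeds a feed-forward head that predicts a sentence-level score, trained here by regressing onto the teacher’s PLL scores with a mean-squared-error loss. The class and function names, the choice of encoder, the MSE objective, and the optimizer settings are illustrative assumptions, not the exact production implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class RescoringHead(nn.Module):
    """A small BERT encoder plus a feed-forward head mapping the CLS embedding to one score."""
    def __init__(self, encoder_name: str = "bert-base-uncased"):  # illustrative student encoder
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.ffn = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, input_ids, attention_mask):
        # CLS embedding = first position of the last hidden layer; one scalar score per hypothesis.
        cls = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.ffn(cls).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
student = RescoringHead()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-5)

def distillation_step(hypotheses, teacher_pll_scores):
    """One MLM-distillation update: regress the student's sentence scores onto teacher PLLs."""
    batch = tokenizer(hypotheses, padding=True, return_tensors="pt")
    pred = student(batch["input_ids"], batch["attention_mask"])
    loss = nn.functional.mse_loss(pred, torch.tensor(teacher_pll_scores))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time, each hypothesis gets one such sentence-level score in a single forward pass, which is linearly interpolated with its first-pass score (for example, `final = first_pass + weight * second_pass`), and the hypothesis with the lowest interpolated score is returned.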
#### **Discriminative training**

Because the first- and second-pass scores are linearly interpolated, it’s not enough for the rescoring model to assign the correct hypothesis a better (in this case, lower) score; the interpolated score for the correct hypothesis also has to be the lowest among all hypotheses.

As a result, it would be beneficial to account for first-pass scores when training the second-pass rescoring model. However, MLM distillation aims only to reproduce the PLL scores and hence does not account for the first-pass scores. To account for them, we apply discriminative training after MLM distillation.

Specifically, we train RescoreBERT with the objective that, if one uses the linearly interpolated first- and second-pass scores to rerank the hypotheses, the reranking will minimize ASR errors. To capture this objective, previous research has used the MWER (minimum word error rate) loss function, which minimizes the expected number of word errors predicted from the ASR hypothesis scores.

We introduce a new loss function, MWED (matching word error distribution), which matches the distribution of the hypothesis scores to the distribution of word errors over the individual hypotheses. We show that MWED is a strong alternative to the standard MWER loss, improving performance in English, although it slightly degrades performance in Japanese.
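As a rough illustration of the discriminative objective, the sketch below implements an MWER-style loss over an n-best list, treating the interpolated scores as costs, together with a deliberately simplified reading of the MWED idea (cross-entropy between a softmax over scores and a softmax over word-error counts). The interpolation weight, the exact normalization, and the inputs `first_pass`, `second_pass`, and `word_errors` are assumptions for illustration; the paper’s formulation may differ in its details.

```python
import torch

def mwer_loss(first_pass, second_pass, word_errors, weight=1.0):
    """Minimum word error rate: expected word errors under the hypothesis posterior
    implied by the interpolated n-best scores (scores are costs, so lower is better)."""
    scores = first_pass + weight * second_pass      # interpolated cost per hypothesis
    posterior = torch.softmax(-scores, dim=-1)      # lower cost -> higher probability
    centered = word_errors - word_errors.mean()     # subtract the mean error count (variance reduction)
    return (posterior * centered).sum()

def mwed_loss(first_pass, second_pass, word_errors, weight=1.0):
    """Simplified matching-word-error-distribution loss: push the distribution of
    hypothesis scores toward the distribution of word errors via cross-entropy."""
    scores = first_pass + weight * second_pass
    log_score_dist = torch.log_softmax(scores, dim=-1)   # higher cost -> more mass
    error_dist = torch.softmax(word_errors, dim=-1)      # more errors -> more mass
    return -(error_dist * log_score_dist).sum()

# Toy n-best list: three hypotheses with costs from the two passes and their word-error counts.
first = torch.tensor([2.1, 2.3, 2.6])
second = torch.tensor([1.9, 1.4, 2.0], requires_grad=True)
errors = torch.tensor([1.0, 0.0, 2.0])
print(mwer_loss(first, second, errors), mwed_loss(first, second, errors))
```

Either loss is backpropagated only through the second-pass scores, so the rescoring model learns to produce scores that rerank well after interpolation with the fixed first-pass scores.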
Finally, to demonstrate the advantage of discriminative training, we show that while a BERT model trained with MLM distillation alone can reduce the word error rate (WER) by 3%-6% relative to an LSTM rescoring model, RescoreBERT, trained with a discriminative objective, reduces it by 7%-13% on the same test sets.

ABOUT THE AUTHOR

#### **[Yile Gu](https://www.amazon.science/author/yile-gu)**

Yile Gu is a senior applied scientist in the Alexa AI organization.