The range of Amazon's speech research is on display at Interspeech

Machine Learning
Natural Language Processing
{"value":"Katrin Kirchhoff is the director of speech processing for Amazon Web Services, and her organization has a trio of papers at this year’s Interspeech conference, which begins next week.\n\n![image.png](https://dev-media.amazoncloud.cn/e038c2a2b00a4919ac86bab0c9e65b65_image.png)\n\nKatrin Kirchhoff, director of speech processing for Amazon Web Services.\n\n“++[One paper](https://www.amazon.science/publications/speaker-conversation-factorial-designs-for-diarization-error-analysis)++ is on novel evaluation metrics for speaker diarization,” Kirchhoff says. “Speaker diarization is the task of determining who speaks when, and errors in that domain can be due to vocal characteristics of speakers, but they can also be due to conversational patterns. So, for instance, speaker diarization is harder when you have a lot of short turns from speakers, very frequent speaker changes, and usually our metrics don't really disentangle those different causes. So this is a new paper that proposes new ways of looking at this and proposes to measure the contributions in different ways.\n\n“++[Another paper](https://www.amazon.science/publications/best-of-both-worlds-robust-accented-speech-recognition-with-adversarial-transfer-learning)++ is on ++[adversarial learning](https://www.amazon.science/tag/adversarial-learning)++ for accented speech, and the ++[third](https://www.amazon.science/publications/adapting-long-context-nlm-for-asr-rescoring-in-conversational-agents)++ is on incorporating more contextual information into ++[ASR](https://www.amazon.science/tag/asr)++ [automatic speech recognition] for dialogue systems. So in the case where you have an ASR system as a front end for a dialogue system, it's really important to actually model things like dialogue state and the longer conversational history to improve ASR performance. That's the theme of the third paper.”\n\n#### **Speech at AWS**\n\nThe diversity of those papers’ topics is a good indicator of the breadth of speech research at Amazon Web Services (AWS).\n\n“My teams work on a wide range of science topics relevant to cloud-based spoken language processing, starting with robustness to different audio conditions like noise and reverberation, all the way to different machine learning techniques,” Kirchhoff says. “We look into unsupervised, ++[semi-supervised](https://www.amazon.science/tag/semi-supervised-learning)++, and ++[self-supervised](https://www.amazon.science/tag/self-supervised-learning)++ learning.”\n\n“That's actually a really broad trend these days, and also a trend that I see everywhere at Interspeech this year. Our machine learning models are very data-hungry, and labeled data is difficult to produce for speech. For a lot of tasks and a lot of languages, we simply don't have those kinds of data resources.\n\n#### **Amazon at Interspeech**\n\nRead more about Amazon's involvement at ++[Interspeech](https://www.amazon.science/conferences-and-events/interspeech-2021)++ — papers, organizing-committee membership, workshops and special sessions, and more.\n\n“So everybody's training self-supervised representations these days, which means that we use proxy tasks to make models learn something about the input signal without having explicit ground truth labels — by, say, predicting certain frequency bands from others, or by masking time slices and then trying to predict the content from the surrounding signal, or teaching the model which speech segments are from the same signal as opposed to different signals. 
“The question is, is there a single representation that's universally best for various downstream processing tasks? That is, can you use the same representation as a starting point for tasks like ASR, speaker recognition, and language identification? And then taking that one step further, can we actually use that, not only for speech, but for audio processing more generally? So at AWS, we're starting to look into that.
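One generic way to probe that question, sketched below with placeholder models rather than any AWS-internal evaluation, is to freeze a single pretrained encoder and attach lightweight task-specific heads, then compare how far the shared representation gets each downstream task.

```python
# Sketch: probe one frozen, shared speech representation with several
# lightweight downstream heads. The encoder is a random stand-in; in
# practice it would be a pretrained self-supervised model.
import torch
import torch.nn as nn

D_REPR = 256

class FrozenEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(input_size=80, hidden_size=D_REPR, batch_first=True)

    def forward(self, feats):
        with torch.no_grad():              # shared representation stays fixed
            out, _ = self.rnn(feats)
        return out                         # (batch, time, D_REPR)

encoder = FrozenEncoder()
heads = nn.ModuleDict({
    "asr_frames": nn.Linear(D_REPR, 500),    # per-frame subword targets (assumed)
    "speaker_id": nn.Linear(D_REPR, 1000),   # utterance-level speaker classes
    "language_id": nn.Linear(D_REPR, 25),    # utterance-level language classes
})

feats = torch.randn(4, 300, 80)              # toy batch of log-mel features
shared = encoder(feats)
frame_logits = heads["asr_frames"](shared)               # ASR-style, per frame
pooled = shared.mean(dim=1)                              # crude utterance pooling
speaker_logits = heads["speaker_id"](pooled)
language_logits = heads["language_id"](pooled)
print(frame_logits.shape, speaker_logits.shape, language_logits.shape)
```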
“Other areas of interest for us are fields like continual learning or few-shot learning, which means, again, ‘How can you learn models without a lot of labeled data?’ But rather than going the completely unsupervised way, we look at what you can do with just a very small number of samples from a given class or from a given task.

“ASR systems often need to process speech collected in vastly different scenarios and domains, which can include proper names or particular phrases, stylistic patterns, et cetera, that are rare overall but frequent in a particular application. You need to figure out how to prime your system to recognize them accurately, and how to do that with just a handful of observed samples.”

#### **Non-autoregressive processing**

Some of the research in Kirchhoff's organization involves real-time processing of short audio snippets, but several AWS products — such as [Amazon Transcribe](https://aws.amazon.com/transcribe/), [Amazon Transcribe Medical](https://aws.amazon.com/transcribe/medical/), and [Contact Lens](https://aws.amazon.com/connect/contact-lens/) — require transcription of longer audio files, such as movies, lectures, and dictations. In this context, the ASR model has the entire speech signal available to it before it begins transcribing.

This has fueled Kirchhoff's interest in the topic of non-autoregressive processing. In fact, together with colleagues at Yahoo and Carnegie Mellon University, Kirchhoff is co-organizing a special session at Interspeech titled [Non-Autoregressive Sequential Modeling for Speech Processing](https://sw005320.github.io/INTERSPEECH21_SS_NAR_SP/).

#### **Non-autoregressive processing means that all decoding steps are conducted in parallel. The question is, how do you get the same performance when you're not conditioning each step on all of the previous steps?**

Katrin Kirchhoff

“Traditionally, you have a decoder in an ASR system that combines different knowledge sources and then generates an output hypothesis in a step-by-step fashion, where each step is conditioned on the previous time step,” Kirchhoff explains. “You essentially run over the speech signal in one direction, left to right, and each processing step is conditioned on the previous one.

“Non-autoregressive processing means that all decoding steps are conducted in parallel. So all steps happen simultaneously, and each step can be conditioned on a context in both directions. This challenges the intuitive notion that speech is generated sequentially in time and that, therefore, decoding should work in the same way. But it also means that the decoding process can be very heavily parallelized, and it can be much more efficient and much faster than traditional decoding approaches. And since it's heavily parallelizable, it can also benefit much more from developments in deep-learning hardware.

“The question is, how do you get the same performance when you're not conditioning each step on all of the previous steps? Because there's clearly information flow that needs to happen across these different time steps. How do you still model that interaction?”
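The control-flow difference Kirchhoff describes is easy to see in a toy sketch. The models below are random stand-ins rather than real ASR components; the point is only that the autoregressive decoder conditions each step on its own previous outputs, while the non-autoregressive decoder scores every output position in one parallel, bidirectionally conditioned pass.

```python
# Toy contrast between autoregressive and non-autoregressive decoding.
# Both "models" are random stand-ins; only the control flow matters here.
import torch

VOCAB, T_OUT = 100, 20
enc = torch.randn(1, 50, 256)      # pretend encoder output for one utterance

def autoregressive_decode(step_model, max_len=T_OUT):
    # One token per step; step t sees only tokens 0..t-1 (left context).
    tokens = [0]                   # assumed start symbol
    for _ in range(max_len):
        logits = step_model(enc, torch.tensor([tokens]))
        tokens.append(int(logits[0, -1].argmax()))
    return tokens[1:]

def non_autoregressive_decode(parallel_model):
    # Every output position is scored in a single pass, and each position
    # may be conditioned on context in both directions.
    logits = parallel_model(enc)   # (1, T_OUT, VOCAB), produced all at once
    return logits.argmax(dim=-1)[0].tolist()

# Random stand-ins so the sketch runs end to end.
step_model = lambda audio, prefix: torch.randn(1, prefix.shape[1], VOCAB)
parallel_model = lambda audio: torch.randn(1, T_OUT, VOCAB)

print(autoregressive_decode(step_model))          # 20 sequential model calls
print(non_autoregressive_decode(parallel_model))  # 1 parallel model call
```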
Some of the papers at the special Interspeech session will address that question, but Kirchhoff's group provided one provisional answer to it in June, at the annual meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), in a paper titled “[Align-Refine: Non-Autoregressive Speech Recognition via Iterative Realignment](https://www.amazon.science/publications/align-refine-non-autoregressive-speech-recognition-via-iterative-realignment)”.

“That is applying non-autoregressive decoding to speech recognition,” Kirchhoff says. “We call our approach ‘align-refine’. We essentially iterate the process: each iteration takes the decoding hypothesis from the previous iteration and tries to improve and refine it, rather than doing it in a single step. Since all decoding steps happen in parallel for each iteration, there's still a vast gain in efficiency.”
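In pseudocode terms, that iterate-and-refine idea looks roughly like the loop below. This is a paraphrase of the general recipe with a random placeholder refiner, not the architecture or training procedure from the paper.

```python
# Toy sketch of iterative refinement in the spirit of align-refine: every
# pass re-predicts all output positions in parallel, conditioned on the
# previous hypothesis and the encoded audio. The refiner is a random
# placeholder, not the model from the paper.
import torch

VOCAB, T_OUT, N_ITERS = 100, 20, 3
enc = torch.randn(1, 50, 256)                       # pretend encoder output

def refine_decode(refiner, n_iters=N_ITERS):
    hyp = torch.zeros(1, T_OUT, dtype=torch.long)   # crude initial alignment
    for _ in range(n_iters):
        logits = refiner(enc, hyp)                  # parallel over all positions
        hyp = logits.argmax(dim=-1)                 # refined hypothesis
    return hyp[0].tolist()

refiner = lambda audio, prev_hyp: torch.randn(1, T_OUT, VOCAB)
print(refine_decode(refiner))   # a few parallel passes instead of T_OUT sequential steps
```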
“What I really liked about the special session is that we had submissions both from ASR and from other areas of speech processing, like [TTS](https://www.amazon.science/tag/text-to-speech) [text-to-speech],” Kirchhoff adds. “It's very interesting that you can generalize approaches across different fields, because traditionally they've been quite separate — non-autoregressive decoding originated in machine translation. So there's increasingly a convergence between natural-language processing, ASR, and TTS. There's a lot of commonality in the approaches that we use.”

ABOUT THE AUTHOR

#### **[Larry Hardesty](https://www.amazon.science/author/larry-hardesty)**

Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.