Scalable framework lets multiple text-to-speech models coexist

Voice agents like Alexa often have a variety of different speech synthesizers, which differ in attributes such as [expressivity](https://www.amazon.science/blog/how-to-build-highly-expressive-speech-models), [personality](https://www.aboutamazon.com/news/devices/alexa-introduce-me-to-melissa-and-shaq), [language](https://www.amazon.science/blog/english-language-alexa-voice-learns-to-speak-spanish), and [speaking style](https://www.amazon.science/blog/new-text-to-speech-generator-and-rephraser-move-alexa-toward-concept-to-speech). The machine learning models underlying these different applications can have completely different architectures, and integrating those architectures in a single voice service can be a time-consuming and challenging process.

To make that process easier and faster, Amazon’s Text-to-Speech group has developed a universal model integration framework that allows us to customize production voice models in a quick and scalable way.

#### **Model variety**

State-of-the-art voice models typically use two large neural networks to synthesize speech from text inputs.

The first network, called an acoustic model, takes text as input and generates a mel-spectrogram, an image that represents acoustic parameters such as pitch and energy of speech over time. The second network, called a vocoder, takes the mel-spectrogram as an input and produces an audio waveform of speech as the final output.
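As a rough illustration of this two-stage flow, the sketch below chains trivial NumPy stand-ins for the two networks; the frame count, number of mel bins, and hop size are placeholder assumptions, not values from the production models.

```python
import numpy as np

# Toy stand-ins for the two networks; in production each is a large neural model.
def acoustic_model(text: str) -> np.ndarray:
    """Map input text to a mel-spectrogram of shape (frames, mel_bins)."""
    num_frames = 10 * max(len(text), 1)   # placeholder: frame count grows with text length
    return np.zeros((num_frames, 80))     # 80 mel bins is an assumed, commonly used value

def vocoder(mel: np.ndarray) -> np.ndarray:
    """Map a mel-spectrogram to a 1-D audio waveform."""
    hop_size = 256                        # assumed number of audio samples per frame
    return np.zeros(mel.shape[0] * hop_size)

waveform = vocoder(acoustic_model("Hello from the synthesizer"))
assert waveform.ndim == 1                 # final output: a 1-D array of audio samples
```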
While we have released a [universal architecture for the vocoder model](https://www.amazon.science/blog/neural-text-to-speech-makes-speech-synthesizers-much-more-versatile) that supports a wide variety of speaking styles, we still use different acoustic-model architectures to generate this diversity of speaking styles.

The [most common architecture for the acoustic model](https://www.amazon.science/publications/enhancing-audio-quality-for-expressive-neural-text-to-speech) relies on an attention mechanism, which learns which elements of the input text are most relevant to the current time slice — or “frame” — of the output spectrogram. With this mechanism, the network implicitly models the speech duration of different chunks of the text.

The same model also uses the technique of “teacher forcing”, in which the previously generated frame of speech is used as an input to produce the next one. While such an architecture can generate expressive and natural-sounding speech, it is prone to intelligibility errors such as mumbling or dropping or repeating words, and errors easily compound from one frame to the next.

[More-modern architectures](https://www.amazon.science/publications/non-autoregressive-tts-with-explicit-duration-modelling-for-low-resource-highly-expressive-speech) address these issues by explicitly modeling the durations of text chunks and generating speech frames in parallel, which is more efficient and stable than relying on previously generated frames as input. To align the text and speech sequences, the model simply “upsamples” its encoding of a chunk of text (its representation vector): it repeats the encoding for as many speech frames as the external duration model dictates.
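Concretely, that upsampling step amounts to nothing more than repeating vectors. The sketch below assumes NumPy arrays and made-up sizes; it is illustrative, not the production model’s code.

```python
import numpy as np

# Illustrative inputs: encodings for 4 text chunks plus their predicted durations (in frames).
chunk_encodings = np.random.rand(4, 256)   # (chunks, representation_dim); sizes are assumptions
durations = np.array([7, 12, 3, 9])        # frames predicted per chunk by the duration model

# "Upsample": repeat each chunk's encoding for its predicted number of frames,
# producing a speech-length sequence the decoder can consume frame by frame.
frame_encodings = np.repeat(chunk_encodings, durations, axis=0)

assert frame_encodings.shape == (int(durations.sum()), 256)   # (31, 256)
```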
The continuous evolution of complex TTS models employed in different contexts — such as Alexa Q&A, storytelling for children, and smart-home automation — creates the need for a scalable framework that can handle them all.

#### **The challenge of integration**

To integrate acoustic models into production, we need a component that takes an input text utterance and returns a mel-spectrogram. The first difficulty is that speech is usually generated in sequential chunks, rather than being synthesized all at once. To minimize latency, our framework should return data as quickly as possible. A naive solution that wraps the whole model in code and processes everything with a single function call would be unacceptably slow.

Another challenge is adjusting the model to work with various hardware accelerators. For example, to benefit from the high-performance [Amazon Web Services Inferentia](https://aws.amazon.com/machine-learning/inferentia/) runtime, we need to ensure that all tensors have fixed sizes (set once, during the model compilation phase). This means that we need to

- add logic that splits longer utterances into smaller chunks that fit specific input sizes (depending on the model), as sketched below;
- add logic that ensures proper padding; and
- decide which functionality should be handled directly by the model and which by the integration layer.

When we want to run the same model on general-purpose GPUs, we probably don’t need these changes, and it would be useful if the framework could switch back and forth between contexts easily. We therefore decouple the TTS model into a set of more specialized integration components, capable of doing all the required logic.
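The splitting and padding called for in the list above might look roughly like the sketch below. The helper name `split_and_pad`, the fixed chunk size, the embedding dimension, and the padding value are assumptions made for illustration; they are not the actual compiled input sizes.

```python
import numpy as np

def split_and_pad(embeddings: np.ndarray, chunk_size: int, pad_value: float = 0.0):
    """Split a (length, dim) utterance into fixed-size chunks, padding the final one.

    Returns (chunk, valid_length) pairs so that downstream code can ignore the padding.
    """
    chunks = []
    for start in range(0, len(embeddings), chunk_size):
        piece = embeddings[start:start + chunk_size]
        valid = len(piece)
        if valid < chunk_size:                    # pad the last chunk up to the compiled size
            padding = np.full((chunk_size - valid, embeddings.shape[1]), pad_value)
            piece = np.vstack([piece, padding])
        chunks.append((piece, valid))
    return chunks

utterance = np.random.rand(23, 64)                # 23 tokens with 64-dim embeddings (made up)
for chunk, valid in split_and_pad(utterance, chunk_size=10):
    assert chunk.shape == (10, 64)                # every chunk matches the fixed input size
```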
#### **Integration components**

The integration layer encapsulates the model in a set of components capable of transforming an input utterance into a mel-spectrogram. As the model usually operates in two stages — preprocessing data and generating data on demand — it is convenient to use two types of components:

- a SequenceBlock, which takes an input tensor and returns a transformed tensor (the input can be the result of applying another SequenceBlock), and
- a StreamableBlock, which generates data (e.g., frames) on demand. As input, it takes the results of another StreamableBlock (blocks can form a pipeline) and/or data generated by a SequenceBlock (see the sketch below).
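To make the division of labor concrete, here is a rough sketch of what the two abstractions could look like. The method names, signatures, and use of NumPy are assumptions made for this illustration, not the framework’s actual interfaces.

```python
from abc import ABC, abstractmethod
from typing import Iterator, Optional
import numpy as np

class SequenceBlock(ABC):
    """Preprocessing component: transforms a whole input tensor in one call."""

    @abstractmethod
    def process(self, inputs: np.ndarray) -> np.ndarray:
        """Return a transformed tensor; the input may itself come from another SequenceBlock."""

class StreamableBlock(ABC):
    """Generation component: produces data (e.g., mel-spectrogram frames) on demand."""

    @abstractmethod
    def stream(
        self,
        upstream: Optional[Iterator[np.ndarray]] = None,   # output of another StreamableBlock
        encoded: Optional[np.ndarray] = None,               # data produced by a SequenceBlock
    ) -> Iterator[np.ndarray]:
        """Yield output chunks as soon as they are ready, keeping latency low."""
```

Because a StreamableBlock can consume another StreamableBlock’s output, blocks can be chained into pipelines, as in the example that follows.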
These simple abstractions offer great flexibility in creating variants of acoustic models. Here’s an example:

![Example acoustic-model architecture](https://dev-media.amazoncloud.cn/5fbb1eb45a1245d7834e34acfc57a255_%E4%B8%8B%E8%BD%BD.jpg)

An example of an acoustic model built using the SequenceBlock and StreamableBlock abstractions.

The acoustic model consists of

- two encoders (SequenceBlocks), which convert the input text embedding into one-dimensional representation tensors, one for encoded text and one for predicted durations;
- an upsampler (a StreamableBlock that takes the encoders’ results as input), which creates intermediary, speech-length sequences according to the data returned by the encoders; and
- a decoder (a StreamableBlock), which generates mel-spectrogram frames.

The whole model is encapsulated in a specialized StreamableBlock called StreamablePipeline, which contains exactly one SequenceBlock and one StreamableBlock:

- the SequenceBlockContainer is a specialized SequenceBlock that consists of a set of nested SequenceBlocks capable of running neural-network encoders;
- the StreamableStack is a specialized StreamableBlock that decodes outputs from the upsampler and creates mel-spectrogram frames.

The integration framework ensures that all components are run in the correct order, and, depending on the specific versions of the components, it allows for the use of various hardware accelerators.
#### **The integration layer**

The acoustic model is provided as a plugin, which we call an “addon”. An addon consists of exported neural networks, each represented as a named set of symbols and parameters (encoder, decoder, etc.), along with configuration data. One of the configuration attributes, called “stack”, specifies how integration components should be connected to build a working integration layer. Here’s the code for the stack attribute that describes the architecture above:

```
'stack' = [
    {'type' : 'StreamablePipeline',
     'sequence_block' : {'type' : 'Encoders'},
     'streamable_block' :
        {'type' : 'StreamableStack',
         'stack' : [
            {'type' : 'Upsampler'},
            {'type' : 'Decoder'}
         ]}
    }
]
```

This definition will create an integration layer consisting of a StreamablePipeline with

- all the encoders specified in the addon (the framework will automatically create all required components);
- an upsampler, which generates intermediate data for the decoder; and
- the decoder specified in the addon, which generates the final frames.

The JSON format allows us to make changes easily. For example, we can create a specialized component that runs all sequence blocks in parallel on a specific hardware accelerator and name it CustomizedEncoders. In this case, the only change to the configuration is to replace the name “Encoders” with “CustomizedEncoders”.

Running experiments using components with additional diagnostic or digital-signal-processing effects is also trivial. A new component’s only requirement is to extend one of the two generic abstractions; beyond that, there are no restrictions. Even replacing one StreamableBlock with a whole nested sequence-to-sequence stack is perfectly fine under the framework design.
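As an illustration of how such a declarative stack definition could be turned into live components, here is a hypothetical sketch of a component registry plus a recursive builder. The registry, class names, and constructor conventions are assumptions for the example, not the production framework’s API.

```python
from typing import Any, Callable, Dict

# Hypothetical registry mapping a 'type' name from the stack definition to a factory.
COMPONENT_REGISTRY: Dict[str, Callable[..., Any]] = {}

def register(type_name: str):
    """Class decorator registering a component under the name used in the configuration."""
    def decorator(cls):
        COMPONENT_REGISTRY[type_name] = cls
        return cls
    return decorator

def build_component(spec: Dict[str, Any]) -> Any:
    """Recursively instantiate one entry of the 'stack' definition."""
    kwargs = {}
    for key, value in spec.items():
        if key == "type":
            continue
        if isinstance(value, dict):                  # a nested component
            kwargs[key] = build_component(value)
        elif isinstance(value, list):                 # a nested stack of components
            kwargs[key] = [build_component(item) for item in value]
        else:
            kwargs[key] = value
    return COMPONENT_REGISTRY[spec["type"]](**kwargs)

# Minimal placeholder components so the example runs; real ones would wrap neural networks.
@register("Upsampler")
class Upsampler:
    def __init__(self, **config): self.config = config

@register("Decoder")
class Decoder:
    def __init__(self, **config): self.config = config

@register("StreamableStack")
class StreamableStack:
    def __init__(self, stack): self.stack = stack

stack = build_component({"type": "StreamableStack",
                         "stack": [{"type": "Upsampler"}, {"type": "Decoder"}]})
assert isinstance(stack.stack[1], Decoder)
```

Under a scheme like this, pointing the configuration at “CustomizedEncoders” instead of “Encoders” would only require registering the new class; no other integration code has to change.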
This framework is already used in production. It is a vital pillar of our recent, successful integration of state-of-the-art TTS architectures (without attention) and legacy models.

**Acknowledgments**: [Daniel Korzekwa](https://www.amazon.science/author/daniel-korzekwa)

ABOUT THE AUTHORS

#### [Rafal Sienkiewicz](https://www.amazon.science/author/rafal-sienkiewicz)

Rafal Sienkiewicz is a senior software development engineer in Amazon’s Devices organization.

#### [Raahil Shah](https://www.amazon.science/author/raahil-shah)

Raahil Shah is an applied scientist in the Amazon Text-to-Speech group.