Amazon Text-to-Speech group's research at ICASSP 2022

{"value":"The automatic conversion of ++[text to speech](https://www.amazon.science/tag/text-to-speech)++ is crucial to Alexa: it’s how Alexa communicates with customers. The models developed by the Amazon Text-to-Speech group are also available to Amazon Web Services (Amazon Web Services) customers through ++[Polly](https://aws.amazon.com/polly/)++, the Amazon Web Services text-to-speech service.\n\nThe Text-to-Speech (TTS) group has four papers at this year’s International Conference on Acoustics, Speech, and Signal Processing (++[ICASSP](https://www.amazon.science/conferences-and-events/icassp-2022)++), all of which deal with either voice conversion (preserving prosodic features while converting one synthetic voice to another), data augmentation, or both.\n\nIn “++[Voice Filter: Few-shot text-to-speech speaker adaptation using voice conversion as a post-processing module](https://www.amazon.science/publications/voicefilter-few-shot-text-to-speech-speaker-adaptation-using-voice-conversion-as-a-post-processing-module)++”, the Amazon TTS group addresses the problem of ++[few-shot](https://www.amazon.science/tag/few-shot-learning)++ speaker adaptation, or learning a new synthetic voice from just a handful of training examples. 
The paper reformulates the problem as learning a voice conversion model that’s applied to the output of a high-quality TTS model, a conceptual shift from the existing few-shot-TTS paradigm.

#### **More ICASSP coverage on Amazon Science**

- Alexa AI senior principal scientist Andreas Stolcke [highlights five of the 21 ICASSP papers](https://www.amazon.science/blog/alexas-speech-recognition-research-at-icassp-2022) from Alexa's automatic-speech-recognition team.

- The 50-plus Amazon papers at ICASSP, [sorted by research topic](https://www.amazon.science/blog/a-quick-guide-to-amazons-50-plus-icassp-papers).

In “[Cross-speaker style transfer for text-to-speech using data augmentation](https://www.amazon.science/publications/cross-speaker-style-transfer-for-text-to-speech-using-data-augmentation)”, the team shows how to build a TTS model capable of expressive speech, even when the only available training data for the target voice consists of neutral speech. The idea is to first train a voice conversion model, which converts samples of expressive speech in other voices into the target voice, and then use the converted speech as additional training data for the TTS model.

In “[Distribution augmentation for low-resource expressive text-to-speech](https://www.amazon.science/publications/distribution-augmentation-for-low-resource-expressive-text-to-speech)”, the TTS group expands the range of texts used to train a TTS model by recombining excerpts from existing examples to produce new examples. The trick is to maintain the syntactic coherence of the synthetic examples, so that the TTS model won’t waste resources learning improbable sequences of phonemes.
(This is the one data augmentation paper that doesn’t rely on voice conversion.)

![下载.jpg](https://dev-media.amazoncloud.cn/1c2ac2c77e234d23b92d55f41f3f99e9_%E4%B8%8B%E8%BD%BD.jpg)

In this example of data augmentation through recombination of existing training examples, the verb phrase “shook her head”, as identified by a syntactic parse, is substituted for the verb phrase “lied” in the sentence “he never lied”. The original acoustic signals (bottom row) are cut and spliced at the corresponding points. From "[Distribution augmentation for low-resource expressive text-to-speech](https://www.amazon.science/publications/distribution-augmentation-for-low-resource-expressive-text-to-speech)".

Finally, in “[Text-free non-parallel many-to-many voice conversion using normalising flows](https://www.amazon.science/publications/text-free-non-parallel-many-to-many-voice-conversion-using-normalising-flows)”, the team adapts the concept of normalizing flows, which have been used widely for TTS, to the problem of voice conversion. Like most deep-learning models, normalizing flows learn functions that produce vector representations of input data. The difference is that the functions are invertible, so the inputs can be recovered from the representations.
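Invertibility is what sets flows apart from ordinary networks, and it can be illustrated with a single affine step (a toy sketch, not the model from the paper):

```python
import numpy as np

# One affine flow step: y = x * exp(s) + t is exactly invertible,
# so no information about the input survives only approximately.
# (Toy illustration; real flows stack many learned coupling layers.)

def forward(x, s, t):
    """Map input to the representational space."""
    return x * np.exp(s) + t

def inverse(y, s, t):
    """Recover the input exactly from the representation."""
    return (y - t) * np.exp(-s)

rng = np.random.default_rng(0)
x = rng.normal(size=8)        # stand-in for one spectrogram frame
s, t = 0.5, -1.2              # learned scale and shift parameters

y = forward(x, s, t)
x_rec = inverse(y, s, t)
assert np.allclose(x, x_rec)  # round trip is exact up to float precision
```

Because every step is invertible, a deep stack of such transforms can still be run backwards, which is what makes the representation lossless.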
The team hypothesized that preserving more information from the input data would yield better voice conversion, and early experiments bear that hypothesis out.

#### **Voice filter**

The idea behind “[Voice Filter: Few-shot text-to-speech speaker adaptation using voice conversion as a post-processing module](https://www.amazon.science/publications/voicefilter-few-shot-text-to-speech-speaker-adaptation-using-voice-conversion-as-a-post-processing-module)” is that for few-shot learning, it’s easier to take the output of an existing, high-quality TTS model — a voice spectrogram — and adapt that to a new target voice than it is to adapt the model itself.

The key to the approach is that the voice filter, which converts the TTS model’s output to a new voice, is trained on synthetic data created by the TTS model itself.

![下载 1.jpg](https://dev-media.amazoncloud.cn/0fb914c8de8e4a0c8d2621cd0f8d37d7_%E4%B8%8B%E8%BD%BD%20%281%29.jpg)

The training procedure for the voice filter.

The TTS model is duration controllable, meaning that the input text is encoded to indicate the duration that each phoneme should have in the output speech. This enables the researchers to create two parallel corpora of training data. One corpus consists of real training examples, from 120 different speakers. The other corpus is synthetic speech generated by the TTS model, but with durations that match those of the multispeaker examples.

The voice filter is trained on the parallel corpora, and then, for few-shot learning, the researchers simply fine-tune it on a new speaker.
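The parallel-corpus construction can be sketched as follows; the data structures and the `synth_with_durations` stand-in are hypothetical, since the real system uses a neural duration-controllable TTS model and forced alignment:

```python
# Build duration-matched parallel corpora: the synthetic utterance uses the
# same phonemes and per-phoneme durations as the real recording, so the two
# sides line up frame-for-frame and can supervise the voice filter.

def synth_with_durations(phonemes, durations):
    """Stand-in for a duration-controllable TTS model: output length is
    fully determined by the requested per-phoneme durations (in frames)."""
    return [f"tts:{p}" for p, d in zip(phonemes, durations) for _ in range(d)]

# One real multispeaker example, with durations from forced alignment.
phonemes = ["HH", "AH", "L", "OW"]
durations = [3, 2, 4, 5]
real_frames = [f"spk1:{p}" for p, d in zip(phonemes, durations) for _ in range(d)]

synth_frames = synth_with_durations(phonemes, durations)

# Frame-for-frame correspondence between the two corpora:
assert len(synth_frames) == len(real_frames) == sum(durations)
```

The frame-level correspondence is what makes the pair usable as input and target for training the conversion filter.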
In experiments, the researchers found that this approach produced speech whose quality was comparable to that produced by conventional models trained on 30 times as much data.

#### **Cross-speaker style transfer**

The voice conversion model that the researchers use in “[Cross-speaker style transfer for text-to-speech using data augmentation](https://www.amazon.science/publications/cross-speaker-style-transfer-for-text-to-speech-using-data-augmentation)” is based on the CopyCat model [previously reported](https://www.amazon.science/blog/more-natural-prosody-for-synthesized-speech) on the Amazon Science blog. The converted expressive data is added to the neutral data to produce the dataset used to train the TTS model.

The TTS model takes two inputs: a text sequence and a style vector. During training, the text sequence passes to the TTS model, and the spectrogram of the target speech sample passes to a reference encoder, which produces the style embedding. At inference time, of course, there is no input spectrogram. But the researchers show that they can control the style of the TTS model’s output by feeding it a precomputed style embedding.

![下载 2.jpg](https://dev-media.amazoncloud.cn/00132eb925b44b8da01dc632fb17c93d_%E4%B8%8B%E8%BD%BD%20%282%29.jpg)

The voice conversion model (left) and text-to-speech model (right) used for cross-speaker style transfer. The reference encoders are used only during training. From "[Cross-speaker style transfer for text-to-speech using data augmentation](https://www.amazon.science/publications/cross-speaker-style-transfer-for-text-to-speech-using-data-augmentation)".

The researchers assessed the model through human evaluation, using the MUSHRA perception scale.
Human evaluators reported that, relative to a benchmark model, the new model reduced the gap in perceived style similarity between synthesized and real speech by an average of 58% across 14 different speakers.

#### **Distribution augmentation**

“[Distribution augmentation for low-resource expressive text-to-speech](https://www.amazon.science/publications/distribution-augmentation-for-low-resource-expressive-text-to-speech)” considers the case in which training data for a new voice is lacking. The goal is to permute the texts of the existing examples, producing new examples, and recombine excerpts from the corresponding speech samples to produce new samples. This does not increase the acoustic diversity of the training targets, but it does increase the linguistic diversity of the training inputs.

To ensure that the synthetic training examples do not become too syntactically incoherent, the researchers construct parse trees for the input texts and then swap syntactically equivalent branches across trees (see figure, above). Swapping the corresponding sections of the acoustic signal requires good alignment between text and signal, which is accomplished by existing forced-alignment models.

During training, to ensure that the resulting TTS model doesn’t become overbiased toward the synthetic examples, the researchers also include a special input token to indicate points at which two existing samples have been fused together. The expectation is that the model will learn to privilege phonemic sequences internal to the real samples over phonemic sequences that cross boundaries between fused samples.
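The splice-and-tag step can be sketched on toy data; the constituent labels, frame indices, and per-frame tag below are illustrative assumptions, since the paper derives them from a syntactic parser and forced alignment:

```python
# Swap a syntactically equivalent verb phrase from a donor example into
# "he never lied", splice the aligned audio frames, and mark the fusion
# point with an augmentation tag so the model can discount it.

a_text = [("NP", "he"), ("ADV", "never"), ("VP", "lied")]
a_audio = {"he": [0, 1], "never": [2, 3, 4], "lied": [5, 6, 7]}

b_vp_words = ["shook", "her", "head"]       # donor verb phrase
b_vp_audio = [10, 11, 12, 13, 14]           # its aligned audio frames

new_words, new_audio, tags = [], [], []
for label, word in a_text:
    if label == "VP":                        # substitute the donor branch
        new_words.extend(b_vp_words)
        new_audio.extend(b_vp_audio)
        tags.extend([1] + [0] * (len(b_vp_audio) - 1))  # tag the boundary
    else:                                    # keep the original material
        new_words.append(word)
        new_audio.extend(a_audio[word])
        tags.extend([0] * len(a_audio[word]))

print(" ".join(new_words))                   # "he never shook her head"
assert len(new_audio) == len(tags)           # one tag value per audio frame
```

Only branches with the same syntactic label are exchanged, which is what keeps the recombined sentence grammatical.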
At inference time, the value of the token is simply set to 0 across all inputs.

![下载 3.jpg](https://dev-media.amazoncloud.cn/8b33ddaa14884e588c4486274010b7ff_%E4%B8%8B%E8%BD%BD%20%283%29.jpg)

The “augmentation tag” marks the boundary between acoustic signals taken from two different training examples, to prevent overbiasing the TTS model toward synthetic data. From "[Distribution augmentation for low-resource expressive text-to-speech](https://www.amazon.science/publications/distribution-augmentation-for-low-resource-expressive-text-to-speech)".

The quality of the model’s speech output was assessed by 60 human evaluators, who compared it to speech output by a baseline model, on five different datasets. Across the board, the output of the new model received better scores than the output of the baseline model.

#### **Normalizing flows**

A normalizing flow learns to map input data to a representational space in a way that makes the mapped data approximate some prior distribution as closely as possible. The word “flow” indicates that the mapping can be the result of passing the data through a series of invertible transformations, and the enforcement of the prior distribution imposes the normalization.

In “[Text-free non-parallel many-to-many voice conversion using normalising flows](https://www.amazon.science/publications/text-free-non-parallel-many-to-many-voice-conversion-using-normalising-flows)”, Amazon TTS researchers consider a flow whose inputs are a source spectrogram, a phoneme embedding, a speaker identity embedding, the fundamental frequency of the acoustic signal, and a flag denoting whether a frame of input audio is voiced or unvoiced. The flow maps the inputs to a distribution of phoneme frequencies in a particular application domain.

Typically, a normalizing flow will learn both the distribution and the mapping from the training data.
But here, the researchers pretrain the flow on a standard TTS task, for which training data is plentiful, to learn the distribution in advance.

Because the flow is reversible, a vector in the representational space can be mapped back to a set of source inputs, provided that the other model inputs (phoneme embedding, speaker ID, and so on) are available. To use normalizing flows to perform voice conversion, the researchers simply substitute one speaker for another during this reverse mapping.

![下载 4.jpg](https://dev-media.amazoncloud.cn/9e74d37cd0594f5ea7987a3fd76c1175_%E4%B8%8B%E8%BD%BD%20%284%29.jpg)

An overview of TTS researchers' use of normalizing flows to do voice conversion. From "[Text-free non-parallel many-to-many voice conversion using normalising flows](https://www.amazon.science/publications/text-free-non-parallel-many-to-many-voice-conversion-using-normalising-flows)".

The researchers examine two different experimental settings: one in which the voice conversion model takes both text sequences and spectrograms as inputs, and one in which it takes spectrograms only. In the second case, the pretrained normalizing-flow model significantly outperformed the benchmarks. A normalizing-flow model that learned the phoneme distribution directly from the training data didn’t fare as well, indicating the importance of the pretraining step.

ABOUT THE AUTHOR

#### **[Andrew Breen](https://www.amazon.science/auhor/andrew-breen)**

Andrew Breen is the senior manager of text-to-speech research in the Amazon Text-to-Speech group.