Text normalization with only 3% as much training data

{"value":"With services like Alexa, which use synthesized speech for output, text normalization (TN) is usually the first step in the process of text-to-speech conversion. TN takes raw text as input— say, the string 6-21-21 — and expands it into a verbalized form that a text-to-speech model can use to produce the final speech — “twenty first of June twenty twenty one”.\n\nHistorically, TN algorithms relied on hard-coded rules, which didn’t generalize across languages and were hard to maintain: a typical rule-based TN system for a single language might have thousands of rules, which evolve over time and whose development requires linguistic expertise.\n\n![image.png](https://dev-media.amazoncloud.cn/d7a2b3fa5c4d4ec5b4e04d865d1787b7_image.png)\n\nText normalization converts the output of computational processes — such as the natural-language-understanding models that handle Alexa customers' requests — into a form that will make sense when read out as synthesized speech.\nCREDIT: GLYNIS CONDON\n\nMore recently, academic and industry researchers have begun developing machine-learning-based TN models. But these have drawbacks, too. \n\nSequence-to-sequence models occasionally make unacceptable errors, such as converting “\$5” to “five pounds”. Semiotic-classification models require domain-specific information classes created by linguistic experts — classes such as emoticonor telephone number — which limits their generalizability. And both types of models require large amounts of training data, which makes it difficult to scale them across languages.\\n\\nAt this year’s meeting of the North American Chapter of the Association for Computational Linguistics ([NAACL](https://www.amazon.science/conferences-and-events/naacl-2021)), my colleagues and I are presenting [a new text normalization model](https://www.amazon.science/publications/proteno-text-normalization-with-limited-data-for-fast-deployment-in-text-to-speech-systems), called Proteno, that addresses these challenges.\\n\\nWe evaluated Proteno on three languages, English, Spanish, and Tamil. There’s a large body of research on TN in English, but no TN datasets existed for Spanish and Tamil. Consequently, we created our own datasets, which [we have publicly released](https://github.com/amazon-research/proteno) for use by other TN researchers.\\n\\nProteno specifies only a few, low-level normalization classes — such as ordinal number, cardinal number, or Roman numeral — which generalize well across languages. From the data, Proteno then learns a huge variety of additional, fine-grained classes. \\n\\nIn our experiments on English, for instance, we used eight predefined classes, and Proteno automatically generated another 2,658. By contrast, semiotic-classification models typically have about 20 classes.\\n\\nProteno also uses a simple but effective scheme for tokenization, or splitting texts up into smaller chunks. Prior tokenization techniques required linguistic knowledge or data-heavy training; Proteno’s tokenization technique, by contrast, simply breaks text up at spaces and at transitions between Unicode classes, such as letter, numeral, or punctuation mark. Consequently, it can generalize across languages, it enables the majority of normalizations to be learned from the data, and it reduces the incidence of unacceptable errors. \\n\\nTogether, these techniques also allow Proteno to make do with much less training data than previous machine learning approaches. 
Together, these techniques also allow Proteno to make do with much less training data than previous machine learning approaches. In our experiments, Proteno offered performance comparable to the previous state of the art in English — while requiring only 3% as much training data.

Because there were no prior TN models trained on Spanish and Tamil, we had no benchmarks for those experiments. But on comparable amounts of training data, the Proteno models trained on Tamil and Spanish achieved accuracies comparable to that of the one trained on English (99.1% for Spanish, 96.7% for Tamil, and 97.4% for English).

#### **Methods**

Proteno treats TN as a sequence classification problem, where most of the classes are learned. The figure below illustrates Proteno’s training and run-time processing pipelines, which have slightly different orders.

![image.png](https://dev-media.amazoncloud.cn/273183efadde41dfa95223ebdec13ef2_image.png)

The training pipeline consists of the following steps:

- **Tokenization:** Previous tokenization methods relied on language-specific rules devised by linguists. For instance, the string 6-21-21 would be treated as a single token of the type date. We propose a granular tokenization mechanism that is language independent and applicable to any space-separated language. The text to be normalized is first split at its spaces and then further split wherever there is a change in the Unicode class. The string 6-21-21 thus becomes five tokens, and we count on Proteno to learn how to handle them properly.
- **Annotation:** The tokenized, unnormalized text is annotated token by token, which gives us a one-to-one mapping between each unnormalized token and its ground-truth normalization. This data is used to train the model.
- **Class generation:** Each token is then mapped to a class. A class may accept tokens only of a particular type; so, for instance, the class corresponding to dollars will not accept the type pounds, and vice versa. This prevents the model from making unacceptable errors. Each class also has an associated normalization function. There are two kinds of classes (a simplified sketch follows this list):
  a. **Predefined:** We define a limited number of classes (around 8-10) containing basic normalization rules. A small subset of these (3-5) contain language-specific rules, such as how to distinguish cardinal and ordinal uses of a number. Other classes — such as self, digit, and Roman numerals — remain similar across many languages.
  b. **Auto-generated (AG):** The model also generates classes automatically by analyzing the unnormalized-to-normalized token mappings in the dataset. If no existing class (predefined or AG) can generate the target normalization for a token in the training data, a new class is automatically generated. For instance, if the dataset includes the annotation “12 → December”, and if none of the existing classes can generate this normalization, then the class “12_to_December_AG” is created. This class accepts only “12”, and its normalization function returns “December”. AGs enable Proteno to learn the majority of normalizations automatically from data.
- **Classification:** We model TN as a sequence-tagging problem, where the input is a sequence of unnormalized tokens and the output is the sequence of classes that can generate the normalized text. We experimented with four different types of classifiers: conditional random fields (CRFs), bidirectional long short-term memory models (bi-LSTMs), bi-LSTM-CRF combinations, and Transformers.
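The sketch below illustrates, in Python, how such classes might be represented and how AG classes get minted. The class names, acceptance tests, and normalization functions here are hypothetical simplifications, not the actual Proteno implementation; the point is only that a new AG class is created when no existing class can reproduce an annotated token-to-normalization mapping.

```python
# Hypothetical illustration of Proteno-style classes: each class pairs an
# acceptance test with a normalization function; auto-generated (AG) classes
# are minted when no existing class can reproduce an annotated example.

ONES = ["zero", "one", "two", "three", "four", "five", "six",
        "seven", "eight", "nine", "ten", "eleven", "twelve"]

class NormClass:
    def __init__(self, name, accepts, normalize):
        self.name = name
        self.accepts = accepts      # token -> bool
        self.normalize = normalize  # token -> verbalized string

    def can_generate(self, token, target):
        return self.accepts(token) and self.normalize(token) == target

# Two illustrative predefined classes (the real inventory is richer).
classes = [
    NormClass("self", str.isalpha, lambda t: t),
    NormClass("cardinal", lambda t: t.isdigit() and int(t) < len(ONES),
              lambda t: ONES[int(t)]),
]

def generate_classes(annotations):
    """annotations: list of (unnormalized token, ground-truth normalization)."""
    for token, target in annotations:
        if not any(c.can_generate(token, target) for c in classes):
            # No existing class covers this mapping, so mint an AG class that
            # accepts exactly this token and returns exactly this target.
            classes.append(NormClass(
                f"{token}_to_{target}_AG",
                lambda t, tok=token: t == tok,
                lambda t, tgt=target: tgt,
            ))

generate_classes([("12", "December"), ("5", "five"), ("st", "street")])
print([c.name for c in classes])
# ['self', 'cardinal', '12_to_December_AG', 'st_to_street_AG']
```

At run time, the sequence classifier then only has to pick, for each token, which class should verbalize it; the class's normalization function produces the spoken form, which keeps outputs constrained and curbs unacceptable errors.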
#### **Datasets**

As the goal of Proteno is to be applicable to multiple languages, we evaluated it across three languages: English, Spanish, and Tamil. English had significantly more auto-generated classes than Tamil or Spanish, as written English tends to use many more abbreviations than the other two languages.

![image.png](https://dev-media.amazoncloud.cn/c79e50c7b1a64a1481f6ab71a37a671a_image.png)

To benchmark Proteno’s performance in English, we could compare it to earlier models on only 11 of the 13 predefined classes found in existing datasets; differences in tokenization schemes meant that there were no logical mappings for the other two classes.

These results indicate that Proteno is a strong candidate for doing TN with low data annotation requirements while curbing unacceptable errors, which would make it a robust and scalable solution for production text-to-speech models.

![image.png](https://dev-media.amazoncloud.cn/f9edccf9a1b54ac984a8cde923b530cd_image.png)

Proteno’s performance on 11 classes found in existing datasets, compared to the performance of two state-of-the-art models trained on 32 times as much data.