How Alexa learned Arabic

Natural language processing
Transfer learning
{"value":"![image.png](https://dev-media.amazoncloud.cn/36d27926e2f1400eb4a4beb2a5b11f31_image.png)\n\nAt launch, the Arabic version of Alexa will be available in the Kingdom of Saudi Arabia and the United Arab Emirates.\n\nThe Arabic version of Alexa launched in December 2021, in the Kingdom of Saudi Arabia and the United Arab Emirates, and like all new Alexa languages, it posed a unique set of challenges.\n\nThe first was to decide what forms of Arabic Alexa should speak. While the official written language in KSA and the UAE is Modern Standard Arabic (MSA), in everyday life, Arabic speakers use dialectal forms of Arabic, with many vernacular variations.\n\nFor customers, engaging with Alexa in their native dialects would be more natural than speaking MSA. So the Alexa AI team — including computational linguists — determined that Arabic Alexa would be able to understand requests in both MSA and Khaleeji (Gulf) dialects.\n\nAlexa’s speech outputs, too, would be in both MSA and a Khaleeji dialect — MSA for formal speech, such as responses to requests for information, and Khaleeji for less formal speech, such as confirmation of alarm times and music selections. This means that someone issuing Alexa a request in one Arabic dialect might get a response in a different one. But that mirrors the experience that Arabic speakers in the region have with each other.\n\n![image.png](https://dev-media.amazoncloud.cn/f07bd37da4d74fe388ed6835cae28eda_image.png)\n\nThe core components of a new Alexa model are automatic speech recognition (++[ASR](https://www.amazon.science/tag/asr)++), which converts speech into text; natural-language understanding (++[NLU](https://www.amazon.science/tag/nlu)++), which interprets the text to initiate actions; and text-to-speech (++[TTS](https://www.amazon.science/tag/text-to-speech)++), which converts NLU outputs into synthesized speech.\n\nA key question for all three components was how to render utterances textually, both as ASR output and TTS input. Written Arabic suppresses short vowel sounds: it would be sort of like spelling the English word “begin” as “bgn”. People are usually able to infer the mssng vwls frm cntxt.\n\nBut in formal and educational texts — such as reading primers for children — vowels and some consonantal sounds are indicated by diacritical marks. So the Alexa AI team had to decide whether the ASR output should include diacritics or not.\n\nOne of the major differences between dialects is the vowel sounds, so omitting diacritics makes it easier to create a speech representation that’s applicable to all dialects, which is useful for ASR and NLU.\n\nMoreover, there is no published writing in forms of Arabic other than MSA, so there’s no standard orthography for them, either. Asking annotators to add diacritics could introduce more ambiguity than it alleviates. In the end, the Alexa AI team decided that ASR output should use only two diacritics, the shaddah and maddah, because they help with pronunciation accuracy on entity names that pass from ASR through NLU to TTS.\n\nThese design decisions had separate implications for the various Alexa AI teams — ASR, NLU, and TTS — and of course, each of the teams faced its own particular challenges as well.\n\n\n#### **ASR**\n\n\nOne of the ASR team’s goals was to provide a consistent output, given the lack of standardized orthography for both dialectal Arabic and foreign loanwords. 
These design decisions had separate implications for the various Alexa AI teams — ASR, NLU, and TTS — and of course, each of the teams faced its own particular challenges as well.


#### **ASR**


One of the ASR team’s goals was to provide a consistent output, given the lack of standardized orthography for both dialectal Arabic and foreign loanwords. One of their decisions was to represent loanwords — such as the names of French or American musicians or albums — using Latin script.

![image.png](https://dev-media.amazoncloud.cn/4e24b73b4a3f4ff9baf83846efb0b511_image.png)

*L to R:* Applied-science manager Volker Leutnant and applied scientists Moe Hethnawi and Bashar Awwad Shiekh Hasan

To that end, they used a so-called catalogue ingestion normalizer, which takes in a catalogue of terms in French and English and converts the corresponding Arabic-script outputs of the ASR model into Latin script.

Applied-science manager [Volker Leutnant](https://www.amazon.science/author/volker-leutnant) and his colleagues on the Alexa Speech team — including applied scientists Moe Hethnawi and Bashar Awwad Shiekh Hasan — began with an English acoustic model, which started out better attuned to human speech sounds than a randomly initialized model. They trained it using public datasets of Arabic speech in the target Khaleeji dialects and data from [Cleo](https://www.amazon.com/Amazon-Cleo/dp/B01N5QDE0Y), an Alexa skill that allows multilingual customers to help train new-language models by responding to voice prompts with open-form utterances. The Cleo data included labeled utterances in additional Arabic dialects, allowing the ASR model to provide a more consistent user experience for a wider range of customers.


#### **NLU**


An NLU model takes in utterances transcribed by ASR and classifies them according to *intent*, such as playing music. It also identifies all the slots in the utterance — such as song names or artist names — and their slot values — such as the particular artist name “[Ahlam](https://en.wikipedia.org/wiki/Ahlam_(singer))”.

The first thing the NLU model needs to do is to tokenize the input, or split it into semantic units that should be processed separately. In many languages, tokenization happens naturally during ASR. But Arabic uses word affixes — prefixes and suffixes — to convey contextual meanings.

Some of those affixes, such as articles and prepositions — the Arabic equivalents of “the” or “to” — are irrelevant to NLU and can be left attached to their word stems. But some, such as possessives, require independent slot tags. The suffix meaning “my”, for instance, in the Arabic for “my music”, tells the NLU model just which music the customer wants played. Language engineer [Yangsook Park](https://www.amazon.science/author/yangsook-park) and her colleagues designed the tokenizer to split off those important affixes and leave the rest attached to their stems.
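As a toy illustration of that affix-splitting step, the sketch below detaches a first-person possessive suffix from a word such as "كتابي" ("my book"). The suffix list, function name, and length check are assumptions for illustration; the production tokenizer relies on real morphological analysis and annotation guidelines rather than simple string matching.

```python
# Toy sketch of the affix-splitting idea described above: detach a possessive
# suffix so it can receive its own slot tag, while leaving other words intact.
# The suffix inventory here is an assumption for illustration only.

# First-person possessive suffix "ي" ("my"), as in "كتابي" ("my book").
SPLIT_SUFFIXES = ("ي",)


def tokenize(word: str) -> list[str]:
    """Split off a possessive suffix if present; otherwise return the word as-is."""
    for suffix in SPLIT_SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return [word[: -len(suffix)], suffix]
    return [word]


if __name__ == "__main__":
    print(tokenize("كتابي"))  # -> ['كتاب', 'ي']  ("book" + "my")
    print(tokenize("كتاب"))   # -> ['كتاب']       (no possessive suffix)
```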
![image.png](https://dev-media.amazoncloud.cn/6e7c61d459014fce9098adb5c381a673_image.png)

The tokenized input passes to the NLU model, which is a trilingual model, able to process inputs in Arabic, French, or English. This not only helps the model handle loanwords used in Arabic, but it also enables the transfer of knowledge from French and English, which currently have more abundant training data than Arabic.

Research science manager [Karolina Owczarzak](https://www.amazon.science/author/karolina-owczarzak) and her team at Alexa AI — including research scientists [Khadige Abboud](https://www.amazon.science/author/khadige-abboud), [Olga Golovneva](https://www.amazon.science/author/olga-golovneva), and [Christopher DiPersio](https://www.amazon.science/author/christopher-dipersio) — resampled the existing Arabic training data to expand the variety of training examples. For instance, their resampling tool replaces the names of artists or songs in existing utterances with other names from the song catalogue.

A crucial consideration was how many resampled utterances with the same basic structure to include in the training data. Using too many examples based on the same template — such as “let me hear <SongName> by <ArtistName>” or “play the <ArtistName> song <SongName>” — could diminish the model’s performance on other classes of utterance.

To compute the optimal number of examples per utterance template, the NLU researchers constructed a measure of utterance complexity, which factored in both the number of slots in the utterance template and the number of possible values per slot. The more complex the utterance template, the more examples it required.

![image.png](https://dev-media.amazoncloud.cn/63862532af5b4aa7ad1a488a160f647d_image.png)

*L to R:* Language engineer Yangsook Park, research science manager Karolina Owczarzak, and research scientists Khadige Abboud, Olga Golovneva, and Christopher DiPersio
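The resampling and complexity ideas can be sketched roughly as follows. The catalogue entries and the cap formula are invented stand-ins; the article says only that the team's complexity measure factored in the number of slots in a template and the number of possible values per slot.

```python
import itertools
import random

# Rough sketch of catalogue-based resampling with a complexity-based cap.
# The cap formula below is an invented stand-in for the team's actual measure.

CATALOGUE = {  # hypothetical slot catalogues
    "ArtistName": ["Ahlam", "Hussain Al Jassmi", "Balqees"],
    "SongName": ["Song A", "Song B"],
}


def max_examples(template: str) -> int:
    """More slots and more values per slot allow more resampled examples."""
    slots = [s for s in CATALOGUE if f"<{s}>" in template]
    values = 1
    for slot in slots:
        values *= len(CATALOGUE[slot])
    # Invented cap: grows with slot count and catalogue size, but saturates.
    return min(values, 10 * max(len(slots), 1))


def resample(template: str, rng: random.Random) -> list[str]:
    """Fill the template's slots with catalogue values, up to the cap."""
    slots = [s for s in CATALOGUE if f"<{s}>" in template]
    combos = list(itertools.product(*(CATALOGUE[s] for s in slots)))
    rng.shuffle(combos)
    examples = []
    for combo in combos[: max_examples(template)]:
        utterance = template
        for slot, value in zip(slots, combo):
            utterance = utterance.replace(f"<{slot}>", value)
        examples.append(utterance)
    return examples


if __name__ == "__main__":
    for u in resample("play the <ArtistName> song <SongName>", random.Random(0)):
        print(u)
```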
The model-training process began with a BERT-based language model, which was pretrained on all three languages using unlabeled data and the standard language-modeling objective. That is, words of sentences were randomly masked out, and the model learned to predict the missing words from those that remained. In this stage, the NLU team augmented the Arabic dataset with data translated from English by Amazon Translate.

Then the researchers trained the model to perform NLU tasks by fine-tuning it on a large corpus of annotated French and English data — that is, utterances whose intents and slots had been labeled. The idea was to use the abundant data in those two languages to teach the model some general principles of NLU processing, which could then be transferred to a model fine-tuned on the sparser labeled Arabic data.

Finally, the model was fine-tuned again on equal amounts of labeled training data in all three languages, to ensure that fine-tuning on Arabic didn’t diminish the model’s performance on the other two languages.


#### **TTS**


Whereas diacritics can get in the way of NLU, they’re indispensable to TTS: the Alexa speech synthesizer needs to know precisely which vowel sounds to produce as output. So when the Arabic TTS model gets a text string from one of Alexa’s functions — such as confirmation of a music selection from the music player — it runs it through a diacritizer, which adds the full set of diacritics back in.

![image.png](https://dev-media.amazoncloud.cn/626acf67a00348089e26da14140d51eb_image.png)

*L to R:* Software engineer Tarek Badr, applied scientist Fan Yang, and language engineer Merouane Benhassine

The TTS researchers, led by software engineer Tarek Badr and applied scientist Fan Yang, trained the diacritizer largely on MSA texts, with some supplemental data in Khaleeji dialects, which the Alexa team compiled itself. Inferring the correct diacritics depends on the whole utterance context: as an analogy, whether “crw” represents “craw”, “crew”, or “crow” could usually be determined from context. So the diacritizer model has an attention mechanism that attends over the complete utterance.

Outputs that should be in Khaleeji Arabic then pass through a module that converts the diacritics to representations of the appropriate short-vowel sounds, along with any other necessary transformations. This is a rule-based system that language engineer Merouane Benhassine and his colleagues built to capture the predictable relationships between MSA and Khaleeji Arabic.

The text-to-speech model itself is a neural network, which takes text as input and outputs acoustic waveforms. It takes advantage of the Amazon TTS team’s recent work on [expressive speech](https://www.amazon.science/blog/how-to-build-highly-expressive-speech-models) to endow the Arabic TTS model with a lively, conversational style by default.
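Taken together, the Arabic TTS front end amounts to a short pipeline: diacritize the response, convert MSA diacritics to their Khaleeji realization for informal speech, then synthesize. The sketch below shows only that control flow, with stubs standing in for the neural diacritizer, the rule-based converter, and the speech synthesizer.

```python
# Control-flow sketch of the TTS front end described above. Every component here
# is a stub: the real diacritizer is a neural model with attention over the whole
# utterance, the converter is the team's rule-based MSA-to-Khaleeji system, and
# the synthesizer is a neural text-to-speech model.

def diacritize(text: str) -> str:
    """Stub for the neural diacritizer: would restore the full set of diacritics."""
    return text  # placeholder


def msa_to_khaleeji(diacritized: str) -> str:
    """Stub for the rule-based converter: would map MSA diacritics to Khaleeji vowels."""
    return diacritized  # placeholder


def synthesize(text: str) -> bytes:
    """Stub for the neural TTS model: would return an acoustic waveform."""
    return b""  # placeholder


def arabic_tts(text: str, style: str) -> bytes:
    """Run a response through the front end before synthesis."""
    diacritized = diacritize(text)
    if style == "khaleeji":          # informal responses, e.g. alarm confirmations
        diacritized = msa_to_khaleeji(diacritized)
    return synthesize(diacritized)   # MSA is used as-is for formal responses


if __name__ == "__main__":
    arabic_tts("تم ضبط المنبه", style="khaleeji")  # "The alarm has been set"
```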
A new Alexa language is never simply a new language: it’s a new language targeted to a specific new locale, because customer needs and linguistic practices vary by country. Going forward, the Alexa AI team will continue working to expand Arabic to additional locales — even as it continues to extend Alexa to whole new language families.

ABOUT THE AUTHOR


#### **[Larry Hardesty](https://www.amazon.science/author/larry-hardesty)**


Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.