How Amazon is using self-service to democratize AI

{"value":"In June, one of us (Prem) wrote a blog post for Amazon Science arguing that Alexa, and more generally, the field of AI, is entering a new “++[age of self](https://www.amazon.science/blog/alexa-enters-the-age-of-self)++” in which it will become more self-aware, ++[self-learning](https://www.amazon.science/tag/self-learning)++, and self-service.\n\nAmazon’s progress toward self-service AI was on display today at a virtual event in which we unveiled our new lineup of devices and services.\n\nAmong these were three self-service features for Alexa-enabled devices: preference teaching, Custom Sound Event Detection, and, for camera-based Ring devices, Custom Event Alerts.\n\n- Preference teaching allows customers to explicitly teach Alexa which skills should handle particular types of requests, which sports teams they follow, and which cuisines they prefer;\n- Custom Sound Event Detection allows customers to teach Alexa to recognize particular household sounds — a doorbell sound, for instance — and to initiate particular Alexa Routines when it hears them;\n- Ring Custom Event Alerts let the customer designate a particular region of the image captured by a Ring Video Doorbell camera or Spotlight camera as a region of interest and teach the camera to discriminate different states for that region — a shed door as either open or shut, for instance.\n\n![image.png](https://dev-media.amazoncloud.cn/ba06e8aff93b43fcb48f0aff82341b39_image.png)\n\nWith Ring Custom Event Alerts, the customer specifies a region of interest within a camera’s field of view (purple rectangle, left) and then provides five examples (right) of each of two different states of affairs within that region.\n\nAll of these are examples of ways in which Amazon is working to democratize AI by enabling customers to configure machine learning systems as they see fit, without the need for expertise in programming or machine learning.\n\n#### **Preference teaching**\n\nPreference teaching allows customers to teach Alexa their preferences using natural language — for instance, “Alexa, I’m a big fan of the Patriots”, or “Alexa, I love Thai food”.\n\nIt’s an extension of ++[interactive teaching by customers](https://www.amazon.science/blog/new-alexa-features-interactive-teaching-by-customers)++, which we launched last year. With preference teaching, a salient difference is that customers initiate the teaching, whereas previously, Alexa would initiate it in response to a command it could not understand.\n\nAt the core of both applications are two models: a natural-language-understanding (NLU) model that identifies the user's intent, along with entity names and entity types, and a dialogue management model that manages the interaction with the customer and decides what actions to take.\n\nAn important technical advance this year is that the dialogue management model is, like the NLU model, a deep-neural-network model. We trained it using ++[Alexa Conversations](https://www.amazon.science/blog/science-innovations-power-alexa-conversations-dialogue-management)++, which allows the designer to simply provide examples of the types of dialogues the model should be able to handle. 
#### **More coverage of devices and services announcements**

- “[Unlocking AI for everyone](https://www.aboutamazon.com/news/devices/unlocking-ai-for-everyone)”
- “[Astro’s Intelligent Motion brings state-of-the-art navigation to the home](https://www.amazon.science/blog/astros-intelligent-motion-brings-state-of-the-art-navigation-to-the-home)”
- “[A more useful way to measure robotic localization error](https://www.amazon.science/blog/a-more-useful-way-to-measure-robotic-localization-error)”
- “[The science behind visual ID](https://www.amazon.science/blog/the-science-behind-visual-id)”

At launch, preference teaching will support three classes of preferences: preferred skills for handling weather requests, preferred sports teams, and food preferences. Once our model has identified a customer preference, it searches the relevant knowledge base for a match. If necessary, it will follow up with a request for more information. For instance, if the customer expresses a preference for “the Giants”, and Alexa finds more than one matching name in the sports knowledge base, it might ask, “Did you mean the New York Giants or the San Francisco Giants?”
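A rough sketch of that lookup-and-disambiguate step, with a four-entry list standing in for the sports knowledge base (the entries and the function name are made up for illustration):

```python
# Hypothetical sketch: resolve a stated team preference against a toy knowledge base
# and fall back to a clarifying question when the mention is ambiguous.
SPORTS_KB = ["New York Giants", "San Francisco Giants", "New England Patriots", "Boston Celtics"]


def resolve_team(mention: str) -> str:
    matches = [team for team in SPORTS_KB if mention.lower() in team.lower()]
    if len(matches) == 1:
        return f"Preference saved: {matches[0]}"
    if matches:
        options = " or ".join(f"the {m}" for m in matches)
        return f"Did you mean {options}?"  # dialogue manager asks a follow-up question
    return "Sorry, I couldn't find that team."


print(resolve_team("Giants"))     # ambiguous, so Alexa would ask a follow-up
print(resolve_team("Patriots"))   # unique match, resolved directly
```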
In ongoing research, Alexa AI scientists are working to add common sense to the preference extraction model, so that, for instance, if a customer says, “I don’t eat meat,” Alexa will employ commonsense reasoning to interpret that as a preference for vegetarian restaurants and recipes.

#### **Custom Sound Event Detection and Ring Custom Event Alerts**

Custom Sound Event Detection and Ring Custom Event Alerts take a similar approach, based on [few-shot learning](https://www.amazon.science/tag/few-shot-learning), or learning a new classification task from just a handful of examples.

With Custom Sound Event Detection, the customer provides six to ten examples of a new sound — say, the doorbell ringing — when prompted by Alexa. Alexa uses these samples to build a detector for the new sound. Subsequently, when Alexa detects the sound, it will execute a routine set by the customer — say, flashing the lights in the farthest room of the house.

Similarly, with Ring Custom Event Alerts, the customer uses a cursor or, on a touch screen, a finger to outline a region of interest — say, the door of a shed — within the field of view of a particular camera.

Then, by sorting through historical image captures from that camera, the customer identifies five examples of a particular state of that region — say, the shed door open — and five examples of an alternative state — say, the shed door closed. Ring Custom Event Alerts can be configured to send the customer an alert if the state of the region of interest changes.
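As a purely hypothetical illustration of what that enrollment step produces, the sketch below crops the customer-drawn region of interest out of historical frames with simple array slicing and files five crops under each of the two states. The field names and shapes are assumptions, not Ring’s actual data format.

```python
import numpy as np
from dataclasses import dataclass, field


@dataclass
class RegionOfInterest:
    top: int
    left: int
    height: int
    width: int

    def crop(self, frame: np.ndarray) -> np.ndarray:
        """Cut the customer-drawn rectangle out of a full frame."""
        return frame[self.top:self.top + self.height, self.left:self.left + self.width]


@dataclass
class CustomEventEnrollment:
    roi: RegionOfInterest
    state_a_examples: list = field(default_factory=list)  # e.g. five "shed door open" crops
    state_b_examples: list = field(default_factory=list)  # e.g. five "shed door closed" crops


# Toy usage: crop the same region from ten historical frames, five per state.
roi = RegionOfInterest(top=40, left=100, height=64, width=64)
frames = [np.random.rand(480, 640) for _ in range(10)]  # stand-ins for stored captures
enrollment = CustomEventEnrollment(roi=roi)
enrollment.state_a_examples = [roi.crop(f) for f in frames[:5]]
enrollment.state_b_examples = [roi.crop(f) for f in frames[5:]]
```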
In both cases, we train neural models on classification tasks — audio classification in one case, video in the other. The models are encoder-decoder models, meaning they have encoder modules that embed inputs, or convert them into vector representations. On the basis of those embeddings, the decoders make predictions.

![image.png](https://dev-media.amazoncloud.cn/cc5e8b320a81446f90921bdbf07e0158_image.png)

A two-dimensional projection of embeddings produced through Custom Sound Event Detection.

For event detection — whether audio or visual — we use the encoders only. When examples of the same type of event pass through the encoder, the resulting embeddings define a region in the embedding space. Recognizing later instances of the same event is just a matter of gauging their embeddings’ distance from those of the examples.
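Here is a minimal sketch of that idea, with a toy function standing in for the trained encoder: the enrollment examples are averaged into a prototype embedding, and a new clip counts as the event when its embedding falls within a distance threshold of that prototype. The encoder, features, and threshold are assumptions for illustration, not the production system.

```python
import numpy as np


def encode(clip: np.ndarray) -> np.ndarray:
    """Stand-in for the pretrained encoder: any fixed mapping from a clip to a vector."""
    return np.array([clip.mean(), clip.std(), np.abs(np.diff(clip)).mean()])


def build_prototype(example_clips: list) -> np.ndarray:
    """Average the embeddings of the customer's enrollment examples."""
    return np.mean([encode(c) for c in example_clips], axis=0)


def detect(clip: np.ndarray, prototype: np.ndarray, threshold: float = 0.5) -> bool:
    """Flag the event if the new clip's embedding lies close enough to the prototype."""
    return float(np.linalg.norm(encode(clip) - prototype)) < threshold


# Toy usage: enroll on six synthetic "doorbell" clips, then test a new clip.
rng = np.random.default_rng(0)
enrollment = [np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.standard_normal(400) for _ in range(6)]
prototype = build_prototype(enrollment)
new_clip = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.standard_normal(400)
print(detect(new_clip, prototype))  # True: its embedding falls near the enrolled examples
```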
To train the encoder for Custom Sound Event Detection, the Alexa team took advantage of [self-supervised learning](https://www.amazon.science/tag/self-supervised-learning). In the first stage of training, we trained the network simply to reproduce the input signal: that is, from the embedding, the decoder had to reconstruct the encoder’s input. This enabled us to develop a strong encoder using only unlabeled data.

Then we fine-tuned the model on labeled data — sound recordings labeled by type. This enabled the encoder to learn finer distinctions between different types of sounds. Ring Custom Event Alerts uses the same approach, in that case leveraging publicly available data.
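The sketch below shows the shape of that two-stage recipe, written in PyTorch under the assumption of small fully connected layers and random tensors in place of real audio features; it illustrates the training loops only and is not the production model.

```python
import torch
import torch.nn as nn

FEAT_DIM, EMB_DIM, NUM_CLASSES = 64, 16, 5

encoder = nn.Sequential(nn.Linear(FEAT_DIM, 32), nn.ReLU(), nn.Linear(32, EMB_DIM))
decoder = nn.Sequential(nn.Linear(EMB_DIM, 32), nn.ReLU(), nn.Linear(32, FEAT_DIM))

# Stage 1: self-supervised pretraining -- reconstruct the input from the embedding.
unlabeled = torch.randn(256, FEAT_DIM)  # stand-in for unlabeled audio features
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(50):
    recon = decoder(encoder(unlabeled))
    loss = nn.functional.mse_loss(recon, unlabeled)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: supervised fine-tuning -- keep the encoder, add a classification head,
# and train on (far fewer) labeled recordings so the embeddings separate sound types.
labeled_x = torch.randn(64, FEAT_DIM)   # stand-in for labeled recordings
labeled_y = torch.randint(0, NUM_CLASSES, (64,))
classifier = nn.Linear(EMB_DIM, NUM_CLASSES)
opt2 = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
for _ in range(50):
    logits = classifier(encoder(labeled_x))
    loss = nn.functional.cross_entropy(logits, labeled_y)
    opt2.zero_grad()
    loss.backward()
    opt2.step()

# At inference time only the fine-tuned encoder is kept, as described above.
```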
Preference teaching and custom event detection are just a few of the ways in which we are working to democratize AI. We continue to advance the science of self-service to make AI more customizable and useful for everyone.

ABOUT THE AUTHORS

#### **[Prem Natarajan](https://www.amazon.science/author/prem-natarajan)**

Prem Natarajan is the Alexa AI vice president of natural understanding.

#### **[Manoj Sindhwani](https://www.amazon.science/author/manoj-sindhwani)**

Manoj Sindhwani is the Amazon vice president for Alexa Speech.