The science behind an Amazon Echo feature that helped save a puppy

![image.png](https://dev-media.amazoncloud.cn/4673cfd71302421598df41f3d3366148_image.png)

Jonathan and Kathy, an Orlando-based couple, were able to save their French bulldog, Cooper, and prevent fire damage to their house thanks to a smart mobile alert from Alexa Guard.

Jonathan and Kathy were out visiting a neighbor a few days before Christmas 2020 when Jonathan got an unusual alert from their Amazon Echo. Using the [Alexa App](https://www.amazon.com/b?ie=UTF8&node=18354642011), they dropped in on their Echo device, which allowed them to hear what was happening in their home in real time.

"You could hear things crackling and popping, and the smoke alarm was going off like crazy," Jonathan [told About Amazon](https://www.aboutamazon.com/news/devices/alexa-and-ring-devices-help-save-puppy-from-house-fire). He then rushed home. "Upon rolling into the neighborhood, it was very smoky," Jonathan said. "I pulled up into the driveway, opened the garage, and smoke just started billowing out. I went into our house, and more black smoke poured out. It was so thick you couldn't see six inches in front of your face. The only thing I could think of was Cooper."

<video src="https://dev-media.amazoncloud.cn/2b8d0d7190a94b10a206783d18f4ab3b_Rec%200001.mp4" controls="controls"></video>

**How an Amazon Echo alert helped save Cooper the dog**

Jonathan managed to get Cooper, the couple's French bulldog, from his pen as smoke billowed from the house. The fire department was also able to extinguish the fire and minimize damage. Neither outcome might have occurred, however, if it weren't for a Smart Alert mobile notification from Alexa.

The feature that alerted Jonathan is called Alexa Guard, a smart-home capability that relies on [acoustic event detection](https://www.amazon.science/tag/acoustic-event-detection) (AED).
AED is an emerging field that focuses on training models to detect and process sounds.

"The technology behind Alexa Guard was developed in an effort to augment the utility of Echo devices," said [Angel Calvo](https://www.linkedin.cn/uas/login?session_redirect=https%3A%2F%2Fwww.linkedin.cn%2Finjobs%2Fin%2Fcalvoangel%2F), director of software for the Alexa Smart Home team.

#### **How Guard works**

When set to away mode, Guard is trained to identify sounds related to home security and safety events, like a smoke alarm sounding, and to distinguish those sounds from something more prosaic, like a microwave beeping.

> I am so glad this couple and their pet are OK - We built [#AlexaGuard](https://twitter.com/hashtag/AlexaGuard?src=hash&ref_src=twsrc%5Etfw) with this customer use case in mind, so learning that we helped with Guard to save this puppy from a fire, emphasis why I love my job... kudos to the Alexa Guard team! [https://t.co/rX48tbCNko](https://t.co/rX48tbCNko)
> — Angel Calvo (@ANGELCALVOS) January 6, 2021

The [detection service relies on two models](https://www.amazon.science/blog/identifying-sounds-in-audio-streams) applied in a two-step system: one on the device, another in the cloud.

The first step runs on the Echo device itself: the on-device detection converts the audio input into features that feed into a recurrent neural network (RNN), a type of deep learning model that learns from sequential or time-series data.

The device uses long short-term memory (LSTM), a type of RNN that has delivered significant improvements in speech recognition and offers high accuracy, "particularly when it's applied to sequential data," said Ming Sun, applied science manager for AED. This is especially important for determining when a specific sound occurred.

The Echo must also occasionally be able to distinguish between multiple sounds at once.
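As a rough sketch of the two-step flow described above (all function names, features, and thresholds here are illustrative stand-ins, not Amazon's actual models), most audio is rejected cheaply on the device, and only likely triggers are escalated to the heavier cloud check:

```python
# Toy sketch of the two-step detection system (illustrative only).

def frame_features(frame):
    # Stand-in for real acoustic features (e.g. log-mel energies):
    # here, just the mean absolute amplitude of the frame.
    return sum(abs(s) for s in frame) / len(frame)

def on_device_score(frames, decay=0.8):
    """Tiny recurrent scorer: each frame's feature is folded into a
    running state, loosely mimicking how an RNN carries context."""
    state = 0.0
    scores = []
    for f in frames:
        state = decay * state + (1 - decay) * frame_features(f)
        scores.append(state)
    return scores

def cloud_verify(frames, threshold=0.5):
    # Stand-in for the heavier cloud model: a stricter check over the
    # whole clip rather than frame by frame.
    return max(frame_features(f) for f in frames) > threshold

def detect(frames, device_threshold=0.3):
    # Step 1: cheap on-device gate; most audio stops here and is
    # never sent anywhere.
    if max(on_device_score(frames)) < device_threshold:
        return False
    # Step 2: only escalated audio reaches the cloud verifier.
    return cloud_verify(frames)
```

In a quiet house, nearly every audio window fails the cheap on-device gate and never leaves the device; the real system likewise feeds learned acoustic features into an LSTM rather than this toy running average.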
Layered over the RNN is a multi-task learning framework trained to detect multiple events. Its multiple output layers work like branches off the base neural network, each trained to recognize a different event in the captured audio.

This helps Echo devices detect multiple concurrent incidents that customers have selected for detection — footsteps and glass breaking, for example.

Layering multiple output layers over a single neural network also makes the detection system in Echo devices very scalable; the device can be trained to recognize new sounds with minimal additions.

"Without this design, we would need to update the whole model every time we update one existing sound event or add a new sound event," Sun said. "Now, we only have to update the output layer for a target existing event, or add a new output layer for a new event."

When one of the sounds a customer has selected for detection triggers Guard on the Echo device, that audio is sent to the cloud for the second verification step to confirm the on-device detection. The cloud runs a much more powerful recognition system to filter out false triggers that might be linked to ambient noise around the home, Sun said.

If the validation process confirms the sound is one the device is actively monitoring for, the customer gets a notification in their Alexa app along with an audio clip of the detection.

#### **Getting creative to teach Guard sounds**

Because home security events are relatively rare — and the data sets for these audio events are quite meager — semi-supervised learning and self-supervised learning have been critical as Sun's team expands and refines Guard's capabilities.

"Semi-supervised learning relies on small sets of annotated training data to leverage larger sets of unannotated data," Sun said.
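A minimal sketch of that pseudo-labeling idea, with a made-up seed scorer and confidence thresholds rather than Amazon's actual pipeline, might look like:

```python
# Toy sketch of semi-supervised pseudo-labeling (illustrative only).
# A seed model trained on a few labeled clips assigns labels to
# unlabeled clips it is confident about, growing the training set
# without additional human annotation.

def seed_score(clip):
    # Stand-in for a seed model trained on the small labeled set:
    # here, just the peak amplitude of the clip.
    return max(clip)

def pseudo_label(unlabeled, hi=0.8, lo=0.2):
    """Keep only clips the seed model is confident about; uncertain
    clips stay unlabeled rather than polluting the training set."""
    labeled = []
    for clip in unlabeled:
        s = seed_score(clip)
        if s >= hi:
            labeled.append((clip, "alarm"))
        elif s <= lo:
            labeled.append((clip, "background"))
        # clips with lo < s < hi are skipped entirely
    return labeled
```

For example, `pseudo_label([[0.9, 0.95], [0.05, 0.1], [0.5, 0.4]])` keeps the confident first two clips and discards the ambiguous third; the thresholds trade label coverage against label noise.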
"While self-supervised learning utilizes larger sets of unannotated data, with training targets derived from the data itself in an unsupervised way — no human annotations."

"Another technique is to detect for a longer time and aggregate events to be more accurate," Sun said. To improve accuracy on sounds with repeating patterns, the detectors look for shorter repeating patterns, such as an appliance beeping. This allows Guard to distinguish between that type of repetitive beeping and an alarm, which can run for 30 seconds or longer. Guard can also detect the difference between a smoke alarm and a carbon monoxide alarm, and notify customers of the specific risk.

#### **"Since the very beginning, it's been critical to build accurate models that consume less resources. We apply lots of optimization so that this system can be as small and efficient as possible."**

— Ming Sun

Guard Plus, a [subscription service](https://www.amazon.com/b?ie=UTF8&node=18021383011) launched in January 2021, detects sounds that could indicate an intruder — like footsteps, a door closing, or glass breaking — and can send a Smart Alert mobile notification or play a siren on the Echo device. Alexa can also notify customers about the sound of smoke alarms or carbon monoxide alarms. Because ambient sounds in places like dense urban environments or apartment complexes can make this tricky, the team added a feature that lets customers adjust the sensitivity to accommodate the noise in their home environments.

The limited annotated data the Guard team had access to has also required them to get creative. Glass breaking, for example, is a rare sound: it's over in two to three seconds, and it varies with the type of glass. To bolster their data set, the Guard team rented a warehouse and contracted a construction crew to break hundreds of windows: single pane, double pane, different compositions.
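Sun's "detect for a longer time and aggregate" technique, described earlier in this section, can be sketched as a toy classifier (with an illustrative threshold, not Amazon's) that separates a short burst of appliance beeps from an alarm sustained for 30 seconds or more:

```python
# Toy sketch of aggregating per-second detections over time
# (illustrative threshold; not Amazon's actual logic). A microwave
# beeps briefly and stops, while an alarm repeats for 30+ seconds,
# so the length of the longest run of detections separates them.

def classify_beeping(active_per_second, alarm_min_seconds=30):
    """active_per_second: list of booleans, one per second, True when
    a beep-like sound was detected during that second."""
    longest = run = 0
    for active in active_per_second:
        run = run + 1 if active else 0
        longest = max(longest, run)
    if longest >= alarm_min_seconds:
        return "alarm"
    return "appliance_beep" if longest > 0 else "quiet"
```

Here three seconds of beeping followed by silence classifies as an appliance, while 35 sustained seconds classifies as an alarm; aggregating over a longer window costs a little latency but sharply reduces misclassification.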
That exercise gave the team an authentic data set for the initial model — also called a seed model — before deploying Guard to beta testers.

All of the strategies Sun's team employed to optimize the recognition system on Echo devices have minimized the error rate.

This is where the powerful AED models in the cloud — Guard's second validation step — are so essential. The chances of a false alarm are much smaller when audio is processed through both the local and cloud systems, Sun said. And, he emphasized, audio is sent to the cloud only after running it through a device-side model, to protect privacy.

"Since the very beginning, it's been critical to build accurate models that consume less resources," Sun said. "We apply lots of optimization so that this system can be as small and efficient as possible."

Edge devices like Echo send data to the cloud only when it's essential. In the case of Guard, that means the majority of the audio data is processed and discarded by the neural network on the device; only potential triggers make it to the cloud. For those events, customers can view, listen to, and delete the audio that Guard detects, directly from their Guard History in the Alexa app or from the Alexa Privacy Settings page.

ABOUT THE AUTHOR

#### **Staff writer**