New sound detection approach improves on state of the art

{"value":"\nKnowledge distillation technique for shrinking neural networks yields relative performance increases of up to 122%.\n\nSound detection is a popular application of today’s smart speakers. Alexa customers who activate Alexa Guard when they leave the house, for instance, receive notifications if their Alexa-enabled devices detect sounds such as glass breaking or smoke detectors going off while they’re away.\n\nSound detection — or, technically, acoustic-event detection (AED) — needs to run on-device: a home security application, for example, can’t miss a smoke alarm because of a momentary loss of Internet connectivity. \n\nA popular way to fit AED models on-device is to use knowledge distillation, in which a machine learning model with a small memory footprint is trained to reproduce the outputs of a more powerful but also much larger model.\n\nAt this year’s Interspeech, we presented a ++[new approach](https://www.amazon.science/publications/intra-utterance-similarity-preserving-knowledge-distillation-for-audio-tagging)++ to knowledge distillation for AED systems. In tests, we compared our model to both a baseline model with no knowledge distillation and a model using a state-of-the-art knowledge distillation technique. On a standard metric called area under the precision-recall curve (AUPRC), our model improved on the earlier knowledge distillation model by 27% to 122%, relative to the baseline.\n\nOur technique works by exploiting repetitions in the acoustic signal, which are common in the types of sounds that AED systems are typically trained to detect: the sounds of smoke detector alarms or barking dogs, for instance, have more-or-less recurrent acoustic patterns.\n\n![image.png](https://dev-media.amazoncloud.cn/129a3298894e4a9896e02e2ee891daad_image.png)\n\nThe spectrogram of an emergency vehicle siren, which maps power fluctuations in different frequency bands over time. The repetition in the signal is clearly visible (pink and gold lines).\n\nFROM \"INTRA-UTTERANCE SIMILARITY PRESERVING KNOWLEDGE DISTILLATION FOR AUDIO TAGGING\"\n\nWhile our system did deliver its greatest improvement over baseline on such repetitive signals, it also improved performance on loud, singular sounds such as engine and machinery impacts.\n\nDeep neural networks, like the ones used in most AED models, are arranged into layers; input data is fed to the bottom layer, which processes it and passes the results to the next layer, which processes them and passes the results to the next layer, and so on.\n\nPast work has improved knowledge distillation by using a technique called similarity-preserving knowledge distillation, which relies on similarities between the outputs of different network layers on training examples that share a label. \n\nFor instance, sounds of breaking glass have certain acoustic characteristics not shared by sounds of barking dogs, and the layers’ outputs should reflect that. With similarity-preserving knowledge distillation, similarities inferred by the teacher model help guide the training of the student model.\n\nWe vary this approach to enforce similarities between the outputs of network layers for the same training example. That is, the outputs of the network layers should reflect the repetitions in the input signal. We thus call our approach intra-utterance similarity-preserving (IUSP) knowledge distillation.\n\nWe can enforce similarity between whichever layers of the teacher network — the larger network — and the student network — the smaller network — we want. 
At this year’s Interspeech, we presented a [new approach](https://www.amazon.science/publications/intra-utterance-similarity-preserving-knowledge-distillation-for-audio-tagging) to knowledge distillation for AED systems. In tests, we compared our model to both a baseline model with no knowledge distillation and a model using a state-of-the-art knowledge distillation technique. On a standard metric called area under the precision-recall curve (AUPRC), our model improved on the earlier knowledge distillation model by 27% to 122%, relative to the baseline.

Our technique works by exploiting repetitions in the acoustic signal, which are common in the types of sounds that AED systems are typically trained to detect: the sounds of smoke detector alarms or barking dogs, for instance, have more-or-less recurrent acoustic patterns.

![image.png](https://dev-media.amazoncloud.cn/129a3298894e4a9896e02e2ee891daad_image.png)

The spectrogram of an emergency vehicle siren, which maps power fluctuations in different frequency bands over time. The repetition in the signal is clearly visible (pink and gold lines).

FROM "INTRA-UTTERANCE SIMILARITY PRESERVING KNOWLEDGE DISTILLATION FOR AUDIO TAGGING"

While our system delivered its greatest improvement over the baseline on such repetitive signals, it also improved performance on loud, singular sounds such as engine and machinery impacts.

Deep neural networks, like the ones used in most AED models, are arranged into layers: input data is fed to the bottom layer, which processes it and passes the results to the next layer, which processes them and passes the results to the next layer, and so on.

Past work has improved knowledge distillation with a technique called similarity-preserving knowledge distillation, which relies on similarities between the outputs of different network layers on training examples that share a label.

For instance, sounds of breaking glass have certain acoustic characteristics not shared by sounds of barking dogs, and the layers’ outputs should reflect that. With similarity-preserving knowledge distillation, similarities inferred by the teacher model help guide the training of the student model.

We vary this approach by enforcing similarities between the outputs of network layers for the same training example. That is, the outputs of the network layers should reflect the repetitions in the input signal. We thus call our approach intra-utterance similarity-preserving (IUSP) knowledge distillation.

We can enforce similarity between whichever layers of the teacher network (the larger network) and the student network (the smaller network) we want. For a given layer of the teacher model, we produce a matrix that maps its outputs at successive time steps of the input signal against themselves. The values in the matrix cells indicate the correlation between the layer’s outputs at different time steps.

![image.png](https://dev-media.amazoncloud.cn/6cb48434992a490ab8c3b3bdf06b90ca_image.png)
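The sketch below shows one way to compute such a matrix from a layer’s activations, assuming they are arranged as (batch, time steps, channels). Treating the correlation as a cosine similarity between L2-normalized feature vectors is an illustrative assumption, not necessarily the paper’s exact normalization.

```python
import torch
import torch.nn.functional as F

def self_similarity(features):
    """Intra-utterance self-similarity matrix for one layer's output.

    features: (batch, time, channels) activations. Entry [b, i, j] of the
    result measures how similar the layer's outputs at time steps i and j
    are, so repetitions in the input show up as off-diagonal structure.
    """
    # L2-normalize each time step's feature vector so that the dot
    # product below is a cosine similarity in [-1, 1].
    normed = F.normalize(features, dim=-1)
    # (batch, time, time) matrix of pairwise similarities across time.
    return torch.bmm(normed, normed.transpose(1, 2))
```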
During training, we evaluate the student model not only on how well its final output matches that of the teacher model but also on how well the self-correlation matrices of its normalized outputs match the teacher’s.

Since the goal of knowledge distillation is to shrink the machine learning model, the layers and intermediate features of the student model are often smaller than those of the teacher model: they have fewer processing nodes.

In that case, we use bilinear interpolation to make the student model’s self-correlation matrices the same size as the teacher’s. That is, we insert additional rows and columns into the matrix, and the value of each added cell is an interpolation between the values of the adjacent cells in the horizontal and vertical directions.
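Putting the pieces together, a hedged sketch of the IUSP training term for one layer pair might look like the following. It reuses the self_similarity helper from the previous sketch; resizing the student’s matrix (rather than the teacher’s) with bilinear interpolation follows the description above, but the mean-squared-error distance and any weighting are assumptions for illustration.

```python
import torch.nn.functional as F

def iusp_loss(student_feats, teacher_feats):
    """Sketch of the intra-utterance similarity-preserving term for one
    teacher-student layer pair (an illustration, not the verbatim recipe)."""
    s_sim = self_similarity(student_feats)            # (batch, Ts, Ts)
    t_sim = self_similarity(teacher_feats).detach()   # teacher is frozen
    if s_sim.shape[-1] != t_sim.shape[-1]:
        # Bilinear interpolation to match the teacher's matrix size;
        # F.interpolate expects (batch, channels, H, W), hence the
        # temporary channel axis.
        s_sim = F.interpolate(s_sim.unsqueeze(1), size=t_sim.shape[-2:],
                              mode="bilinear", align_corners=False).squeeze(1)
    return F.mse_loss(s_sim, t_sim)
```

In practice, a term like this would be computed for each chosen layer pair and added, with a tunable weight, to the ordinary distillation loss on the final outputs.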
In our experiments, we used a standard benchmark data set that features eight classes of sound, including alarm sounds, dogs barking, impact sounds, and human speech.

As a baseline model, we used a standard AED network with no knowledge distillation. To assess our model, we also compared it to a model trained using similarity-preserving knowledge distillation.

We measured the models’ performance using area under the precision-recall curve, which represents the trade-off between false positives and false negatives, and we experimented with student models of four different sizes. We assessed the knowledge distillation models according to their degree of improvement over the baseline model.

Compared to the other knowledge distillation model, our model’s biggest improvement (a 122% increase in relative AUPRC) came with the smallest student model. The smallest improvement (27% relative) came with the largest student model. As the purpose of knowledge distillation is to shrink the student model, this indicates that our approach could be of use in real-world settings.

About the Author

#### **[Chieh-Chi Kao](https://www.amazon.science/author/chieh-chi-kao)**

Chieh-Chi Kao is an applied scientist in the Alexa Speech group at Amazon.