Alexa AI co-organizes special sessions at Interspeech

{"value":"At this year's ++[Interspeech](https://www.amazon.science/conferences-and-events/interspeech-2022)++ conference, in September, Alexa AI is co-organizing four special sessions — themed sessions within the main conferences — all of which are currently seeking paper submissions.\n\nOne session is on machine learning and signal processing in the context of multiple networked smart devices. This session will address topics such as synchronization, arbitration (deciding which device should respond to a query), and privacy.\n\nAnother Interspeech session is on ++[inclusive and fair speech technologies](https://sites.google.com/view/fair-speech-interspeech22/)++. Algorithmic bias has been well studied in natural-language processing and computer vision but less so in speech. Possible paper topics include methods of bias analysis and mitigation, dataset creation, and ASR for atypical speech.\n\nThe third session is on ++[trustworthy speech processing](https://trustworthyspeechprocessing.github.io/)++, which focuses on the development of models whose goals go beyond accuracy to incorporate privacy, interpretability, fairness, ethics, bias mitigation, and related areas.\n\nFinally, the fourth special session is on ++[predicting the intelligibility of speech](https://claritychallenge.github.io/interspeech2022_siphil/)++ — both the raw acoustic signal and the signal generated by hearing aids — to hearing-impaired listeners. This session is related to the ++[Clarity Challenge](https://www.amazon.science/blog/five-year-clarity-challenge-to-help-improve-hearing-aids)++, a five-year challenge to improve hearing aids that Alexa AI is participating in.\n\nThere’s more information about the individual sessions below. Submissions to the special sessions should go through the main-conference ++[submission portal](https://interspeech2022.org/forauthor/submissions.php)++. The submission deadline is **March 21**.\n\n#### **Challenges and opportunities for signal processing and machine learning for multiple smart devices**\n\nThe purpose of this session is to promote research in multiple-device signal processing and machine learning by bringing together industry and academic experts to discuss topics that include but are not limited to\n\n- Multiple-device audio datasets\n- Automatic speech recognition \n- Keyword spotting\n- Device arbitration (i.e., which device should respond to the user’s inquiry)\n- Speech enhancement: de-reverberation, noise reduction, echo reduction \n- Source separation\n- Speaker localization and tracking\n- Privacy-sensitive signal processing and machine learning\n\nThe session will collocate top researchers working in the multisensor domain, and even though their specific applications may be different (e.g., enhancement vs. acoustic-event detection), the similarity of the problem space encourages cross-pollination of techniques.\n\n#### **Amazon organizers:**\n\n- ++[Jarred Barber](https://www.linkedin.com/in/jarred-barber-77947458/)++, applied scientist with Alexa AI\n- Gregory Ciccarelli, applied scientist with Alexa AI\n- ++[Israel Cohen](https://israelcohen.com/)++, Amazon Scholar and professor at Technion-Israel Institute of Technology\n- ++[Tao Zhang](https://www.linkedin.com/in/tao-zhang-515bb618/)++, senior manager of applied science with Alexa AI\n\n#### **Inclusive and Fair Speech Technologies**\n\nAlexa AI is co-organizing this session with leading researchers in the field from around the world. 
#### **Amazon organizers:**

- ++[Jarred Barber](https://www.linkedin.com/in/jarred-barber-77947458/)++, applied scientist with Alexa AI
- Gregory Ciccarelli, applied scientist with Alexa AI
- ++[Israel Cohen](https://israelcohen.com/)++, Amazon Scholar and professor at Technion-Israel Institute of Technology
- ++[Tao Zhang](https://www.linkedin.com/in/tao-zhang-515bb618/)++, senior manager of applied science with Alexa AI

#### **Inclusive and Fair Speech Technologies**

Alexa AI is co-organizing this session with leading researchers in the field from around the world. The session will feature a series of oral presentations (or posters with two-minute introductions if more than six papers are accepted) addressing topics that include but are not limited to:

- methods for bias analysis and mitigation, including algorithmic training criteria;
- creating, managing, and sharing datasets for bias quantification and methods for data augmentation, curation, and coding techniques, with an emphasis on user groups not included in standard corpora;
- ASR for atypical speech (e.g., ALS, stroke, deafness, Down syndrome);
- ethical considerations about inclusion, democratization of speech technologies, and making speech interaction seamless for all;
- applications of personalization techniques while fostering fairness (i.e., fairness-aware personalization).

#### **Amazon organizers:**

- ++[Peng Liu](https://www.linkedin.com/in/peng-liu-8604172b/)++, senior machine learning scientist with Alexa AI
- ++[Anirudh Mani](https://www.linkedin.com/in/anirudh-mani-1796934b/)++, applied scientist with Alexa AI
- ++[Tao Zhang](https://www.linkedin.com/in/tao-zhang-515bb618/)++, senior manager of applied science with Alexa AI

#### **Trustworthy Speech Processing**

Given the ubiquity of machine learning systems, it is important to ensure private and safe handling of data. Speech processing presents a unique set of challenges, given the rich information carried in linguistic and paralinguistic content, including speaker traits and interaction and state characteristics. This special session will bring together new and experienced researchers working on trustworthy machine learning and speech processing, and the organizers are seeking novel and relevant submissions from academic and industrial research groups showcasing both theoretical and empirical advances in trustworthy speech processing (TSP).

Topics of interest include but are not limited to:

- Differential privacy (see the sketch after this list)
- Federated learning
- Ethics in speech processing
- Model interpretability
- Quantifying and mitigating bias in speech processing
- New datasets, frameworks, and benchmarks for TSP
- Discovery of, and defense against, emerging privacy attacks
- Trustworthy machine learning in applications of speech processing, such as automatic speech recognition
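As a concrete taste of the first topic, here is a minimal sketch of the Gaussian mechanism as used in DP-SGD-style training: clip a gradient's norm, then add calibrated noise. The clipping bound and noise multiplier are illustrative values, not recommendations, and the sketch is simplified to a single gradient; real DP-SGD noises the sum of clipped per-example gradients over a batch.

```python
import numpy as np

def privatize_gradient(grad: np.ndarray, clip_norm: float = 1.0,
                       noise_multiplier: float = 1.1, rng=None) -> np.ndarray:
    """Clip a gradient's L2 norm and add Gaussian noise (DP's core step).

    clip_norm bounds any one example's influence on the model;
    noise_multiplier sets the privacy/utility trade-off.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))   # L2 clipping
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Toy usage: a gradient with L2 norm 5 is clipped to norm 1, then noised.
print(privatize_gradient(np.array([3.0, -4.0]), rng=np.random.default_rng(0)))
```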
#### **Amazon organizers:**

- ++[Anil Ramakrishna](http://www.linkedin.com/in/anilramakrishna)++, an applied scientist with Alexa AI
- ++[Rahul Gupta](http://www.linkedin.com/in/rahul-gupta-16139818)++, an applied-science manager with Alexa AI

#### **Speech intelligibility prediction for hearing-impaired listeners**

Disabling hearing impairment affects 360 million people worldwide, and one of the greatest challenges for hearing-impaired listeners is understanding speech in the presence of background noise. The development of better hearing aids requires prediction models that can take audio signals and use knowledge of the listener's characteristics (e.g., an audiogram) to estimate the signals' intelligibility. These include models that can estimate the intelligibility of natural signals and models that can estimate the intelligibility of signals that have been processed using hearing-aid algorithms.

![image.png](https://dev-media.amazoncloud.cn/62b090e322a74796a93f758d21e1a8ff_image.png)

The ++[Clarity Prediction Challenge](https://claritychallenge.github.io/clarity_CPC1_doc/)++ (part of the five-year ++[Clarity Challenge](http://claritychallenge.org/)++) provides noisy speech signals that have been processed with a number of hearing-aid signal-processing systems, along with corresponding intelligibility scores, and asks contestants to produce models that can predict intelligibility scores given just the signals, their clean references, and a characterization of each listener's specific hearing impairment. The challenge will remain open until the Interspeech submission deadline, and all entrants are welcome.
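To make the prediction task concrete, here is a minimal baseline sketch in the spirit of intrusive envelope-correlation metrics such as STOI: it correlates band-energy envelopes of the clean reference and the processed signal and weights each band by the listener's audiogram. The band edges, the audiogram-to-weight mapping, and all function names are illustrative assumptions, not the challenge's baseline.

```python
import numpy as np

def band_envelopes(x, sr=16000, frame=512, hop=256,
                   edges=(250, 500, 1000, 2000, 4000)):
    """Short-time energy envelope per frequency band (rows = bands)."""
    n = 1 + (len(x) - frame) // hop
    window = np.hanning(frame)
    spec = np.stack([np.abs(np.fft.rfft(window * x[i * hop:i * hop + frame]))
                     for i in range(n)])                  # (frames, bins)
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    return np.stack([spec[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
                     for lo, hi in zip(edges[:-1], edges[1:])])

def predict_intelligibility(clean, processed, audiogram_db, sr=16000):
    """Audiogram-weighted envelope correlation; higher means clearer.

    audiogram_db: hearing loss in dB per band; a 60 dB loss zeroes that
    band's weight (an illustrative mapping, not a validated model).
    """
    ec, ep = band_envelopes(clean, sr), band_envelopes(processed, sr)
    weights = np.clip(1.0 - np.asarray(audiogram_db) / 60.0, 0.0, 1.0)
    corrs = [np.corrcoef(c, p)[0, 1] for c, p in zip(ec, ep)]
    return float(np.average(corrs, weights=weights + 1e-12))

# Toy usage: compare a clean signal with a noisy "hearing aid" output
# for a listener with a sloping high-frequency loss.
rng = np.random.default_rng(1)
clean = rng.standard_normal(16000)
processed = clean + 0.5 * rng.standard_normal(16000)
print(predict_intelligibility(clean, processed, audiogram_db=[20, 30, 45, 60]))
```

A real entry would replace the hand-set weighting with a model fitted to the challenge's listener scores, but the intrusive structure, comparing processed output against a clean reference under a listener profile, is the same.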
The special session welcomes submissions from entrants to the Clarity Prediction Challenge but is also inviting papers on related topics in hearing impairment and speech intelligibility, including but not limited to:

- Statistical speech modeling for intelligibility prediction
- Modeling energetic and informational noise masking
- Individualizing intelligibility models using audiometric data
- Intelligibility prediction in online and low-latency settings
- Model-driven speech intelligibility enhancement
- New methodologies for intelligibility model evaluation
- Speech resources for intelligibility model evaluation
- Applications of intelligibility modeling in acoustic engineering
- Modeling interactions between hearing impairment and speaking style
- Papers using the data supplied with the Clarity Prediction Challenge

#### **Amazon organizer:**

++[Daniel Korzekwa](https://www.linkedin.com/in/danielkorzekwa/)++, an applied-science manager with Alexa AI

ABOUT THE AUTHOR

#### **Staff writer**