{"value":"On September 23, ++[Jasha Droppo](https://www.amazon.science/author/jasha-droppo)++, Alexa AI senior principal applied scientist, joined Jeff Blankenburg, principal Alexa evangelist, on ++[Alexa & Friends](https://www.amazon.science/tag/alexa-friends)++ and discussed his work with Alexa and the significance neural networks and deep learning have had on the field of speech recognition. Droppo also discussed his career, his work on training large models from data sets, the use of neural networks for acoustic modeling, and deep learning.\n\nDroppo authored or co-authored nine ++[Interspeech 2021](https://www.amazon.science/conferences-and-events/interspeech-2021)++ papers. Those include ++[SynthASR: Unlocking synthetic data for speech recognition](https://www.amazon.science/publications/synthasr-unlocking-synthetic-data-for-speech-recognition)++ and ++[CoDERT: Distilling encoder representations with co-learning for transducer-based speech recognition](https://www.amazon.science/publications/codert-distilling-encoder-representations-with-co-learning-for-transducer-based-speech-recognition)++.\n\nDroppo joined the Alexa AI team in January 2019 and has been working on the role speech recognition plays and how it interacts with other important elements, such as wake word, natural language processing, and text to speech.\n\nDroppo received his PhD in electrical engineering from the University of Washington where he developed a discrete theory for time-frequency representations of audio signals, with the focus on speech recognition.\n\nDroppo has been working in the area of speech recognition for 21 years and is best known for his research in algorithms for speech signal and model-based speech feature enhancements, model-based adaption, large-vocabulary speech recognition, and distributed training of neural networks.\n\nABOUT THE AUTHOR\n\n#### **Staff writer**\n\n\n\n\n\n\n\n\n\n\n\n\n","render":"<p>On September 23, <ins><a href=\"https://www.amazon.science/author/jasha-droppo\" target=\"_blank\">Jasha Droppo</a></ins>, Alexa AI senior principal applied scientist, joined Jeff Blankenburg, principal Alexa evangelist, on <ins><a href=\"https://www.amazon.science/tag/alexa-friends\" target=\"_blank\">Alexa & Friends</a></ins> and discussed his work with Alexa and the significance neural networks and deep learning have had on the field of speech recognition. Droppo also discussed his career, his work on training large models from data sets, the use of neural networks for acoustic modeling, and deep learning.</p>\n<p>Droppo authored or co-authored nine <ins><a href=\"https://www.amazon.science/conferences-and-events/interspeech-2021\" target=\"_blank\">Interspeech 2021</a></ins> papers. 