On September 23, Jasha Droppo, senior principal applied scientist with Alexa AI, joined Jeff Blankenburg, principal Alexa evangelist, on Alexa & Friends to discuss his work with Alexa and the impact neural networks and deep learning have had on the field of speech recognition. Droppo also talked about his career, training large models from data sets, and the use of deep neural networks for acoustic modeling.
Droppo authored or co-authored nine Interspeech 2021 papers, including "SynthASR: Unlocking synthetic data for speech recognition" and "CoDERT: Distilling encoder representations with co-learning for transducer-based speech recognition."
Droppo joined the Alexa AI team in January 2019 and has been working on the role speech recognition plays within Alexa and how it interacts with other key components, such as wake word detection, natural language processing, and text-to-speech.
Droppo received his PhD in electrical engineering from the University of Washington, where he developed a discrete theory for time-frequency representations of audio signals, with a focus on speech recognition.
Droppo has worked in speech recognition for 21 years and is best known for his research on algorithms for speech signal and model-based feature enhancement, model-based adaptation, large-vocabulary speech recognition, and distributed training of neural networks.