When Penny Karanasou presented her first paper at Interspeech, in 2010, she was a PhD student in computer science, writing a thesis on automatic speech recognition.
Six years later, she joined Amazon as a member of the natural-language-understanding group, and for the past two and a half years, she’s been working on text-to-speech, most recently as a senior applied scientist. So she has hands-on experience with all three of Alexa’s core technologies.
She also has a rich history with Interspeech. This year's conference — which starts next week — is her second as an area chair on the program committee and her seventh chairing a conference session.
Given the breadth of her experience in conversational AI, it’s perhaps natural that one of the trends in the field that intrigues her most is the increasing overlap between automatic speech recognition (ASR), natural-language understanding (NLU), and text-to-speech (TTS).
“In recent years, with the newly developed neural technologies, we have started seeing more and more overlaps and synergies between different speech fields,” Karanasou says. “One thing is where you can actually use TTS for ASR, which is about generating synthetic data using a TTS system for data augmentation. In English, we might need data for a specific domain or for out-of-vocabulary words or for examples that are in the tail of the data distribution and not very frequently seen. But this is also a method useful for low-resource languages.
“Another approach that combines ASR and TTS is joint training that uses semi-supervised learning to improve both systems. You start with data, and then you train in a cyclic way. You train one system, and you use its output to train the other. And you use some confidence metric or some other selection approach to choose the data that you keep to do the new training. Doing this kind of cyclic training can actually improve both tasks.
“Another thing we observed in recent years is that there are common approaches in both fields. In both TTS and ASR, the community is moving toward all-neural end-to-end systems. We also see context being added in order to have long-form ASR and TTS. So instead of just focusing on one sentence, you take into account more context of what was said before in a dialogue — or any kind of context.”
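The first of those synergies, using TTS to generate training data for ASR, can be sketched roughly as follows. The `synthesize` function and `Utterance` container below are hypothetical placeholders, not any particular system’s API.

```python
# Illustrative sketch of TTS-based data augmentation for ASR training.
# `synthesize` and `Utterance` are hypothetical placeholders, not a real API.

from dataclasses import dataclass
from typing import List

@dataclass
class Utterance:
    audio: bytes      # waveform (or acoustic features)
    transcript: str   # reference text

def synthesize(text: str) -> bytes:
    """Stand-in for a TTS system that renders text to audio."""
    raise NotImplementedError

def augment_with_tts(real_data: List[Utterance],
                     rare_phrases: List[str]) -> List[Utterance]:
    """Add synthetic examples for domain-specific or tail phrases."""
    synthetic = [Utterance(audio=synthesize(p), transcript=p)
                 for p in rare_phrases]
    # Mix synthetic and real recordings; in practice the mixing ratio and
    # any filtering of low-quality synthesis are tuned empirically.
    return real_data + synthetic
```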
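The cyclic, semi-supervised training she describes might look something like the loop below, with `asr` and `tts` standing in for hypothetical trainable systems and a simple confidence threshold playing the role of the selection metric.

```python
# Hedged sketch of cyclic, semi-supervised ASR/TTS training. `asr` and `tts`
# are hypothetical objects with `transcribe`, `synthesize`, and `train`
# methods; real systems differ in the details and the selection metric.

def cyclic_training(asr, tts, unpaired_audio, unpaired_text,
                    rounds=3, confidence_threshold=0.9):
    for _ in range(rounds):
        # ASR pseudo-labels raw audio; keep only confident hypotheses
        # and use the resulting (text, audio) pairs to retrain TTS.
        pseudo_labeled = []
        for audio in unpaired_audio:
            hypothesis, confidence = asr.transcribe(audio)
            if confidence >= confidence_threshold:
                pseudo_labeled.append((hypothesis, audio))
        tts.train(pseudo_labeled)

        # TTS synthesizes audio for unpaired text; here the recognizer's
        # confidence on the synthetic audio decides which (audio, text)
        # pairs are kept to retrain ASR.
        synthetic = []
        for text in unpaired_text:
            audio = tts.synthesize(text)
            _, confidence = asr.transcribe(audio)
            if confidence >= confidence_threshold:
                synthetic.append((audio, text))
        asr.train(synthetic)
    return asr, tts
```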
Language understanding and speech
“I think this is also where the NLU influence comes into play,” Karanasou says. “With all these language models — like BERT, which is the best known — we see NLU being integrated into the speech fields. We see BERT being used in TTS and ASR papers to add more context and syntactic and semantic information to the systems. For example, by having the right syntactic and semantic information, we can also have better prosody in TTS.”
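One common pattern for that kind of integration, sketched here rather than taken from any specific production system, is to extract contextual embeddings from a pretrained BERT model (via the Hugging Face transformers library in this example) and supply them to a TTS or ASR encoder as auxiliary features.

```python
# Sketch: mean-pooled BERT embeddings as auxiliary context features.
# Requires the Hugging Face transformers and PyTorch libraries; how the
# vectors are fused with a TTS/ASR encoder is system-specific.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

def context_embeddings(sentences):
    """Return one contextual vector per sentence (mean-pooled hidden states)."""
    with torch.no_grad():
        batch = tokenizer(sentences, padding=True, return_tensors="pt")
        hidden = bert(**batch).last_hidden_state      # (batch, tokens, 768)
        mask = batch["attention_mask"].unsqueeze(-1)  # ignore padding tokens
        return (hidden * mask).sum(1) / mask.sum(1)   # (batch, 768)

# These vectors can be concatenated with, or attended over by, a TTS or ASR
# encoder to inject syntactic and semantic context, e.g. to improve prosody.
```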
As Karanasou explains, however, the success of language models like BERT in NLU is itself an example of cross-pollination between disciplines. Language models encode the probabilities of sequences of words, and a word’s co-occurrence with other words turns out to be a good indicator of its meaning. But before their introduction into NLU, language models had long been used in ASR to distinguish between alternative interpretations of the same sequences of sounds (a classic example being “Pulitzer Prize” and “pullet surprise”).
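A toy bigram model makes the point concrete: even trained on a handful of sentences, it assigns higher probability to the word sequence people actually say. The miniature corpus and add-one smoothing below are purely illustrative.

```python
# Toy bigram language model: trained on a miniature corpus, it assigns
# higher probability to "the pulitzer prize" than to the acoustically
# similar "the pullet surprise". Corpus and smoothing are illustrative only.

from collections import Counter

corpus = ("she won the pulitzer prize for fiction . "
          "the pulitzer prize board met . "
          "he raised a pullet on the farm .").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigrams)

def sequence_probability(words):
    """Bigram probability with add-one smoothing."""
    prob = 1.0
    for prev, word in zip(words, words[1:]):
        prob *= (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
    return prob

print(sequence_probability("the pulitzer prize".split()))   # ~0.026
print(sequence_probability("the pullet surprise".split()))  # ~0.003
```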
“We had language models developed for ASR,” Karanasou says, “and all of a sudden, BERT, with the Transformer-based architecture that is now used for encoders, decoders, and other modules, came into the picture, and it worked so much better.”
Interspeech has always had its share of papers on both ASR and TTS. The two tasks are, after all, mirrors of each other: text to speech and speech to text. But another indication of the increasing overlap between conversational-AI subfields, Karanasou points out, is the growing number of Interspeech papers on models that take speech as input and perform downstream computations in an end-to-end manner. These include research on spoken-language understanding (or SLU, the combination of speech recognition and NLU), spoken translation, and spoken dialogue.
“Traditionally, we would see these sections on spoken-language understanding in NLP [natural-language processing] conferences,” Karanasou says. “But now we see more SLU sections at conferences like Interspeech.
“Having said all this, we still have to keep in mind that each field has its own challenges and its own objectives. ASR is the opposite task of TTS, but you work with different data and different evaluation techniques. For example, TTS is mostly based on subjective evaluations, while ASR minimizes word error rate, so it’s an objective evaluation.”
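Word error rate is simply the word-level edit distance between the recognizer’s hypothesis and a reference transcript, counting substitutions, deletions, and insertions, normalized by the length of the reference. A minimal implementation might look like this:

```python
# Word error rate: word-level edit distance between hypothesis and
# reference, normalized by the reference length.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("turn on the kitchen lights",
                      "turn off the kitchen light"))  # 0.4: two substitutions
```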
For Karanasou, however, the cross-pollination between subfields of conversational AI is only one example of the advantages of interdisciplinary research.
“I think people should be reading papers from other fields,” she says. “Machine translation, of course, which is part of NLU. But more and more, we get ideas even from image processing, from computer vision. It’s actually enriching to understand something that has happened in another field and transfer it to your field.”