- ACL 2019 Workshop on Abusive Language Online: User-generated text on social media often exhibits undesirable characteristics, including hate speech, abusive language, and insults targeted at a specific group of people. Such text is often written differently from traditional text such as news, involving either explicit mention of abusive words, obfuscated words, and typographical errors, or implicit abuse, i.e. …
- ACL 2019 Workshop on NLP for Conversational AI: Tracking the state of the conversation is a central component of task-oriented spoken dialogue systems. One approach to tracking the dialogue state is slot carryover, where a model makes a binary decision about whether a slot from the context is relevant to the current turn. Previous work on the slot carryover task used models that made independent decisions for each slot. A close analysis of the results shows …
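The independent per-slot decision described in this abstract can be sketched as a scoring function thresholded into a binary carryover choice. This is a toy illustration with hand-set weights and a made-up token-overlap feature, not the paper's learned model:

```python
def carryover_decision(slot_value, current_turn_tokens, distance, threshold=0.5):
    """Toy independent slot-carryover decision: carry a context slot into the
    current turn if a simple relevance score exceeds a threshold.
    `distance` is how many turns back the slot was filled; a real system
    would learn this score from data rather than use hand-set weights."""
    overlap = len(set(slot_value.lower().split()) & set(current_turn_tokens))
    # Recency-discounted relevance plus lexical overlap (hypothetical weights).
    score = 1.0 / (1.0 + distance) + 0.5 * overlap
    return score >= threshold
```

Because each slot is scored on its own, the decisions are independent, which is exactly the modeling assumption the abstract says later work re-examines.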
- ICASSP 2019: For real-world speech recognition applications, noise robustness remains a challenge. In this work, we adopt the teacher-student (T/S) learning technique, using a parallel clean and noisy corpus to improve automatic speech recognition (ASR) performance under multimedia noise. On top of that, we apply a logits selection method that preserves only the k highest values, to prevent wrong emphasis of knowledge …
- ASRU 2019: Expanding new functionalities efficiently is an ongoing challenge for single-turn task-oriented dialogue systems. In this work, we explore functionality-specific semi-supervised learning via self-training. We consider methods that automatically augment training data from unlabeled data sets in a functionality-targeted manner. In addition, we examine multiple techniques for efficient selection of augmented …
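A common self-training loop of the kind this abstract describes labels unlabeled utterances with the current model and keeps only confident predictions for the target functionality. The function below is a hedged sketch of that selection step; the interface (a `model_confidence` callable returning a label and a confidence) and the 0.9 threshold are assumptions, not details from the paper:

```python
def select_pseudo_labeled(unlabeled, model_confidence, target_functionality,
                          min_conf=0.9):
    """Functionality-targeted self-training selection (illustrative):
    keep unlabeled utterances that the current model assigns to the target
    functionality with high confidence, for addition to the training set."""
    selected = []
    for utterance in unlabeled:
        label, conf = model_confidence(utterance)
        if label == target_functionality and conf >= min_conf:
            selected.append((utterance, label))
    return selected
```

Filtering by both predicted label and confidence is what makes the augmentation "functionality-targeted" rather than a blanket pseudo-labeling of the whole unlabeled pool.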
- WASPAA 2019, IEEE Workshop on Applications of Signal Processing to Audio and Acoustics: We propose a novel application of an attention mechanism in neural speech enhancement, presenting a U-Net architecture with an attention mechanism that processes the raw waveform directly and is trained end to end. We find that including the attention mechanism significantly improves the model's performance on objective speech-quality metrics and outperforms all other published …
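One common way to add attention to a U-Net, which this abstract may be referring to, is an additive attention gate that re-weights a skip connection using a gating signal from the decoder. The sketch below shows the idea with NumPy; all shapes and weight names are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def attention_gate(skip, gating, w_s, w_g, v):
    """Additive attention gate on a U-Net skip connection (illustrative):
    compute a scalar attention weight per time step from the skip features
    and a gating signal, then scale the skip features before they are
    concatenated into the decoder.

    skip, gating: (T, C) feature maps; w_s, w_g: (C, H); v: (H,)."""
    scores = np.tanh(skip @ w_s + gating @ w_g) @ v   # (T,) attention scores
    alpha = 1.0 / (1.0 + np.exp(-scores))             # sigmoid weights in (0, 1)
    return skip * alpha[:, None]                      # down-weight irrelevant steps
```

Because the weights lie in (0, 1), the gate can only suppress skip-connection features, letting the network attenuate time steps dominated by noise.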
Related content
- June 16, 2021: Watch the replay of the June 15 discussion featuring five Amazon scientists.
- June 16, 2021: Relative to human evaluation of question-answering models, the new method has an error rate of only 7%.
- June 15, 2021: Alexa Fund company unlocks voice-based computing for people who have trouble using their voices.
- June 11, 2021: Proteno model dramatically increases the efficiency of the first step in text-to-speech conversion.
- June 10, 2021: Recasting different natural-language tasks in the same form dramatically improves few-shot multitask learning.
- June 04, 2021: Topics range from the predictable, such as speech recognition and noise cancellation, to singing separation and automatic video dubbing.