- Web Conference 2023 Workshop on Natural-Language Processing for Social Media: Language model pre-training has led to state-of-the-art performance in text summarization. While a variety of pre-trained transformer models are available nowadays, they are mostly trained on documents. In this study we introduce self-supervised pre-training to enhance the BERT model's semantic and structural understanding of dialog texts from social media. We also propose a semi-supervised teacher-student…
- ICASSP 2023: This work focuses on modelling a speaker's accent that does not have a dedicated text-to-speech (TTS) frontend, including a grapheme-to-phoneme (G2P) module. Prior work on modelling accents assumes a phonetic transcription is available for the target accent, which might not be the case for low-resource, regional accents. In our work, we propose an approach whereby we first augment the target accent data…
- ECIR 2023: AI assistants are gradually becoming embedded in our lives, utilized for everyday tasks like shopping or music. In addition to the everyday utilization of AI assistants, many users engage them with playful shopping requests, gauging their ability to understand, or simply seeking amusement. However, these requests are often not responded to in the same playful manner, causing dissatisfaction and even…
- EACL 2023: This work focuses on in-context data augmentation for intent detection. Having found that augmentation via in-context prompting of large pre-trained language models (PLMs) alone does not improve performance, we introduce a novel approach based on PLMs and pointwise V-information (PVI), a metric that can measure the usefulness of a datapoint for training a model. Our method first fine-tunes a PLM on a…
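The PVI-based filtering idea in the EACL abstract can be illustrated with a minimal sketch. PVI compares how likely a model finds the correct label when it sees the input versus when it sees a null input; augmented datapoints with low PVI carry little usable information and can be dropped. All function names, example texts, and probability values below are hypothetical, not taken from the paper:

```python
import math

def pvi(p_with_input: float, p_null_input: float) -> float:
    """Pointwise V-information of an example (x, y):
    extra bits of information about the label y that a model
    extracts from the input x, relative to a model fine-tuned
    on a null (empty) input."""
    return -math.log2(p_null_input) + math.log2(p_with_input)

def filter_augmented(examples, threshold: float):
    """Keep only augmented datapoints whose PVI exceeds a threshold,
    i.e. those the classifier finds genuinely informative."""
    return [ex for ex in examples if pvi(ex["p_x"], ex["p_null"]) > threshold]

# Hypothetical augmented utterances with label probabilities from two
# fine-tuned intent classifiers (with and without access to the input).
augmented = [
    {"text": "book a table for two", "p_x": 0.90, "p_null": 0.10},  # informative
    {"text": "uh okay sure",         "p_x": 0.12, "p_null": 0.10},  # near-useless
]
kept = filter_augmented(augmented, threshold=0.5)
```

Here the first utterance has PVI of about 3.2 bits and survives the filter, while the second contributes almost nothing beyond the label prior and is discarded.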
- Frontiers in Artificial Intelligence, 2023: Communication is a dynamic process through which interlocutors adapt to each other. In the development of conversational agents, this core aspect has been put aside for several years, since the main challenge was to obtain conversational neural models able to produce utterances and dialogues that, at least at the surface level, are human-like. Now that this milestone has been achieved, the importance of paying…
Related content
- January 25, 2022: Innovative training methods and model compression techniques combine with clever engineering to keep speech processing local.
- January 24, 2022: Arabic posed unique challenges for speech recognition, language understanding, and speech synthesis.
- Three top performers emerge in the inaugural Alexa Prize TaskBot Challenge, the first conversational AI challenge to incorporate multimodal (voice and vision) customer experiences.
- January 05, 2022: Second-pass language models that rescore automatic-speech-recognition hypotheses benefit from multitask training on natural-language-understanding objectives.
- January 04, 2022: A combination of deep learning, natural language processing, and computer vision enables Amazon to home in on the right amount of packaging for each product.
- December 22, 2021: New approach improves the F1 score of clarification questions by 81%.