-
ACL 2023: Language models have been shown to perform better with an increase in scale on a wide variety of tasks via the in-context learning paradigm. In this paper, we investigate the hypothesis that the ability of a large language model to perform a task via in-context learning is not uniformly spread across all of its underlying components. Using a 66 billion parameter language model (OPT-66B) across a diverse set of
-
ACL Findings 2023: Sequence-to-sequence state-of-the-art systems for dialogue state tracking (DST) use the full dialogue history as input, represent the current state as a list with all the slots, and generate the entire state from scratch at each dialogue turn. This approach is inefficient, especially when the number of slots is large and the conversation is long. In this paper, we propose Diable, a new task formalisation
-
Interspeech 2023: High quality transcription data is crucial for training automatic speech recognition (ASR) systems. However, the existing industry-level data collection pipelines are expensive for researchers, while the quality of crowdsourced transcription is low. In this paper, we propose a reliable method to collect speech transcriptions. We introduce two mechanisms to improve transcription quality: confidence estimation
-
ACL 2023: Attribute-controlled translation (ACT) is a subtask of machine translation that involves controlling stylistic or linguistic attributes (like formality and gender) of translation outputs. While ACT has garnered attention in recent years due to its usefulness in real-world applications, progress in the task is currently limited by dataset availability, since most prior approaches rely on supervised methods
-
ACL 2023: Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple natural languages (NLs) into meaning representations (MRs) such as SQL, lambda calculus, and logic forms. However, existing CLSP models are separately proposed and evaluated on datasets of limited tasks and applications, impeding a comprehensive and unified evaluation of CLSP on a diverse range of NLs and MRs. To this end, we present
Related content
-
October 02, 2020: Scientist leads team in London focused on improving voice-shopping experiences with Alexa.
-
September 28, 2020: Hear Tur discuss his experience from his work on DARPA programs, how he’s seen the field of conversational AI evolve, and more.
-
September 24, 2020: A combination of audio and visual signals guides the device’s movement, so the screen is always in view.
-
September 24, 2020: Adjusting prosody and speaking style to conversational context is a first step toward “concept-to-speech”.
-
September 24, 2020: Natural turn-taking uses multiple cues — acoustic, linguistic, and visual — to help Alexa interact more naturally, without the need to repeat the wake word.
-
September 24, 2020: Deep learning and reasoning enable customers to explicitly teach Alexa how to interpret their novel requests.