- ACL 2023: User Satisfaction Modeling (USM) is a popular choice for evaluating task-oriented dialogue systems, where user satisfaction typically depends on whether the user's task goals were fulfilled by the system. Task-oriented dialogue systems use a task schema, a set of task attributes, to encode the user's task goals. Existing studies on USM neglect explicitly modeling the fulfillment of the user's task goals …
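The excerpt stops before the paper's method, so the following is only a minimal sketch, with hypothetical names, of the idea it describes: a task schema as a set of task attributes, and goal fulfillment measured against the dialogue state.

```python
from dataclasses import dataclass, field

@dataclass
class TaskSchema:
    """A task schema: the set of attributes that encode a task's goals."""
    attributes: frozenset

@dataclass
class DialogueState:
    """Attribute values the system has fulfilled so far in the dialogue."""
    fulfilled: dict = field(default_factory=dict)

def goal_fulfillment(schema: TaskSchema, state: DialogueState) -> float:
    """Fraction of schema attributes fulfilled: a signal a satisfaction
    model could condition on (hypothetical, not the paper's estimator)."""
    done = sum(1 for attr in schema.attributes if attr in state.fulfilled)
    return done / len(schema.attributes)

# Usage: a hotel-booking task with three attributes, two of them fulfilled.
schema = TaskSchema(frozenset({"location", "price_range", "check_in_date"}))
state = DialogueState({"location": "Boston", "price_range": "moderate"})
print(goal_fulfillment(schema, state))  # 0.666...
```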
- ACL 2023 Workshop on Lexical and Computational Semantics and Semantic Evaluation: We present the findings of SemEval-2023 Task 2 on Fine-grained Multilingual Named Entity Recognition (MULTICONER 2). Divided into 13 tracks, the task focused on methods to identify complex fine-grained named entities (like WRITTENWORK, VEHICLE, MUSICALGRP) across 12 languages, in both monolingual and multilingual scenarios, as well as in noisy settings. The task used the MULTICONER V2 dataset, composed of …
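As a small illustration of what fine-grained NER output looks like: the labels WRITTENWORK and VEHICLE come from the abstract, but the sentence and the BIO tagging below are our own example, not MULTICONER V2 data.

```python
# Print one illustrative sentence with fine-grained BIO entity tags.
sentence = ["She", "read", "Dracula", "on", "the", "Orient", "Express"]
tags     = ["O", "O", "B-WRITTENWORK", "O", "O", "B-VEHICLE", "I-VEHICLE"]
for token, tag in zip(sentence, tags):
    print(f"{token}\t{tag}")
```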
- ECIR 2023: Graph Convolutional Networks have recently shown state-of-the-art performance for collaborative filtering-based recommender systems. However, many systems use a pure user-item bipartite interaction graph, ignoring additional information that is available about the items and users. This paper proposes an effective and general method, TextGCN, that utilizes rich textual information about the graph nodes, specifically …
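The excerpt does not detail TextGCN's architecture, so the sketch below only illustrates the general idea it gestures at: seeding the nodes of a user-item interaction graph with text-derived features instead of free ID embeddings, then propagating them with a LightGCN-style symmetric-normalized graph convolution. All shapes and names are illustrative.

```python
import numpy as np

def propagate(adj: np.ndarray, features: np.ndarray, layers: int = 2) -> np.ndarray:
    """Symmetric normalization D^-1/2 A D^-1/2, then average the node
    representations produced at each propagation layer."""
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0                      # guard isolated nodes
    d_inv_sqrt = deg ** -0.5
    a_hat = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    out, h = features.copy(), features
    for _ in range(layers):
        h = a_hat @ h
        out = out + h
    return out / (layers + 1)

# Usage: nodes 0-1 are users, nodes 2-3 are items; edges are interactions.
adj = np.array([[0, 0, 1, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 0],
                [1, 1, 0, 0]], dtype=float)
# Hypothetical text embeddings (e.g., encoded item titles or user reviews)
# used as the initial node features instead of learned ID embeddings.
text_emb = np.random.default_rng(0).normal(size=(4, 8))
node_repr = propagate(adj, text_emb)         # (4, 8) final representations
```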
- ACL Findings 2023: There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022). However, existing methods typically encode task information with a simple dataset name used as a prefix to the encoder input. This not only limits the effectiveness of multi-task learning but also hinders the model's ability to generalize to new domains or tasks.
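To make the criticized design concrete, here is a minimal sketch contrasting a bare dataset-name prefix with a richer natural-language task description. The serialization format and prefix wording are our own illustration, not the paper's proposal.

```python
def linearize_table(header: list, row: list) -> str:
    """Flatten one table row into a string for a seq2seq encoder."""
    return " | ".join(f"{h}: {v}" for h, v in zip(header, row))

table = linearize_table(["name", "height_m"], ["K2", "8611"])

# 1) Dataset name as the prefix (the limited scheme the excerpt criticizes):
by_name = f"totto: {table}"

# 2) A natural-language task description as the prefix, one richer
#    alternative; the wording here is hypothetical:
by_description = "describe the highlighted table cells in one sentence: " + table

print(by_name)
print(by_description)
```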
- ACL Findings 2023: Code-mixing is ubiquitous in multilingual societies, which makes it vital to build models for code-mixed data to power human language interfaces. Existing multilingual transformer models trained on pure corpora lack the ability to intermix words of one language into the structure of another. These models are also not robust to orthographic variations. We propose CoMix, a pre-training approach to improve …
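The excerpt cuts off before describing CoMix itself, so the following is only a sketch of one generic way to synthesize code-mixed pre-training text by lexicon substitution; the lexicon and function are hypothetical, not the paper's procedure.

```python
import random

# Hypothetical English -> romanized-Hindi lexicon, for illustration only.
LEXICON = {"is": "hai", "very": "bahut", "good": "accha"}

def synthesize_code_mixed(sentence: str, p: float = 0.5, seed: int = 0) -> str:
    """Replace each translatable token with probability p, intermixing one
    language's words into the other's sentence structure."""
    rng = random.Random(seed)
    tokens = [
        LEXICON[tok] if tok in LEXICON and rng.random() < p else tok
        for tok in sentence.split()
    ]
    return " ".join(tokens)

print(synthesize_code_mixed("the movie is very good"))
# One possible output: "the movie is very accha"
```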
Related content
- August 25, 2021: Katrin Kirchhoff, director of speech processing for Amazon Web Services, on the many scientific challenges her teams are tackling.
- August 16, 2021: Teams' research papers that outline their approaches to development and deployment are now available.
- August 16, 2021: Team Alquist awarded the $500,000 prize for the top score in the finals competition; teams from Stanford University and the University at Buffalo place second and third.
- August 12, 2021: New metric can be calculated 55 times as quickly as its state-of-the-art predecessor, making it practical for model training.
- August 11, 2021: Holleman, the chief scientist of Alexa Fund company Syntiant, explains why the company's new architecture allows machine learning to be deployed practically anywhere.
- August 05, 2021: New track of the 10th Dialog System Technology Challenge (DSTC10) will target noisy speech environments.