- The Web Conference 2023: Building machine learning models can be a time-consuming process that often takes several months to implement in typical business scenarios. To ensure consistent model performance and account for variations in data distribution, regular retraining is necessary. This paper introduces a solution for improving online customer service in e-commerce by presenting a universal model for predicting labels based …
- CVPR 2023: A key goal for the advancement of AI is to develop technologies that serve the needs not just of one group but of all communities, regardless of their geographical region. In fact, a significant proportion of knowledge is locally shared by people from certain regions but may not apply equally in other regions because of cultural differences. If a model is unaware of regional characteristics, it may lead …
- ICASSP 2023: End-to-End Spoken Language Understanding models are generally evaluated according to their overall accuracy, or separately on (a priori defined) data subgroups of interest. We propose a technique for analyzing model performance at the subgroup level, which considers all subgroups that can be defined via a given set of metadata and are above a specified minimum size. The metadata can represent user characteristics … (see the subgroup-analysis sketch after this list)
- ICLR 2023: Like many other machine learning applications, neural machine translation (NMT) benefits from over-parameterized deep neural models. However, these models have been observed to be brittle: NMT model predictions are sensitive to small input changes and can show significant variation across re-training or incremental model updates. This work studies a frequently used method in NMT, pseudo-label training … (see the pseudo-label sketch after this list)
- AAAI 2023 Workshop on Knowledge Augmented Methods for NLP: The abundance of benchmark datasets supports the recent trend of increased attention given to Question Answering (QA) tasks. However, most of them lack a diverse selection of QA types and more challenging questions. In this work, we present StoryQA, a new task and dataset addressing diverse QA problems for both in-context and out-of-context questions. Additionally, we developed QA models based on large …
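The subgroup-level analysis described in the ICASSP 2023 entry can be illustrated with a short sketch: enumerate every subgroup that can be defined from a given set of metadata columns, keep only those above a minimum size, and report per-subgroup accuracy. This is a minimal illustration of the general idea, not the paper's implementation; the pandas DataFrame layout, the column names (`correct`, `gender`, `device`), and the `min_size` default are assumptions.

```python
# Minimal sketch: per-subgroup accuracy over all subgroups definable from
# a set of metadata columns, keeping only subgroups above a minimum size.
# DataFrame layout (one row per utterance, boolean "correct" column,
# metadata columns such as "gender" or "device") is assumed for illustration.

from itertools import combinations
import pandas as pd


def subgroup_accuracy(df: pd.DataFrame, metadata_cols: list[str],
                      min_size: int = 50) -> pd.DataFrame:
    """Return accuracy for every metadata-defined subgroup above min_size."""
    rows = []
    # Subgroups are defined by any non-empty subset of metadata columns.
    for r in range(1, len(metadata_cols) + 1):
        for cols in combinations(metadata_cols, r):
            for values, group in df.groupby(list(cols)):
                if len(group) < min_size:
                    continue  # skip subgroups that are too small to evaluate
                key = values if isinstance(values, tuple) else (values,)
                rows.append({
                    "subgroup": dict(zip(cols, key)),
                    "size": len(group),
                    "accuracy": group["correct"].mean(),
                })
    if not rows:
        return pd.DataFrame(columns=["subgroup", "size", "accuracy"])
    return pd.DataFrame(rows).sort_values("accuracy")


# Example usage (hypothetical data):
# df = pd.DataFrame({"correct": [...], "gender": [...], "device": [...]})
# report = subgroup_accuracy(df, ["gender", "device"], min_size=50)
# print(report.head())  # lowest-accuracy subgroups first
```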
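The ICLR 2023 entry's abstract is cut off right where it describes the method, so the following is only a sketch of pseudo-label training as it is commonly understood in NMT (training on translations generated by an existing model), not the paper's specific setup. The `teacher.translate` interface, the loss-weighting scheme, and all data shapes are hypothetical.

```python
# Minimal sketch of pseudo-label training for NMT under the common
# self-training reading: a trained "teacher" model translates monolingual
# source sentences, and the resulting pseudo-parallel pairs are mixed with
# real parallel data to train the next model.

from typing import Callable, Iterable


def build_pseudo_labels(translate: Callable[[str], str],
                        monolingual_src: Iterable[str]) -> list[tuple[str, str]]:
    """Create (source, pseudo-target) pairs from monolingual source text."""
    return [(src, translate(src)) for src in monolingual_src]


def pseudo_label_training_data(parallel: list[tuple[str, str]],
                               pseudo: list[tuple[str, str]],
                               pseudo_weight: float = 1.0) -> list[tuple[str, str, float]]:
    """Combine real and pseudo-labeled pairs, tagging each with a loss weight."""
    data = [(src, tgt, 1.0) for src, tgt in parallel]
    data += [(src, tgt, pseudo_weight) for src, tgt in pseudo]
    return data


# Example usage (hypothetical teacher model and corpora):
# pseudo = build_pseudo_labels(teacher.translate, monolingual_corpus)
# train_set = pseudo_label_training_data(parallel_corpus, pseudo, pseudo_weight=0.5)
# ...train the student NMT model on train_set...
```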
Related content
- December 22, 2021: New approach improves F1 score of clarification questions by 81%.
- December 21, 2021: Alexa’s chief scientist on how customer-obsessed science is accelerating general intelligence.
- December 17, 2021: Amazon’s Jimmy Kunzmann on how “signal-to-interpretation” models improve availability and performance.
- December 3, 2021: Learn how you can help the university teams competing to develop agents that will assist customers with completing tasks requiring multiple steps.
- November 30, 2021: Submission period extends from December 6, 2021, to January 21, 2022.
- November 19, 2021: Identifying descriptions of events that did not take place in product reviews improves product retrieval results.