- NAACL 2022: Large language models have achieved high performance on various question answering (QA) benchmarks, but the explainability of their output remains elusive. Structured explanations, called entailment trees, were recently suggested as a way to explain and inspect a QA system's answer. In order to better generate such entailment trees, we propose an architecture called Iterative Retrieval-Generation Reasoner…
- NAACL 2022: The machine translation (MT) task is typically formulated as that of returning a single translation for an input segment. However, in many cases, multiple different translations are valid, and the appropriate translation may depend on the intended target audience, characteristics of the speaker, or even the relationship between speakers. Specific problems arise when dealing with honorifics, particularly…
- NAACL 2022: Understanding human language often necessitates understanding entities and their place in a taxonomy of knowledge—their types. Previous methods to learn entity types rely on training classifiers on datasets with coarse, noisy, and incomplete labels. We introduce a method to instill fine-grained type knowledge in language models with text-to-text pre-training on type-centric questions leveraging knowledge…
- NAACL 2022: Vocabulary selection, or lexical shortlisting, is a well-known technique to improve the latency of neural machine translation models by constraining the set of allowed output words during inference. The chosen set is typically determined by separately trained alignment model parameters, independent of the source sentence context at inference time. While vocabulary selection appears competitive with respect…
- NAACL 2022: Multi-task learning (MTL) aims to solve multiple tasks jointly by sharing a base representation among them. This can lead to more efficient learning and better generalization, as compared to learning each task individually. However, one issue that often arises in MTL is that the convergence speed between tasks varies due to differences in task difficulty, so it can be a challenge to simultaneously achieve the…