- EMNLP 2018: Topic models are evaluated based on their ability to describe documents well (i.e., low perplexity) and to produce topics that carry coherent semantic meaning. In topic modeling so far, perplexity is a direct optimization target. However, topic coherence, owing to its challenging computation, is not optimized for and is only evaluated after training. In this work, under a neural variational inference framework…
- NAACL 2018: This paper introduces a meaning representation for spoken language understanding. The Alexa meaning representation language (AMRL), unlike previous approaches, which factor spoken utterances into domains, provides a common representation for how people communicate in spoken language. AMRL is a rooted graph that links to a large-scale ontology and supports cross-domain queries, fine-grained types, complex utterances…
- ACL 2018: Misinformation such as fake news is one of the big challenges of our society. Research on automated fact-checking has proposed methods based on supervised learning, but these approaches do not consider external evidence apart from labeled training instances. Recent approaches counter this deficit by considering external sources related to a claim. However, these methods require substantial feature modeling…
- ACL 2018: We incorporate an explicit neural interlingua into a multilingual encoder-decoder neural machine translation (NMT) architecture. We demonstrate that our model learns a language-independent representation by performing direct zero-shot translation (without using pivot translation), and by using the source sentence embeddings to create an English Yelp review classifier that, through the mediation of the neural…
- NAACL 2018: We present an effective end-to-end memory network model that jointly (i) predicts whether a given document can be considered as relevant evidence for a given claim, and (ii) extracts snippets of evidence that can be used to reason about the factuality of the target claim. Our model combines the advantages of convolutional and recurrent neural networks as part of a memory network. We further introduce a…