-
SIGIR 2019: One challenge with neural ranking is the need for a large amount of manually-labeled relevance judgments for training. In contrast with prior work, we examine the use of weak supervision sources for training that yield pseudo query-document pairs that already exhibit relevance (e.g., newswire headline-content pairs and encyclopedic heading-paragraph pairs). We also propose filtering techniques to eliminate …
-
SIGIR 2019: Direct answering of questions that involve multiple entities and relations is a challenge for text-based QA. This problem is most pronounced when answers can be found only by joining evidence from multiple documents. Curated knowledge graphs (KGs) may yield good answers, but are limited by their inherent incompleteness and potential staleness. This paper presents QUEST, a method that can answer complex …
-
EMNLP 2019 Workshop on Machine Reading for Question Answering: We present a system for answering questions based on the full text of books (BookQA), which first selects book passages given a question at hand, and then uses a memory network to reason and predict an answer. To improve generalization, we pretrain our memory network using artificial questions generated from book sentences. We experiment with the recently published NarrativeQA corpus, on the subset of Who …
-
IWSLT 2019: We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PRENORM) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose ℓ2 normalization with a single scale parameter (SCALENORM) for faster training and better performance. Finally, we reaffirm the effectiveness …
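The two main ideas in this abstract can be sketched briefly. A minimal NumPy illustration, assuming g is the single learned scale parameter of SCALENORM; the function names here are illustrative, not taken from the paper's code:

```python
import numpy as np

def scale_norm(x, g, eps=1e-5):
    # ScaleNorm: rescale each vector to length g (a single scalar),
    # i.e. g * x / ||x||_2 along the last axis.
    norms = np.maximum(np.linalg.norm(x, axis=-1, keepdims=True), eps)
    return g * x / norms

def prenorm_residual(x, sublayer, g):
    # Pre-norm residual connection: normalize *before* the sublayer,
    # then add the unmodified input back.
    return x + sublayer(scale_norm(x, g))

# Toy usage: normalizing a single vector to unit length (g = 1.0).
x = np.array([[3.0, 4.0]])
y = scale_norm(x, g=1.0)  # → [[0.6, 0.8]]
```

Placing the normalization before the sublayer (rather than after the residual addition, as in the original Transformer) is what the abstract credits with enabling warmup-free training at large learning rates.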
-
CIKM 2019: Fact-centric information needs are rarely one-shot; users typically ask follow-up questions to explore a topic. In such a conversational setting, the user's inputs are often incomplete, with entities or predicates left out, and phrased ungrammatically. This poses a huge challenge to question answering (QA) systems that typically rely on cues in full-fledged interrogative sentences. As a solution, we develop …