Research Area

Conversational AI

Building software and systems that help people communicate with computers as naturally as they would with family and friends.

Publications

View all
  • Content-Based Weak Supervision for Ad-Hoc Re-Ranking
    Kai Hui, Sean MacAvaney, Andrew Yates, Ophir Frieder
    SIGIR 2019
    One challenge with neural ranking is the need for a large amount of manually labeled relevance judgments for training. In contrast with prior work, we examine the use of weak supervision sources for training that yield pseudo query-document pairs that already exhibit relevance (e.g., newswire headline-content pairs and encyclopedic heading-paragraph pairs). We also propose filtering techniques to eliminate … (a toy sketch of this pseudo-pair idea appears after the list)
  • Answering Complex Questions by Joining Multi-Document Evidence with Quasi Knowledge Graphs
    Abdalghani Abujabal, Xiaolu Lu, Soumajit Pramanik, Rishiraj Saha Roy, Gerhard Weikum, Yafang Wang
    SIGIR 2019
    Direct answering of questions that involve multiple entities and relations is a challenge for text-based QA. This problem is most pronounced when answers can be found only by joining evidence from multiple documents. Curated knowledge graphs (KGs) may yield good answers, but are limited by their inherent incompleteness and potential staleness. This paper presents QUEST, a method that can answer complex … (a toy evidence-join sketch appears after the list)
  • Book QA: Stories of Challenges and Opportunities
    Diego Marcheggiani, Roi Blanco, Lluís Màrquez, Stefanos Angelidis, Lea Frermann
    EMNLP 2019 Workshop on Machine Reading for Question Answering
    We present a system for answering questions based on the full text of books (BookQA), which first selects book passages given a question at hand, and then uses a memory network to reason and predict an answer. To improve generalization, we pretrain our memory network using artificial questions generated from book sentences. We experiment with the recently published NarrativeQA corpus, on the subset of Who … (a toy question-generation sketch appears after the list)
  • Transformers without Tears: Improving the Normalization of Self-Attention
    Julian Salazar, Toan Q. Nguyen
    IWSLT 2019
    We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PRENORM) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose ℓ2 normalization with a single scale parameter (SCALENORM) for faster training and better performance. Finally, we reaffirm the effectiveness … (both normalization changes are sketched after the list)
  • Look before you Hop: Conversational Question Answering over Knowledge Graphs Using Judicious Context Expansion
    Abdalghani Abujabal, Philipp Christmann, Rishiraj Saha Roy, Jyotsna Singh, Gerhard Weikum
    CIKM 2019
    Fact-centric information needs are rarely one-shot; users typically ask follow-up questions to explore a topic. In such a conversational setting, the user’s inputs are often incomplete, with entities or predicates left out, and phrased ungrammatically. This poses a huge challenge to question answering (QA) systems, which typically rely on cues in full-fledged interrogative sentences. As a solution, we develop … (a toy context-carrying sketch appears after the list)
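
To make the weak-supervision idea in the Hui et al. abstract concrete, here is a minimal Python sketch that treats naturally paired text, such as newswire (headline, body) pairs, as pseudo (query, relevant document) training triples, with a simple filter on top. The function names, the random-negative sampling, and the length-ratio filter are illustrative assumptions, not the paper's actual pipeline.

import random

def make_pseudo_pairs(articles, seed=0):
    """Build (query, positive doc, negative doc) triples from (headline, body) pairs."""
    rng = random.Random(seed)
    bodies = [body for _, body in articles]
    triples = []
    for headline, body in articles:
        negative = rng.choice(bodies)      # weak negative: a randomly drawn body
        if negative is not body:
            triples.append((headline, body, negative))
    return triples

def length_filter(triples, max_ratio=0.2):
    """Toy filter: drop pairs whose pseudo-query is long relative to its document."""
    return [(q, pos, neg) for q, pos, neg in triples
            if len(q.split()) <= max_ratio * max(len(pos.split()), 1)]

articles = [
    ("Storm closes coastal roads",
     "Heavy rain and high winds forced officials to close several coastal roads on Monday while crews cleared fallen trees and debris."),
    ("New telescope spots distant galaxy",
     "Astronomers using a new space telescope reported the discovery of one of the most distant galaxies observed to date."),
]
print(length_filter(make_pseudo_pairs(articles)))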
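The QUEST abstract describes answering by joining evidence drawn from multiple documents. The following toy sketch builds a tiny graph from pseudo-triples "extracted" from two documents and takes candidate answers to be the nodes connected to every entity in the question; the triple format, undirected edges, and intersection-based join are assumptions for illustration, not QUEST's actual algorithm.

from collections import defaultdict

# Pseudo-triples, as if extracted from two different documents.
triples = [
    ("Nolan", "directed", "Inception"),       # evidence from document 1
    ("Inception", "starred", "DiCaprio"),     # evidence from document 2
]

graph = defaultdict(set)
for subj, _pred, obj in triples:
    graph[subj].add(obj)
    graph[obj].add(subj)                      # undirected edges for the toy join

def join_answers(question_entities):
    """Candidate answers: nodes adjacent to every entity mentioned in the question."""
    neighbor_sets = [graph[e] for e in question_entities]
    return set.intersection(*neighbor_sets) if neighbor_sets else set()

print(join_answers({"Nolan", "DiCaprio"}))    # {'Inception'}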
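The BookQA abstract mentions pretraining the memory network on artificial questions generated from book sentences. A minimal sketch of one way to do that is below: blank out a known character name to form a cloze-style "who" question. The character list and the exact question format are assumptions; the paper's generation procedure may differ.

CHARACTERS = {"Alice", "Queen", "Hatter"}     # assumed list of known character names

def make_cloze_questions(sentences):
    """Blank out a character name to form an artificial 'who' question."""
    questions = []
    for sent in sentences:
        for name in CHARACTERS:
            if name in sent.split():
                tokens = ["who" if tok == name else tok for tok in sent.split()]
                questions.append((" ".join(tokens) + "?", name))
    return questions

book = [
    "Alice followed the rabbit down the hole",
    "the Queen shouted at the gardeners",
]
for question, answer in make_cloze_questions(book):
    print(question, "->", answer)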
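The two normalization changes named in the Salazar and Nguyen abstract have simple definitions: SCALENORM replaces LayerNorm with ℓ2 normalization times a single learned scale g, and PRENORM applies the norm before each sublayer, inside the residual branch. Here is a minimal NumPy sketch; the sqrt(d) initialization of g and the toy feed-forward sublayer are assumptions for illustration.

import numpy as np

def scale_norm(x, g, eps=1e-5):
    """SCALENORM: l2-normalize each vector, then multiply by a single learned scale g."""
    return g * x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def prenorm_block(x, sublayer, g):
    """PRENORM residual connection: x + sublayer(norm(x))."""
    return x + sublayer(scale_norm(x, g))

d = 8
rng = np.random.default_rng(0)
x = rng.standard_normal((4, d))              # (sequence positions, model dimension)
g = np.sqrt(d)                               # one scale parameter per layer
w = rng.standard_normal((d, d))
y = prenorm_block(x, lambda h: np.maximum(h @ w, 0.0), g)  # toy feed-forward sublayer
print(y.shape)                               # (4, 8)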
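Finally, the CIKM 2019 abstract describes follow-up questions that leave entities out. The toy sketch below carries a focal entity across turns and substitutes it for a pronoun; the pronoun rule and the capitalized-token heuristic are crude assumptions for illustration, not the paper's method.

context = {}
WH_WORDS = {"who", "what", "when", "where", "which"}

def contextualize(question):
    """Fill a pronoun from the running conversation context (toy heuristic)."""
    tokens = [context.get("entity", tok) if tok == "it" else tok
              for tok in question.split()]
    # Remember the most recent capitalized, non-question token as the focal entity.
    for tok in tokens:
        if tok[0].isupper() and tok.lower() not in WH_WORDS:
            context["entity"] = tok
    return " ".join(tokens)

print(contextualize("When was Inception released"))   # first turn, unchanged
print(contextualize("Who directed it"))               # -> "Who directed Inception"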
