-
EMNLP 2023
Generative models have been widely applied to solve extractive tasks, where parts of the input are extracted to form the desired output, and have achieved significant success. For example, in extractive question answering (QA), generative models have consistently yielded state-of-the-art results. In this work, we study the issue of tokenization inconsistency that is commonly neglected in training these models…
-
Graph meets LLM: A novel approach to collaborative filtering for robust conversational understanding
EMNLP 2023
A Personalized Query Rewriting system aims to reduce defective queries to ensure robust conversational functionality by considering individual user behavior and preferences. It is usually structured as a search-based system, maintaining a user history index of past successful interactions with the conversational AI. However, this approach encounters challenges when dealing with unseen interactions, which…
-
FSDM 2023
Adopting AI in financial advisory is a challenging task, as there exist multiple sources of information to digest and interpret. Such information consumption processes are very lengthy for financial advisors, reducing the efficiency and timeliness of the advice and recommendations given to their clients. In this work, we introduce a multi-step framework that consumes and combines news and industry-focused…
-
Topic knowledge based controlled generation for long documents using retrieval-based language models
FSDM 2023
Current LLM summarization systems produce broad overviews that are disconnected from people's specific interests and expectations. People's preferences (topics) can be expressed by a collection of semantic keywords. Previous work exploits these keywords as extra input to generate summaries, which requires additional human annotations. To tackle these constraints, we propose a novel framework, Topic…
-
CIKM 2023 Workshop on Personalized Generative AI
Personalization, the ability to tailor a system to individual users, is an essential factor in user experience with natural language processing (NLP) systems. With the emergence of Large Language Models (LLMs), a key question is how to leverage these models to better personalize user experiences. To personalize a language model's output, a straightforward approach is to incorporate past user data into…
Related content
-
March 5, 2019
The 2018 Alexa Prize featured eight student teams from four countries, each of which adopted distinctive approaches to some of the central technical questions in conversational AI. We survey those approaches in a paper we released late last year, and the teams themselves go into even greater detail in the papers they submitted to the latest Alexa Prize Proceedings. Here, we touch on just a few of the teams’ innovations.
-
February 27, 2019
To ensure that Alexa Prize contestants can concentrate on dialogue systems — the core technology of socialbots — Amazon scientists and engineers built a set of machine learning modules that handle fundamental conversational tasks and a development environment that lets contestants easily mix and match existing modules with those of their own design.
-
January 31, 2019
This Sunday's Super Bowl between the New England Patriots and the Los Angeles Rams is expected to draw more than 100 million viewers, some of whom will have Alexa-enabled devices within range of their TV speakers. When Amazon's new Alexa ad airs, and Forest Whitaker asks his Alexa-enabled electric toothbrush to play his podcast, how will we prevent viewers’ devices from mistakenly waking up?
-
January 30, 2019
Many of today’s most popular AI systems are, at their core, classifiers. They classify inputs into different categories: this image is a picture of a dog, not a cat; this audio signal is an instance of the word “Boston”, not the word “Seattle”; this sentence is a request to play a video, not a song. But what happens if you need to add a new class to your classifier — if, say, someone releases a new type of automated household appliance that your smart-home system needs to be able to control?
-
January 24, 2019
Machine learning systems often act on “features” extracted from input data. In a natural-language-understanding system, for instance, the features might include words’ parts of speech, as assessed by an automatic syntactic parser, or whether a sentence is in the active or passive voice.
-
January 22, 2019
Developing a new natural-language-understanding system usually requires training it on thousands of sample utterances, which can be costly and time-consuming to collect and annotate. That’s particularly burdensome for small developers, like many who have contributed to the library of more than 70,000 third-party skills now available for Alexa.