Customer-obsessed science
-
September 30, 2024
From pricing estimation and regulatory compliance to inventory management and chatbot assistants, machine learning models help Amazon Pharmacy customers stay healthy and save time and money.
-
September 19, 2024
“Agentic workflows” that use multiple fine-tuned smaller LLMs — rather than one large one — can improve efficiency.
-
September 16, 2024
A position paper presented at ACL proposes a framework for more-accurate human evaluation of LLMs.
-
September 29 - October 4, 2024
-
October 21 - 25, 2024
-
September 25, 2024
Open now through November 6, Amazon Research Awards is seeking proposals in the following research areas: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography, and Sustainability.
-
Topic knowledge based controlled generation for long documents using retrieval-based language models
FSDM 2023
Current LLM summarization systems produce broad overviews that are disconnected from people's specific interests and expectations. People's preferences (topics) can be expressed as a collection of semantic keywords. Previous work exploits these keywords as extra input to generate a summary, which requires additional human annotation. To tackle these constraints, we propose a novel framework, Topic
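The idea of steering a summary with semantic keywords can be illustrated with a crude extractive baseline — this is a generic sketch for intuition only, not the framework the paper proposes:

```python
def topic_filtered_sentences(document, topic_keywords, top_k=2):
    """Score sentences by overlap with user-supplied topic keywords
    and keep the highest-scoring ones: a toy stand-in for
    topic-controlled summarization."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    keywords = {k.lower() for k in topic_keywords}

    def score(sentence):
        return len(set(sentence.lower().split()) & keywords)

    # Stable sort keeps document order among equally scored sentences.
    return sorted(sentences, key=score, reverse=True)[:top_k]

doc = ("The drug lowers blood pressure. The company was founded in 1998. "
       "Blood pressure readings improved in trials.")
print(topic_filtered_sentences(doc, ["blood", "pressure"]))
```

A real system would feed the keywords to the language model as conditioning input rather than filter sentences, but the controllability goal is the same.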
-
NeurIPS 2023 Workshop on Efficient Natural Language and Speech Processing (ENLSP)
Deep neural networks (DNNs) have improved NLP tasks significantly, but training and maintaining such networks can be costly. Model compression techniques such as knowledge distillation (KD) have been proposed to address the issue; however, the compression process can be lossy. Motivated by this, our work investigates how a distilled student model differs from its teacher, if the distillation process
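The soft-label term at the heart of knowledge distillation is a KL divergence between temperature-softened teacher and student distributions. A minimal self-contained sketch (not the paper's setup, and omitting the usual hard-label cross-entropy term):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher T yields softer distributions."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    the soft-label objective commonly used in knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher exactly incurs zero loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # 0.0
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0)  # True
```

Because the softened targets carry the teacher's full output distribution, some information is still lost whenever the student lacks capacity to match it — the lossiness the abstract refers to.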
-
CIKM 2023 Workshop on Personalized Generative AI
Personalization, the ability to tailor a system to individual users, is an essential factor in user experience with natural language processing (NLP) systems. With the emergence of Large Language Models (LLMs), a key question is how to leverage these models to better personalize user experiences. To personalize a language model’s output, a straightforward approach is to incorporate past user data into
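The "straightforward approach" of incorporating past user data can be as simple as prepending recent interactions to the prompt. A hypothetical sketch (the function name and prompt format are illustrative, not from the paper):

```python
def build_personalized_prompt(query, user_history, max_items=3):
    """Prepend the user's most recent interactions to the query so an
    LLM can condition its answer on user-specific context."""
    recent = user_history[-max_items:]
    context = "\n".join(f"- {item}" for item in recent)
    return (
        "Known user preferences:\n"
        f"{context}\n\n"
        f"User request: {query}"
    )

prompt = build_personalized_prompt(
    "Recommend a podcast for my commute.",
    ["listens to history podcasts", "prefers episodes under 30 minutes"],
)
print(prompt)
```

The drawback motivating more sophisticated methods is that raw history competes for context-window space and may contain much that is irrelevant to the current request.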
-
NeurIPS 2023 Workshop on SyntheticData4ML
We present CALICO, a method to fine-tune Large Language Models (LLMs) to localize conversational agent training data from one language to another. For slots (named entities), CALICO supports three operations: verbatim copy, literal translation, and localization, i.e. generating slot values more appropriate in the target language, such as city and airport names located in countries where the language is
-
NeurIPS 2023
In recent years, multi-objective optimization (MOO) has emerged as a foundational problem underpinning many multi-agent multi-task learning applications. However, existing algorithms in the MOO literature remain limited to centralized learning settings, which do not satisfy the distributed nature and data-privacy needs of such applications. This motivates us to propose a new federated
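For intuition on what MOO algorithms compute, the simplest (centralized) baseline is weighted-sum scalarization: descend on a fixed convex combination of the objectives' gradients. A toy 1-D sketch — not the federated method the paper proposes:

```python
def weighted_sum_step(x, grads, weights, lr=0.1):
    """One gradient step on a weighted sum of objective gradients,
    the simplest scalarization used in multi-objective optimization."""
    combined = sum(w * g for w, g in zip(weights, grads))
    return x - lr * combined

# Two conflicting 1-D objectives: f1(x) = (x - 1)^2, f2(x) = (x + 1)^2.
# With equal weights, the compromise (Pareto-optimal) point is x = 0.
x = 5.0
for _ in range(200):
    g1 = 2 * (x - 1)  # gradient of f1
    g2 = 2 * (x + 1)  # gradient of f2
    x = weighted_sum_step(x, [g1, g2], [0.5, 0.5])
print(round(x, 4))  # 0.0
```

Federated variants must reach such compromise points while each agent holds only its own objective and data locally, which is what rules out the centralized update above.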
Resources
-
We look for talent from around the world: applied scientists, data scientists, economists, research scientists, scholars, academics, PhDs, and interns.
-
We collaborate with leading academic organizations to drive innovation and to ensure that research is creating solutions whose benefits are shared broadly.
-
Learn more about the awards and recognitions that Amazon researchers from around the world have been honored with during their tenure.