Customer-obsessed science
-
September 19, 2024
“Agentic workflows” that use multiple fine-tuned smaller LLMs, rather than one large one, can improve efficiency.
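The idea can be sketched with a toy dispatcher: instead of sending every request to one large model, a lightweight router hands each task to a smaller model fine-tuned for that task. The function names and routing logic below are illustrative stand-ins, not the actual system described in the article.

```python
# Toy sketch of an "agentic workflow" router: each task type is
# dispatched to a small specialist model. The "models" here are
# stand-in functions, not real LLM calls.

def summarize_model(text: str) -> str:
    # Stand-in for a small LLM fine-tuned for summarization.
    return "summary: " + text[:20]

def classify_model(text: str) -> str:
    # Stand-in for a small LLM fine-tuned for sentiment classification.
    return "label: positive" if "good" in text else "label: negative"

SPECIALISTS = {
    "summarize": summarize_model,
    "classify": classify_model,
}

def route(task: str, text: str) -> str:
    """Dispatch a request to the specialist model for its task type."""
    if task not in SPECIALISTS:
        raise ValueError(f"no specialist for task {task!r}")
    return SPECIALISTS[task](text)
```

The efficiency argument is that most requests never touch a large general-purpose model; only the routing step is shared.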
-
September 16, 2024
A position paper presented at ACL proposes a framework for more-accurate human evaluation of LLMs.
-
September 10, 2024
Automated reasoning and optimizations specific to CPU microarchitectures improve both performance and assurance of correct implementation.
-
September 25, 2024
Now open until November 6, the Amazon Research Awards program is seeking proposals in the following research areas: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography, and Sustainability.
-
2024
In recent years, deep neural networks (DNNs) have become integral to many real-world applications. A pressing concern in these deployments pertains to their vulnerability to adversarial attacks. In this work, we focus on the transferability of adversarial examples in a real-world deployment setting involving both a cloud model and an edge model. The cloud model is a black-box victim model, while the edge …
-
2024
Recent advances in machine learning have demonstrated that multi-modal pre-training can improve automatic speech recognition (ASR) performance compared to randomly initialized models, even when models are fine-tuned on uni-modal tasks. Existing multi-modal pre-training methods for the ASR task have primarily focused on single-stage pre-training where a single unsupervised task is used for pre-training followed …
-
2024
Recent research has shown that large language models (LLMs) can achieve remarkable translation performance through supervised fine-tuning (SFT) using only a small amount of parallel data. However, SFT simply instructs the model to imitate the reference translations at the token level, making it vulnerable to the noise present in the references. Hence, the assistance from SFT often reaches a plateau once …
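The token-level objective the abstract refers to can be sketched as an average negative log-likelihood over reference tokens: the model is pushed to reproduce every reference token, so a noisy token is imitated just as faithfully as a correct one and can dominate the loss. The probabilities below are hand-specified toy values, not real model outputs.

```python
import math

def sft_loss(token_probs):
    """Token-level SFT objective: average negative log-likelihood that
    the model assigns to each token of the reference translation.

    A single noisy reference token with low model probability
    contributes a large -log(p) term, dragging training toward
    imitating the noise.
    """
    return -sum(math.log(p) for p in token_probs) / len(token_probs)
```

For example, a clean reference where the model assigns probability 0.9 to each token yields a small loss, while a single noisy token at probability 0.01 inflates the average sharply.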
-
2024
Large Language Models (LLMs) are powerful tools that have become both dominant and commonplace in the field of Artificial Intelligence. Yet, LLMs have a tendency to devolve into toxic degeneration, wherein otherwise safe and unproblematic models begin generating toxic content. For the sake of social responsibility and inspired by the biological mechanisms of inhibition control, we introduce the paradigm …
-
2024
Large language models (LLMs) are known to generate biased responses in which the opinions of certain groups and populations are underrepresented. Here, we present a novel approach to achieving controllable generation of specific viewpoints using LLMs, which can be leveraged to produce multiple perspectives and to reflect diverse opinions. Moving beyond the traditional reliance on demographics like age, gender …
Resources
-
We look for talent from around the world: applied scientists, data scientists, economists, research scientists, scholars, academics, PhDs, and interns.
-
We collaborate with leading academic organizations to drive innovation and to ensure that research is creating solutions whose benefits are shared broadly.
-
Learn more about the awards and recognitions that Amazon researchers from around the world have been honored with during their tenure.