Customer-obsessed science
-
September 30, 2024
From pricing estimation and regulatory compliance to inventory management and chatbot assistants, machine learning models help Amazon Pharmacy customers stay healthy and save time and money.
-
September 19, 2024
“Agentic workflows” that use multiple fine-tuned smaller LLMs, rather than one large one, can improve efficiency.
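The idea behind such workflows can be sketched as a router that dispatches each subtask to a small specialist model. This is a minimal illustrative sketch, not the system described in the article; the `classifier` and `summarizer` functions are stand-in stubs for fine-tuned small LLMs.

```python
# Toy sketch of an agentic workflow: a cheap "classifier" model picks
# an intent, and the task is routed to a small specialist model.
# Both model functions below are stubs, not real LLM calls.

def summarizer(text: str) -> str:
    # Stand-in for a small model fine-tuned for summarization:
    # here it just returns the first sentence.
    return text.split(".")[0] + "."

def classifier(text: str) -> str:
    # Stand-in for a small intent-classification model.
    return "summarize" if len(text) > 40 else "echo"

def route(task: str) -> str:
    """Dispatch the task to a specialist based on the predicted intent."""
    intent = classifier(task)
    if intent == "summarize":
        return summarizer(task)
    return task
```

For example, `route("Agentic workflows decompose a task. Each step goes to a specialist.")` returns just the first sentence, because the classifier stub labels long inputs as summarization tasks.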
-
September 16, 2024
A position paper presented at ACL proposes a framework for more-accurate human evaluation of LLMs.
-
September 25, 2024
Now open until November 6, the Amazon Research Awards program is seeking proposals in the following research areas: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography, and Sustainability.
-
Conference on Natural Language Processing (NATP) 2024
We present a supervised learning approach for automatic extraction of keyphrases from single documents. Our solution uses simple-to-compute statistical and positional features of candidate phrases and does not rely on any external knowledge base or on pre-trained language models or word embeddings. The ranking component of our proposed solution is a fairly lightweight ensemble model. Evaluation on benchmark …
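The kind of statistical and positional features the abstract mentions can be sketched with standard-library Python alone. This is an illustrative sketch of the general technique, not the paper's actual model; the feature set (term frequency, first-occurrence position) and the tiny stop-word list are assumptions, and the paper's ranking component is an ensemble rather than the single hand-weighted score used here.

```python
import re
from collections import Counter

def keyphrases(doc: str, top_k: int = 3):
    """Rank candidate words by simple statistical and positional
    features (term frequency and first-occurrence position), with no
    external knowledge base or pretrained embeddings."""
    words = re.findall(r"[a-z]+", doc.lower())
    stop = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on", "we", "by"}
    candidates = [w for w in words if w not in stop and len(w) > 2]
    tf = Counter(candidates)
    first_pos = {}
    for i, w in enumerate(candidates):
        first_pos.setdefault(w, i)
    n = max(len(candidates), 1)
    # Higher frequency and earlier first occurrence -> higher score.
    scored = {w: tf[w] * (1.0 - first_pos[w] / n) for w in tf}
    return sorted(scored, key=scored.get, reverse=True)[:top_k]
```

Running it on a short text, e.g. `keyphrases("keyphrase extraction ranks keyphrase candidates by frequency and position in the document")`, ranks the repeated, early-occurring word "keyphrase" first.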
-
WACV 2024
Video quality can suffer from limited internet speed while being streamed by users. Compression artifacts start to appear when the bitrate decreases to match the available bandwidth. Existing algorithms either focus on removing the compression artifacts at the same video resolution, or on upscaling the video resolution but not removing the artifacts. Super-resolution-only approaches will amplify the artifacts …
-
IEEE SaTML 2024
We revisit the problem of differentially private squared-error linear regression. We observe that existing state-of-the-art methods are sensitive to the choice of hyperparameters, including the “clipping threshold,” which cannot be set optimally in a data-independent way. We give a new algorithm for private linear regression based on gradient boosting. We show that our method consistently improves over …
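The sensitivity to the clipping threshold can be illustrated with a toy DP-SGD-style baseline for one-dimensional least squares, where each per-example gradient is clipped before Gaussian noise is added. This sketches the kind of existing method the abstract critiques, not the paper's gradient-boosting algorithm, and the noise calibration here is purely illustrative rather than a calibrated privacy guarantee.

```python
import random

def dp_sgd_linreg(xs, ys, clip, noise_std, steps=200, lr=0.1, seed=0):
    """Toy DP-SGD for 1-D least squares (model y = w*x): per-example
    gradients are clipped to [-clip, clip], then Gaussian noise scaled
    by the clip is added to the averaged gradient. Illustrates why the
    clipping threshold is a sensitive hyperparameter."""
    rng = random.Random(seed)
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        grads = []
        for x, y in zip(xs, ys):
            g = 2 * (w * x - y) * x                 # per-example gradient
            grads.append(max(-clip, min(clip, g)))  # clip its magnitude
        # Noise magnitude is tied to the clip, so the threshold trades
        # off bias (too small) against noise (too large).
        noisy = sum(grads) / n + rng.gauss(0, noise_std * clip / n)
        w -= lr * noisy
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x
```

With a generous clip (e.g. `clip=10`) the estimate lands near the true slope of about 2; shrinking the clip biases the gradients and pulls the estimate toward zero, which is the data-dependent tuning problem the abstract highlights.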
-
2024
In the realm of spoken language understanding (SLU), numerous natural language understanding (NLU) methodologies have been adapted by supplying large language models (LLMs) with transcribed speech instead of conventional written text. In real-world scenarios, prior to input into an LLM, an automated speech recognition (ASR) system generates an output transcript hypothesis, where inherent errors can degrade …
-
2024
Recent advancements in generative AI, such as scaled Transformer large language models (LLMs) and diffusion decoders, have revolutionized speech synthesis. Because speech encompasses both the complexities of natural language and the dimensionality of audio, many recent models have relied on autoregressive modeling of quantized speech tokens. Such an approach limits speech synthesis to left-to-right generation, making …
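Why autoregressive token modeling forces left-to-right generation can be shown with a toy sampler: each token is a function of all tokens before it, so the decode loop is inherently sequential. The "model" below is a deterministic stand-in for a learned conditional distribution over a quantized codebook, an assumption for illustration only.

```python
# Toy left-to-right decoding over a quantized token vocabulary.
# Token t depends on the full prefix, so steps cannot run in parallel.

def next_token(prefix):
    # Stand-in for a model's conditional distribution p(token | prefix);
    # here it deterministically maps the prefix into a 4-entry codebook.
    return (sum(prefix) + 1) % 4

def generate(n):
    """Decode n tokens strictly left to right."""
    seq = []
    for _ in range(n):          # each step must wait on the previous one
        seq.append(next_token(seq))
    return seq
```

Because every call to `next_token` consumes the sequence built so far, generation latency grows linearly with output length, which is the limitation the abstract points to.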
Resources
-
We look for talent from around the world: applied scientists, data scientists, economists, research scientists, scholars, academics, PhDs, and interns.
-
We collaborate with leading academic organizations to drive innovation and to ensure that research is creating solutions whose benefits are shared broadly.
-
Learn more about the awards and recognitions that Amazon researchers from around the world have been honored with during their tenure.