A quick guide to Amazon’s 50-plus papers at EMNLP 2024

Large language models predominate, both as subjects of research in their own right and as tools for researching topics of particular interest to Amazon, such as speech, recommendations, and information retrieval.

Large language models (LLMs) have come to dominate the field of natural-language processing, so it’s no surprise that they also dominate the research that Amazon scientists are presenting at this year’s Conference on Empirical Methods in Natural Language Processing (EMNLP). LLM training is the topic with the greatest number of Amazon papers, followed closely by strategies for mitigating misinformation in LLMs’ outputs — including but not limited to hallucinations. At the same time, a number of papers apply LLMs to topics of traditional interest at Amazon, such as speech, recommender systems, and information retrieval. (Papers marked with asterisks were accepted to Findings of EMNLP.)

AI agents

MARCO: Multi-agent real-time chat orchestration
Anubhav Shrimal, Shervin Malmasi, Kriti Biswas, Swarnalatha Raghuraman, Anish Nediyanchath, Yi Zhang, Promod Yenigalla

Code generation

CodeFort: Robust training for code generation models
Yuhao Zhang, Shiqi Wang, Haifeng Qian, Zijian Wang, Mingyue Shang, Linbo Liu, Sanjay Krishna Gouda, Baishakhi Ray, Murali Krishna Ramanathan, Xiaofei Ma, Anoop Deoras

Socratic human feedback (SoHF): Expert steering strategies for LLM code generation
Subramanian Chidambaram, Erran Li, Min Bai, Xiaopeng Li, Kaixiang Lin, Xiong Zhou, Alex C. Williams

Structured object language modeling (SoLM): Native structured objects generation conforming to complex schemas with self-supervised denoising
Amir Tavanaei, Kee Kiat Koo, Hayreddin Ceker, Shaobai Jiang, Qi Li, Julien Han, Karim Bouyarmane

Contrastive decoding

Explaining and improving contrastive decoding by extrapolating the probabilities of a huge and hypothetical LM
Haw-Shiuan Chang, Nanyun Peng, Mohit Bansal, Anil Ramakrishna, Tagyoung Chung

Given a simple question with clues, contrastive decoding can exhibit an “obvious blindness” (e.g., assigning higher probability to an uncommon answer, such as “invertebrate”, than to the most obvious answer, “bees”). In contrast, the asymptotic probability decoding proposed in "Explaining and improving contrastive decoding by extrapolating the probabilities of a huge and hypothetical LM" correctly assigns the highest probability to “bees” by leveraging probabilities from multiple LMs of different sizes.
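
A minimal sketch may make the distinction concrete. Standard contrastive decoding scores tokens by the gap between a large (expert) and a small (amateur) LM; the asymptotic approach instead extrapolates token probabilities across several observed model sizes toward a hypothetical, much larger LM. The linear fit in log model size below is an illustrative assumption, not the paper's exact estimator.

```python
import numpy as np

def contrastive_scores(logp_large, logp_small, alpha=0.1):
    """Standard contrastive decoding: reward tokens the large LM favors
    far more than the small LM, restricted to the large LM's plausible head."""
    cutoff = np.log(alpha) + logp_large.max()
    scores = logp_large - logp_small
    scores[logp_large < cutoff] = -np.inf  # plausibility constraint
    return scores

def extrapolated_scores(logps, sizes, target_size=1e15):
    """Hypothetical-huge-LM sketch: fit each token's log probability as a
    linear function of log model size, then evaluate at a very large size.
    logps: (num_models, vocab) array; sizes: parameter counts per model."""
    x = np.log(np.asarray(sizes, dtype=float))
    X = np.stack([np.ones_like(x), x], axis=1)        # (num_models, 2)
    coef, *_ = np.linalg.lstsq(X, logps, rcond=None)  # (2, vocab)
    return coef[0] + coef[1] * np.log(target_size)
```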

Data integration

ASTRA: Automatic schema matching using machine translation
Tarang Chugh, Deepak Zambre

Learning from natural language explanations for generalizable entity matching
Somin Wadhwa, Adit Krishnan, Runhui Wang, Byron C. Wallace, Chris (Luyang) Kong

Pretraining and finetuning language models on geospatial networks for accurate address matching
Saket Maheshwary, Arpan Paul, Saurabh Sohoney

Retrieval augmented spelling correction for e-commerce applications
Xuan Guo, Rohit Patki, Dante Everaert, Christopher Potts

Dataset distillation

Textual dataset distillation via language model embedding
Yefan Tao, Chris (Luyang) Kong, Andrey Kan, Laurent Callot

The DaLLME framework proposed in "Textual dataset distillation via language model embedding" begins by using a language model to transform raw textual data into embedding vectors. A set of distilled vectors is then derived in the embedding space, through a process designed to encapsulate maximum informational content. Finally, the vec2text model translates these distilled vectors back into textual form.
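
As a rough illustration of the middle, embedding-space stage, the sketch below optimizes a small set of distilled vectors so that every real embedding lies close to one of them, a simple coverage objective standing in for the paper's information-maximizing process. The objective and hyperparameters are assumptions for illustration; the final vec2text decoding step is indicated only in a comment.

```python
import torch

def distill_embeddings(E, m=32, steps=500, lr=0.05):
    """E: (n, d) tensor of language-model embeddings of the raw texts.
    Learns m distilled vectors by minimizing each real embedding's
    distance to its nearest distilled vector (a coverage proxy)."""
    n, _ = E.shape
    Z = E[torch.randperm(n)[:m]].clone().requires_grad_(True)
    opt = torch.optim.Adam([Z], lr=lr)
    for _ in range(steps):
        dists = torch.cdist(E, Z)            # (n, m) pairwise distances
        loss = dists.min(dim=1).values.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return Z.detach()

# Final stage in the paper: a vec2text-style inversion model translates
# each distilled vector back into a synthetic training text (not shown).
```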

Document understanding

DocKD: Knowledge distillation from LLMs for open-world document understanding models
Sungnyun Kim, Haofu Liao, Srikar Appalaraju, Peng Tang, Zhuowen Tu, Ravi Kumar Satzoda, R. Manmatha, Vijay Mahadevan, Stefano Soatto

Information retrieval

Evaluating D-MERIT of partial-annotation on information retrieval
Royi Rassin, Yaron Fairstein, Oren Kalinsky, Guy Kushilevitz, Nachshon Cohen, Alexander Libov, Yoav Goldberg

Identifying high consideration e-commerce search queries
Zhiyu Chen, Jason Choi, Besnik Fetahu, Shervin Malmasi

Learning when to retrieve, what to rewrite, and how to respond in conversational QA*
Nirmal Roy, Leonardo Ribeiro, Rexhina Blloshmi, Kevin Small

Natural-language understanding

Intent detection in the age of LLMs
Gaurav Arora, Shreya Jain, Srujana Merugu

"Intent detection in the age of LLMs" proposes a methodology for adaptive in-context learning and chain-of-thought-based intent detection using LLMs.

Predicting entity salience in extremely short documents
Ben Bullough, Harrison Lundberg, Chen Hu, Weihang Xiao

LLM evaluation

AXCEL: Automated eXplainable consistency evaluation using LLMs*
P Aditya Sreekar, Sahil Verma, Suransh Chopra, Sarik Ghazarian, Abhishek Persad, Narayanan Sadagopan

Precise model benchmarking with only a few observations
Riccardo Fogliato, Pratik Patil, Nil-Jana Akpinar, Mathew Monfort

LLM fine-tuning

AdaZeta: Adaptive zeroth-order tensor-train adaption for memory-efficient large language models fine-tuning
Yifan Yang, Kai Zhen, Ershad Banijamali, Thanasis Mouchtaris, Zheng Zhang

RoseLoRA: Row and column-wise sparse low-rank adaptation of pre-trained language model for knowledge editing and fine-tuning
Haoyu Wang, Tianci Liu, Ruirui Li, Monica Cheng, Tuo Zhao, Jing Gao

The row- and column-wise sparse low-rank adaptation (RoseLoRA) framework proposed in "RoseLoRA: Row and column-wise sparse low-rank adaptation of pre-trained language model for knowledge editing and fine-tuning".
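
A rough sketch of the idea as the title describes it: the low-rank LoRA update BA is made sparse by masking rows of one factor and columns of the other. The fixed random masks below stand in for the importance-score-based sparsity the paper learns, and which factor receives row versus column sparsity is an assumption here.

```python
import torch
import torch.nn as nn

class SparseLoRALinear(nn.Module):
    """Base layer plus a (row-masked B) @ (column-masked A) update,
    a sketch in the RoseLoRA spirit with fixed rather than learned masks."""
    def __init__(self, base: nn.Linear, r=8, keep=0.5, scale=1.0):
        super().__init__()
        self.base = base
        out_f, in_f = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, r))
        self.scale = scale
        # Fixed binary masks standing in for learned sparsity patterns.
        self.register_buffer("row_mask", (torch.rand(out_f, 1) < keep).float())
        self.register_buffer("col_mask", (torch.rand(1, in_f) < keep).float())

    def forward(self, x):
        delta = (self.B * self.row_mask) @ (self.A * self.col_mask)
        return self.base(x) + self.scale * (x @ delta.T)
```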

LLMs for speech

Speechworthy instruction-tuned language models
Hyundong Cho, Nicolaas Jedema, Leonardo Ribeiro, Karishma Sharma, Pedro Szekely, Alessandro Moschitti, Ruben Janssen, Jonathan May

LLM misinformation mitigation

ECON: On the detection and resolution of evidence conflicts
Cheng Jiayang, Chunkit Chan, Qianqian Zhuang, Lin Qiu, Tianhang Zhang, Tengxiao Liu, Yangqiu Song, Yue Zhang, Pengfei Liu, Zheng Zhang

Generative subgraph retrieval for knowledge graph–grounded dialog generation
Jinyoung Park, Minseok Joo, Joo-Kyung Kim, Hyunwoo J. Kim

HalluMeasure: Fine-grained hallucination measurement using chain-of-thought reasoning
Shayan Ali Akbar, Md Mosharaf Hossain, Tess Wood, Si-Chi Chin, Erica Salinas, Victor Alvarez, Erwin Cornejo

Knowledge-centric hallucination detection
Xiangkun Hu, Dongyu Ru, Lin Qiu, Qipeng Guo, Tianhang Zhang, Yang Xu, Yun Luo, Pengfei Liu, Zheng Zhang, Yue Zhang

LLM reasoning

Auto-evolve: Enhancing large language model’s performance via self-reasoning framework*
Krishna Aswani, Alex Lu, Pranav Patankar, Priya Dhalwani, Iris Tan, Jayant Ganeshmohan, Simon Lacasse

LLM self-correction

LLM self-correction with DeCRIM: Decompose, critique, and refine for enhanced following of instructions with multiple constraints
Thomas Palmeira Ferraz, Kartik Mehta, Yu-Hsiang Lin, Haw-Shiuan Chang, Shereen Oraby, Sijia Liu, Vivek Subramanian, Tagyoung Chung, Mohit Bansal, Nanyun Peng

In the DeCRIM pipeline proposed in "LLM self-correction with DeCRIM: Decompose, critique, and refine for enhanced following of instructions with multiple constraints", an LLM first generates a response to a user request. The Decomposer then breaks down the request into granular constraints, and the Critic model gives feedback on whether the response meets those constraints. If it does, the response is output; if not, the LLM uses the feedback to refine the response.
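
The loop is easy to state in code. Below is a minimal sketch of the decompose-critique-refine cycle as the caption describes it; call_llm is a hypothetical text-in, text-out stand-in for an LLM client, and the prompts are illustrative, not the paper's.

```python
def decrim(request, call_llm, max_rounds=3):
    """Sketch of the DeCRIM loop; `call_llm(prompt)` is a hypothetical
    text-in/text-out LLM client, not a real API."""
    response = call_llm(f"Respond to the request:\n{request}")
    constraints = call_llm(  # Decomposer: one constraint per line
        f"List each individual constraint in this request, one per line:\n{request}"
    ).splitlines()
    for _ in range(max_rounds):
        feedback = [
            call_llm(f"Request: {request}\nResponse: {response}\n"
                     f"Constraint: {c}\nAnswer PASS if the response meets "
                     "the constraint; otherwise explain the failure.")
            for c in constraints
        ]
        failures = [f for f in feedback if not f.strip().startswith("PASS")]
        if not failures:      # Critic approves: output the response
            return response
        response = call_llm(  # Refine the draft using the critic's feedback
            f"Request: {request}\nDraft: {response}\nFix these issues:\n"
            + "\n".join(failures)
        )
    return response
```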

LLM training

Dancing in chains: Reconciling instruction following and faithfulness in language models
Zhengxuan Wu, Yuhao Zhang, Peng Qi, Yumo Xu, Rujun Han, Yian Zhang, Jifan Chen, Bonan Min, Zhiheng Huang

DEM: Distribution edited model for training with mixed data distributions
Dhananjay Ram, Aditya Rawal, Momchil Hardalov, Nikolaos Pappas, Sheng Zha

The distribution-edited model (DEM) described in "DEM: Distribution edited model for training with mixed data distributions" results from fine-tuning a pretrained model (Θ) on n individual data distributions (Di) and combining the resulting models with basic element-wise vector operations. Here, the extracted distribution vectors (ΔΘDi) are multiplied by weight coefficients, and the weighted sum is added to the base model.
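
In code, the combination the caption describes is just element-wise arithmetic over model parameters. The sketch below assumes state dicts mapping parameter names to tensors; the weights are illustrative.

```python
def distribution_edited_model(base_sd, finetuned_sds, weights):
    """Theta_DEM = Theta + sum_i w_i * (Theta_Di - Theta), applied
    element-wise: extract each distribution vector as the difference
    between a fine-tuned model and the base model, take the weighted
    sum, and add it back to the base parameters."""
    dem = {}
    for name, theta in base_sd.items():
        delta = sum(w * (sd[name] - theta)
                    for sd, w in zip(finetuned_sds, weights))
        dem[name] = theta + delta
    return dem
```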

Evolutionary contrastive distillation for language model alignment
Julian Katz-Samuels, Zheng Li, Hyokun Yun, Priyanka Nigam, Yi Xu, Vaclav Petricek, Bing Yin, Trishul Chilimbi

Hop, skip, jump to convergence: Dynamics of learning rate transitions for improved training of large language models
Shreyas Subramanian, Vignesh Ganapathiraman, Corey Barrett

Learning from relevant subgoals in successful dialogs using iterative training for task-oriented dialog systems
Magdalena Kaiser, Patrick Ernst, Gyuri Szarvas

Quality matters: Evaluating synthetic data for tool-using LLMs
Shadi Iskander, Nachshon Cohen, Zohar Karnin, Ori Shapira, Sofia Tolmach

Query autocompletion

AmazonQAC: A large-scale, naturalistic query autocomplete dataset
Dante Everaert, Rohit Patki, Tianqi Zheng, Christopher Potts

DiAL: Diversity aware listwise ranking for query auto-complete
Sonali Singh, Sachin Farfade, Prakash Mandayam Comar

Question answering

RAG-QA arena: Evaluating domain robustness for long-form retrieval-augmented question answering
Rujun Han, Yuhao Zhang, Peng Qi, Yumo Xu, Jenyuan Wang, Lan Liu, William Yang Wang, Bonan Min, Vittorio Castelli

Retrieving contextual information for long-form question answering using weak supervision
Philipp Christmann, Svitlana Vakulenko, Ionut Teodor Sorodoc, Bill Byrne, Adrià de Gispert

Recommender systems

Efficient pointwise-pairwise learning-to-rank for news recommendation
Nithish Kannen Senthilkumar, Yao Ma, Gerrit van den Burg, Jean Baptiste Faddoul

An illustration of the GLIMPSE framework proposed in "Efficient pointwise-pairwise learning-to-rank for news recommendation". GLIMPSE adopts a multitask approach in which a pretrained language model is fine-tuned on both the relevance prediction task and the pairwise-preference task. During inference, the relevance predictions are used to produce an initial pointwise ranking, which is subsequently improved by one or more right-to-left (RTL) passes using pairwise comparisons.
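
The inference procedure lends itself to a compact sketch: sort once by pointwise relevance, then make right-to-left passes in which a pairwise model decides whether to swap adjacent items. The prefers predicate below is a hypothetical stand-in for the fine-tuned LM's pairwise-preference predictions.

```python
def glimpse_rank(items, relevance, prefers, rtl_passes=1):
    """Sketch of GLIMPSE-style inference. `relevance(item)` gives a
    pointwise score; `prefers(a, b)` returns True if the pairwise model
    would rank a above b (hypothetical stand-in for the fine-tuned LM)."""
    order = sorted(items, key=relevance, reverse=True)
    for _ in range(rtl_passes):
        for i in range(len(order) - 2, -1, -1):  # right-to-left sweep
            if prefers(order[i + 1], order[i]):  # promote the lower item
                order[i], order[i + 1] = order[i + 1], order[i]
    return order
```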

PEARL: Preference extraction with exemplar augmentation and retrieval with LLM agents
Vijit Malik, Akshay Jagatap, Vinayak Puranik, Anirban Majumder

Sequential LLM framework for fashion recommendation
Han Liu, Xianfeng Tang, Tianlang Chen, Jiapeng Liu, Indu Indu, Henry Peng Zou, Peng Dai, Roberto Fernandez Galan, Mike Porter, Dongmei Jia, Ning Zhang, Lian Xiong

Responsible AI

Attribute controlled fine-tuning for large language models: A case study on detoxification
Tao Meng, Ninareh Mehrabi, Palash Goyal, Anil Ramakrishna, Aram Galstyan, Richard Zemel, Kai-Wei Chang, Rahul Gupta, Charith Peris

FLIRT: Feedback loop in-context red teaming
Ninareh Mehrabi, Palash Goyal, Christophe Dupuy, Qian Hu, Shalini Ghosh, Richard Zemel, Kai-Wei Chang, Aram Galstyan, Rahul Gupta

Order of magnitude speedups for LLM membership inference
Rongting Zhang, Martin Bertran Lopez, Aaron Roth

Synthetic data generation

CorrSynth: A correlated sampling method for diverse dataset generation from LLMs
Suhas Kowshik, Abhishek Divekar, Vijit Malik

"CorrSynth: A correlated sampling method for diverse dataset generation from LLMs" introduces a sampling method that uses anti-correlation between examples rather than few-shot generation.

DATA ADVISOR: Dynamic data curation for safety alignment of large language models
Fei Wang, Ninareh Mehrabi, Palash Goyal, Rahul Gupta, Kai-Wei Chang, Aram Galstyan

Evaluating differentially private synthetic data generation in high-stakes domains
Krithika Ramesh, Nupoor Gandhi, Pulkit Madaan, Lisa Bauer, Charith Peris, Anjalie Field

SYNTHESIZRR: Generating diverse datasets with retrieval augmentation
Abhishek Divekar, Greg Durrett

Abstract depiction of the procedure proposed in "SYNTHESIZRR: Generating diverse datasets with retrieval augmentation". The content sourcing stage retrieves K unique documents {r1,...,rK} from a large corpus for each in-context covariate xICL. The task-inversion stage uses a parameterized context refinement prompt, Pτ, which takes parameters Rinv (inversion instruction), rk (a retrieved document), and V(yICL) (the verbalized target label). A generalist teacher LLM autoregressively generates a synthetic covariate. Each in-context example thus produces K unique synthetic examples {x̃1,..., x̃K}, which we include in the dataset with target yICL.
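
Read as pseudocode, the caption's two stages reduce to a retrieval loop wrapped around a prompted generation call. In the sketch below, retrieve, llm, and verbalize are hypothetical stand-ins for the corpus retriever, the generalist teacher LLM, and the label verbalizer V.

```python
def synthesizrr(icl_examples, retrieve, llm, verbalize, k=3, inversion=""):
    """Sketch of the two-stage procedure: content sourcing (retrieval),
    then task inversion (prompted generation). All callables are
    hypothetical stand-ins, not APIs from the paper."""
    synthetic = []
    for x_icl, y_icl in icl_examples:
        for doc in retrieve(x_icl, k):       # content sourcing
            prompt = (                       # task-inversion prompt P_tau
                f"{inversion}\nDocument: {doc}\n"
                f"Target label: {verbalize(y_icl)}\n"
                "Write a new example with this label, grounded in the document:"
            )
            synthetic.append((llm(prompt), y_icl))
    return synthetic
```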

Text classification

Distance-aware calibration for pre-trained language models*
Alberto Gasparin, Gianluca Detommaso

Performance-guided LLM knowledge distillation for efficient text classification at scale
Flavio Di Palo, Prateek Singhi, Bilal Fadlallah

Prompt-tuned multi-task taxonomic transformer (PTMTTaxoFormer)
Rajashekar Vasantha, Nhan Nguyen, Yue Zhang

Text summarization

Salient information prompting to steer content in prompt-based abstractive summarization
Lei Xu, Asad Karim, Saket Dingliwal, Aparna Elangovan
