Vancouver, Canada

3 important themes from Amazon's 2019 NeurIPS papers

Time series forecasting, bandit problems, and optimization are integral to Amazon's efforts to deliver better value for its customers.

Last year, the roughly 2,000 publicly released tickets to the Conference on Neural Information Processing Systems, or NeurIPS, sold out in 12 minutes.

This year, the conference organizers moved to a lottery system, allowing aspiring attendees to register in advance and randomly selecting invitees from the pool of registrants. But they also bumped the number of public-release tickets up from around 2,000 to 3,500, testifying to the conference’s continued popularity.

At NeurIPS this year, there are 26 papers with Amazon coauthors. They cover a wide range of topics, but surveying their titles, Alex Smola, a vice president and distinguished scientist in the Amazon Web Services organization, discerns three prominent themes, all tied to Amazon’s efforts to deliver better value for its customers.

Those three themes are time series forecasting (and causality), bandit problems, and optimization.

1. Time series forecasting

Time series forecasting involves measuring some quantity over time — such as the number of deliveries in a particular region in the past six months, or the number of cloud servers required to support a particular site over the past two years — and attempting to project that quantity into the future.
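To make that concrete, here is a minimal sketch (entirely synthetic numbers) of a classical forecasting baseline: fit a linear trend plus one seasonal harmonic to a toy weekly demand series by least squares, then extrapolate 12 weeks ahead.

```python
import numpy as np

# Two years of toy weekly demand: trend + yearly seasonality + noise.
rng = np.random.default_rng(3)
weeks = np.arange(104)
series = 50 + 0.3 * weeks + 10 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 2, 104)

# Classical baseline: fit trend + seasonal terms by least squares,
# then extrapolate the fitted model into the future.
def design(t):
    return np.column_stack([np.ones_like(t, dtype=float), t,
                            np.sin(2 * np.pi * t / 52),
                            np.cos(2 * np.pi * t / 52)])

coef, *_ = np.linalg.lstsq(design(weeks), series, rcond=None)
future = np.arange(104, 116)
forecast = design(future) @ coef          # 12-week-ahead projection
print("12-week forecast:", np.round(forecast, 1))
```

This linear model is only the century-old starting point; the scale problems discussed below are what push practitioners toward the newer methods in the papers.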

“That’s something that is very dear to Amazon’s heart,” Smola says. “For anything that Amazon does, it’s really beneficial to have a good estimate of what our customers will expect from us ahead of time. Only by being able to do that will we be able to satisfy customers’ demands, be it for products or services.”

A sequence of basis time series, forecast into the near future and summed together to approximate a new time series.
The paper “Think Globally, Act Locally” examines data sets with many correlated time series, such as the demand curves for millions of products sold online. The researchers describe a method for constructing a much smaller set of “basis time series”; the time series for any given product can be approximated by a weighted sum of the bases.
Courtesy of the researchers
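The core idea — many correlated series compressed into a few shared basis series — can be illustrated with a truncated SVD on synthetic data. This is only a linear sketch of the decomposition; the paper's actual model is a deep network, and all numbers here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_series, k = 200, 50, 3

# Synthetic data: every series is a hidden mix of k shared patterns plus noise,
# mimicking many correlated product-demand curves.
t = np.arange(T)
bases_true = np.stack([np.sin(2 * np.pi * t / p) for p in (12, 30, 90)])
weights = rng.normal(size=(n_series, k))
Y = weights @ bases_true + 0.05 * rng.normal(size=(n_series, T))

# Truncated SVD recovers a small set of "basis time series";
# each original series is approximated by a weighted sum of the bases.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
bases = Vt[:k]                    # k basis time series of length T
coeffs = U[:, :k] * s[:k]         # per-series weights on those bases
Y_hat = coeffs @ bases

rel_err = np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Forecasting only the k bases and recombining them is far cheaper than forecasting millions of series independently, which is the "global" half of the global-plus-local strategy.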

The basic mathematical framework for time series forecasting is a century old, but the scale of modern forecasting problems calls for new analytic techniques, Smola says.

“Problems are nowadays highly multivariate,” Smola says. “If you look at the many millions of products that we offer, you want to be able to predict fairly well what will sell, where and to whom.

“You need to make reasonable assumptions on how this very large problem can be decomposed into smaller, more tractable pieces. You make structural approximations, and sometimes those structural approximations are what leads to very different algorithms.

“So you might, for instance, have a global model, and then you have local models that address the specific items or address the specific sales. If you look at ‘Think Globally, Act Locally’” — a NeurIPS paper whose first author is Rajat Sen, an applied scientist in the Amazon Search group — “it’s already in the title. Or look at ‘High-Dimensional Multivariate Forecasting with Low-Rank Gaussian Copula Processes’. In this case, you have a global structure, but it’s only in a small subspace where interesting things happen.”

Side-by-side images depict correlations between taxi traffic at different points in Manhattan at different times of day
The paper "High-Dimensional Multivariate Forecasting with Low-Rank Gaussian Copula Processes" describes a method for predicting correlations among many parallel time series. In one example, the researchers forecast correlations between the taxi traffic at different points in New York City at different times of day. Red lines indicate strong correlations; blue lines indicate strong negative correlations. Weekend midday traffic patterns (left) show negative correlations between locations near the Empire State Building, suggesting that taxis tend to prefer different routes depending on traffic conditions. Weekend evening traffic patterns show positive correlations between the vicinity of the Empire State Building and areas with high concentrations of hotels.
Courtesy of the researchers

An aspect of forecasting that has recently been drawing more attention, Smola says, is causality. Where traditional machine learning models merely infer statistical correlations between data points, “it is ultimately the causal relationship that matters,” Smola says.

“I think that causality is one of the most interesting conceptual developments affecting modern machine learning,” says Bernhard Schölkopf, like Smola a vice president and distinguished scientist in Amazon Web Services. “This is the main topic that I have been interested in for the last decade.”

Two of Schölkopf’s NeurIPS papers — “Perceiving the Arrow of Time in Autoregressive Motion” and “Selecting Causal Brain Features with a Single Conditional Independence Test per Feature” — address questions of causality, as does “Causal Regularization”, a paper by Dominik Janzing, a senior research scientist in Smola’s group.

“Normal machine learning builds on correlations or other statistical dependences,” Schölkopf explains. “This is fine as long as the source of the data doesn't change. For example, if in the training set of an image recognition system, all cows are standing on green pasture, then it is fine for an ML system to use the green as a useful feature in recognizing cows, as long as the test set looks the same. If in the test set, the cows are standing on the beach, then such a purely statistical system can fail.

“More generally: causal learning and inference attempts to understand how systems respond to interventions and other changes, and not just how to predict data that looks more or less the same as the training data.”
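The cow-on-the-beach failure mode is easy to reproduce. In the hypothetical sketch below (all features and numbers invented), a least-squares classifier trained while background greenness correlates with the label collapses to near chance once that correlation is broken at test time:

```python
import numpy as np

rng = np.random.default_rng(4)

def make_data(n, bg_matches_label):
    # Toy "cow" (1) vs "no cow" (0) problem.
    # Feature 0: shape signal (causal, but noisy).
    # Feature 1: background greenness (spurious, but nearly perfect in training).
    y = rng.integers(0, 2, n)
    shape = y + 0.8 * rng.normal(size=n)
    if bg_matches_label:
        green = y + 0.1 * rng.normal(size=n)                       # pasture
    else:
        green = rng.integers(0, 2, n) + 0.1 * rng.normal(size=n)   # beach
    return np.column_stack([shape, green]), y

# A purely statistical classifier happily leans on the spurious feature.
X_train, y_train = make_data(2000, bg_matches_label=True)
w, *_ = np.linalg.lstsq(np.column_stack([X_train, np.ones(2000)]),
                        y_train, rcond=None)

def accuracy(X, y):
    pred = np.column_stack([X, np.ones(len(y))]) @ w > 0.5
    return (pred == y).mean()

X_test, y_test = make_data(2000, bg_matches_label=False)
train_acc = accuracy(X_train, y_train)
shift_acc = accuracy(X_test, y_test)
print(f"train-distribution accuracy:   {train_acc:.2f}")
print(f"shifted-distribution accuracy: {shift_acc:.2f}")
```

A causal approach would prefer the shape feature, which keeps predicting correctly no matter where the cows are standing.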

2. Bandit problems

The second major theme that Smola discerns in Amazon scientists’ NeurIPS papers is a concern with bandit problems, a phrase that shows up in the titles of Amazon papers such as “MaxGap Bandit: Adaptive Algorithms for Approximate Ranking” and “Low-Rank Bandit Methods for High-Dimensional Dynamic Pricing”. Bandit problems take their name from one-armed bandits, or slot machines.

“It used to be that those bandits were all mechanical, so there would be slight variations between them, and some would maybe have a slightly higher return than others,” Smola explains. “I walk into a den of iniquity, and I want to find the one-armed bandit where I will lose the least money or maybe make some money. And the only feedback I have is that I pull arms, and I get money or lose money. These are very unreliable, noisy events.”

Bandit problems present what’s known as an explore-exploit trade-off. The gambler must simultaneously explore the environment — determine which machines pay out the most — and exploit the resulting knowledge — concentrate as much money as possible on the high-return machines. Early work on bandit problems concerned identifying the high-return machines with minimal outlays.
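A standard algorithm for managing that trade-off is UCB1, sketched below on three simulated machines with made-up payout probabilities (a generic illustration of the classic solved problem, not any particular paper's method):

```python
import math
import random

random.seed(0)

# Hypothetical payout probabilities for three "one-armed bandits".
true_means = [0.2, 0.5, 0.7]
n_arms = len(true_means)

counts = [0] * n_arms    # pulls per arm
values = [0.0] * n_arms  # running mean reward per arm

def ucb1_pick(step):
    # Pull each arm once first; afterwards score each arm by
    # average reward (exploit) plus a confidence bonus that
    # shrinks as the arm is pulled more (explore).
    for arm in range(n_arms):
        if counts[arm] == 0:
            return arm
    return max(range(n_arms),
               key=lambda a: values[a] + math.sqrt(2 * math.log(step) / counts[a]))

for step in range(1, 5001):
    arm = ucb1_pick(step)
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print("pull counts:", counts)  # pulls concentrate on the best machine
```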

“That problem was solved about 20 years ago,” Smola says. “What hasn’t been solved — and this is where things get a lot more interesting — is once you start adding context. Imagine that I get to show you various results as you’re searching for your next ugly Christmas sweater. The unfortunate thing is that the creativity of sweater designers is larger than what you can fit on a page. Now the context is essentially, what time, where from, which user, all those things. We want to find and recommend the ugly Christmas sweater that works specifically for you. This is an example where context is immediately relevant.”

It’s really beneficial to have a good estimate of what our customers will expect from us ahead of time. Only by being able to do that will we be able to satisfy customers’ demands.
Alex Smola, VP and distinguished scientist, Amazon

In the bandit-problem framework, in other words, the high-payout machines change with every new interaction. But there may be external signals that indicate how they’re changing.
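One standard way to exploit such signals is a contextual bandit algorithm like LinUCB, which keeps a ridge-regression reward model per arm and adds an uncertainty bonus. The sketch below runs it in an entirely synthetic linear environment (hidden parameters, noise level, and exploration strength are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_arms, steps = 5, 3, 4000
alpha = 1.0  # exploration strength

# Hypothetical environment: each arm's expected reward is a hidden
# linear function of the current context (user, time, page, ...).
theta_true = rng.normal(size=(n_arms, d))

# LinUCB state: one ridge-regression model per arm.
A = [np.eye(d) for _ in range(n_arms)]    # d x d regularized covariance
b = [np.zeros(d) for _ in range(n_arms)]  # reward-weighted context sums

regret = 0.0
for _ in range(steps):
    x = rng.normal(size=d)
    # Score each arm: predicted reward + uncertainty bonus.
    scores = []
    for a in range(n_arms):
        A_inv = np.linalg.inv(A[a])
        theta_hat = A_inv @ b[a]
        scores.append(theta_hat @ x + alpha * np.sqrt(x @ A_inv @ x))
    arm = int(np.argmax(scores))
    reward = theta_true[arm] @ x + 0.1 * rng.normal()
    regret += (theta_true @ x).max() - theta_true[arm] @ x
    A[arm] += np.outer(x, x)
    b[arm] += reward * x

print(f"average regret per step: {regret / steps:.3f}")
```

Because the bonus shrinks as each arm's model sharpens, exploration fades automatically; the average regret per step falls well below what always picking a fixed arm would incur.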

Distributed computing, which is inescapable for today’s large websites, changes the structure of the bandit problem, too.

“Say you go to a restaurant, and the cook wants to improve the menu,” Smola says. “You can try out lots of new menu items, and that’s a good way to improve the menu overall. But if you start offering a lot of undercooked dishes because you’re experimenting, then at some point your loyal customers will stay away.

“Now imagine you have 100 restaurants, and they all do the same thing at the same time. They can’t necessarily communicate at the per-second level; maybe every day or every week they chat with each other. Now this entire exploration problem becomes a little more challenging, because if two restaurants try out the same undercooked dish, you make the customer less happy than you could have.

“So how does this map back into Amazon land? Well, if you have many servers doing this recommendation, the explore-exploit trade-off might be too aggressive if every one of them works on their own.”

3. Optimization

Finally, Smola says, “There is a third category of results that has to do with making algorithms faster. If you look at ‘Primal-Dual Block Frank-Wolfe’, ‘Communication-Efficient Distributed SGD with Sketching’, ‘Qsparse-Local-SGD’ — those are the workhorses that run underneath all of this. Making them more efficient is obviously something that we care about, so we can respond to customer requests faster, train algorithms faster.”
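To give a flavor of what such communication-efficient methods do, here is a toy sketch of top-k gradient sparsification with error feedback — one ingredient in compressed-SGD methods of the kind those papers study, though this simplified single-worker loop is not any paper's actual algorithm, and all values are made up:

```python
import numpy as np

rng = np.random.default_rng(2)

def topk_sparsify(grad, k, residual):
    # Keep only the k largest-magnitude entries; accumulate the rest in a
    # local residual so no gradient signal is permanently lost (the
    # "error feedback" trick used in compressed distributed SGD).
    corrected = grad + residual
    idx = np.argsort(np.abs(corrected))[-k:]
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]
    return sparse, corrected - sparse

# Toy problem: minimize ||w - w_star||^2 while only "communicating"
# k of d gradient coordinates per step.
d, k, lr = 100, 10, 0.01
w_star = rng.normal(size=d)
w = np.zeros(d)
residual = np.zeros(d)

for _ in range(500):
    grad = 2 * (w - w_star)                          # exact local gradient
    sparse_grad, residual = topk_sparsify(grad, k, residual)
    w -= lr * sparse_grad                            # 10x less to transmit

print(f"distance to optimum: {np.linalg.norm(w - w_star):.4f}")
```

Even though 90% of each gradient is dropped, the residual carries the dropped mass forward, so the iterate still converges — which is why compression like this can cut communication cost without wrecking training.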

Bird’s-eye view

NeurIPS is a huge conference, with more than 1,400 accepted papers that cover a bewildering variety of topics. Beyond the Amazon papers, Caltech professor and Amazon fellow Pietro Perona identifies three research areas as growing in popularity.

“One is understanding how deep networks work, so that we can better design architectures and optimization algorithms to train models,” Perona says. “Another is low-shot learning. Machines are still much less efficient than humans at learning, in that they need more training examples to achieve the same performance. And finally, AI and society — identifying opportunities for social good, sustainable development, and the like.”

NeurIPS is being held this year at the Vancouver Convention Center, and the main conference runs from Dec. 8 to Dec. 12. The Women in Machine Learning Workshop, for which Amazon is a gold-level sponsor, takes place on Dec. 9; the Third Conversational AI workshop, whose organizers include Alexa AI principal scientist Dilek Hakkani-Tür, will be held on Dec. 14.

Amazon's involvement at NeurIPS

Paper and presentation schedule

Tuesday, 12/10 | 10:45-12:45pm | East Exhibition Hall B&C

A Meta-MDP Approach to Exploration for Lifelong Reinforcement Learning | #192
Francisco Garcia (UMass Amherst/Amazon) · Philip Thomas (UMass Amherst)

Blocking Bandits | #17
Soumya Basu (UT Austin) · Rajat Sen (UT Austin/Amazon) · Sujay Sanghavi (UT Austin/Amazon) · Sanjay Shakkottai (UT Austin)

Causal Regularization | #180
Dominik Janzing (Amazon)

Communication-Efficient Distributed SGD with Sketching | #81
Nikita Ivkin (Amazon) · Daniel Rothchild (University of California, Berkeley) · Md Enayat Ullah (Johns Hopkins University) · Vladimir Braverman (Johns Hopkins University) · Ion Stoica (UC Berkeley) · Raman Arora (Johns Hopkins University)

Learning Distributions Generated by One-Layer ReLU Networks | #49
Shanshan Wu (UT Austin) · Alexandros G. Dimakis (UT Austin) · Sujay Sanghavi (UT Austin/Amazon)

Tuesday, 12/10 | 5:30-7:30pm | East Exhibition Hall B&C

Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control | #195
Sai Qian Zhang (Harvard University) · Qi Zhang (Amazon) · Jieyu Lin (University of Toronto)

Extreme Classification in Log Memory using Count-Min Sketch: A Case Study of Amazon Search with 50M Products | #37
Tharun Kumar Reddy Medini (Rice University) · Qixuan Huang (Rice University) · Yiqiu Wang (Massachusetts Institute of Technology) · Vijai Mohan (Amazon) · Anshumali Shrivastava (Rice University/Amazon)

Iterative Least Trimmed Squares for Mixed Linear Regression | #50
Yanyao Shen (UT Austin) · Sujay Sanghavi (UT Austin/Amazon)

Meta-Surrogate Benchmarking for Hyperparameter Optimization | #6
Aaron Klein (Amazon) · Zhenwen Dai (Spotify) · Frank Hutter (University of Freiburg) · Neil Lawrence (University of Cambridge) · Javier Gonzalez (Amazon)

Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification and Local Computations | #32
Debraj Basu (Adobe) · Deepesh Data (UCLA) · Can Karakus (Amazon) · Suhas Diggavi (UCLA)

Selecting Causal Brain Features with a Single Conditional Independence Test per Feature | #139
Atalanti Mastakouri (Max Planck Institute for Intelligent Systems) · Bernhard Schölkopf (MPI for Intelligent Systems/Amazon) · Dominik Janzing (Amazon)

Wednesday, 12/11 | 10:45-12:45pm | East Exhibition Hall B&C

On Single Source Robustness in Deep Fusion Models | #93
Taewan Kim (Amazon) · Joydeep Ghosh (UT Austin)

Perceiving the Arrow of Time in Autoregressive Motion | #155
Kristof Meding (University Tübingen) · Dominik Janzing (Amazon) · Bernhard Schölkopf (MPI for Intelligent Systems/Amazon) · Felix A. Wichmann (University of Tübingen)

Wednesday, 12/11 | 5:00-7:00pm | East Exhibition Hall B&C

Compositional De-Attention Networks | #127
Yi Tay (Nanyang Technological University) · Anh Tuan Luu (MIT) · Aston Zhang (Amazon) · Shuohang Wang (Singapore Management University) · Siu Cheung Hui (Nanyang Technological University)

Low-Rank Bandit Methods for High-Dimensional Dynamic Pricing | #3
Jonas Mueller (Amazon) · Vasilis Syrgkanis (Microsoft Research) · Matt Taddy (Amazon)

MaxGap Bandit: Adaptive Algorithms for Approximate Ranking | #4
Sumeet Katariya (Amazon/University of Wisconsin-Madison) · Ardhendu Tripathy (UW Madison) · Robert Nowak (UW Madison)

Primal-Dual Block Generalized Frank-Wolfe | #165
Qi Lei (UT Austin) · Jiacheng Zhuo (UT Austin) · Constantine Caramanis (UT Austin) · Inderjit S Dhillon (Amazon/UT Austin) · Alexandros Dimakis (UT Austin)

Towards Optimal Off-Policy Evaluation for Reinforcement Learning with Marginalized Importance Sampling | #208
Tengyang Xie (University of Illinois at Urbana-Champaign) · Yifei Ma (Amazon) · Yu-Xiang Wang (UC Santa Barbara)

Thursday, 12/12 | 10:45-12:45pm | East Exhibition Hall B&C

AutoAssist: A Framework to Accelerate Training of Deep Neural Networks | #155
Jiong Zhang (UT Austin) · Hsiang-Fu Yu (Amazon) · Inderjit S Dhillon (UT Austin/Amazon)

Exponentially Convergent Stochastic k-PCA without Variance Reduction | #200 (oral, 10:05-10:20 W Ballroom C)
Cheng Tang (Amazon)

Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift | #54
Stephan Rabanser (Technical University of Munich/Amazon) · Stephan Günnemann (Technical University of Munich) · Zachary Lipton (Carnegie Mellon University/Amazon)

High-Dimensional Multivariate Forecasting with Low-Rank Gaussian Copula Processes | #107
David Salinas (Naverlabs) · Michael Bohlke-Schneider (Amazon) · Laurent Callot (Amazon) · Jan Gasthaus (Amazon) · Roberto Medico (Ghent University)

Learning Search Spaces for Bayesian Optimization: Another View of Hyperparameter Transfer Learning | #30
Valerio Perrone (Amazon) · Huibin Shen (Amazon) · Matthias Seeger (Amazon) · Cedric Archambeau (Amazon) · Rodolphe Jenatton (Amazon)

Mo’States Mo’Problems: Emergency Stop Mechanisms from Observation | #227
Samuel Ainsworth (University of Washington) · Matt Barnes (University of Washington) · Siddhartha Srinivasa (University of Washington/Amazon)

Think Globally, Act Locally: A Deep Neural Network Approach to High-Dimensional Time Series Forecasting | #113
Rajat Sen (Amazon) · Hsiang-Fu Yu (Amazon) · Inderjit S Dhillon (UT Austin/Amazon)

Thursday, 12/12 | 5:00-7:00pm | East Exhibition Hall B&C

Dynamic Local Regret for Non-Convex Online Forecasting | #20
Sergul Aydore (Stevens Institute of Technology) · Tianhao Zhu (Stevens Institute of Technology) · Dean Foster (Amazon)

Interaction Hard Thresholding: Consistent Sparse Quadratic Regression in Sub-quadratic Time and Space | #47
Shuo Yang (UT Austin) · Yanyao Shen (UT Austin) · Sujay Sanghavi (UT Austin/Amazon)

Inverting Deep Generative Models, One Layer at a Time | #48
Qi Lei (University of Texas at Austin) · Ajil Jalal (UT Austin) · Inderjit S Dhillon (UT Austin/Amazon) · Alexandros Dimakis (UT Austin)

Provable Non-linear Inductive Matrix Completion | #215
Kai Zhong (Amazon) · Zhao Song (UT Austin) · Prateek Jain (Microsoft Research) · Inderjit S Dhillon (UT Austin/Amazon)

Amazon researchers on NeurIPS committees and boards

  • Bernhard Schölkopf – Advisory Board
  • Michael I. Jordan – Advisory Board
  • Thorsten Joachims – senior area chair
  • Anshumali Shrivastava – area chair
  • Cedric Archambeau – area chair
  • Peter Gehler – area chair
  • Sujay Sanghavi – committee member

Workshops

Learning with Rich Experience: Integration of Learning Paradigms

Paper: "Meta-Q-Learning" | Rasool Fakoor, Pratik Chaudhari, Stefano Soatto, Alexander J. Smola

Human-Centric Machine Learning

Paper: "Learning Fair and Transferable Representations" | Luca Oneto, Michele Donini, Andreas Maurer, Massimiliano Pontil

Bayesian Deep Learning

Paper: "Online Bayesian Learning for E-Commerce Query Reformulation" | Gaurush Hiranandani, Sumeet Katariya, Nikhil Rao, Karthik Subbian

Meta-Learning

Paper: "Constrained Bayesian Optimization with Max-Value Entropy Search" | Valerio Perrone, Iaroslav Shcherbatyi, Rodolphe Jenatton, Cedric Archambeau, Matthias Seeger

Paper: "A Quantile-Based Approach to Hyperparameter Transfer Learning" | David Salinas, Huibin Shen, Valerio Perrone

Paper: "A Baseline for Few-Shot Image Classification" | Guneet Singh Dhillon, Pratik Chaudhari, Avinash Ravichandran, Stefano Soatto

Conversational AI

Organizer: Dilek Hakkani-Tür

Paper: "The Eighth Dialog System Technology Challenge" | Seokhwan Kim, Michel Galley, Chulaka Gunasekara, Sungjin Lee, Adam Atkinson, Baolin Peng, Hannes Schulz, Jianfeng Gao, Jinchao Li, Mahmoud Adada, Minlie Huang, Luis Lastras, Jonathan K. Kummerfeld, Walter S. Lasecki, Chiori Hori, Anoop Cherian, Tim K. Marks, Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta

Paper: “Just Ask: An Interactive Learning Framework for Vision and Language Navigation” | Ta-Chung Chi, Minmin Shen, Mihail Eric, Seokhwan Kim, Dilek Hakkani-Tur

Paper: “MA-DST: Multi-Attention-Based Scalable Dialog State Tracking” | Adarsh Kumar, Peter Ku, Anuj Kumar Goyal, Angeliki Metallinou, Dilek Hakkani-Tür

Paper: “Investigation of Error Simulation Techniques for Learning Dialog Policies for Conversational Error Recovery” | Maryam Fazel-Zarandi, Longshaokan Wang, Aditya Tiwari, Spyros Matsoukas

Paper: “Towards Personalized Dialog Policies for Conversational Skill Discovery” | Maryam Fazel-Zarandi, Sampat Biswas, Ryan Summers, Ahmed Elmalt, Andy McCraw, Michael McPhillips, John Peach

Paper: “Conversation Quality Evaluation via User Satisfaction Estimation” | Praveen Kumar Bodigutla, Spyros Matsoukas, Lazaros Polymenakos

Paper: “Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering” | Li Zhou, Kevin Small

Science Meets Engineering of Deep Learning

Paper: "X-BERT: eXtreme Multi-label Text Classification using Bidirectional Encoder from Transformers" | Wei-Cheng Chang, Hsiang-Fu Yu, Kai Zhong, Yiming Yang, Inderjit S. Dhillon

Machine Learning with Guarantees

Organizers: Ben London, Thorsten Joachims
Program Committee: Kevin Small, Shiva Kasiviswanathan, Ted Sandler

MLSys: Workshop on Systems for ML

Paper: "Block-Distributed Gradient Boosted Trees" | Theodore Vasiloudis, Hyunsu Cho, Henrik Boström

Women in Machine Learning

Gold sponsor: Amazon

Research areas

Related content

US, CA, Santa Clara
Join the next science and engineering revolution at Amazon's Delivery Foundation Model team, where you'll work alongside world-class scientists and engineers to pioneer the next frontier of logistics through advanced AI and foundation models. We are seeking an exceptional Senior Applied Scientist to help develop innovative foundation models that enable delivery of billions of packages worldwide. In this role, you'll combine highly technical work with scientific leadership, ensuring the team delivers robust solutions for dynamic real-world environments. Your team will leverage Amazon's vast data and computational resources to tackle ambitious problems across a diverse set of Amazon delivery use cases. Key job responsibilities - Design and implement novel deep learning architectures combining a multitude of modalities, including image, video, and geospatial data. - Solve computational problems to train foundation models on vast amounts of Amazon data and infer at Amazon scale, taking advantage of latest developments in hardware and deep learning libraries. - As a foundation model developer, collaborate with multiple science and engineering teams to help build adaptations that power use cases across Amazon Last Mile deliveries, improving experience and safety of a delivery driver, an Amazon customer, and improving efficiency of Amazon delivery network. - Guide technical direction for specific research initiatives, ensuring robust performance in production environments. - Mentor fellow scientists while maintaining strong individual technical contributions. 
A day in the life As a member of the Delivery Foundation Model team, you’ll spend your day on the following: - Develop and implement novel foundation model architectures, working hands-on with data and our extensive training and evaluation infrastructure - Guide and support fellow scientists in solving complex technical challenges, from trajectory planning to efficient multi-task learning - Guide and support fellow engineers in building scalable and reusable infra to support model training, evaluation, and inference - Lead focused technical initiatives from conception through deployment, ensuring successful integration with production systems- Drive technical discussions within the team and and key stakeholders - Conduct experiments and prototype new ideas - Mentor team members while maintaining significant hands-on contribution to technical solutions About the team The Delivery Foundation Model team combines ambitious research vision with real-world impact. Our foundation models provide generative reasoning capabilities required to meet the demands of Amazon's global Last Mile delivery network. We leverage Amazon's unparalleled computational infrastructure and extensive datasets to deploy state-of-the-art foundation models to improve the safety, quality, and efficiency of Amazon deliveries. Our work spans the full spectrum of foundation model development, from multimodal training using images, videos, and sensor data, to sophisticated modeling strategies that can handle diverse real-world scenarios. We build everything end to end, from data preparation to model training and evaluation to inference, along with all the tooling needed to understand and analyze model performance. Join us if you're excited about pushing the boundaries of what's possible in logistics, working with world-class scientists and engineers, and seeing your innovations deployed at unprecedented scale.
US, WA, Bellevue
At Amazon, we're working to be the world’s most customer-centric company. Driving innovation on behalf of customers is core to our mission, and this position supports one of our largest business to deliver on this mission. As member of the Operations Insights, Planning, Analytics and Technology (IPAT) team, this position owns monthly change management, Controllership and Governance, Risk and Compliance (GRC) process for World Wide Operations IPAT team. Key job responsibilities In the midst of our rapidly expanding scope, we are actively seeking a Data Scientist who possesses strategic thinking skills and a knack for creative problem-solving. This Data Scientist will play a pivotal role in supporting hyper-growth projects. Collaborating closely with cross-functional finance and business leaders within the WW Operations organization, this role should be skilled in ML models development, Optimization models, model implementation, hypothesis testing, high quality analysis, database design, be comfortable dealing with large and complex data sets, and using visualization tools. Join us on this captivating journey in an exhilarating domain, and become a part of making history!
US, NY, New York
Join the next science and engineering revolution at Amazon's Delivery Foundation Model team, where you'll work alongside world-class scientists and engineers to pioneer the next frontier of logistics through advanced AI and foundation models. We are seeking an exceptional Senior Applied Scientist to help develop innovative foundation models that enable delivery of billions of packages worldwide. In this role, you'll combine highly technical work with scientific leadership, ensuring the team delivers robust solutions for dynamic real-world environments. Your team will leverage Amazon's vast data and computational resources to tackle ambitious problems across a diverse set of Amazon delivery use cases. Key job responsibilities - Design and implement novel deep learning architectures combining a multitude of modalities, including image, video, and geospatial data. - Solve computational problems to train foundation models on vast amounts of Amazon data and infer at Amazon scale, taking advantage of latest developments in hardware and deep learning libraries. - As a foundation model developer, collaborate with multiple science and engineering teams to help build adaptations that power use cases across Amazon Last Mile deliveries, improving experience and safety of a delivery driver, an Amazon customer, and improving efficiency of Amazon delivery network. - Guide technical direction for specific research initiatives, ensuring robust performance in production environments. - Mentor fellow scientists while maintaining strong individual technical contributions. 
A day in the life As a member of the Delivery Foundation Model team, you’ll spend your day on the following: - Develop and implement novel foundation model architectures, working hands-on with data and our extensive training and evaluation infrastructure - Guide and support fellow scientists in solving complex technical challenges, from trajectory planning to efficient multi-task learning - Guide and support fellow engineers in building scalable and reusable infra to support model training, evaluation, and inference - Lead focused technical initiatives from conception through deployment, ensuring successful integration with production systems- Drive technical discussions within the team and and key stakeholders - Conduct experiments and prototype new ideas - Mentor team members while maintaining significant hands-on contribution to technical solutions About the team The Delivery Foundation Model team combines ambitious research vision with real-world impact. Our foundation models provide generative reasoning capabilities required to meet the demands of Amazon's global Last Mile delivery network. We leverage Amazon's unparalleled computational infrastructure and extensive datasets to deploy state-of-the-art foundation models to improve the safety, quality, and efficiency of Amazon deliveries. Our work spans the full spectrum of foundation model development, from multimodal training using images, videos, and sensor data, to sophisticated modeling strategies that can handle diverse real-world scenarios. We build everything end to end, from data preparation to model training and evaluation to inference, along with all the tooling needed to understand and analyze model performance. Join us if you're excited about pushing the boundaries of what's possible in logistics, working with world-class scientists and engineers, and seeing your innovations deployed at unprecedented scale.
US, CA, San Francisco
Amazon has launched a new research lab in San Francisco to develop foundational capabilities for useful AI agents. We’re enabling practical AI to make our customers more productive, empowered, and fulfilled. In particular, our work combines large language models (LLMs) with reinforcement learning (RL) to solve reasoning, planning, and world modeling in both virtual and physical environments. Our research builds on that of Amazon’s broader AGI organization, which recently introduced Amazon Nova, a new generation of state-of-the-art foundation models (FMs). Key job responsibilities You will contribute directly to AI agent development in an engineering management role: leading a software development team focused on our internal platform for acquiring agentic experience at large scale. You will help set direction, align the team’s goals with the broader lab, mentor team members, recruit great people, and stay technically involved. You will be hired as a Member of Technical Staff. About the team Our lab is a small, talent-dense team with the resources and scale of Amazon. We’re entering an exciting new era where agents can redefine what AI makes possible. We’d love for you to join our lab and build it from the ground up!
US, NY, New York
Are you a passionate Applied Scientist (AS) ready to shape the future of digital content creation? At Amazon, we're building Earth's most desired destination for creators to monetize their unique skills, inspire the next generation of customers, and help brands expand their reach. We build innovative products and experiences that drive growth for creators across Amazon's ecosystem. Our team owns the entire Creator product suite, ensuring a cohesive experience, optimizing compensation structures, and launching features that help creators achieve both monetary and non-monetary goals. Key job responsibilities As an AS on our team, you will: - Handle challenging problems that directly impact millions of creators and customers - Independently collect and analyze data - Develop and deliver scalable predictive models, using any necessary programming, machine learning, and statistical analysis software - Collaborate with other scientists, engineers, product managers, and business teams to creatively solve problems, measure and estimate risks, and constructively critique peer research - Consult with engineering teams to design data and modeling pipelines which successfully interface with new and existing software - Participate in design and implementation across teams to contribute to initiatives and develop optimal solutions that benefit the creators organization The successful candidate is a self-starter, comfortable with a dynamic, fast-paced environment, and able to think big while paying careful attention to detail. You have deep knowledge of an area/multiple areas of science, with a track record of applying this knowledge to deliver science solutions in a business setting and a demonstrated ability to operate at scale. You excel in a culture of invention and collaboration.
US, WA, Seattle
The AWS Supply Chain organization is looking for a Sr. Manager of Applied Science to lead science and data teams working on innovative AI-powered supply chain solutions. As part of the AWS Solutions organization, we have a vision to provide business applications, leveraging Amazon's unique experience and expertise, that are used by millions of companies worldwide to manage day-to-day operations. We will accomplish this by accelerating our customers' businesses through intuitive, differentiated technology solutions that solve enduring business challenges. We blend vision with curiosity and Amazon's real-world experience to build opinionated, turnkey solutions. Where customers prefer to buy over build, we become their trusted partner with solutions that are no-brainers to buy and easy to use.

Are you excited about developing state-of-the-art GenAI and agentic AI solutions for enterprise applications? As a Sr. Manager of Applied Science at AWS Supply Chain, you will bring AI advancements to customer-facing enterprise applications. In this role, you will drive the technical vision and strategy for your team while fostering a culture of innovation and scientific excellence. You will lead a fast-paced, cross-disciplinary team of researchers who are leaders in the field. You will take on challenging problems, distill real requirements, and then deliver solutions that either leverage existing academic and industrial research or draw on your own out-of-the-box pragmatic thinking. In addition to developing novel solutions and prototypes, you may also deliver them to production in customer-facing products.

Key job responsibilities
- Building and mentoring teams of Applied Scientists, ML Engineers, and Data Scientists
- Setting technical direction and research strategy aligned with business goals
- Driving innovation in supply chain systems using AI/ML models and AI agents
- Collaborating with cross-functional teams to translate research into production
- Managing project portfolios and resource allocation
CA, ON, Toronto
About Sponsored Products and Brands
The Sponsored Products and Brands team at Amazon Ads is re-imagining the advertising landscape through generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of reinventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle, from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible, intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising.

About our team
The Targeting and Recommendations team within Sponsored Products and Brands empowers advertisers with intelligent targeting controls and one-click campaign recommendations that automatically populate optimal settings based on ASIN data. This suite provides advanced targeting capabilities through AI-generated keyword and ASIN suggestions; sophisticated targeting controls, including Negative Targeting, Manual Targeting with Product Attribute Targeting (PAT) and Keyword Targeting (KWT); and Automated Targeting (ATv2). Our vision is to build a highly personalized, context-aware agentic advertiser guidance system that seamlessly integrates Large Language Models (LLMs) with sophisticated tooling. The system will operate across both conversational and traditional ad-console experiences, scaling from natural-language queries to proactive, intelligent guidance grounded in deep advertiser understanding, ultimately enhancing both targeting precision and one-click campaign optimization.

Through strategic partnerships across Ad Console, Sales, and Marketing teams, we identify high-impact opportunities, from strategic product guidance to granular keyword optimization, and deliver them through personalized, scalable experiences grounded in state-of-the-art agent architectures, reasoning frameworks, sophisticated tool integration, and model customization approaches including tuning, MCP, and preference optimization. This is an exceptional opportunity to shape the future of e-commerce advertising through advanced AI at unprecedented scale, creating solutions that directly impact millions of advertisers.

Key job responsibilities
* Design and build targeting and one-click recommendation agents that guide advertisers in conversational and non-conversational experiences.
* Design and implement advanced model- and agent-optimization techniques, including supervised fine-tuning, instruction tuning, and preference optimization (e.g., DPO/IPO).
* Collaborate with peers across engineering and product to bring scientific innovations into production.
* Stay current with the latest research in LLMs, RL, and agent-based AI, and translate findings into practical applications.
* Develop agentic architectures that integrate planning, tool use, and long-horizon reasoning.

A day in the life
As an Applied Scientist on our team, your days will be immersed in collaborative problem-solving and strategic innovation. You'll partner closely with expert applied scientists, software engineers, and product managers to tackle complex advertising challenges through creative, data-driven solutions. Your work will center on developing sophisticated machine learning and AI models, leveraging state-of-the-art techniques in natural language processing, recommendation systems, and agentic AI frameworks. From designing novel targeting algorithms to building personalized guidance systems, you'll contribute to breakthrough innovations.
US, NY, New York
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist to work on pre-training methodologies for Generative Artificial Intelligence (GenAI) models. You will interact closely with our customers and with the academic and research communities.

Key job responsibilities
Join us to work as an integral part of a team with experience building GenAI models in this space. We work in these areas:
- Scaling laws
- Hardware-informed efficient model architecture, low-precision training
- Optimization methods, learning objectives, curriculum design
- Deep learning theories on efficient hyperparameter search and self-supervised learning
- Learning objectives and reinforcement learning methods
- Distributed training methods and solutions
- AI-assisted research

About the team
The AGI team's mission is to push the envelope in GenAI with Large Language Models (LLMs) and multimodal systems, in order to provide the best possible experience for our customers.
US, CA, Sunnyvale
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background to build industry-leading Generative Artificial Intelligence (GenAI) technology with Large Language Models (LLMs) and multimodal systems.

Key job responsibilities
As an Applied Scientist on the AGI team, you will work with talented peers to support the development of algorithms and modeling techniques that advance the state of the art in LLMs. Your work will directly impact our customers through products and services that use GenAI technology. You will leverage Amazon's heterogeneous data sources and large-scale computing resources to accelerate advances in LLMs.

About the team
The AGI team's mission is to push the envelope in GenAI with LLMs and multimodal systems, in order to provide the best possible experience for our customers.
US, CA, Pasadena
The Amazon Web Services (AWS) Center for Quantum Computing in Pasadena, CA, is looking to hire a Principal Quantum Research Scientist. You will join a multidisciplinary team of theoretical and experimental physicists, materials scientists, and hardware and software engineers working at the forefront of quantum computing. You should have deep and broad knowledge of experimental quantum computing and a track record of original scientific contributions. We are looking for candidates with strong engineering principles, resourcefulness, a bias for action, superior problem-solving ability, and excellent communication skills. Working effectively within a team environment is essential. As a principal research scientist, you will be expected to lead new ideas and stay abreast of the field of experimental quantum computation.

Key job responsibilities
In this role, you will work on improvements to all components of superconducting (SC) qubit quantum hardware, from qubits and resonators to quantum-limited amplifiers, and on their integration into multi-qubit chips. This will require designing new experiments, collecting statistically significant data through automation, analyzing the results, and summarizing conclusions in written form. Finally, you will work with hardware engineers, materials scientists, and circuit designers to advance the state of the art in SC qubit hardware.

About the team
Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
AWS Utility Computing (UC) provides product innovations, from foundational services such as Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new products that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services.

Inclusive Team Culture
AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let that stop you from applying.

Mentorship & Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Export Control Requirement: Due to applicable export control laws and regulations, candidates must be either a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum, or be able to obtain a US export license. If you are unsure if you meet these requirements, please apply and Amazon will review your application for eligibility.