“Ambient intelligence” will accelerate advances in general AI

Alexa’s chief scientist on how customer-obsessed science is accelerating general intelligence.

As the world has become more connected, and computing has permeated our surroundings, a new AI paradigm is emerging: ambient intelligence. In this paradigm, our environment responds to our requests and anticipates our needs, provides information or suggests actions, and then recedes into the background.

Rohit Prasad, Alexa head scientist and senior vice president at Amazon.

This vision of ambient intelligence is not that different from the one on Star Trek. But for most of the last decade, the focus has been on reactive assistance — for example, ensuring that customer-initiated requests to Alexa meet customers’ expectations.

In the ambient-intelligence vision, an AI service such as Alexa makes sense of the state of your environment, including devices, sensors, objects, people, and activity around you, to help you in every situation where you need assistance — either reactively (customer initiated) or proactively (AI initiated).

Realizing the ultimate potential of ambient intelligence requires Alexa to bring the best of machine-intelligence capabilities together with the best of human-intelligence capabilities, which is the barometer of general intelligence today.

The most pragmatic definition of general intelligence is the ability to (1) learn multiple tasks jointly, versus modeling each task independently; (2) continually adapt to changes within a set of known tasks, without explicit human supervision; and (3) learn new tasks directly by interacting with end users.

While these general-intelligence characteristics apply to all types of AI systems, for interactive AI services such as Alexa, two more attributes are critical: (1) multisensory and multimodal intelligence — the ability to process data from multiple input sensors (e.g., microphones, cameras, ultrasound), fuse sensor data for improved understanding of customer goals, and generate output in different modalities (e.g., speech, text, image, video); and (2) interaction skills — the ability to converse in a human-like manner, which encompasses not just command of natural language but also the ability to recognize and respond to affect.

What this means for our customers is that Alexa will become:

  • More competent: Alexa’s functionalities and skills will expand much faster through multitask intelligence. Additionally, Alexa will improve through self-learning, becoming less reliant on labeled data;
  • More natural and conversational: Alexa interactions will be as free flowing as human interactions through multisensory intelligence, generalizable language models, commonsense reasoning, and affect modeling;
  • More personalized: Alexa will adapt to each individual using speech and computer vision. Further, customers will be able to personalize Alexa both explicitly and implicitly;
  • More insightful and proactive: Alexa will anticipate customer needs through awareness of the shared environment, make suggestions, and even act on customers’ behalf;
  • More trustworthy: Alexa will have the same attributes that we cherish in trustworthy people, such as discretion, fairness, and ethical behavior.

In the past year, Alexa has made considerable progress on all these fronts.

More competent

Alexa receives billions of requests per month, and it is critical for it to answer each of these requests to customers’ satisfaction. In 2021, through advances in automatic speech recognition (ASR), natural-language understanding (NLU), and action resolution, Alexa has become 13% more accurate than the previous year — even as the complexity of customer requests has increased.

Alexa has more than 130,000 third-party skills, whose diversity is a testament to their developers’ creativity. Further, it is available in more than 15 language variants across more than 80 countries, most recently Khaleeji Arabic in Saudi Arabia.

Through advances in large pretrained language models, we are making it easier to expand Alexa’s functionality in terms of both skills and languages. Specifically, we have trained an “Alexa Teacher Model,” a large, pretrained, multilingual model with billions of parameters that encodes language as well as salient patterns of interactions with Alexa. Instead of building new task-specific NLU models (e.g., a skill, a feature, or a language) from scratch on task-specific data, we can build them by fine-tuning the Alexa Teacher Model, which provides substantial gains in performance from the same amount of task-specific training data.

While the Alexa Teacher Model itself is currently impractical for real-time language understanding, once it is distilled and fine-tuned, it is compact enough to run in real time yet remains more accurate than a similar-sized model trained from scratch. The capacity to generalize across tasks, which the language model enables, is one of the hallmarks of general intelligence.
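Distillation of this kind is typically driven by matching the student model's output distribution to the teacher's. Here is a minimal sketch of the standard temperature-scaled distillation objective — an illustration of the general technique, not Alexa's actual training code:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperatures soften the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's: zero when the student reproduces the teacher exactly."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
```

In practice this term is mixed with the ordinary supervised loss on task-specific labeled data, which is roughly what fine-tuning a distilled model amounts to.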

The Alexa Teacher Model (AlexaTM) pipeline. The Alexa Teacher Model is trained on a large set of GPUs (left), then distilled into smaller variants (center), whose size depends on their uses. The end user adapts a distilled model to its particular use by fine-tuning it on in-domain data (right).

Models derived from the Alexa Teacher Model have helped reduce customer friction in several locales and will help facilitate and scale multilingual and multimodal use cases in coming years.

Still, faster deployment of new functionality is not sufficient. Customer interactions with Alexa are ever evolving, so Alexa needs to improve continuously. To that end, we have expanded Alexa’s self-learning capability — in particular, its ability to automatically learn from implicit feedback, e.g., when a customer cuts Alexa off in order to rephrase a query.

Currently, we have two methods for learning from implicit feedback. One is a mechanism that learns to automatically reformulate the ASR output to ensure a more accurate response, and the other automatically annotates interaction data to enable the retraining of NLU models with minimal human involvement.

At this year’s Conference on Empirical Methods in Natural Language Processing (EMNLP), Alexa AI researchers presented papers reporting our progress on both these fronts.

Learning how to rewrite customer requests requires identifying which successful requests are rephrases of unsuccessful ones. Past work on rephrase detection considered sentences in pairs, determining the likelihood that one is a rephrase of the other. In our EMNLP paper, we explain how to use temporal features of the dialogue history to better identify rephrases, with an accuracy improvement of 28% on one test dataset.
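The intuition behind using dialogue history can be sketched with a toy scorer that weights lexical similarity by how recently a candidate occurred in the session; the Jaccard overlap and decay factor here are illustrative stand-ins for the paper's learned model:

```python
def jaccard(a, b):
    """Token-overlap similarity between two utterances."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def rephrase_score(query, candidate, turns_apart, decay=0.5):
    """Illustrative rephrase score: lexical similarity weighted by how
    recently the candidate occurred in the dialogue. Rephrases tend to
    follow the failed request closely, so nearer turns score higher."""
    return jaccard(query, candidate) * (decay ** (turns_apart - 1))
```

A pair-only model sees the same two sentences no matter where they fall in the session; the temporal weight is one simple way session-level context can tip the decision.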

Earlier rephrase detection models computed similarity scores between pairs of queries (right), which could lead to inaccuracies. A new model instead uses full dialogue context (left) to more accurately detect rephrases by leveraging session-level semantic information. From “Contextual rephrase detection for reducing friction in dialogue systems”.

In the other paper, we describe a scalable framework for using automatically annotated data to continually update our NLU models. This paper shows how to operationalize our previous work on automatic annotation, to deliver immediate results to our customers.

More natural and conversational

As magical as it is to interact with Alexa by simply saying its name, repeating the name during longer interactions feels unnatural: when we’re talking to other people, we don’t use their names on every turn.

This year, we took a major step toward making interactions with Alexa more natural through Conversation Mode, which leverages Echo Show 10’s camera to enable wake-word-free interactions by improving the detection of device directedness (i.e., the intent of addressing Alexa) — even when there are multiple people in the room, conversing with each other as well as with Alexa.

Conversation Mode uses novel computer vision algorithms to gauge customers’ physical orientations toward the device, which indicate whether they’re addressing Alexa or each other. The combination of visual and audio information dramatically improves device-directed-speech detection relative to either modality used independently. Further, on-device speech recognition using fully neural recurrent-neural-network transducers ensures that Alexa recognizes conversational speech with low latency.
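A minimal way to picture audio-visual fusion for device directedness is combining per-modality confidences in log-odds space; the equal weights and the fusion form here are illustrative assumptions, not the production system's learned models:

```python
import math

def logit(p):
    """Map a probability to log-odds."""
    return math.log(p / (1.0 - p))

def fused_directedness(audio_p, vision_p, w_audio=0.5, w_vision=0.5):
    """Late fusion: combine each modality's probability that the speech is
    device directed in log-odds space, then map back with the sigmoid."""
    z = w_audio * logit(audio_p) + w_vision * logit(vision_p)
    return 1.0 / (1.0 + math.exp(-z))
```

An ambiguous acoustic cue (say, 0.55) is pushed decisively up or down by strong visual evidence, which is the sense in which the combination outperforms either modality alone.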

We have also started extending Alexa’s conversational memory, going beyond anaphoric references within an interaction session (e.g., “What is its resolution?” while shopping for TVs) to temporarily maintain memory across sessions in certain situations. For example, for high-consideration purchases such as TVs, Alexa remembers your last interaction and starts off your next interaction where you left off. This capability required us to extend Alexa Conversations, which trains deep-learning-based models on synthetic data automatically generated from a small amount of developer-provided data.

As effective as large neural transformer-based language models are for generating textual responses, they lack the commonsense and knowledge grounding they need to be truly useful in large-scale human-machine interactions. This fall, to help foster the type of invention needed to overcome these challenges, we released the commonsense dialogue dataset, which consists of more than 11,000 newly collected dialogues. In each dialogue, successive turns are related by relationship triples in the public commonsense knowledge graph ConceptNet, such as <doctor, LocateAt, hospital> or <specialist, TypeOf, doctor>.

In each dialogue in the commonsense-dialogue dataset, successive turns are related by relationship triples in the public commonsense knowledge graph ConceptNet, such as <piano, RelatedTo, musical> or <musical, RelatedTo, violin>.

Another way to inject common sense into dialogue models is to enable them to import information from online or other sources as needed, on the fly. At the NeurIPS Workshop on Efficient Natural Language and Speech Processing (ENLSP) earlier this month, Alexa researchers won a best-paper award for doing just that. They propose a few-shot-learning approach to training a knowledge-seeking-turn detector, which can recognize customer questions that can’t be answered through existing API calls.

This year, we also published several papers on affect modeling. At the International Conference on Acoustics, Speech, and Signal Processing, we presented the use of contrastive unsupervised learning to improve emotion recognition when training data is scarce; and at the Spoken Language Technologies conference, we described the adaptation of pretrained language models, which have been so successful at natural-language-processing tasks, to the problem of social and emotional commonsense reasoning.

On the flip side, when human speakers recognize shifts in the emotional states of people they’re talking to, they modify the affect in their responses. At the Speech Synthesis Workshop (SSW11) this summer, we extended our previous work on prosody variation to modify the affective characteristics of synthesized speech.

More personalized

What differentiates AI from earlier technological advances is its ability to conform to customers, rather than requiring customers to conform to it. This fall, we launched multiple new services that allow our customers to personalize AI in a self-serve fashion.

With preference teaching, customers can explicitly teach Alexa which skills should handle weather-related questions, which sports teams they follow, and which cuisines they prefer.

A two-dimensional projection of embeddings produced through Custom Sound Event Detection. New sounds are identified by their location in the embedding space.

With Custom Sound Event Detection, customers can train Alexa to recognize new sounds — such as a doorbell ringing — from just a handful of examples. Custom Sound Event Detection uses proximity in a neural network’s representational space to recognize instances of the same sound.
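The mechanism can be sketched as nearest-prototype matching: average the few enrollment embeddings into a prototype, then flag new clips whose embeddings lie close to it. The embedding model itself is assumed here, and the threshold is an illustrative placeholder:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def enroll(example_embeddings):
    """Average a handful of example embeddings into a prototype for the new sound."""
    n = len(example_embeddings)
    dim = len(example_embeddings[0])
    return [sum(e[i] for e in example_embeddings) / n for i in range(dim)]

def detect(embedding, prototype, threshold=0.8):
    """Report a match when a new clip's embedding lies close to the
    enrolled prototype in the representational space."""
    return cosine(embedding, prototype) >= threshold
```

Because classification reduces to proximity in the embedding space, no model retraining is needed when a customer enrolls a new sound.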

Custom Event Alerts for Ring Video Doorbell cameras and Spotlight cameras works in a similar way. With just a few examples, customers can train their devices to recognize certain states of affairs in the world — such as a shed door that has been left open.

In August, we introduced adaptive volume for Alexa, which lets Echo devices adjust their volume according to ambient-noise levels, so that the perceived noise level stays consistent for the customer. One of the key elements of the approach is algorithmically separating the speech signal and the noise signal, so that they’re separate inputs to the volume adaptation model.
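Once the speech and noise signals are separated, the adjustment itself can be pictured as a simple gain rule that raises playback level as ambient noise rises above a quiet-room baseline. All constants here are illustrative, not Alexa's actual tuning:

```python
def adaptive_gain_db(noise_db, reference_noise_db=40.0, scale=0.5, max_boost_db=10.0):
    """Boost playback in proportion to how far ambient noise exceeds a
    quiet-room reference, capped so the device never becomes uncomfortably
    loud. A quiet room gets no boost at all."""
    boost = scale * max(0.0, noise_db - reference_noise_db)
    return min(boost, max_boost_db)
```

Separating speech from noise matters because the boost should track only the background level, not the loudness of the customer's own voice.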

We also launched adaptive listening for US English, an opt-in feature that gives customers more time to finish speaking before Alexa responds, making Alexa a more accessible, patient listener. For speakers with certain speech impediments, adaptive listening has reduced the friction in their Alexa interactions by more than two-thirds.

Finally, Alexa customers can choose to interact with celebrity personalities such as Amitabh Bachchan, Melissa McCarthy, Samuel L. Jackson, or Shaquille O'Neal. At the end of the year, we even brought holiday cheer to Alexa interactions by launching the festive personality of Santa Claus.

More insightful and proactive

Today, one in four smart-home interactions is initiated by Alexa, due to the expansion of its predictive and proactive features such as hunches and routines.

Since 2018, Alexa hunches have recognized anomalies in customers’ daily routines and suggested corrections — noticing that a light was left on at night and offering to turn it off, for instance. This year, we gave customers the option of making hunches more proactive, so Alexa can act on their behalf. When proactive hunches are enabled, Alexa will turn that light off for you without asking first.

Routines let you initiate a sequence of actions with a single trigger word, rather than issuing the same instructions over and over again. Previously, customers had to specify which actions they wanted to string together. But this year, we began phasing in inferred routines. With inferred routines, Alexa recognizes sequences of actions that customers commonly repeat — such as, say, turning on the kitchen lights, starting the coffee maker, and playing the “Wake Up!” playlist — and suggests combining them into a routine. To save the routine, the customer simply accepts Alexa’s suggestion.
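At its core, recognizing such repeated sequences is a frequent-pattern problem. A toy version with contiguous n-gram counting over interaction sessions (the production system is presumably far more sophisticated):

```python
from collections import Counter

def infer_routines(sessions, seq_len=3, min_count=3):
    """Count contiguous action sequences across interaction sessions and
    return those repeated often enough to suggest bundling into a routine."""
    counts = Counter()
    for actions in sessions:
        for i in range(len(actions) - seq_len + 1):
            counts[tuple(actions[i:i + seq_len])] += 1
    return [seq for seq, c in counts.items() if c >= min_count]
```

A sequence such as lights, coffee maker, playlist that recurs morning after morning would cross the count threshold and surface as a routine suggestion.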

We have also continued to expand latent-goal prediction, where Alexa recognizes the larger customer need implied by an initial request and suggests actions or skills to fulfill that need. For instance, a customer asks, “Who won the Celtics game?”, and after answering, Alexa asks, “Would you like to know when the Celtics are playing next?”

Latent-goal prediction uses pointwise mutual information to measure the likelihood of an interaction pattern in a given context relative to its likelihood across all Alexa traffic, and it uses bandit learning to track whether recommendations are helping or not and suppress underperforming experiences.
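Pointwise mutual information itself is a simple ratio, sketched here over hypothetical interaction counts:

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information: how much more often follow-up y
    co-occurs with context x than their overall frequencies across all
    traffic would predict. Positive means the pairing is characteristic;
    negative means it is rarer than chance."""
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log2(p_xy / (p_x * p_y))
```

A follow-up that appears after a given request far more often than its base rate would predict gets a high PMI and becomes a candidate suggestion, which the bandit learner then keeps or suppresses based on observed outcomes.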

We have also introduced visual ID on our latest Echo device, Echo Show 15. With visual ID, Alexa shows notes and other reminders just for you (e.g., “Leave a note for Jack that his new passport has arrived”). Visual ID is also available on Astro, an Alexa-enabled home robot that extends environment and state awareness to your physical space. Astro can follow you playing media or find you to deliver calls, messages, timers, alarms, or reminders. With a Ring Protect Pro subscription, Astro can also proactively patrol your home and investigate anomalous activities.

More trustworthy

Preserving customer privacy is a tenet we will not compromise on, and it is an active area of invention for us. Differential privacy in particular is one of our key areas of focus. This year, we won a best-paper award at the annual meeting of the Florida Artificial Intelligence Research Society (FLAIRS) for an approach to improving the performance of machine learning models while still meeting the privacy standards imposed by differential-privacy analysis.
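As background, the canonical differential-privacy primitive is the Laplace mechanism: perturb a statistic with noise scaled to its sensitivity divided by the privacy budget ε. A sketch of that textbook mechanism (not the award-winning paper's specific method):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace(sensitivity / epsilon) noise, the
    standard way to satisfy epsilon-differential privacy for a numeric
    query. Smaller epsilon means stronger privacy and a noisier answer."""
    rng = rng or random
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform in (-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

The research challenge the paper addresses lives in this trade-off: keeping model utility high while the noise required by a meaningful ε degrades the signal.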

At the Conference of the European Chapter of the Association for Computational Linguistics, we presented a method for protecting privacy by automatically rephrasing training text while preserving its semantic sense, in a way that, again, meets differential-privacy standards.

Alexa AI researchers constructed a dataset of more than 23,000 text generation prompts, each consisting of six to nine words from a sentence on Wikipedia. The prompts can be used to test language models for bias.
Credit: Glynis Condon

We want Alexa to work equally well for everyone. To that end, in addition to our partnership with the National Science Foundation in the area of fairness in AI, we are pursuing research into detecting and mitigating inappropriate bias. At the ACM Conference on Fairness, Accountability, and Transparency (FAccT) and the Conference of the European Chapter of the Association for Computational Linguistics, we published a pair of papers on measuring bias in language models and detecting bias in datasets for training models that recognize unreliable news.

The path ahead

I recognize that there are multiple paths to general AI, each with years of fundamental research ahead of it. I believe Alexa and its underlying vision of ambient intelligence offer a pragmatic path to general AI — one where every advancement makes Alexa more useful for our customers in their daily lives.

I am in awe at the rate of invention from the Alexa team in the most difficult circumstances. As we wrap up yet another year of the COVID pandemic, I hope the advances the worldwide community of AI researchers is making in every discipline of AI will help us prevent future pandemics.

Research areas

Related content

US, VA, Arlington
Are you fascinated by the power of Large Language Models (LLM) and Artificial Intelligence (AI) to transform the way we learn and interact with technology? Are you passionate about applying advanced machine learning (ML) techniques to solve complex challenges in the cloud learning space? If so, AWS Training & Certification (T&C) team has an exciting opportunity for you as an Applied Scientist. At AWS T&C, we strive to be leaders in not only how we learn about the latest AI/ML development and AWS services, but also how the same technologies transform the way we learn about them. As an Applied Scientist, you will join a talented and collaborative team that is dedicated to driving innovation and delivering exceptional experiences in our Skill Builder platform for both new learners and seasoned developers. You will be a part of a global team that is focused on transforming how people learn. The position will interact with global leaders and teams across the globe as well as different business and technical organizations. Join us at the AWS T&C Science Team and become a part of a global team that is redefining the future of cloud learning. With access to vast amounts of data, exciting new technology, and a diverse community of talented individuals, you will have the opportunity to make a meaningful impact on the ways how worldwide learners engage with our learning system and builders develop on our platform. Together, we will drive innovation, solve complex problems, and shape the future of future-generation cloud builders. Please visit https://skillbuilder.awsto learn more. Key job responsibilities - Apply your expertise in LLM to design, develop, and implement scalable machine learning solutions that address challenges in discovery and engagement for our international audiences. 
- Collaborate with cross-functional teams, including software engineers, data engineers, scientists, and product managers, to define project requirements, establish success metrics, and deliver high-quality solutions. - Conduct thorough data analysis to gain insights, identify patterns, and drive actionable recommendations that enhance operational performance and customer experiences across Skill Builder. - Continuously explore and evaluate state-of-the-art techniques and methodologies to improve the accuracy and efficiency of AI/ML systems. - Communicate complex technical concepts effectively to both technical and non-technical stakeholders, providing clear explanations and guidance on proposed solutions and their potential impact. About the team Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness. Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. 
That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.
US, MA, N.reading
Amazon Industrial Robotics is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine cutting-edge AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at an unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic dexterous manipulation, locomotion, and human-robot interaction. This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. At Amazon Industrial Robotics we leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at an unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. We are pioneering the development of robotics dexterous hands that: - Enable unprecedented generalization across diverse tasks - Are compliant and durable - Can span tasks from power grasps to fine dexterity and nonprehensile manipulation - Can navigate the uncertainty of the environment - Leverage mechanical intelligence, multi-modal sensor feedback and advanced control techniques. The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration. 
Key job responsibilities - Design and implement robust sensing for dexterous manipulation, including but not limited to: Tactile sensing, Position sensing, Force sensing, Non-contact sensing - Prototype the various identified sensing strategies, considering the constraints of the rest of the hand design - Build and test full hand sensing prototypes to validate the performance of the solution - Develop testing and validation strategies, supporting fast integration into the rest of the robot - Partner with cross-functional teams to iterate on concepts and prototypes - Work with Amazon's robotics engineering and operations customers to deeply understand their requirements and develop tailored solutions - Document the designs, performance, and validation of the final system
IL, Tel Aviv
Come build the future of entertainment with us. Are you interested in helping shape the future of movies and television? Do you want to help define the next generation of how and what Amazon customers are watching? Prime Video is a premium streaming service that offers customers a vast collection of TV shows and movies - all with the ease of finding what they love to watch in one place. We offer customers thousands of popular movies and TV shows from Originals and Exclusive content to exciting live sports events. We also offer our members the opportunity to subscribe to add-on channels which they can cancel at any time and to rent or buy new release movies and TV box sets on the Prime Video Store. Prime Video is a fast-paced, growth business - available in over 240 countries and territories worldwide. The team works in a dynamic environment where innovating on behalf of our customers is at the heart of everything we do. If this sounds exciting to you, please read on We are seeking an exceptional Applied Scientist to join our Prime Video Sports personalization team in Israel. Our team is dedicated to developing state-of-the-art science to personalize the customer experience and help customers seamlessly find any live event in our selection. You will have the opportunity to work on innovative, large-scale projects that push the boundaries of what's possible in sports content delivery and engagement. Your expertise will be crucial in tackling complex challenges such as information retrieval, sequential modeling, realtime model optimizations, utilizing Large Language Models (LLMs), and building state-of-the-art complex recommender systems. Key job responsibilities We are looking for an Applied Scientist with domain expertise in Personalization, Information Retrieval, and Recommender Systems, or general ML to develop new algorithms and end-to-end solutions. 
As part of our team of applied scientists and software development engineers, you will be responsible for researching, designing, developing, and deploying algorithms into production pipelines. Your role will involve working with cutting-edge technologies in recommender systems and search. You'll also tackle unique challenges like temporal information retrieval to improve real-time sports content recommendations. As a technologist, you will drive the publication of original work in top-tier conferences in Machine Learning and Recommender Systems. We expect you to thrive in ambiguous situations, demonstrating outstanding analytical abilities and comfort in collaborating with cross-functional teams and systems. The ideal candidate is a self-starter with the ability to learn and adapt quickly in our fast-paced environment. About the team We are the Prime Video Sports team. In September 2018 Prime Video launched its first full-scale live streaming experience to world-wide Prime customers with NFL Thursday Night Football. That was just the start. Now Amazon has exclusive broadcasting rights to major leagues like NFL Thursday Night Football, Tennis majors like Roland-Garros and English Premier League to list a few and are broadcasting live events across 30+ sports world-wide. Prime Video is expanding not just the breadth of live content that it offers, but the depth of the experience. This is a transformative opportunity, the chance to be at the vanguard of a program that will revolutionize Prime Video, and the live streaming experience of customers everywhere.
US, WA, Seattle
Within Amazon’s Corporate Financial Planning & Analysis team (FP&A), we enjoy a unique vantage point into everything happening within Amazon. This is exciting opportunity for scientist to join our Financial Transformation team, where you will get to harness the power of statistical and machine learning models to revolutionize finance forecasting that spans entire company and business units. As a key player in this innovative group, you'll be at the forefront of applying state-of-the-art scientific approaches and emerging technologies to solve complex financial challenges. Your deep domain expertise will be instrumental in identifying and addressing customer needs, often venturing into uncharted territories where textbook solutions don't exist. You'll have the chance to author Finance AI articles, showcasing your novel work to both internal and external audiences. Key job responsibilities Your role will involve developing production-ready science models/components that directly impact large-scale systems and services, making critical decisions on implementation complexity and technology adoption. You'll be a driving force in MLOps, optimizing compute and inference usage and enhancing system performance. Beyond technical prowess, you'll contribute to financial strategic planning, mentor team members, and represent our tech. organization in the broader scientific community. This role offers a perfect blend of hands-on development, strategic thinking, and thought leadership in the exciting intersection of finance and advanced analytics. Ready to shape the future of financial forecasting? Join us and let's transform the industry together!
CA, QC, Montreal
Join the next revolution in robotics at Amazon's Frontier AI & Robotics team, where you'll work alongside world-renowned AI pioneers to push the boundaries of what's possible in robotic intelligence. As an Applied Scientist, you'll be at the forefront of developing breakthrough foundation models that enable robots to perceive, understand, and interact with the world in unprecedented ways. You'll drive independent research initiatives in areas such as perception, manipulation, scene understanding, sim2real transfer, multi-modal foundation models, and multi-task learning, designing novel algorithms that bridge the gap between state-of-the-art research and real-world deployment at Amazon scale. In this role, you'll balance innovative technical exploration with practical implementation, collaborating with platform teams to ensure your models and algorithms perform robustly in dynamic real-world environments. You'll have access to Amazon's vast computational resources, enabling you to tackle ambitious problems in areas like very large multi-modal robotic foundation models and efficient, promptable model architectures that can scale across diverse robotic applications. 
Key job responsibilities - Design and implement novel deep learning architectures that push the boundaries of what robots can understand and accomplish - Drive independent research initiatives in robotics foundation models, focusing on breakthrough approaches in perception, and manipulation, for example open-vocabulary panoptic scene understanding, scaling up multi-modal LLMs, sim2real/real2sim techniques, end-to-end vision-language-action models, efficient model inference, video tokenization - Lead technical projects from conceptualization through deployment, ensuring robust performance in production environments - Collaborate with platform teams to optimize and scale models for real-world applications - Contribute to the team's technical strategy and help shape our approach to next-generation robotics challenges A day in the life - Design and implement novel foundation model architectures, leveraging our extensive compute infrastructure to train and evaluate at scale - Collaborate with our world-class research team to solve complex technical challenges - Lead technical initiatives from conception to deployment, working closely with robotics engineers to integrate your solutions into production systems - Participate in technical discussions and brainstorming sessions with team leaders and fellow scientists - Leverage our massive compute cluster and extensive robotics infrastructure to rapidly prototype and validate new ideas - Transform theoretical insights into practical solutions that can handle the complexities of real-world robotics applications About the team At Frontier AI & Robotics, we're not just advancing robotics – we're reimagining it from the ground up. Our team is building the future of intelligent robotics through ground breaking foundation models and end-to-end learned systems. 
We tackle some of the most challenging problems in AI and robotics, from developing sophisticated perception systems to creating adaptive manipulation strategies that work in complex, real-world scenarios. What sets us apart is our unique combination of ambitious research vision and practical impact. We leverage Amazon's massive computational infrastructure and rich real-world datasets to train and deploy state-of-the-art foundation models. Our work spans the full spectrum of robotics intelligence – from multimodal perception using images, videos, and sensor data, to sophisticated manipulation strategies that can handle diverse real-world scenarios. We're building systems that don't just work in the lab, but scale to meet the demands of Amazon's global operations. Join us if you're excited about pushing the boundaries of what's possible in robotics, working with world-class researchers, and seeing your innovations deployed at unprecedented scale.
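The multi-task learning mentioned in the posting above can be illustrated with a toy sketch: a single shared encoder trained jointly on two tasks rather than one model per task, so both tasks shape the same representation. This is purely illustrative (a small numpy network with made-up synthetic targets), not how a robotic foundation model is actually built.

```python
# Toy multi-task learning: one shared encoder, two task heads, one joint loss.
# Illustrative only; real robotic foundation models are large transformer
# policies, and the "tasks" here are synthetic regression targets.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(512, 8))                    # shared input features
H_true = np.tanh(X @ rng.normal(size=(8, 4)))    # latent factors both tasks share
y_a = H_true @ rng.normal(size=4)                # hypothetical task A target
y_b = H_true @ rng.normal(size=4)                # hypothetical task B target

W = 0.1 * rng.normal(size=(8, 4))                # shared encoder weights
head_a, head_b = np.zeros(4), np.zeros(4)        # per-task output heads
losses = []
for _ in range(300):
    H = np.tanh(X @ W)
    err_a, err_b = H @ head_a - y_a, H @ head_b - y_b
    losses.append((err_a**2).mean() + (err_b**2).mean())   # joint loss
    # Backprop through the mean-squared joint loss by hand.
    dH = 2 * (np.outer(err_a, head_a) + np.outer(err_b, head_b)) / len(X)
    W -= 0.05 * X.T @ (dH * (1 - H**2))
    head_a -= 0.1 * H.T @ err_a / len(X)
    head_b -= 0.1 * H.T @ err_b / len(X)
print(f"joint loss: {losses[0]:.2f} -> {losses[-1]:.2f}")
```

Because the encoder gradient sums contributions from both task heads, signal from each task regularizes the shared representation — the core motivation for modeling tasks jointly instead of independently.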
CA, QC, Montreal
US, WA, Seattle
The Sponsored Products and Brands (SPB) team at Amazon Ads is transforming advertising through generative AI technologies. We help millions of customers discover products and engage with brands across Amazon.com and beyond. Our team combines human creativity with artificial intelligence to reinvent the entire advertising lifecycle—from ad creation and optimization to performance analysis and customer insights. We develop responsible AI technologies that balance advertiser needs, enhance shopping experiences, and strengthen the marketplace. Our team values innovation and tackles complex challenges that push the boundaries of what's possible with AI. Join us in shaping the future of advertising.

Key job responsibilities
This role will redesign how ads create personalized, relevant shopping experiences with customer value at the forefront. Key responsibilities include:
- Design and develop solutions using GenAI, deep learning, multi-objective optimization, and/or reinforcement learning to transform ad retrieval, auctions, whole-page relevance, and shopping experiences.
- Partner with scientists, engineers, and product managers to build scalable, production-ready science solutions.
- Apply industry advances in GenAI, Large Language Models (LLMs), and related fields to create innovative prototypes and concepts.
- Improve the team's scientific and technical capabilities by implementing algorithms, methodologies, and infrastructure that enable rapid experimentation and scaling.
- Mentor junior scientists and engineers to build a high-performing, collaborative team.

A day in the life
As an Applied Scientist on the Sponsored Products and Brands Off-Search team, you will contribute to developments in Generative AI (GenAI) and Large Language Models (LLMs) to revolutionize our advertising flow, backend optimization, and frontend shopping experiences.
This is a rare opportunity to redefine how ads are retrieved, allocated, and/or experienced—elevating them into personalized, contextually aware, and inspiring components of the customer journey. You will have the opportunity to fundamentally transform areas such as ad retrieval, ad allocation, whole-page relevance, and differentiated recommendations through the lens of GenAI. By building novel generative models grounded in both Amazon’s rich data and the world’s collective knowledge, your work will shape how customers engage with ads, discover products, and make purchasing decisions. If you are passionate about applying frontier AI to real-world problems with massive scale and impact, this is your opportunity to define the next chapter of advertising science.

About the team
The Off-Search team within Sponsored Products and Brands (SPB) is focused on building delightful ad experiences across various surfaces beyond Search on Amazon—such as product detail pages, the homepage, and store-in-store pages—to drive monetization. Our vision is to deliver highly personalized, context-aware advertising that adapts to individual shopper preferences, scales across diverse page types, remains relevant to seasonal and event-driven moments, and integrates seamlessly with organic recommendations such as new arrivals, basket-building content, and fast-delivery options. To execute this vision, we work in close partnership with Amazon Stores stakeholders to lead the expansion and growth of advertising across Amazon-owned and -operated pages beyond Search. We operate full stack—from backend ads-retail edge services, ads retrieval, and ad auctions to shopper-facing experiences—all designed to deliver meaningful value.
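The ad auctions named in this posting can be illustrated with a toy generalized second-price (GSP) auction, the classic mechanism for ranked ad slots in search advertising. This sketch is not Amazon's actual auction logic; real systems add reserve prices, quality signals, pacing, and much more.

```python
# Toy generalized second-price (GSP) auction for ranked ad slots.
# Illustrative only, not a production ad auction.

def gsp_auction(bids, num_slots):
    """Rank ads by bid * predicted CTR; charge each winner the smallest bid
    that would still beat the next competitor's score.

    bids: dict mapping ad_id -> (bid_amount, predicted_ctr)
    Returns a list of (ad_id, price_charged) for each filled slot.
    """
    # Rank by expected value per impression (bid * pCTR).
    ranked = sorted(bids.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    results = []
    for i, (ad_id, (bid, ctr)) in enumerate(ranked[:num_slots]):
        if i + 1 < len(ranked):
            next_bid, next_ctr = ranked[i + 1][1]
            # Price = next competitor's score, converted back to a bid.
            price = next_bid * next_ctr / ctr
        else:
            price = 0.0  # no competitor below: pay the (zero) reserve
        results.append((ad_id, round(min(price, bid), 4)))
    return results

winners = gsp_auction({"a": (1.00, 0.05), "b": (0.80, 0.10), "c": (0.50, 0.04)}, num_slots=2)
print(winners)  # b wins slot 1 on bid * pCTR despite the lower bid
```

Note that ranking by bid times predicted CTR is what makes "whole-page relevance" a science problem: the CTR model, not just the bid, decides allocation.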
US, CA, Santa Clara
The AWS Neuron Science Team is looking for talented scientists to enhance our software stack, accelerating customer adoption of Trainium and Inferentia accelerators. In this role, you will work directly with external and internal customers to identify key adoption barriers and optimization opportunities. You'll collaborate closely with our engineering teams to implement innovative solutions and engage with academic and research communities to advance state-of-the-art ML systems. As part of a strategic growth area for AWS, you'll work alongside distinguished engineers and scientists in an exciting and impactful environment. We actively work on these areas:
- AI for Systems: Developing and applying ML/RL approaches for kernel/code generation and optimization
- Machine Learning Compiler: Creating advanced compiler techniques for ML workloads
- System Robustness: Building tools for accuracy and reliability validation
- Efficient Kernel Development: Designing high-performance kernels optimized for our ML accelerator architectures

A day in the life
AWS Utility Computing (UC) provides product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services. Additionally, this role may involve exposure to and experience with Amazon's growing suite of generative AI services and other cloud computing offerings across the AWS portfolio.

About the team
AWS Neuron is the software stack for Trainium and Inferentia, the AWS machine learning chips. Inferentia delivers best-in-class ML inference performance at the lowest cost in the cloud to our AWS customers.
Trainium is designed to deliver best-in-class ML training performance at the lowest training cost in the cloud, and it’s all enabled by AWS Neuron. Neuron is software that includes an ML compiler and native integrations with popular ML frameworks. Our products are used at scale by external customers like Anthropic and Databricks as well as internal customers like Alexa, Amazon Bedrock, Amazon Robotics, Amazon Ads, Amazon Rekognition, and many more.

Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS?
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture
AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of life at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve.
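The "AI for Systems" and kernel-optimization areas in the Neuron posting can be illustrated with a toy autotuner: time a blocked matrix multiply at several tile sizes and keep the fastest. This is only a sketch of the idea; a real ML compiler's autotuner (Neuron's included) searches far richer spaces, often with learned cost models instead of brute-force timing.

```python
# Toy kernel autotuner: pick the tile size that makes a blocked matmul
# fastest on this machine. Illustrative only; not Neuron's tooling.
import timeit
import numpy as np

def blocked_matmul(A, B, tile):
    """Blocked matrix multiply; tile size changes cache behavior, not the result."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            for k in range(0, n, tile):
                C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
    return C

def autotune(n=128, candidates=(16, 32, 64, 128)):
    """Exhaustively time each candidate tile size and return the fastest."""
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    timings = {t: timeit.timeit(lambda: blocked_matmul(A, B, t), number=3)
               for t in candidates}
    return min(timings, key=timings.get), timings

best, timings = autotune()
print("best tile:", best)
```

Replacing the exhaustive loop with a learned model that predicts `timings` from the kernel parameters is, roughly, what "ML/RL approaches for kernel generation and optimization" refers to.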
US, CA, Sunnyvale
The Artificial General Intelligence (AGI) team is looking for a highly skilled and experienced Applied Scientist to support the development and implementation of state-of-the-art algorithms and models for supervised fine-tuning, reinforcement learning from human feedback (RLHF), and complex reasoning, with a focus across text, image, and video modalities. As an Applied Scientist, you will play a critical role in supporting the development of Generative AI (Gen AI) technologies that can handle Amazon-scale use cases and have a significant impact on our customers' experiences.

Key job responsibilities
- Collaborate with cross-functional teams of engineers, product managers, and scientists to identify and solve complex problems in Gen AI
- Design and execute experiments to evaluate the performance of different algorithms and models, and iterate quickly to improve results
- Think big about the arc of development of Gen AI over a multi-year horizon, and identify new opportunities to apply these technologies to solve real-world problems
- Communicate results and insights to both technical and non-technical audiences, including through presentations and written reports
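The RLHF work named in the AGI posting rests on reward modeling, whose core is a pairwise (Bradley-Terry) objective: push the reward of the human-preferred response above the rejected one. Here is a minimal numpy sketch with a linear reward model and synthetic stand-in features; real reward models are LLM-based, so everything below the loss function is an illustrative assumption.

```python
# Pairwise (Bradley-Terry) reward-model objective used in RLHF,
# sketched with a toy linear reward r(x) = w . x on synthetic features.
import numpy as np

def pairwise_loss_and_grad(w, chosen, rejected):
    """loss = -log sigmoid(r_chosen - r_rejected), averaged over pairs."""
    margin = (chosen - rejected) @ w
    p = 1.0 / (1.0 + np.exp(-margin))                   # P(chosen preferred)
    loss = -np.log(p + 1e-12).mean()
    grad = -((1.0 - p)[:, None] * (chosen - rejected)).mean(axis=0)
    return loss, grad

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])                     # hidden preference direction
chosen = rng.normal(size=(256, 3))
rejected = rng.normal(size=(256, 3))
# Relabel pairs so "chosen" really is the higher-reward response.
swap = (chosen @ true_w) < (rejected @ true_w)
chosen[swap], rejected[swap] = rejected[swap].copy(), chosen[swap].copy()

w, losses = np.zeros(3), []
for _ in range(500):                                    # plain gradient descent
    loss, grad = pairwise_loss_and_grad(w, chosen, rejected)
    losses.append(loss)
    w -= 0.5 * grad
print("learned reward weights:", np.round(w, 2))
```

The learned reward model then scores candidate responses during RL fine-tuning; the same pairwise loss also underlies supervised preference methods.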
US, WA, Seattle
Application deadline: Applications will be accepted on an ongoing basis.

Amazon Ads is re-imagining advertising through cutting-edge generative artificial intelligence (AI) technologies. We combine human creativity with AI to transform every aspect of the advertising life cycle—from ad creation and optimization to performance analysis and customer insights. Our solutions help advertisers grow their brands while enabling millions of customers to discover and purchase products through delightful experiences. We deliver billions of ad impressions and millions of clicks daily, breaking fresh ground in product and technical innovations. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising.

Why you’ll love this role:
This role offers unprecedented breadth in ML applications and access to extensive computational resources and rich datasets that will enable you to build truly innovative solutions. You'll work on projects that span the full advertising life cycle, from sophisticated ranking algorithms and real-time bidding systems to creative optimization and measurement solutions. You'll work alongside talented engineers, scientists, and product leaders in a culture that encourages innovation, experimentation, and bias for action, and you’ll directly influence business strategy through your scientific expertise. What makes this role unique is the combination of scientific rigor with real-world impact. You’ll re-imagine advertising through the lens of advanced ML while solving problems that balance the needs of advertisers, customers, and Amazon's business objectives.

Your impact and career growth:
Amazon Ads is investing heavily in AI and ML capabilities, creating opportunities for scientists to innovate and make their mark. Your work will directly impact millions.
Whether you see yourself growing as an individual contributor or moving into people management, there are clear paths for career progression. This role combines scientific leadership, organizational ability, technical strength, and business understanding. You'll have opportunities to lead technical initiatives, mentor other scientists, and collaborate with senior leadership to shape the future of advertising technology. Most importantly, you'll be part of a community that values scientific excellence and encourages you to push the boundaries of what's possible with AI. Watch two Applied Scientists at Amazon Ads talk about their work: https://www.youtube.com/watch?v=vvHsURsIPEA Learn more about Amazon Ads: https://advertising.amazon.com/

Key job responsibilities
As a Senior Applied Scientist in Amazon Ads, you will:
- Research and implement cutting-edge ML approaches, including applications of generative AI and large language models
- Develop and deploy innovative ML solutions spanning multiple disciplines – from ranking and personalization to natural language processing, computer vision, recommender systems, and large language models
- Drive end-to-end projects that tackle ambiguous problems at massive scale, often working with petabytes of data
- Build and optimize models that balance multiple stakeholder needs, helping customers discover relevant products while enabling advertisers to achieve their goals efficiently
- Build ML models, perform proofs of concept, experiment, optimize, and deploy your models into production, working closely with cross-functional teams including engineers, product managers, and other scientists
- Design and run A/B experiments to validate hypotheses, gather insights from large-scale data analysis, and measure business impact
- Develop scalable, efficient processes for model development, validation, and deployment that optimize traffic monetization while maintaining customer experience
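The A/B experiments this posting mentions typically reduce, at their simplest, to comparing conversion or click-through rates between control and treatment. A standard tool is the two-proportion z-test, sketched below with made-up counts; production experimentation platforms layer sequential testing, variance reduction, and multiple-metric corrections on top.

```python
# Two-proportion z-test for an A/B experiment on click-through rate.
# Counts are invented for illustration.
import math

def two_proportion_ztest(clicks_a, n_a, clicks_b, n_b):
    """Return (z, two-sided p-value) for H0: CTR_A == CTR_B."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)        # pooled CTR under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_ztest(clicks_a=4_100, n_a=100_000,
                            clicks_b=4_350, n_b=100_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these (hypothetical) counts the treatment's CTR lift is statistically significant at the conventional 0.05 level; the "measure business impact" step is then translating that lift into revenue and customer-experience terms.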