On-device speech processing makes Alexa faster, lower-bandwidth

Innovative training methods and model compression techniques combine with clever engineering to keep speech processing local.

At Amazon, we are always looking to invent new technology that improves the customer experience. One technology we have been working on at Alexa is on-device speech processing, which has multiple benefits: a reduction in latency, or the time it takes Alexa to respond to queries; lowered bandwidth consumption, which is important on portable devices; and increased availability in in-car units and other applications where Internet connectivity is intermittent. On-device processing also enables the fusion of the speech signal with other modalities, like vision, for features such as Alexa’s natural turn-taking.

In the last year, we’ve continued to build upon Alexa’s on-device speech-processing capabilities. As a result of these inventions, we are launching a new setting that gives customers the option of having the audio of their Alexa voice requests processed locally, without being sent to the cloud.

In the cloud, storage space and computational capacity are effectively unconstrained. To ensure accuracy, our cloud models can be large and computationally demanding. Executing the same functions on-device means compressing our models to less than 1% of their original size — with minimal loss in accuracy.

Moreover, in the cloud, the separate components of Alexa’s speech-processing stack — automatic speech recognition (ASR), whisper detection, and speaker identification — run on separate server nodes with their own powerful processors. On-device, those functions have to share hardware not only with each other but with Alexa’s other core device functions, such as music playback.

Re-creating Alexa’s speech-processing stack on-device was a massive undertaking. New methods for training small-footprint ASR models were part of the solution, but so were innovations in system design and hardware-software codesign. It was a joint effort across science and engineering teams over a span of years. Here’s a quick overview of how it works.

System architecture

Our on-device ASR model takes in an acoustic speech signal and outputs a set of hypotheses about what the speaker said, ranked according to probability. We represent those hypotheses as a lattice — a graph whose edges represent recognized words and the probability that a given word follows from the previous one.

Figure: An example of a lattice representing ASR hypotheses.
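
To make the lattice structure concrete, here is a minimal sketch of one as a weighted graph, along with a search for the most probable hypothesis. The words, probabilities, and node layout are purely illustrative, not Alexa's actual data structures.

```python
import math

# A lattice as an adjacency list: each edge carries a word, its log-probability,
# and the node it leads to. Node 0 is the start; node 3 has no outgoing edges.
lattice = {
    0: [("play", math.log(0.8), 1), ("pray", math.log(0.2), 1)],
    1: [("a", math.log(0.9), 2), ("the", math.log(0.1), 2)],
    2: [("song", math.log(0.7), 3), ("sonnet", math.log(0.3), 3)],
    3: [],
}

def best_hypothesis(lattice, node=0):
    """Return (log_prob, words) of the most probable path starting at `node`."""
    if not lattice[node]:
        return 0.0, []
    best = (float("-inf"), [])
    for word, logp, nxt in lattice[node]:
        tail_logp, tail_words = best_hypothesis(lattice, nxt)
        candidate = (logp + tail_logp, [word] + tail_words)
        if candidate[0] > best[0]:
            best = candidate
    return best

logp, words = best_hypothesis(lattice)
print(" ".join(words), round(math.exp(logp), 3))  # "play a song" 0.504
```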

With cloud-based ASR, encrypted audio streams to the cloud in small snippets called “frames”. With on-device ASR, only the lattice is sent to the cloud, where a large and powerful neural language model reranks the hypotheses. The lattice can’t be sent until the customer has finished speaking, as words later in a sequence can dramatically change the overall probability of a hypothesis.

The model that determines when the customer has finished speaking is called an end-pointer. End-pointers offer a natural trade-off between accuracy and latency: an aggressive end-pointer will initiate speech processing earlier, but it might cut the speaker off prematurely, resulting in a poor customer experience.

On the device, we in fact run two end-pointers: One is a speculative end-pointer that we have tuned to be about 200 milliseconds faster than the final end-pointer, so we can initiate downstream processing — such as natural-language understanding (NLU) — ahead of the final end-pointed ASR result. In exchange for speed, however, we trade off a little accuracy.

The final end-pointer takes longer to make a decision but is more accurate. In cases in which the first end-pointer cuts speech off too early, the final end-pointer sends a revised lattice and the instruction to reset downstream processing. In the large majority of cases, however, the aggressive end-pointer is correct, which reduces user-perceived latency, since downstream tasks are initiated earlier.
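
The interplay between the two end-pointers can be sketched roughly as follows. The function and interface names are assumptions for illustration; the production system is event-driven and considerably more involved.

```python
def process_utterance(frames, speculative_ep, final_ep, asr, start_nlu, reset_nlu):
    """Sketch of speculative vs. final end-pointing (hypothetical interfaces).

    speculative_ep / final_ep: callables that return True once they decide
    the customer has finished speaking.
    """
    speculative_lattice = None
    for frame in frames:
        asr.consume(frame)
        if speculative_lattice is None and speculative_ep(frame):
            # Aggressive end-pointer fired: kick off downstream NLU early.
            speculative_lattice = asr.lattice()
            start_nlu(speculative_lattice)
        if final_ep(frame):
            final_lattice = asr.lattice()
            if speculative_lattice is not None and final_lattice == speculative_lattice:
                return final_lattice       # early result confirmed; latency saved
            reset_nlu()                    # speech was cut off early: reset and redo
            start_nlu(final_lattice)
            return final_lattice
```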

Another aspect of ASR that had to move on-device is context awareness. When computing the probabilities in a lattice, the ASR model should, for instance, give added weight to otherwise uncommon names that happen to be in the customer’s address book or the names the customer has assigned to household devices.

Figure: A diagram of the on-device ASR network, with a closeup of the biasing mechanism that allows the network to ingest dynamic content. (Based on figures in "Context-aware Transformer transducer for speech recognition")

Figure: This attention map indicates that the trained network is attending to the correct entry in a list of Alexa-linked home appliances. (From "Context-aware Transformer transducer for speech recognition")

Context awareness can’t wait for the cloud because the lattice, though it encodes multiple hypotheses, doesn’t come close to encoding all possible hypotheses. When constructing the lattice, the ASR system has to prune a lot of low-probability hypotheses. If context awareness isn’t built into the on-device model, names of contacts or linked skills might end up getting pruned.

Initially, we use a so-called shallow-fusion model to add context and personalize content on-device. When the system is building the lattice, it boosts the probabilities of contextually relevant words such as contact or appliance names.
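
Conceptually, shallow fusion boosts the scores of contextually relevant words while the lattice is being built, so that rare names survive beam pruning. A minimal sketch, with an illustrative boost value and word list rather than the production heuristics:

```python
import math

contextual_terms = {"xiomara", "thorbjorn", "living room lamp"}  # illustrative entries
BOOST = 2.0  # additive log-probability boost; illustrative value

def fused_score(word, asr_log_prob):
    """Boost the ASR score of words that appear in the customer's context."""
    if word.lower() in contextual_terms:
        return asr_log_prob + BOOST
    return asr_log_prob

# During lattice construction, the boosted score makes a rare contact name
# more likely to survive beam pruning than an equally improbable generic word.
print(fused_score("Xiomara", math.log(0.01)))  # boosted
print(fused_score("banana", math.log(0.01)))   # unchanged
```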

The probability boosts are heuristic, however — they’re not learned jointly with the core ASR model. To achieve even better accuracy on personalized and long-tail content, we have developed a multihead attention-based context-biasing mechanism that is jointly trained with the rest of the ASR subnetworks.
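
The biasing mechanism can be thought of as cross-attention between the ASR encoder's representation of the audio and embeddings of the contextual entries. Here is a simplified PyTorch-style sketch; the dimensions, layer choices, and the way the biased representation is consumed are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ContextBiasing(nn.Module):
    """Cross-attention over contextual entries (contact names, device names)."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(2 * d_model, d_model)

    def forward(self, encoder_state, context_embeddings):
        # encoder_state: (batch, time, d_model)
        # context_embeddings: (batch, n_entries, d_model), one per contextual entry
        bias, _ = self.attn(query=encoder_state,
                            key=context_embeddings,
                            value=context_embeddings)
        # Fuse the biasing vector back into the encoder representation.
        return self.out(torch.cat([encoder_state, bias], dim=-1))

enc = torch.randn(1, 50, 256)   # 50 frames of encoded audio
ctx = torch.randn(1, 10, 256)   # embeddings of 10 contextual entries
print(ContextBiasing()(enc, ctx).shape)  # torch.Size([1, 50, 256])
```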

Model training

On-device ASR required us to build a new model from the ground up, an end-to-end recurrent neural network-transducer (RNN-T) model that directly maps the input speech signal to an output sequence of words. Using a single neural network results in a significantly reduced memory footprint. But we had to develop new techniques, both for inference and for training, to achieve the degree of accuracy and compression that would let this technology handle utterances on-device.
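
For orientation, an RNN-T combines an audio encoder, a prediction network that consumes previously emitted tokens, and a joint network that fuses the two. The following is a bare-bones illustrative sketch, not the production model; in practice the prediction network would consume token embeddings rather than one-hot vectors, and the whole model would be trained with the RNN-T loss.

```python
import torch
import torch.nn as nn

class TinyRNNT(nn.Module):
    """Bare-bones RNN-T: audio encoder + prediction network + joint network."""
    def __init__(self, n_feats=80, vocab=4000, d=256):
        super().__init__()
        self.encoder = nn.LSTM(n_feats, d, num_layers=2, batch_first=True)
        self.predictor = nn.LSTM(vocab, d, batch_first=True)  # consumes previous tokens
        self.joiner = nn.Linear(2 * d, vocab)

    def forward(self, audio, tokens_onehot):
        enc, _ = self.encoder(audio)              # (batch, T, d)
        pred, _ = self.predictor(tokens_onehot)   # (batch, U, d)
        # Combine every audio frame with every output position.
        joint = torch.cat([enc.unsqueeze(2).expand(-1, -1, pred.size(1), -1),
                           pred.unsqueeze(1).expand(-1, enc.size(1), -1, -1)], dim=-1)
        return self.joiner(joint)                 # (batch, T, U, vocab) logits

model = TinyRNNT()
logits = model(torch.randn(1, 100, 80), torch.zeros(1, 5, 4000))
print(logits.shape)  # torch.Size([1, 100, 5, 4000])
```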

Previously on Amazon Science, we’ve discussed some of the techniques we used to increase the accuracy of small-footprint end-to-end ASR models. With teacher-student training, for instance, we teach a small, lean model to match the outputs of a more-powerful but slower model. We developed a training methodology that made it possible to do teacher-student training efficiently with a million hours of unannotated speech.
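
At its core, teacher-student training replaces hard transcript labels with the teacher's output distribution, which is why unannotated speech suffices. Below is a minimal sketch of a distillation loss with an illustrative temperature; the production pipeline adds much more, such as data selection and scaling to a million hours.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student output distributions."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes are comparable to a hard-label loss.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t * t

# Unannotated audio: no transcripts needed, the teacher's outputs are the targets.
student_logits = torch.randn(8, 100, 4000, requires_grad=True)  # (batch, frames, tokens)
teacher_logits = torch.randn(8, 100, 4000)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients push the student toward the teacher's distribution
```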

Figure: During the training of a context-aware ASR model, a long-short-term-memory (LSTM) encoder encodes both unlabeled and labeled segments of the audio stream, so the model can use the entire input audio to improve ASR accuracy. (From "Improving RNN-T ASR accuracy using context audio")

To further boost the accuracy of on-device RNN-T ASR, we developed techniques that allow the neural network to learn and exploit audio context within a stream. For example, for a stream comprising two utterances, “Alexa” and “Play a song”, the audio context from the keyword segment (“Alexa”) helps the model focus on the foreground speech and speaker. Separately, we implemented a novel discriminative loss and training algorithm that aim to directly minimize the word error rate (WER) of RNN-T ASR.
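
The second idea corresponds to minimum-word-error-rate-style training, in which the expected number of word errors over an n-best list is minimized directly. The sketch below is a generic textbook formulation, not necessarily the exact loss used for the on-device model.

```python
import torch

def expected_word_errors(hyp_log_probs, hyp_errors):
    """Minimum-WER-style objective over an n-best list.

    hyp_log_probs: (n_best,) log-probabilities of each hypothesis (differentiable).
    hyp_errors:    (n_best,) edit distance of each hypothesis against the reference.
    """
    probs = torch.softmax(hyp_log_probs, dim=0)    # renormalize over the n-best list
    rel_errors = hyp_errors - hyp_errors.mean()    # mean baseline for variance reduction
    return torch.sum(probs * rel_errors)           # expected (relative) word errors

log_probs = torch.tensor([-1.2, -1.5, -2.0], requires_grad=True)
errors = torch.tensor([0.0, 2.0, 1.0])
loss = expected_word_errors(log_probs, errors)
loss.backward()  # pushes probability mass toward low-error hypotheses
```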

On top of these innovations, however, we still had to develop some new compression techniques to get the RNN-T to run efficiently on-device. A neural network consists of simple processing nodes, each of which is connected to several others. The connections between nodes have associated weights, which determine how much one node’s output contributes to the computation performed by the next node.

One way to shrink a neural network’s memory footprint is to quantize its weights — to divide the total range of weights into a small set of intervals and use a single value to represent all the weights in each interval. So, for instance, the weights 0.70, 0.76, and 0.79 might all get quantized to the single value 0.75. Specifying an interval requires fewer bits than specifying several different floating-point values.
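
The example above corresponds to uniform quantization: one floating-point scale factor covers a whole weight matrix (or block of it), and each weight is stored as a small integer. A sketch assuming simple symmetric 8-bit quantization; the bit widths and grouping used in practice vary by layer.

```python
import numpy as np

def quantize(weights, num_bits=8):
    """Uniformly quantize float weights to signed integers plus one scale factor."""
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = np.max(np.abs(weights)) / qmax    # one float covers the whole range
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.70, 0.76, 0.79, -0.31], dtype=np.float32)
q, scale = quantize(w)
print(q)                     # small integers, 1 byte each instead of 4
print(dequantize(q, scale))  # close to the original weights
```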

If quantization is done after a network has been trained, performance can suffer. We developed a method of quantization-aware training that imposes a probability distribution on the network weights during training, so that they can be easily quantized with little effect on performance. Unlike previous quantization-aware training methods, which mostly take quantization into account in the forward pass, ours accounts for quantization in the backward direction, during weight updates, through network loss regularization. And it does that efficiently.
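
One way to picture loss-based quantization-aware training is as an extra regularization term that pulls weights toward their nearest quantization grid points during weight updates. The sketch below is only a loose illustration of that idea; the published method imposes a probability distribution over the weights rather than this simple squared-distance penalty.

```python
import torch

def quantization_regularizer(weights, num_bits=8):
    """Penalty that pulls weights toward their nearest quantization grid point."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = weights.detach().abs().max() / qmax
    nearest = torch.round(weights / scale) * scale
    return torch.mean((weights - nearest.detach()) ** 2)

w = torch.randn(256, 256, requires_grad=True)
task_loss = w.sum() * 0.0            # placeholder for the real ASR training loss
loss = task_loss + 0.01 * quantization_regularizer(w)
loss.backward()                      # gradient nudges weights toward grid points
```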

A way to make neural networks run more efficiently — also a vital concern on resource-constrained devices — is to reduce low weights to zero. Computations involving zero weights can be discarded, reducing the computational burden.

Figure: Over successive training epochs, sparsification gradually drops low weights in a weight matrix.

But again, doing that reduction after the network is trained can compromise performance. We developed a sparsification method that enables the gradual reduction of low-value weights during training, so the network learns a model amenable to weight pruning.

Neural networks are typically trained through multiple passes over the same set of training data, or epochs. During each epoch, we push more and more of the low-value weights toward zero, so that by the end of the final epoch, a fixed fraction of the weights — say, half — are effectively zero. They can be safely discarded.
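
A rough sketch of that schedule: at each epoch, a growing fraction of the smallest-magnitude weights is zeroed out. The schedule and masking below are generic illustrations, not the exact recipe used for the production models.

```python
import torch

def sparsification_mask(weights, target_sparsity, epoch, total_epochs):
    """Mask out the smallest-magnitude weights, ramping sparsity up over epochs."""
    current_sparsity = target_sparsity * (epoch + 1) / total_epochs
    k = int(weights.numel() * current_sparsity)   # number of weights pruned so far
    if k == 0:
        return torch.ones_like(weights)
    threshold = weights.abs().flatten().kthvalue(k).values
    return (weights.abs() > threshold).float()

w = torch.randn(512, 512)
for epoch in range(10):
    mask = sparsification_mask(w, target_sparsity=0.5, epoch=epoch, total_epochs=10)
    w = w * mask   # in training, the mask would be applied after each weight update
print((w == 0).float().mean())  # ~0.5: roughly half the weights are now zero
```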

Figure: A demonstration of the branching encoder network.

To improve on-device efficiency, we also developed a branching encoder network that uses two different neural networks to convert speech inputs into numeric representations suitable for speech classification. One network is complex, the other simple, and the ASR model decides on the fly whether it can get away with passing an input frame to the simple network, saving computational cost and time. We described this work in more detail in an earlier Amazon Science blog post.
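
A schematic of the branching idea, with an illustrative per-frame arbitrator; the production encoders and the decision mechanism are learned jointly and are more sophisticated. Note that this sketch computes both branches for simplicity, whereas the point of the design is that, at inference time, only the chosen branch runs for each frame.

```python
import torch
import torch.nn as nn

class BranchingEncoder(nn.Module):
    """Route each input frame to a cheap or an expensive encoder (illustrative)."""
    def __init__(self, d_in=80, d_model=256):
        super().__init__()
        self.simple = nn.Linear(d_in, d_model)                   # cheap branch
        self.complex = nn.Sequential(nn.Linear(d_in, d_model),   # expensive branch
                                     nn.ReLU(),
                                     nn.Linear(d_model, d_model))
        self.arbitrator = nn.Linear(d_in, 1)                     # per-frame decision

    def forward(self, frames):                                   # (batch, time, d_in)
        use_complex = torch.sigmoid(self.arbitrator(frames)) > 0.5
        return torch.where(use_complex, self.complex(frames), self.simple(frames))

enc = BranchingEncoder()
print(enc(torch.randn(1, 50, 80)).shape)  # torch.Size([1, 50, 256])
```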

Hardware-software codesign

Quantization and sparsification make no difference to performance if the underlying hardware can’t take advantage of them. Another key to getting ASR to run on-device was the design of Amazon’s AZ family of neural edge processors, which are optimized for our specific approach to compression.

For one thing, where a typical processor might represent data using 16 or 32 bits, for certain core operations, the AZ processors accelerate computation by using an 8-bit or even lower-bit representation, because that’s all we need to handle quantized values.

The weights of a neural network are typically represented as a matrix — a big grid of numbers. A matrix in which half the values are zeroes takes up just as much space as one in which all the values are nonzero.

On computer chips, transferring data tends to be much more time consuming than executing computations. So when we load our matrix into memory, we use a compression scheme that takes advantage of low-bit quantization and zero values. The circuitry for decoding the compressed representation is built into the chip.

In the neural processor’s memory, the matrix is reconstituted: the zeroes are filled back in. But the processor’s circuitry is designed to recognize zero values and discard computations involving them. So the time savings from sparsification are realized in the hardware itself.
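
In software terms, the hardware behavior corresponds to storing only the nonzero weights together with their positions and skipping multiplications by zero. A simplified, CSR-style sketch of that idea; the AZ processors' actual encoding and decode circuitry differ.

```python
import numpy as np

def compress(matrix):
    """Store only nonzero values and their column indices, row by row."""
    values, columns, row_starts = [], [], [0]
    for row in matrix:
        nz = np.nonzero(row)[0]
        values.extend(row[nz])
        columns.extend(nz)
        row_starts.append(len(values))
    return np.array(values), np.array(columns), np.array(row_starts)

def sparse_matvec(values, columns, row_starts, x):
    """Multiply the compressed matrix by a vector, skipping zero weights entirely."""
    y = np.zeros(len(row_starts) - 1)
    for i in range(len(y)):
        start, end = row_starts[i], row_starts[i + 1]
        y[i] = np.dot(values[start:end], x[columns[start:end]])
    return y

W = np.random.randn(4, 8)
W[np.abs(W) < 0.8] = 0.0                        # pretend these weights were pruned
vals, cols, starts = compress(W)
x = np.random.randn(8)
print(np.allclose(sparse_matvec(vals, cols, starts, x), W @ x))  # True
```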

Moving speech recognition on-device entails a number of innovations in other areas, such as reducing the bandwidth required for model updates and compressing NLU models, to ensure basic functionality on devices with intermittent Internet connectivity. And we’re also hard at work on multilingual on-device ASR models for dynamic language switching, or automatically recognizing which of two languages a customer is speaking and responding in kind.

The launch of on-device speech processing is a huge step in bringing the benefits of “processing on the edge” to our customers, and we will continue to invent on their behalf in this area.
