On-device speech processing makes Alexa faster, lower-bandwidth

Innovative training methods and model compression techniques combine with clever engineering to keep speech processing local.

At Amazon, we always look to invent new technology for improving customer experience. One technology we have been working on at Alexa is on-device speech processing, which has multiple benefits: a reduction in latency, or the time it takes Alexa to respond to queries; lowered bandwidth consumption, which is important on portable devices; and increased availability in in-car units and other applications where Internet connectivity is intermittent. On-device processing also enables the fusion of the speech signal with other modalities, like vision, for features such as Alexa’s natural turn-taking.

In the last year, we’ve continued to build upon Alexa’s on-device speech-processing capabilities. As a result of these inventions, we are launching a new setting that gives customers the option of having the audio of their Alexa voice requests processed locally, without being sent to the cloud.

In the cloud, storage space and computational capacity are effectively unconstrained. To ensure accuracy, our cloud models can be large and computationally demanding. Executing the same functions on-device means compressing our models into less than 1% as much space — with minimal loss in accuracy.

Moreover, in the cloud, the separate components of Alexa’s speech-processing stack — automatic speech recognition (ASR), whisper detection, and speaker identification — run on separate server nodes with their own powerful processors. On-device, those functions have to share hardware not only with each other but with Alexa’s other core device functions, such as music playback.

Re-creating Alexa’s speech-processing stack on-device was a massive undertaking. New methods for training small-footprint ASR models were part of the solution, but so were innovations in system design and hardware-software codesign. It was a joint effort across science and engineering teams over a span of years. Here’s a quick overview of how it works.

System architecture

Our on-device ASR model takes in an acoustic speech signal and outputs a set of hypotheses about what the speaker said, ranked according to probability. We represent those hypotheses as a lattice — a graph whose edges represent recognized words and the probability that a given word follows from the previous one.

[Figure: An example of a lattice representing ASR hypotheses.]
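
To make the lattice structure concrete, here is a minimal sketch of how such a graph might be represented in code. The class and its fields are illustrative, not Alexa's actual data structures:

```python
import math
from collections import defaultdict

class Lattice:
    """A toy ASR lattice: nodes are states; edges carry a word and its log probability."""

    def __init__(self):
        self.edges = defaultdict(list)  # node -> list of (next_node, word, log_prob)

    def add_edge(self, src, dst, word, prob):
        self.edges[src].append((dst, word, math.log(prob)))

    def hypotheses(self, start, end, prefix=None, score=0.0):
        """Enumerate complete hypotheses with their total log probabilities."""
        prefix = prefix or []
        if start == end:
            yield score, " ".join(prefix)
            return
        for dst, word, lp in self.edges[start]:
            yield from self.hypotheses(dst, end, prefix + [word], score + lp)

# Two competing hypotheses: "play jazz" vs. "play chess".
lat = Lattice()
lat.add_edge(0, 1, "play", 1.0)
lat.add_edge(1, 2, "jazz", 0.7)
lat.add_edge(1, 2, "chess", 0.3)
for log_p, hyp in sorted(lat.hypotheses(0, 2), reverse=True):
    print(f"{hyp}: p={math.exp(log_p):.2f}")
```

The ranked hypotheses this enumerates, with their overall probabilities, are the information the cloud reranker consumes.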

With cloud-based ASR, encrypted audio streams to the cloud in small snippets called “frames”. With on-device ASR, only the lattice is sent to the cloud, where a large and powerful neural language model reranks the hypotheses. The lattice can’t be sent until the customer has finished speaking, as words later in a sequence can dramatically change the overall probability of a hypothesis.

The model that determines when the customer has finished speaking is called an end-pointer. End-pointers offer a natural trade-off between accuracy and latency: an aggressive end-pointer will initiate speech processing earlier, but it might cut the speaker off prematurely, resulting in a poor customer experience.

On-device, we in fact run two end-pointers. One is a speculative end-pointer that we have tuned to be about 200 milliseconds faster than the final end-pointer, so we can initiate downstream processing — such as natural-language understanding (NLU) — ahead of the final end-pointed ASR result. In exchange for speed, however, we trade off a little accuracy.

The final end-pointer takes longer to make a decision but is more accurate. In cases in which the first end-pointer cuts speech off too early, the final end-pointer sends a revised lattice and the instruction to reset downstream processing. In the large majority of cases, however, the aggressive end-pointer is correct, which reduces user-perceived latency, since downstream tasks are initiated earlier.
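
Here is a simplified sketch of how the two end-pointers might be orchestrated. The thresholds, helper names, and frame format are hypothetical stand-ins, not the production logic:

```python
SPECULATIVE_THRESHOLD = 0.80  # fires earlier, occasionally premature
FINAL_THRESHOLD = 0.95        # fires ~200 ms later, more reliable

def run_endpointers(frames, eos_prob, start_nlu, reset_nlu):
    """frames: per-frame decodes ('' for silence); eos_prob: frame -> P(end of speech)."""
    hypothesis, speculative_hyp = [], None
    for frame in frames:
        if frame["word"]:
            hypothesis.append(frame["word"])
        p = eos_prob(frame)
        if speculative_hyp is None and p >= SPECULATIVE_THRESHOLD:
            speculative_hyp = list(hypothesis)
            start_nlu(speculative_hyp)           # initiate downstream processing early
        if p >= FINAL_THRESHOLD:
            if speculative_hyp is not None and hypothesis != speculative_hyp:
                reset_nlu(hypothesis)            # speculation cut the user off; redo
            return hypothesis
    return hypothesis

# Toy stream in which the user pauses mid-utterance ("play ... the beatles").
frames = [
    {"word": "play",    "eos": 0.10},
    {"word": "",        "eos": 0.85},  # pause: speculative end-pointer fires
    {"word": "the",     "eos": 0.20},
    {"word": "beatles", "eos": 0.30},
    {"word": "",        "eos": 0.97},  # final end-pointer fires with the full hypothesis
]
run_endpointers(frames, lambda f: f["eos"],
                start_nlu=lambda h: print("NLU started on:", h),
                reset_nlu=lambda h: print("NLU reset with:", h))
```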

Another aspect of ASR that had to move on-device is context awareness. When computing the probabilities in a lattice, the ASR model should, for instance, give added weight to otherwise uncommon names that happen to be in the customer’s address book or the names the customer has assigned to household devices.

[Figure: A diagram of the on-device ASR network, with a closeup of the biasing mechanism that allows the network to ingest dynamic content. (Based on figures in "Context-aware Transformer transducer for speech recognition")]

[Figure: An attention map indicating that the trained network attends to the correct entry in a list of Alexa-linked home appliances. (From "Context-aware Transformer transducer for speech recognition")]

Context awareness can’t wait for the cloud because the lattice, though it encodes multiple hypotheses, doesn’t come close to encoding all possible hypotheses. When constructing the lattice, the ASR system has to prune a lot of low-probability hypotheses. If context awareness isn’t built into the on-device model, names of contacts or linked skills might end up getting pruned.

Initially, we used a technique called shallow fusion to add context and personalize content on-device. When the system is building the lattice, it boosts the probabilities of contextually relevant words such as contact or appliance names.
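
In essence, shallow fusion adds a heuristic bonus to the decoding score of any word that appears in the customer's contextual catalogs. A minimal sketch, with an illustrative boost value:

```python
import math

CONTEXT_BOOST = 2.0  # illustrative multiplicative boost, not a production value

def shallow_fusion_score(word, asr_log_prob, context_words):
    """Boost contextually relevant words (contact or appliance names) during decoding."""
    bonus = math.log(CONTEXT_BOOST) if word in context_words else 0.0
    return asr_log_prob + bonus

contacts = {"anushka", "thiago"}
# Without the boost, the rare contact name would lose to an acoustically similar common word.
for word, lp in [("anushka", math.log(0.02)), ("a", math.log(0.03))]:
    print(f"{word}: {shallow_fusion_score(word, lp, contacts):.3f}")
```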

The probability boosts are heuristic, however — they’re not learned jointly with the core ASR model. To achieve even better accuracy on personalized and long-tail content, we have developed a multihead attention-based context-biasing mechanism that is jointly trained with the rest of the ASR subnetworks.
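
A minimal sketch of what such an attention-based biasing layer can look like, in PyTorch. The dimensions and the fusion scheme are illustrative, not the production architecture:

```python
import torch
import torch.nn as nn

class ContextBiasing(nn.Module):
    """Sketch of multihead-attention context biasing: ASR encoder states query
    embeddings of contextual entries (contact names, device names), and the
    attended context vector is fused back into the encoder output."""

    def __init__(self, dim=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, encoder_states, context_embeddings):
        # encoder_states: (batch, time, dim); context_embeddings: (batch, entries, dim)
        ctx, _ = self.attn(query=encoder_states,
                           key=context_embeddings,
                           value=context_embeddings)
        return self.fuse(torch.cat([encoder_states, ctx], dim=-1))

biasing = ContextBiasing()
audio = torch.randn(1, 50, 256)    # 50 encoder frames
entries = torch.randn(1, 8, 256)   # 8 embedded contact/appliance entries
out = biasing(audio, entries)      # (1, 50, 256)
```

Because the biasing layer is trained jointly with the rest of the network, the boost a contextual entry receives is learned rather than hand-tuned.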

Model training

On-device ASR required us to build a new model from the ground up, an end-to-end recurrent neural network-transducer (RNN-T) model that directly maps the input speech signal to an output sequence of words. Using a single neural network results in a significantly reduced memory footprint. But we had to develop new techniques, both for inference and for training, to achieve the degree of accuracy and compression that would let this technology handle utterances on-device.
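
Schematically, an RNN-T combines an audio encoder, a label-prediction network, and a joint network in a single model. The toy skeleton below, in PyTorch, shows that structure; all sizes are illustrative:

```python
import torch
import torch.nn as nn

class TinyRNNT(nn.Module):
    """A minimal RNN-T skeleton: audio encoder + prediction network + joint network."""

    def __init__(self, n_mels=80, vocab=128, dim=256):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, dim, num_layers=2, batch_first=True)
        self.predictor = nn.LSTM(dim, dim, batch_first=True)
        self.embed = nn.Embedding(vocab, dim)
        self.joint = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh(), nn.Linear(dim, vocab))

    def forward(self, audio, labels):
        # audio: (batch, time, n_mels); labels: (batch, length) of previous tokens
        enc, _ = self.encoder(audio)                   # (B, T, dim)
        pred, _ = self.predictor(self.embed(labels))   # (B, U, dim)
        # Combine every audio frame with every label position.
        enc = enc.unsqueeze(2).expand(-1, -1, pred.size(1), -1)
        pred = pred.unsqueeze(1).expand(-1, enc.size(1), -1, -1)
        return self.joint(torch.cat([enc, pred], dim=-1))  # (B, T, U, vocab)

model = TinyRNNT()
logits = model(torch.randn(2, 100, 80), torch.zeros(2, 10, dtype=torch.long))
```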

Previously on Amazon Science, we’ve discussed some of the techniques we used to increase the accuracy of small-footprint end-to-end ASR models. With teacher-student training, for instance, we teach a small, lean model to match the outputs of a more-powerful but slower model. We developed a training methodology that made it possible to do teacher-student training efficiently with a million hours of unannotated speech.
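
In its standard form, teacher-student training minimizes the divergence between the student's and the teacher's output distributions, which is what lets unannotated speech serve as training data: the teacher supplies the targets. A minimal sketch:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Teacher-student loss: train the small student model to match the softened
    output distribution of the large teacher. The temperature is illustrative."""
    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between the teacher's and student's distributions.
    return F.kl_div(s_log_probs, t_probs, reduction="batchmean") * temperature ** 2

student = torch.randn(4, 128, requires_grad=True)
teacher = torch.randn(4, 128)  # teacher outputs stand in for human labels
loss = distillation_loss(student, teacher)
loss.backward()
```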

[Figure: During the training of a context-aware ASR model, a long-short-term-memory (LSTM) encoder encodes both unlabeled and labeled segments of the audio stream, so the model can use the entire input audio to improve ASR accuracy. (From "Improving RNN-T ASR accuracy using context audio")]

To further boost the accuracy of on-device RNN-T ASR, we developed techniques that allow the neural network to learn and exploit audio context within a stream. For example, for a stream comprising two utterances, “Alexa” and “Play a song”, the audio context from the keyword segment (“Alexa”) helps the model focus on the foreground speech and speaker. Separately, we implemented a novel discriminative-loss and training algorithm that aims at directly minimizing the word error rate (WER) of RNN-T ASR.
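
The general idea of such discriminative training is to renormalize the model's scores over an n-best list of hypotheses and minimize the expected number of word errors. The sketch below conveys that idea only; the production formulation and training algorithm differ in their details:

```python
import torch

def expected_wer_loss(hyp_log_probs, hyp_word_errors):
    """Sketch of a WER-oriented discriminative loss: renormalize scores over an
    n-best list and minimize the expected number of word errors."""
    probs = torch.softmax(hyp_log_probs, dim=-1)       # distribution over the n-best list
    errors = hyp_word_errors - hyp_word_errors.mean()  # mean baseline for variance reduction
    return (probs * errors).sum()

log_probs = torch.tensor([-1.2, -1.5, -2.0], requires_grad=True)  # 3 hypotheses
word_errors = torch.tensor([0.0, 1.0, 3.0])  # edit distance to the reference transcript
loss = expected_wer_loss(log_probs, word_errors)
loss.backward()  # pushes probability mass toward low-error hypotheses
```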

On top of these innovations, however, we still had to develop some new compression techniques to get the RNN-T to run efficiently on-device. A neural network consists of simple processing nodes, each of which is connected to several others. The connections between nodes have associated weights, which determine how much one node’s output contributes to the computation performed by the next node.

One way to shrink a neural network’s memory footprint is to quantize its weights — to divide the total range of weights into a small set of intervals and use a single value to represent all the weights in each interval. So, for instance, the weights 0.70, 0.76, and 0.79 might all get quantized to the single value 0.75. Specifying an interval requires fewer bits than specifying several different floating-point values.
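
Here is a minimal sketch of that idea in NumPy; the two-bit setting is chosen only to make the intervals easy to see:

```python
import numpy as np

def quantize(weights, n_bits=8):
    """Uniform quantization: map each float weight to the midpoint of its interval.
    With 8 bits there are 256 intervals; each weight is stored as a small integer."""
    lo, hi = weights.min(), weights.max()
    n_levels = 2 ** n_bits
    step = (hi - lo) / n_levels
    indices = np.clip(((weights - lo) / step).astype(np.int32), 0, n_levels - 1)
    midpoints = lo + (indices + 0.5) * step
    return indices, midpoints  # store the indices; reconstruct midpoints on device

# As in the text: nearby weights collapse to a single representative value.
w = np.array([0.70, 0.76, 0.79, -0.31])
idx, deq = quantize(w, n_bits=2)  # 2 bits -> 4 intervals, for illustration
print(idx, np.round(deq, 3))      # 0.70, 0.76, 0.79 all land in the same interval
```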

If quantization is done after a network has been trained, performance can suffer. We developed a method of quantization-aware training that imposes a probability distribution on the network weights during training, so that they can be easily quantized with little effect on performance. Unlike previous quantization-aware training methods, which mostly take quantization into account in the forward pass, ours accounts for quantization in the backward direction, during weight updates, through network loss regularization. And it does that efficiently.
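
For orientation, common quantization-aware recipes fake-quantize the weights in the forward pass and let gradients flow through unchanged (the straight-through estimator). The sketch below shows that recipe plus a simple regularizer pulling weights toward their quantization grid, as a rough stand-in for the backward-pass treatment described above; it is not Amazon's actual method:

```python
import torch

def fake_quantize(w, n_bits=8):
    """Quantize in the forward pass but pass gradients through unchanged
    (the straight-through estimator) -- a common QAT building block."""
    lo, hi = w.min().detach(), w.max().detach()
    step = (hi - lo) / (2 ** n_bits)
    w_q = lo + (torch.clamp((w - lo) / step, 0, 2 ** n_bits - 1).floor() + 0.5) * step
    return w + (w_q - w).detach()  # value of w_q, gradient of w

def grid_penalty(w, n_bits=8):
    """Regularizer pulling weights toward their quantized values, so the final
    snap to the grid costs little accuracy. A stand-in only."""
    return ((w - fake_quantize(w, n_bits).detach()) ** 2).mean()

w = torch.randn(64, 64, requires_grad=True)
task_loss = fake_quantize(w).pow(2).mean()   # stand-in for the real network loss
loss = task_loss + 0.1 * grid_penalty(w)
loss.backward()
```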

A way to make neural networks run more efficiently — also a vital concern on resource-constrained devices — is to reduce low weights to zero. Computations involving zero weights can be discarded, reducing the computational burden.

[Figure: Over successive training epochs, sparsification gradually drops low weights in a weight matrix.]

But again, doing that reduction after the network is trained can compromise performance. We developed a sparsification method that enables the gradual reduction of low-value weights during training, so the network learns a model amenable to weight pruning.

Neural networks are typically trained on multiple passes through the same set of training data, or epochs. During each epoch, we force the network weights to diverge more and more, so that at the end of the final epoch, a fixed number of weights — say, half — are effectively zero. They can be safely discarded.
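
A sketch of such a gradual schedule: at each epoch, a slightly larger fraction of the lowest-magnitude weights is masked to zero before training continues. The schedule and sizes are illustrative:

```python
import torch

def prune_mask(weights, sparsity):
    """Return a mask that zeroes out the lowest-magnitude fraction of weights."""
    k = int(sparsity * weights.numel())
    if k == 0:
        return torch.ones_like(weights, dtype=torch.bool)
    threshold = weights.abs().flatten().kthvalue(k).values
    return weights.abs() > threshold

w = torch.randn(512, 512)
final_sparsity, epochs = 0.5, 10
for epoch in range(1, epochs + 1):
    sparsity = final_sparsity * epoch / epochs  # ramp the pruned fraction up gradually
    w = w * prune_mask(w, sparsity)
    # ... a normal training step on the masked weights would go here ...
print(f"zero weights: {(w == 0).float().mean():.0%}")  # ~50% after the final epoch
```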

[Animation: A demonstration of the branching encoder network.]

To improve on-device efficiency, we also developed a branching encoder network that uses two different neural networks to convert speech inputs into numeric representations suitable for speech classification. One network is complex, one simple, and the ASR model decides on the fly whether it can get away with passing an input frame to the simple model, saving computational cost and time. We described this work in more detail in an earlier Amazon Science blog post.
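
A sketch of the idea in PyTorch: a tiny arbitrator network routes each frame to either a cheap or an expensive encoder branch. The architecture shown is illustrative, not the published model:

```python
import torch
import torch.nn as nn

class BranchingEncoder(nn.Module):
    """Sketch of a branching encoder: a cheap arbitrator decides per frame whether
    the simple branch suffices or the complex branch is needed."""

    def __init__(self, n_mels=80, dim=256):
        super().__init__()
        self.arbitrator = nn.Linear(n_mels, 1)   # tiny routing network
        self.simple = nn.Linear(n_mels, dim)     # cheap branch
        self.complex = nn.Sequential(            # expensive branch
            nn.Linear(n_mels, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, frames):
        # frames: (batch, time, n_mels)
        route = torch.sigmoid(self.arbitrator(frames))  # per-frame routing decision
        use_complex = (route > 0.5).float()
        # At inference, only the chosen branch would actually run for each frame;
        # here both run, for simplicity of the sketch.
        return use_complex * self.complex(frames) + (1 - use_complex) * self.simple(frames)

enc = BranchingEncoder()
out = enc(torch.randn(1, 100, 80))  # (1, 100, 256)
```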

Hardware-software codesign

Quantization and sparsification make no difference to performance if the underlying hardware can’t take advantage of them. Another key to getting ASR to run on-device was the design of Amazon’s AZ family of neural edge processors, which are optimized for our specific approach to compression.

For one thing, where a typical processor might represent data using 16 or 32 bits, for certain core operations, the AZ processors accelerate computation by using an 8-bit or even lower-bit representation, because that’s all we need to handle quantized values.

The weights of a neural network are typically represented as a matrix — a big grid of numbers. A matrix in which half the values are zeroes takes up just as much space as one that’s all nonzero.

On computer chips, transferring data tends to be much more time consuming than executing computations. So when we load our matrix into memory, we use a compression scheme that takes advantage of low-bit quantization and zero values. The circuitry for decoding the compressed representation is built into the chip.

In the neural processor’s memory, the matrix is reconstituted: the zeroes are filled back in. But the processor’s circuitry is designed to recognize zero values and discard computations involving them. So the time savings from sparsification are realized in the hardware itself.
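
As an illustration, here is one simple scheme along these lines: store a one-bit presence mask plus the packed nonzero values, reconstitute the matrix on load, and skip multiplications wherever the mask is zero. The actual on-chip format differs; this is only a conceptual sketch:

```python
import numpy as np

def compress(matrix):
    """Store a presence bitmask plus the packed nonzero (quantized) values."""
    mask = matrix != 0
    return mask, matrix[mask]  # mask: 1 bit per entry; values: nonzeros in row-major order

def decompress(mask, values):
    """Reconstitute the full matrix, filling the zeroes back in."""
    out = np.zeros(mask.shape, dtype=values.dtype)
    out[mask] = values
    return out

def sparse_matvec(mask, values, x):
    """Matrix-vector product that skips computations involving zero weights,
    mirroring what zero-aware hardware does."""
    out = np.zeros(mask.shape[0], dtype=np.float32)
    flat_idx = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j]:
                out[i] += values[flat_idx] * x[j]
                flat_idx += 1
    return out

w = np.array([[0.75, 0.0], [0.0, -0.5]], dtype=np.float32)
mask, vals = compress(w)
x = np.array([1.0, 2.0], dtype=np.float32)
print(sparse_matvec(mask, vals, x), decompress(mask, vals) @ x)  # identical results
```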

Moving speech recognition on-device entails a number of innovations in other areas, such as reduction in the bandwidth required for model updates and compression of NLU models, to ensure basic functionality on devices with intermittent Internet connectivity. And we’re also hard at work on multilingual on-device ASR models for dynamic language switching, or automatically recognizing which of two languages a customer is speaking and responding in kind.

The launch of on-device speech processing is a huge step in bringing the benefits of “processing on the edge” to our customers, and we will continue to invent on their behalf in this area.
