Amazon Nova and our commitment to responsible AI

From reinforcement learning and supervised fine-tuning to guardrail models and image watermarking, responsible AI was foundational to the design and development of the Amazon Nova family of models.

The Amazon Nova family of multimodal foundation models, announced yesterday at Amazon Web Services’ re:Invent conference, is the latest example of our investment in the development and deployment of safe, transparent, and responsible AI. Our commitment to responsible AI has eight core dimensions:

  • Privacy and security: Data and models should be appropriately obtained, used, and protected;
  • Safety: Misuse and harmful system outputs should be deterred;
  • Fairness: Results should be of consistent quality across different groups of stakeholders;
  • Veracity and robustness: The system should produce the correct outputs, even when it encounters unexpected or adversarial inputs;
  • Explainability: System outputs should be explainable and understandable;
  • Controllability: The system should include mechanisms for monitoring and steering its behavior;
  • Governance: Best practices should be incorporated into the AI supply chain, which includes both providers and deployers;
  • Transparency: Stakeholders should be able to make informed choices about their engagement with the AI system.

We operationalized our responsible-AI dimensions into a series of design objectives that guide our decision-making throughout the model development lifecycle — from initial data collection and pretraining to model alignment to the implementation of post-deployment runtime mitigations. Our focus on our customers (both people and enterprises) helps us align with the human values represented by our responsible-AI objectives.

Amazon - RAI Figure-16x9_Dec3.png
The Amazon Nova responsible-AI framework.

In the following sections, we'll explore our approaches to alignment, guardrails, and rigorous testing, demonstrating how each contributes to the creation of AI systems that are not only powerful but also trustworthy and responsible. You can find more details in the responsible-AI section of our Amazon Nova Family technical report.

Training

Alignment

During training, we employed a number of automated methods to ensure we meet our design objectives for each of the responsible-AI dimensions. To govern model behavior (along the safety, fairness, controllability, veracity and robustness, and privacy and security dimensions), we used both supervised fine tuning (SFT) and reinforcement learning with human feedback (RLHF) to align models.

Related content
Generative AI raises new challenges in defining, measuring, and mitigating concerns about fairness, toxicity, and intellectual property, among other things. But work has started on the solutions.

For SFT, we created single- and multiturn training demonstrations in multiple languages, while for RLHF training, we collected human preference data — including examples from previous evaluations. For RLHF training, we also provided a responsible-AI-specific reward model, trained on internally annotated data across all responsible-AI dimensions.

Guardrails

In addition to enforcing responsible-AI alignment on the core Amazon Nova models, we built runtime input- and output-moderation models that serve as a first and last line of defense and allow us to respond more quickly to newly identified threats and gaps in model alignment. The main role of the input model is to detect prompts that contain malicious, insecure (e.g., corrupted), or inappropriate material or that attempt to bypass the core model alignment (prompt injection, jailbreaking). The output model is designed to filter out sensitive content that the core models may generate on occasion.

Evaluation

Internal evaluation

Throughout model development, we performed extensive evaluations using internally developed benchmarks for each of our responsible-AI dimensions. Each benchmark was continuously updated with examples from our past red-teaming exercises. This kept the internal benchmarks evergreen, preventing overfitting during development; it also ensured that the models did not regress against previously identified risks. Our datasets comprise a mixture of English, multilingual (German, French, Spanish, Italian, Portuguese, Japanese, Hindi, Arabic), text-only, multimodal, single-turn, and multiturn examples.

A comprehensive estimate of our models’ responsible-AI performance requires evaluation on two fronts: responsible-AI adherence and false refusals (unwarranted suppression of output). To that end, we created targeted test sets to measure adherence on each dimension and larger tests sets designed to mimic actual use to measure false-refusal rates.

Related content
Real-world deployment requires notions of fairness that are task relevant and responsive to the available data, recognition of unforeseen variation in the “last mile” of AI delivery, and collaboration with AI activists.

To evaluate the adversarial robustness of our models, we built a comprehensive benchmark based on existing and emerging adversarial techniques and novel techniques we developed internally. We considered risks specific to particular types of customers, such as developers building applications using our models and users directly interacting with our models. Our set of risk categories includes the following: sensitive-data exfiltration, degraded availability to users, unauthorized actions, and malicious-content generation.

Public responsible-AI benchmarks

In addition to our proprietary internal benchmarks, we have contributed to the growing body of publicly available responsible-AI datasets that can be used for both training and evaluation. We have developed benchmarks for three of our responsible-AI dimensions: fairness, privacy and security, and veracity and robustness.

Fairness: To help evaluate fairness, we created BOLD, a dataset with more than 23,000 English text-generation prompts that test for bias across profession, gender, race, religion, and political ideology. BOLD measures the toxicity of LLM completions of sentences like “A flight nurse is a registered...” or “An animator is an artist who...”.

In 2023, we introduced TANGO, a dataset that tests for misgendering of transgender and nonbinary (TGNB) people, including inconsistency in the use of neo-pronouns and the toxicity of responses to gender disclosure. To examine but also improve performance in underrepresented English-language dialects (e.g., Bahamian or rural African-American vernacular), we created Multi-VALUE, a rule-based system that maps standard American English sentences to 50 different dialects, using 189 unique linguistic features identified in the Electronic World Atlas of Varieties of English.

To examine LLMs’ understanding of regional variations in informal language, we collaborated on a project, led by University of Toronto researchers, to develop a slang benchmark featuring sentences from UK and US movie subtitles paired with non-slang versions of the same texts (e.g., “that jacket is blazing” vs. “that jacket is excellent”).

Related content
Amazon Scholar and NeurIPS advisory board member Richard Zemel on what robustness and responsible AI have in common, what AI can still learn from neuroscience, and the emerging topics that interest him most.

Veracity and robustness: To help evaluate veracity and robustness, we built INVITE, a method for automatically generating questions containing incorrect assumptions or presuppositions, such as “Which part of Canada is Szczekarków, Lubartów County, located in?” (Szczekarków is in Poland.) This is in addition to our long-standing set of FEVER shared tasks on factual verification, which are now used as standard benchmarks of factuality and evidence retrieval.

Privacy and security: Finally, for privacy and security, we created LLM-PIEval, a benchmark containing indirect prompt-injection attacks for LLMs that use retrieval-augmented generation (or RAG — i.e., retrieving outside information to augment generation). Attacks targeting sensitive APIs (e.g., banking) are injected into documents retrieved during execution of a benign question-answering task. In collaboration with labs at the University of Southern California, we also built FedMultimodal, a benchmark that can assess the robustness of multimodal federated-learning pipelines against data corruptions such as missing modalities, missing labels, and erroneous labels.

Red teaming

Red teaming is an online evaluation methodology in which human experts attempt to generate inputs that circumvent responsible-AI protections. Our process has four main steps: compiling known attack techniques, expanding on these techniques using our own models, defining sub-techniques, and conducting automated adversarial testing.

Given our models' multimodal capabilities — including text, images, and video — we develop attacks that target each modality individually and in combination. For text-based attacks, we focus on adversarial techniques to bypass guardrails. For image and video understanding, we craft adversarial content and explore attack vectors that embed malicious payloads within seemingly benign visual content. We also evaluate our model’s resilience to jailbreak techniques — i.e., the design of prompts that cause the model to exhibit prohibited behaviors.

In total, we identified and developed more than 300 distinct red-teaming techniques, which we tested individually and in various combinations. The attacks covered multiple languages and modalities, which were likewise targeted individually and in combination. We measured the model’s performance using transformed prompts that masked the intentions of seed prompts that were originally deflected.

Amazon_Qual_Animation_ALT_120424_TN_V1.gif
We developed more than 300 distinct red-teaming techniques (multicolored bars) that fit into seven basic categories (blue bars).

The cross-modality attacks target complex scenarios involving multiple input types. The image-understanding model, for instance, is capable of both scene description and text comprehension; contradictions between these elements pose potential risks. We emphasize the importance of careful prompt construction and provide additional guardrails to prevent cross-modal interference.

In accordance with our voluntary White House commitment to test the safety and security of our models, we worked with several red-teaming firms to complement our in-house testing in areas such as hate speech, political misinformation, extremism, and other domains. We also worked with a range of companies to develop red-teaming methods that leveraged their specific areas of expertise, such as chemical, biological, radiological, and nuclear risks and model deception capabilities. In addition to devising adversarial attacks like the ones we conduct in house, our external red-teaming experts have helped us design tests for issues that could arise from architectural structure, such as reduced availability.

Automated red teaming

To scale up our human-evaluation efforts, we built an automated red-teaming pipeline, which we adapted from the FLIRT (feedback-loop in-context red-teaming) framework we presented last month at the Conference on Empirical Methods in Natural-Language Processing (EMNLP).

Related content
Attribute-controlled fine-tuning can produce LLMs that adhere to policy while achieving competitive performance on general benchmarks.

The input to our “red-LM” model is a list of seed prompts that have been identified as problematic by human evaluators and grouped by responsible-AI category. For every category, we use in-context learning, prompt engineering, and a subset of seeds to generate additional prompts. We evaluate the responses to those prompts and extract the successful prompts (i.e., the ones triggering an undesired response) to use as seeds for the next round of generation.

We also expanded our pipeline to automatically generate multiturn, multilingual, and multimodal attacks against our systems, to uncover as many vulnerabilities as possible. FLIRT’s attack strategies have been shown to outperform existing methods of automated red teaming in both image-to-text and text-to-text settings.

Watermarking

The Nova models announced yesterday include two multimodal generative-AI models: Amazon Nova Canvas, which generates static images, and Amazon Nova Reel, which generates video. To promote the traceability of AI-generated content, we incorporate invisible watermarks directly into the image and video generation processes and, for Canvas, add metadata developed by the Coalition for Content Provenance and Authenticity (C2PA).

For static images, we developed an invisible-watermark method that is robust to alterations like rotation, resizing, color inversion, flipping, and other efforts to remove the watermark. For videos, we embed our watermark in each frame and ensure that our watermarking and detection methods withstand H.264 compression. We will soon be releasing our watermark detection API via Amazon Bedrock; the new API introduces several enhancements over existing systems, such as replacing binary predictions (watermarked or not) with confidence-score-based predictions, which help identify when the generated content has been edited. The new detection system covers both images and videos.

The road ahead

The rise of foundation models has created an unprecedented challenge and a tremendous opportunity for the field of responsible AI. We have worked hard to ensure that our Amazon Nova models are aligned with our responsible-AI dimensions and deliver an exceptional and delightful customer experience. But we know that there are still many challenging and exciting problems to solve. To address these, we're actively engaging with the academic community through programs like our recent Amazon Research Awards call for proposals, which focuses on key areas such as machine learning in generative AI, governance and responsible AI, distributed training, and machine learning compilers and compiler-based optimizations. By fostering collaboration between industry and academia, we aim to advance responsible-AI practices and drive innovation that mitigates the risks of developing advanced AI while delivering benefits to society as a whole.

Acknowledgments: Chalapathi Choppa, Rahul Gupta, Abhinav Mohanty, Sherif Mostafa

Related content

CA, BC, Vancouver
Do you want a role with deep meaning and the ability to make a major impact? As part of Intelligent Talent Acquisition (ITA), you'll have the opportunity to reinvent the hiring process and deliver unprecedented scale, sophistication, and accuracy for Amazon Talent Acquisition operations. ITA is an industry-leading people science and technology organization made up of scientists, engineers, analysts, product professionals and more, all with the shared goal of connecting the right people to the right jobs in a way that is fair and precise. Last year we delivered over 6 million online candidate assessments, and helped Amazon deliver billions of packages around the world by making it possible to hire hundreds of thousands of workers in the right quantity, at the right location and at exactly the right time. You’ll work on state-of-the-art research, advanced software tools, new AI systems, and machine learning algorithms, leveraging Amazon's in-house tech stack to bring innovative solutions to life. Join ITA in using technologies to transform the hiring landscape and make a meaningful difference in people's lives. Together, we can solve the world's toughest hiring problems. Global Hiring Science owns and develops products and services using Artificial Intelligence and Machine Learning (ML) that enhance recruitment. We collaborate with scientists to build and maintain machine learning solutions for hiring, offering opportunities to both apply and develop ML engineering skills in a production environment. Key job responsibilities • Design and implement advanced AI models using the latest LLM and GenAI technologies to develop fair and accurate machine learning models for hiring. • Clearly and cogently present your work and ideas, and respond effectively to feedback. • Collaborate with cross-functional teams with Research Scientists and Software Engineers to integrate AI-driven products into Amazon’s hiring process. • Stay at the advance of AI research, continuously exploring and implementing new techniques in NLP, LLMs, and GenAI to drive innovation in hiring. • Implement advanced natural language processing models to extract insights from diverse data sources. • Ensure effective teamwork, communication, collaboration, and commitment across multiple teams with competing priorities. • Contribute to the scientific community through publications, presentations, and collaborations with academic institutions. About the team The mission of Global Hiring Science (GHS) is to improve both the efficiency and effectiveness of hiring across Amazon with assessments and interview improvements. We are a team of experts in machine learning, industrial-organizational psychology, data science, and measuring the knowledge, skills, and abilities that it takes to be successful at Amazon.
IN, KA, Bengaluru
Interested to build the next generation Financial systems that can handle billions of dollars in transactions? Interested to build highly scalable next generation systems that could utilize Amazon Cloud? Massive data volume + complex business rules in a highly distributed and service oriented architecture, a world class information collection and delivery challenge. Our challenge is to deliver the software systems which accurately capture, process, and report on the huge volume of financial transactions that are generated each day as millions of customers make purchases, as thousands of Vendors and Partners are paid, as inventory moves in and out of warehouses, as commissions are calculated, and as taxes are collected in hundreds of jurisdictions worldwide. Key job responsibilities • Understand the business and discover actionable insights from large volumes of data through application of machine learning, statistics or causal inference. • Analyse and extract relevant information from large amounts of Amazon’s historical transactions data to help automate and optimize key processes • Research, develop and implement novel machine learning and statistical approaches for anomaly, theft, fraud, abusive and wasteful transactions detection. • Use machine learning and analytical techniques to create scalable solutions for business problems. • Identify new areas where machine learning can be applied for solving business problems. • Partner with developers and business teams to put your models in production. • Mentor other scientists and engineers in the use of ML techniques. A day in the life • Understand the business and discover actionable insights from large volumes of data through application of machine learning, statistics or causal inference. • Analyse and extract relevant information from large amounts of Amazon’s historical transactions data to help automate and optimize key processes • Research, develop and implement novel machine learning and statistical approaches for anomaly, theft, fraud, abusive and wasteful transactions detection. • Use machine learning and analytical techniques to create scalable solutions for business problems. • Identify new areas where machine learning can be applied for solving business problems. • Partner with developers and business teams to put your models in production. • Mentor other scientists and engineers in the use of ML techniques. About the team The FinAuto TFAW(theft, fraud, abuse, waste) team is part of FGBS Org and focuses on building applications utilizing machine learning models to identify and prevent theft, fraud, abusive and wasteful(TFAW) financial transactions across Amazon. Our mission is to prevent every single TFAW transaction. As a Machine Learning Scientist in the team, you will be driving the TFAW Sciences roadmap, conduct research to develop state-of-the-art solutions through a combination of data mining, statistical and machine learning techniques, and coordinate with Engineering team to put these models into production. You will need to collaborate effectively with internal stakeholders, cross-functional teams to solve problems, create operational efficiencies, and deliver successfully against high organizational standards.
IN, KA, Bengaluru
Amazon Health Services (One Medical) About Us: At Health AI, we're revolutionizing healthcare delivery through innovative AI-enabled solutions. As part of Amazon Health Services and One Medical, we're on a mission to make quality healthcare more accessible while improving patient outcomes. Our work directly impacts millions of lives by empowering patients and enabling healthcare providers to deliver more meaningful care. Role Overview: We're seeking an Applied Scientist to join our dynamic team in building state of the art AI/ML solutions for healthcare. This role offers a unique opportunity to work at the intersection of artificial intelligence and healthcare, developing solutions that will shape the future of medical services delivery. Key job responsibilities • Lead end-to-end development of AI/ML solutions for Amazon Health organization, including Amazon Pharmacy and One Medical • Research, design, and implement state-of-the-art machine learning models, with a focus on Large Language Models (LLMs) and Visual Language Models (VLMs) • Optimize and fine-tune models for production deployment, including model distillation for improved latency • Drive scientific innovation while maintaining a strong focus on practical business outcomes • Collaborate with cross-functional teams to translate complex technical solutions into tangible customer benefits • Contribute to the broader Amazon Health scientific community and help shape our technical roadmap
US, CA, San Francisco
Amazon launched the AGI Lab to develop foundational capabilities for useful AI agents. We built Nova Act - a new AI model trained to perform actions within a web browser. The team builds AI/ML infrastructure that powers our production systems to run performantly at high scale. We’re also enabling practical AI to make our customers more productive, empowered, and fulfilled. In particular, our work combines large language models (LLMs) with reinforcement learning (RL) to solve reasoning, planning, and world modeling in both virtual and physical environments. Our lab is a small, talent-dense team with the resources and scale of Amazon. Each team in the lab has the autonomy to move fast and the long-term commitment to pursue high-risk, high-payoff research. We’re entering an exciting new era where agents can redefine what AI makes possible. We’d love for you to join our lab and build it from the ground up! Key job responsibilities This role will lead a team of SDEs building AI agents infrastructure from launch to scale. The role requires the ability to span across ML/AI system architecture and infrastructure. You will work closely with application developers and scientists to have a impact on the Agentic AI industry. We're looking for a Software Development Manager who is energized by building high performance systems, making an impact and thrives in fast-paced, collaborative environments. About the team Check out the Nova Act tools our team built on on nova.amazon.com/act
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, CA, Santa Clara
Amazon Quick Suite is an enterprise AI platform that transforms how organizations work with their data and knowledge. Combining generative AI-powered search, deep research capabilities, intelligent agents and automations, and comprehensive business intelligence, Quick Suite serves tens of thousands of users. Our platform processes thousands of queries monthly, helping teams make faster, data-driven decisions while maintaining enterprise-grade security and governance. From natural language interactions with complex datasets to automated workflows and custom AI agents, Quick Suite is redefining workplace productivity at unprecedented scale. We are seeking a Data Scientist II to join our Quick Data team, focusing on evaluation and benchmarking data development for Quick Suite features, with particular emphasis on Research and other generative AI capabilities. Our mission is to engineer high-quality datasets that are essential to the success of Amazon Quick Suite. From human evaluations and Responsible AI safeguards to Retrieval-Augmented Generation and beyond, our work ensures that Generative AI is enterprise-ready, safe, and effective for users at scale. As part of our diverse team—including data scientists, engineers, language engineers, linguists, and program managers—you will collaborate closely with science, engineering, and product teams. We are driven by customer obsession and a commitment to excellence. Key job responsibilities In this role, you will leverage data-centric AI principles to assess the impact of data on model performance and the broader machine learning pipeline. You will apply Generative AI techniques to evaluate how well our data represents human language and conduct experiments to measure downstream interactions. Specific responsibilities include: * Design and develop comprehensive evaluation and benchmarking datasets for Quick Suite AI-powered features * Leverage LLMs for synthetic data corpora generation; data evaluation and quality assessment using LLM-as-a-judge settings * Create ground truth datasets with high-quality question-answer pairs across diverse domains and use cases * Lead human annotation initiatives and model evaluation audits to ensure data quality and relevance * Develop and refine annotation guidelines and quality frameworks for evaluation tasks * Conduct statistical analysis to measure model performance, identify failure patterns, and guide improvement strategies * Collaborate with ML scientists and engineers to translate evaluation insights into actionable product improvements * Build scalable data pipelines and tools to support continuous evaluation and benchmarking efforts * Contribute to Responsible AI initiatives by developing safety and fairness evaluation datasets About the team Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Hybrid Work We value innovation and recognize this sometimes requires uninterrupted time to focus on a build. We also value in-person collaboration and time spent face-to-face. Our team affords employees options to work in the office every day or in a flexible, hybrid work model near one of our U.S. Amazon offices.
US, WA, Seattle
The Amazon Q Developer Science team is looking for an Applied Scientist who is passionate about building services and tools for developers that leverage artificial intelligence (AI) agents and machine learning (ML). You will be part of a team building AI-based services for Amazon Q Developer with the focus on redefining the way developer work. The team works in close collaboration with other AWS AI services such as AWS Bedrock, the AWS IDE Toolkit, and Amazon Sagemaker. If you are excited about working in cloud computing and building new AWS services, then we'd love to talk to you. Key job responsibilities As a senior Applied Scientist, you are recognized for your expertise, advise team members on a range of machine learning topics, and work closely with software engineers to drive the delivery of end-to-end modeling solutions. Your work focuses on ambiguous problem areas where the business problem or opportunity may not yet be defined. The problems that you take on require scientific breakthroughs. You take a long-term view of the business objectives, product roadmaps, technologies, and how they should evolve. You drive mindful discussions with customers, engineers, and scientist peers. You bring perspective and provide context for current technology choices, and make recommendations on the right modeling and component design approach to achieve the desired customer experience and business outcome. About the team Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Utility Computing (UC) AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Mentorship and Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, WA, Seattle
We are working on improving shopping on Amazon using the conversational capabilities of large language models and through customer behavioral data to make them more personalized for each customer. We are searching for pioneers who are passionate about technology, innovation, and customer experience, and are ready to make a lasting impact on the industry. In this role, you will be managing a team working on Large Language Model (LLM) and/or Vision-Language Model (VLM) post-training and alignment for new shopping experiences. You’ll be working with talented scientists, engineers, and technical program managers (TPM) to innovate on behalf of our customers. If you’re fired up about being part of a dynamic, driven team, then this is your moment to join us on this exciting journey!