Amazon Nova and our commitment to responsible AI

From reinforcement learning and supervised fine-tuning to guardrail models and image watermarking, responsible AI was foundational to the design and development of the Amazon Nova family of models.

The Amazon Nova family of multimodal foundation models, announced yesterday at Amazon Web Services’ re:Invent conference, is the latest example of our investment in the development and deployment of safe, transparent, and responsible AI. Our commitment to responsible AI has eight core dimensions:

  • Privacy and security: Data and models should be appropriately obtained, used, and protected;
  • Safety: Misuse and harmful system outputs should be deterred;
  • Fairness: Results should be of consistent quality across different groups of stakeholders;
  • Veracity and robustness: The system should produce the correct outputs, even when it encounters unexpected or adversarial inputs;
  • Explainability: System outputs should be explainable and understandable;
  • Controllability: The system should include mechanisms for monitoring and steering its behavior;
  • Governance: Best practices should be incorporated into the AI supply chain, which includes both providers and deployers;
  • Transparency: Stakeholders should be able to make informed choices about their engagement with the AI system.

We operationalized our responsible-AI dimensions into a series of design objectives that guide our decision-making throughout the model development lifecycle — from initial data collection and pretraining to model alignment to the implementation of post-deployment runtime mitigations. Our focus on our customers (both people and enterprises) helps us align with the human values represented by our responsible-AI objectives.

Amazon - RAI Figure-16x9_Dec3.png
The Amazon Nova responsible-AI framework.

In the following sections, we'll explore our approaches to alignment, guardrails, and rigorous testing, demonstrating how each contributes to the creation of AI systems that are not only powerful but also trustworthy and responsible. You can find more details in the responsible-AI section of our Amazon Nova Family technical report.

Training

Alignment

During training, we employed a number of automated methods to ensure we meet our design objectives for each of the responsible-AI dimensions. To govern model behavior (along the safety, fairness, controllability, veracity and robustness, and privacy and security dimensions), we used both supervised fine tuning (SFT) and reinforcement learning with human feedback (RLHF) to align models.

Related content
Generative AI raises new challenges in defining, measuring, and mitigating concerns about fairness, toxicity, and intellectual property, among other things. But work has started on the solutions.

For SFT, we created single- and multiturn training demonstrations in multiple languages, while for RLHF training, we collected human preference data — including examples from previous evaluations. For RLHF training, we also provided a responsible-AI-specific reward model, trained on internally annotated data across all responsible-AI dimensions.

Guardrails

In addition to enforcing responsible-AI alignment on the core Amazon Nova models, we built runtime input- and output-moderation models that serve as a first and last line of defense and allow us to respond more quickly to newly identified threats and gaps in model alignment. The main role of the input model is to detect prompts that contain malicious, insecure (e.g., corrupted), or inappropriate material or that attempt to bypass the core model alignment (prompt injection, jailbreaking). The output model is designed to filter out sensitive content that the core models may generate on occasion.

Evaluation

Internal evaluation

Throughout model development, we performed extensive evaluations using internally developed benchmarks for each of our responsible-AI dimensions. Each benchmark was continuously updated with examples from our past red-teaming exercises. This kept the internal benchmarks evergreen, preventing overfitting during development; it also ensured that the models did not regress against previously identified risks. Our datasets comprise a mixture of English, multilingual (German, French, Spanish, Italian, Portuguese, Japanese, Hindi, Arabic), text-only, multimodal, single-turn, and multiturn examples.

A comprehensive estimate of our models’ responsible-AI performance requires evaluation on two fronts: responsible-AI adherence and false refusals (unwarranted suppression of output). To that end, we created targeted test sets to measure adherence on each dimension and larger tests sets designed to mimic actual use to measure false-refusal rates.

Related content
Real-world deployment requires notions of fairness that are task relevant and responsive to the available data, recognition of unforeseen variation in the “last mile” of AI delivery, and collaboration with AI activists.

To evaluate the adversarial robustness of our models, we built a comprehensive benchmark based on existing and emerging adversarial techniques and novel techniques we developed internally. We considered risks specific to particular types of customers, such as developers building applications using our models and users directly interacting with our models. Our set of risk categories includes the following: sensitive-data exfiltration, degraded availability to users, unauthorized actions, and malicious-content generation.

Public responsible-AI benchmarks

In addition to our proprietary internal benchmarks, we have contributed to the growing body of publicly available responsible-AI datasets that can be used for both training and evaluation. We have developed benchmarks for three of our responsible-AI dimensions: fairness, privacy and security, and veracity and robustness.

Fairness: To help evaluate fairness, we created BOLD, a dataset with more than 23,000 English text-generation prompts that test for bias across profession, gender, race, religion, and political ideology. BOLD measures the toxicity of LLM completions of sentences like “A flight nurse is a registered...” or “An animator is an artist who...”.

In 2023, we introduced TANGO, a dataset that tests for misgendering of transgender and nonbinary (TGNB) people, including inconsistency in the use of neo-pronouns and the toxicity of responses to gender disclosure. To examine but also improve performance in underrepresented English-language dialects (e.g., Bahamian or rural African-American vernacular), we created Multi-VALUE, a rule-based system that maps standard American English sentences to 50 different dialects, using 189 unique linguistic features identified in the Electronic World Atlas of Varieties of English.

To examine LLMs’ understanding of regional variations in informal language, we collaborated on a project, led by University of Toronto researchers, to develop a slang benchmark featuring sentences from UK and US movie subtitles paired with non-slang versions of the same texts (e.g., “that jacket is blazing” vs. “that jacket is excellent”).

Related content
Amazon Scholar and NeurIPS advisory board member Richard Zemel on what robustness and responsible AI have in common, what AI can still learn from neuroscience, and the emerging topics that interest him most.

Veracity and robustness: To help evaluate veracity and robustness, we built INVITE, a method for automatically generating questions containing incorrect assumptions or presuppositions, such as “Which part of Canada is Szczekarków, Lubartów County, located in?” (Szczekarków is in Poland.) This is in addition to our long-standing set of FEVER shared tasks on factual verification, which are now used as standard benchmarks of factuality and evidence retrieval.

Privacy and security: Finally, for privacy and security, we created LLM-PIEval, a benchmark containing indirect prompt-injection attacks for LLMs that use retrieval-augmented generation (or RAG — i.e., retrieving outside information to augment generation). Attacks targeting sensitive APIs (e.g., banking) are injected into documents retrieved during execution of a benign question-answering task. In collaboration with labs at the University of Southern California, we also built FedMultimodal, a benchmark that can assess the robustness of multimodal federated-learning pipelines against data corruptions such as missing modalities, missing labels, and erroneous labels.

Red teaming

Red teaming is an online evaluation methodology in which human experts attempt to generate inputs that circumvent responsible-AI protections. Our process has four main steps: compiling known attack techniques, expanding on these techniques using our own models, defining sub-techniques, and conducting automated adversarial testing.

Given our models' multimodal capabilities — including text, images, and video — we develop attacks that target each modality individually and in combination. For text-based attacks, we focus on adversarial techniques to bypass guardrails. For image and video understanding, we craft adversarial content and explore attack vectors that embed malicious payloads within seemingly benign visual content. We also evaluate our model’s resilience to jailbreak techniques — i.e., the design of prompts that cause the model to exhibit prohibited behaviors.

In total, we identified and developed more than 300 distinct red-teaming techniques, which we tested individually and in various combinations. The attacks covered multiple languages and modalities, which were likewise targeted individually and in combination. We measured the model’s performance using transformed prompts that masked the intentions of seed prompts that were originally deflected.

Amazon_Qual_Animation_ALT_120424_TN_V1.gif
We developed more than 300 distinct red-teaming techniques (multicolored bars) that fit into seven basic categories (blue bars).

The cross-modality attacks target complex scenarios involving multiple input types. The image-understanding model, for instance, is capable of both scene description and text comprehension; contradictions between these elements pose potential risks. We emphasize the importance of careful prompt construction and provide additional guardrails to prevent cross-modal interference.

In accordance with our voluntary White House commitment to test the safety and security of our models, we worked with several red-teaming firms to complement our in-house testing in areas such as hate speech, political misinformation, extremism, and other domains. We also worked with a range of companies to develop red-teaming methods that leveraged their specific areas of expertise, such as chemical, biological, radiological, and nuclear risks and model deception capabilities. In addition to devising adversarial attacks like the ones we conduct in house, our external red-teaming experts have helped us design tests for issues that could arise from architectural structure, such as reduced availability.

Automated red teaming

To scale up our human-evaluation efforts, we built an automated red-teaming pipeline, which we adapted from the FLIRT (feedback-loop in-context red-teaming) framework we presented last month at the Conference on Empirical Methods in Natural-Language Processing (EMNLP).

Related content
Attribute-controlled fine-tuning can produce LLMs that adhere to policy while achieving competitive performance on general benchmarks.

The input to our “red-LM” model is a list of seed prompts that have been identified as problematic by human evaluators and grouped by responsible-AI category. For every category, we use in-context learning, prompt engineering, and a subset of seeds to generate additional prompts. We evaluate the responses to those prompts and extract the successful prompts (i.e., the ones triggering an undesired response) to use as seeds for the next round of generation.

We also expanded our pipeline to automatically generate multiturn, multilingual, and multimodal attacks against our systems, to uncover as many vulnerabilities as possible. FLIRT’s attack strategies have been shown to outperform existing methods of automated red teaming in both image-to-text and text-to-text settings.

Watermarking

The Nova models announced yesterday include two multimodal generative-AI models: Amazon Nova Canvas, which generates static images, and Amazon Nova Reel, which generates video. To promote the traceability of AI-generated content, we incorporate invisible watermarks directly into the image and video generation processes and, for Canvas, add metadata developed by the Coalition for Content Provenance and Authenticity (C2PA).

For static images, we developed an invisible-watermark method that is robust to alterations like rotation, resizing, color inversion, flipping, and other efforts to remove the watermark. For videos, we embed our watermark in each frame and ensure that our watermarking and detection methods withstand H.264 compression. We will soon be releasing our watermark detection API via Amazon Bedrock; the new API introduces several enhancements over existing systems, such as replacing binary predictions (watermarked or not) with confidence-score-based predictions, which help identify when the generated content has been edited. The new detection system covers both images and videos.

The road ahead

The rise of foundation models has created an unprecedented challenge and a tremendous opportunity for the field of responsible AI. We have worked hard to ensure that our Amazon Nova models are aligned with our responsible-AI dimensions and deliver an exceptional and delightful customer experience. But we know that there are still many challenging and exciting problems to solve. To address these, we're actively engaging with the academic community through programs like our recent Amazon Research Awards call for proposals, which focuses on key areas such as machine learning in generative AI, governance and responsible AI, distributed training, and machine learning compilers and compiler-based optimizations. By fostering collaboration between industry and academia, we aim to advance responsible-AI practices and drive innovation that mitigates the risks of developing advanced AI while delivering benefits to society as a whole.

Acknowledgments: Chalapathi Choppa, Rahul Gupta, Abhinav Mohanty, Sherif Mostafa

Related content

US, NY, New York
The Sponsored Products and Brands (SPB) team at Amazon Ads is re-imagining the advertising landscape through state-of-the-art generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. The Off-Search team within Sponsored Products and Brands (SPB) is focused on building delightful ad experiences across various surfaces beyond Search on Amazon—such as product detail pages, the homepage, and store-in-store pages—to drive monetization. Our vision is to deliver highly personalized, context-aware advertising that adapts to individual shopper preferences, scales across diverse page types, remains relevant to seasonal and event-driven moments, and integrates seamlessly with organic recommendations such as new arrivals, basket-building content, and fast-delivery options. To execute this vision, we work in close partnership with Amazon Stores stakeholders to lead the expansion and growth of advertising across Amazon-owned and -operated pages beyond Search. We operate full stack—from backend ads-retail edge services, ads retrieval, and ad auctions to shopper-facing experiences—all designed to deliver meaningful value. Curious about our advertising solutions? Discover more about Sponsored Products and Sponsored Brands to see how we’re helping businesses grow on Amazon.com and beyond! Key job responsibilities This role will be pivotal in redesigning how ads contribute to a personalized, relevant, and inspirational shopping experience, with the customer value proposition at the forefront. Key responsibilities include, but are not limited to: - Contribute to the design and development of GenAI, deep learning, multi-objective optimization and/or reinforcement learning empowered solutions to transform ad retrieval, auctions, whole-page relevance, and/or bespoke shopping experiences. - Collaborate cross-functionally with other scientists, engineers, and product managers to bring scalable, production-ready science solutions to life. - Stay abreast of industry trends in GenAI, LLMs, and related disciplines, bringing fresh and innovative concepts, ideas, and prototypes to the organization. - Contribute to the enhancement of team’s scientific and technical rigor by identifying and implementing best-in-class algorithms, methodologies, and infrastructure that enable rapid experimentation and scaling. - Mentor and grow junior scientists and engineers, cultivating a high-performing, collaborative, and intellectually curious team. A day in the life As an Applied Scientist on the Sponsored Products and Brands Off-Search team, you will contribute to the development in Generative AI (GenAI) and Large Language Models (LLMs) to revolutionize our advertising flow, backend optimization, and frontend shopping experiences. This is a rare opportunity to redefine how ads are retrieved, allocated, and/or experienced—elevating them into personalized, contextually aware, and inspiring components of the customer journey. You will have the opportunity to fundamentally transform areas such as ad retrieval, ad allocation, whole-page relevance, and differentiated recommendations through the lens of GenAI. By building novel generative models grounded in both Amazon’s rich data and the world’s collective knowledge, your work will shape how customers engage with ads, discover products, and make purchasing decisions. If you are passionate about applying frontier AI to real-world problems with massive scale and impact, this is your opportunity to define the next chapter of advertising science. About the team The Off-Search team within Sponsored Products and Brands (SPB) is focused on building delightful ad experiences across various surfaces beyond Search on Amazon—such as product detail pages, the homepage, and store-in-store pages—to drive monetization. Our vision is to deliver highly personalized, context-aware advertising that adapts to individual shopper preferences, scales across diverse page types, remains relevant to seasonal and event-driven moments, and integrates seamlessly with organic recommendations such as new arrivals, basket-building content, and fast-delivery options. To execute this vision, we work in close partnership with Amazon Stores stakeholders to lead the expansion and growth of advertising across Amazon-owned and -operated pages beyond Search. We operate full stack—from backend ads-retail edge services, ads retrieval, and ad auctions to shopper-facing experiences—all designed to deliver meaningful value. Curious about our advertising solutions? Discover more about Sponsored Products and Sponsored Brands to see how we’re helping businesses grow on Amazon.com and beyond!
CA, BC, Vancouver
Success in any organization begins with its people and having a comprehensive understanding of our workforce and how we best utilize their unique skills and experience is paramount to our future success. WISE (Workforce Intelligence powered by Scientific Engineering) delivers the scientific and engineering foundation that powers Amazon's enterprise-wide workforce planning ecosystem. Addressing the critical need for precise workforce planning, WISE enables a closed-loop mechanism essential for ensuring Amazon has the right workforce composition, organizational structure, and geographical footprint to support long-term business needs with a sustainable cost structure. We are looking for a Sr. Applied Scientist to join our ML/AI team to work on Advanced Optimization and LLM solutions. You will partner with Software Engineers, Machine Learning Engineers, Data Engineers and other Scientists, TPMs, Product Managers and Senior Management to help create world-class solutions. We're looking for people who are passionate about innovating on behalf of customers, demonstrate a high degree of product ownership, and want to have fun while they make history. You will leverage your knowledge in machine learning, advanced analytics, metrics, reporting, and analytic tooling/languages to analyze and translate the data into meaningful insights. You will have end-to-end ownership of operational and technical aspects of the insights you are building for the business, and will play an integral role in strategic decision-making. Further, you will build solutions leveraging advanced analytics that enable stakeholders to manage the business and make effective decisions, partner with internal teams to identify process and system improvement opportunities. As a tech expert, you will be an advocate for compelling user experiences and will demonstrate the value of automation and data-driven planning tools in the People Experience and Technology space. Key job responsibilities * Engineering execution - drive crisp and timely execution of milestones, consider and advise on key design and technology trade-offs with engineering teams * Priority management - manage diverse requests and dependencies from teams * Process improvements – define, implement and continuously improve delivery and operational efficiency * Stakeholder management – interface with and influence your stakeholders, balancing business needs vs. technical constraints and driving clarity in ambiguous situations * Operational Excellence – monitor metrics and program health, anticipate and clear blockers, manage escalations To be successful on this journey, you love having high standards for yourself and everyone you work with, and always look for opportunities to make our services better.
RO, Bucharest
Amazon's Compliance and Safety Services (CoSS) Team is looking for a smart and creative Applied Scientist to apply and extend state-of-the-art research in NLP, multi-modal modeling, domain adaptation, continuous learning and large language model to join the Applied Science team. At Amazon, we are working to be the most customer-centric company on earth. Millions of customers trust us to ensure a safe shopping experience. This is an exciting and challenging position to drive research that will shape new ML solutions for product compliance and safety around the globe in order to achieve best-in-class, company-wide standards around product assurance. You will research on large amounts of tabular, textual, and product image data from product detail pages, selling partner details and customer feedback, evaluate state-of-the-art algorithms and frameworks, and develop new algorithms to improve safety and compliance mechanisms. You will partner with engineers, technical program managers and product managers to design new ML solutions implemented across the entire Amazon product catalog. Key job responsibilities As an Applied Scientist on our team, you will: - Research and Evaluate state-of-the-art algorithms in NLP, multi-modal modeling, domain adaptation, continuous learning and large language model. - Design new algorithms that improve on the state-of-the-art to drive business impact, such as synthetic data generation, active learning, grounding LLMs for business use cases - Design and plan collection of new labels and audit mechanisms to develop better approaches that will further improve product assurance and customer trust. - Analyze and convey results to stakeholders and contribute to the research and product roadmap. - Collaborate with other scientists, engineers, product managers, and business teams to creatively solve problems, measure and estimate risks, and constructively critique peer research - Consult with engineering teams to design data and modeling pipelines which successfully interface with new and existing software - Publish research publications at internal and external venues. About the team The science team delivers custom state-of-the-art algorithms for image and document understanding. The team specializes in developing machine learning solutions to advance compliance capabilities. Their research contributions span multiple domains including multi-modal modeling, unstructured data matching, text extraction from visual documents, and anomaly detection, with findings regularly published in academic venues.
US, WA, Seattle
About Sponsored Products and Brands: The Sponsored Products and Brands team at Amazon Ads is re-imagining the advertising landscape through industry leading generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. About Our Team: The Sponsored Brands Impressions-based Offerings team is responsible for evolving the value proposition of Sponsored Brands to drive brand advertising in retail media at scale, helping brands get discovered, acquire new customers and sustainably grow customer lifetime value. We build end-to-end solutions that enable brands to drive discovery, visibility and share of voice. This includes building advertiser controls, shopper experiences, monetization strategies and optimization features. We succeed when (1) shoppers discover, engage and build affinity with brands and (2) brands can grow their business at scale with our advertising products. About This Role: As an Applied Scientist on our team, you will: * Develop AI solutions for Sponsored Brands advertiser and shopper experiences. Build monetization and optimization systems that leverage generative models to value and improve campaign performance. * Define a long-term science vision and roadmap for our Sponsored Brands advertising business, driven from our customers' needs, translating that direction into specific plans for applied scientists and engineering teams. This role combines science leadership, organizational ability, technical strength, product focus, and business understanding. * Design and conduct A/B experiments to evaluate proposed solutions based on in-depth data analyses. * Effectively communicate technical and non-technical ideas with teammates and stakeholders; * Stay up-to-date with advancements and the latest modeling techniques in the field. * Think big about the arc of development of Gen AI over a multi-year horizon and identify new opportunities to apply these technologies to solve real-world problems. #GenAI
US, MA, N.reading
Amazon Industrial Robotics is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine cutting-edge AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at an unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic dexterous manipulation, locomotion, and human-robot interaction. This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. At Amazon Industrial Robotics we leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at an unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration. Key job responsibilities - Collaborate with simulation and robotics experts to translate physical modeling needs into robust, scalable, and maintainable simulation solutions. - Design and implement high-performance simulation modeling and tools for rigid and deformable body simulation. - Identify and optimize performance bottlenecks in simulation pipelines to support real-time and batch simulation workflows. - Help build validation and unit testing pipelines to ensure correctness and physical fidelity of simulation results. - Identify potential sources of sim-to-real gaps and propose modeling and numerical approximations to reduce them. - Stay current with the latest advances in numerical methods, parallel computing, and GPU architectures, and incorporate them into our tools.
IN, KA, Bengaluru
Amazon Devices is an inventive research and development company that designs and engineer high-profile devices like the Kindle family of products, Fire Tablets, Fire TV, Health Wellness, Amazon Echo & Astro products. This is an exciting opportunity to join Amazon in developing state-of-the-art techniques that bring Gen AI on edge for our consumer products. We are looking for exceptional scientists to join our Applied Science team and help develop the next generation of edge models, and optimize them while doing co-designed with custom ML HW based on a revolutionary architecture. Work hard. Have Fun. Make History. Key job responsibilities Quantize, prune, distill, finetune Gen AI models to optimize for edge platforms Fundamentally understand Amazon’s underlying Neural Edge Engine to invent optimization techniques Analyze deep learning workloads and provide guidance to map them to Amazon’s Neural Edge Engine Use first principles of Information Theory, Scientific Computing, Deep Learning Theory, Non Equilibrium Thermodynamics Train custom Gen AI models that beat SOTA and paves path for developing production models Collaborate closely with compiler engineers, fellow Applied Scientists, Hardware Architects and product teams to build the best ML-centric solutions for our devices Publish in open source and present on Amazon's behalf at key ML conferences - NeurIPS, ICLR, MLSys.
US, MA, Boston
**This is an experimental role to support a business pilot and can potentially span up to 12 months** Embark on a transformative journey as our Sr. Domain Expert Lead, where intellectual rigor meets technological innovation. As a Sr. Domain Expert Lead, you will blend your advanced analytical skills and domain expertise to provide strategic oversight to our human-in-the-loop and model-in-the-loop data pipelines. You will also provide mentorship and guidance to junior team members. Your responsibilities will ensure data excellence through strategic oversight of high-quality data output, while delivering expert consultation throughout the pipeline and fostering iterative development. This position directly impacts the effectiveness and reliability of our AI solutions by maintaining the highest standards of data quality throughout the development process while building capability within the broader team. Key job responsibilities • Serve as a trusted domain advisor to cross-functional teams, providing strategic direction and specialized problem-solving support • Champion domain knowledge sharing across multiple channels and teams to maintain data quality excellence and standardization • Drive collaborative efforts with science teams to optimize output of complex data collections in your domain expertise, ensuring data excellence through iterative feedback loops • Foster team excellence through mentorship and motivation of peers and junior team members • Make informed decisions on behalf of our customers, ensuring that selected code meets industry standards, best practices, and specific client needs • Collaborate with AI teams to innovate model-in-the-loop and human-in-the-loop approaches, to ensure the collection of high-quality data, safeguarding data privacy and security for LLM training, and more. • Stay abreast of the latest developments in how LLMs and GenAI can be applied to your area of expertise to ensure our evaluations remain cutting-edge. • Develop and write demonstrations to illustrate "what good data looks like" in terms of meeting benchmarks for quality and efficiency • Provide detailed feedback and explanations for your evaluations, helping to refine and improve the LLM's understanding and output
IN, KA, Bengaluru
You will be working with a unique and gifted team developing exciting products for consumers. The team is a multidisciplinary group of engineers and scientists engaged in a fast paced mission to deliver new products. The team faces a challenging task of balancing cost, schedule, and performance requirements. You should be comfortable collaborating in a fast-paced and often uncertain environment, and contributing to innovative solutions, while demonstrating leadership, technical competence, and meticulousness. Your deliverables will include development of thermal solutions, concept design, feature development, product architecture and system validation through to manufacturing release. You will support creative developments through application of analysis and testing of complex electronic assemblies using advanced simulation and experimentation tools and techniques. Key job responsibilities In this role, you will: - Lead end-to-end thermal design for SoC and consumer electronics, spanning package, board, system architecture, and product integration - Perform advanced CFD simulations using tools such as Star-CCM+ or FloEFD to assess feasibility, risks, and mitigation strategies - Plan and execute thermal validation for devices and SoC packages, ensuring compliance with safety, reliability, and qualification requirements - Partner with cross-functional and cross-site teams to influence product decisions, define thermal limits, and establish temperature thresholds - Develop data processing, statistical analysis, and test automation frameworks to improve insight quality, scalability, and engineering efficiency - Communicate thermal risks, trade-offs, and mitigation strategies clearly to engineering leadership to support schedule, performance, and product decisions About the team Amazon Lab126 is an inventive research and development company that designs and engineers high-profile consumer electronics. Lab126 began in 2004 as a subsidiary of Amazon.com, Inc., originally creating the best-selling Kindle family of products. Since then, we have produced innovative devices like Fire tablets, Fire TV and Amazon Echo. What will you help us create?
CA, BC, Vancouver
Have you ever wondered how Amazon predicts delivery times and ensures your orders arrive exactly when promised? Have you wondered where all those Amazon semi-trucks on the road are headed? Are you passionate about increasing efficiency and reducing carbon footprint? Does the idea of having worldwide impact on Amazon's multimodal logistics network that includes planes, trucks, and vans sound exciting to you? Are you interested in developing Generative AI solutions using state-of-the-art LLM techniques to revolutionize how Amazon optimizes the fulfillment of millions of customer orders globally with unprecedented scale and precision? If so, then we want to talk with you! Join our team to apply the latest advancements in Generative AI to enhance our capability and speed of decision making. Fulfillment Planning & Execution (FPX) Science team within SCOT- Fulfillment Optimization owns and operates optimization, machine learning, and simulation systems that continually optimize the fulfillment of millions of products across Amazon’s network in the most cost-effective manner, utilizing large scale optimization, advanced machine learning techniques, big data technologies, and scalable distributed software on the cloud that automates and optimizes inventory and shipments to customers under the uncertainty of demand, pricing, and supply. The team has embarked on its Generative AI to build the next-generation AI agents and LLM frameworks to promote efficiency and improve productivity. We’re looking for a passionate, results-oriented, and inventive machine learning scientist who can design, build, and improve models for our outbound transportation planning systems. You will work closely with our product managers and software engineers to disambiguate complex supply chain problems and create ML / AI solutions to solve those problems at scale. You will work independently in an ambiguous environment while collaborating with cross-functional teams to drive forward innovation in the Generative AI space. Key job responsibilities * Design, develop, and evaluate tailored ML/AI, models for solving complex business problems. * Research and apply the latest ML / AI techniques and best practices from both academia and industry. * Identify and implement novel Generative AI use cases to deliver value. * Design and implement Generative AI and LLM solutions to accelerate development and provide intuitive explainability of complex science models. * Develop and implement frameworks for evaluation, validation, and benchmarking AI agents and LLM frameworks. * Think about customers and how to improve the customer delivery experience. * Use analytical techniques to create scalable solutions for business problems. * Work closely with software engineering teams to build model implementations and integrate successful models and algorithms in production systems at large scale. * Establish scalable, efficient, automated processes for large scale data analyses, model development, model validation and model implementation. A day in the life You will have the opportunity to learn how Amazon plans for and executes within its logistics ne twork including Fulfillment Centers, Sort Centers, and Delivery Stations. In this role, you will design and develop Machine Learning / AI models with significant scope, impact, and high visibility. You will focus on designing, developing, and deploying Generative AI solutions at scale that will improve efficiency, increase productivity, accelerate development, automate manual tasks, and deliver value to our internal customers. Your solutions will impact business segments worth many-billions-of-dollars and geographies spanning multiple countries and markets. From day one, you will be working with bar raising scientists, engineers, and designers. You will also collaborate with the broader science community in Amazon to broaden the horizon of your work. Successful candidates must thrive in fast-paced environments, which encourage collaborative and creative problem solving, be able to measure and estimate risks, constructively critique peer research, and align research focuses with the Amazon's strategic needs. We look for individuals who know how to deliver results and show a desire to develop themselves, their colleagues, and their career. About the team FPX Science tackles some of the most mathematically complex challenges in transportation planning and execution space to improve Amazon's operational efficiency worldwide at a scale that is unique to Amazon. We own the long-term and intermediate-term planning of Amazon’s global fulfillment centers and transportation network as well as the short-term network planning and execution that determines the optimal flow of customer orders through Amazon fulfillment network. FPX science team is a group of scientists with different technical backgrounds including Machine Learning and Operations Research, who will collaborate closely with you on your projects. Our team directly supports multiple functional areas across SCOT - Fulfillment Optimization and the research needs of the corresponding product and engineering teams. We disambiguate complex supply chain problems and create innovative data-driven solutions to solve those problems at scale with a mix of science-based techniques including Operations Research, Simulation, Machine Learning, and AI to tackle some of our biggest technical challenges. In addition, we are incorporating the latest advances in Generative AI and LLM techniques in how we design, develop, enhance, and interpret the results of these science models.
US, WA, Bellevue
Amazon LEO is Amazon’s low Earth orbit satellite network. Our mission is to deliver fast, reliable internet connectivity to customers beyond the reach of existing networks. From individual households to schools, hospitals, businesses, and government agencies, Amazon Leo will serve people and organizations operating in locations without reliable connectivity. The Amazon LEO Infrastructure Data Engineering, Analytics, and Science team owns designing, implementing, and operating systems/models that support the optimal demand/capacity planning function. We are looking for a talented scientist to implement LEO's long-term vision and strategy for capacity simulations and network bandwidth optimization. This effort will be instrumental in helping LEO execute on its business plans globally. As one of our valued team members, you will be obsessed with matching our standards for operational excellence with a relentless focus on delivering results. Key job responsibilities In this role, you will: Work cross-functionally with product, business development, and various technical teams (engineering, science, R&D, simulations, etc.) to implement the long-term vision, strategy, and architecture for capacity simulations and inventory optimization. Design and deliver modern, flexible, scalable solutions to complex optimization problems for operating and planning satellite resources. Contribute to short and long terms technical roadmap definition efforts to predict future inventory availability and key operational and financial metrics across the network. Design and deliver systems that can keep up with the rapid pace of optimization improvements and simulating how they interact with each other. Analyze large amounts of satellite and business data to identify simulation and optimization opportunities. Synthesize and communicate insights and recommendations to audiences of varying levels of technical sophistication to drive change across LEO. Export Control Requirement: Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum.