
The surprisingly subtle challenge of automating damage detection

Why detecting damage is so tricky at Amazon’s scale — and how researchers are training robots to help with that gargantuan task.

With billions of customer orders flowing through Amazon’s global network of fulfillment centers (FCs) every year, it is an unfortunate but inevitable fact that some of those items will suffer accidental damage during their journey through a warehouse.

Amazon associates are always on the lookout for damaged items in the FC, but an extra pair of “eyes” may one day support them in this task, powered by machine-learning approaches being developed by Amazon’s Robotics AI team in Berlin, Germany.


As well as avoiding shipping delays and improving warehouse efficiency, this particular form of artificial intelligence aims to reduce waste: fewer damaged goods are shipped in the first place, so customers have fewer damaged items to return.

For every thousand items that move through an FC on their way to customers, fewer than one is damaged. That is a tiny proportion, but at Amazon's scale it nevertheless adds up to a challenging problem.

Damage detection is important because while damage is a costly problem in itself, it becomes even more costly the longer the damage goes undetected.

Amazon associates examine items on multiple occasions throughout the fulfillment process, of course, but if damage occurs late in the journey and a compromised item makes it as far as the final packaging station, an associate must sideline it so that a replacement can be requested, potentially delaying delivery. An associate must then examine the sidelined item further to determine its fate.


Toward the end of 2020, Sebastian Hoefer, senior applied scientist with the Amazon Robotics AI team, supported by his Amazon colleagues, successfully pitched a novel project to address this problem. The idea: combine computer vision and machine learning (ML) approaches in an attempt to automate the detection of product damage in Amazon FCs.

“You want to avoid damage altogether, but in order to do so you need to first detect it,” notes Hoefer. “We are building that capability, so that robots in the future will be able to utilize it and assist in damage detection.”

Needles in a haystack

Damage detection is a challenging scientific problem, for two main reasons.


The first reason is purely practical — there is precious little data on which to train ML models.

“Damage caused in Amazon FCs is rare, and that’s clearly a good thing,” says Ariel Gordon, a principal applied scientist supporting Hoefer’s team from Seattle. “But that also makes it challenging because we need to find these needles in the haystack, and identify the many forms damage can take.”

The second reason takes us into the theoretical long grass of artificial intelligence more generally.

For an adult human, everyday damage detection feels easy — we cannot help but notice damage, because our ability to do so has been honed as a fundamental life skill. Yet whether something is sufficiently damaged to render it unsellable is subjective, often ambiguous, and depends on the context, says Maksim Lapin, an Amazon senior applied scientist in Berlin. “Is it damage that is tolerable from the customer point of view, like minor damage to external packaging that will be thrown into the recycling anyway?” Lapin asks. “Or is it damage of a similar degree on the product itself, which would definitely need to be flagged?”

A side by side image shows a perforated white mailer, on the left is a standard image, on the right is the damage as "seen" by Amazon's damage detection models
Damage in Amazon fulfillment centers can be hard to spot, unlike this perforation captured by a standard camera (left) and Amazon's damage detection models (right).

In addition, the nature of product damage makes it difficult to even define what damage is for ML models. Damage is heterogeneous — any item or product can be damaged — and it can take many forms, from rips to holes to a single broken part of a larger set. Multiplied over Amazon's massive catalogue of items, the challenge becomes enormous.

In short, do ML models stand a chance?

Off to “Damage Land”

To find out, Hoefer’s team first needed to obtain that data in a standardized format amenable to machine learning. They set about collecting it at an FC near Hamburg, Germany, called HAM2, in a section of the warehouse affectionately known as “Damage Land”. Damaged items end up there while decisions are made on whether such items can be sold at a discount, refurbished, donated or, as a last resort, disposed of.

The team set up a sensor-laden, illuminated booth in Damage Land.

“I’m very proud that HAM2 was picked up as pilot site for this initiative,” says Julia Dembeck, a senior operations manager at HAM2, who set up the Damage Taskforce to coordinate the project’s many stakeholders. “Our aim was to support the project wholeheartedly.”

After workshops with Amazon associates to explain the project and its goals, associates started placing damaged items on a tray in the booth, which snapped images using an array of RGB and depth cameras. They then manually annotated the damage in the images using a linked computer terminal.


“The results were amazing and got even better when associates shared their best practices on the optimal way to place items in the tray,” says Dembeck. Types of damage included crushes, tears, holes, deconstruction (e.g., contents breaking out of their container) and spillages.
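The article does not describe the team's internal annotation format; as an illustration only, here is a minimal sketch in Python of how the damage categories named above might be encoded as a label schema for annotated images (the class names, fields, and bounding-box convention are all assumptions):

```python
from dataclasses import dataclass
from enum import Enum


class DamageType(Enum):
    """Illustrative damage taxonomy, using the categories named in the article."""
    CRUSH = "crush"
    TEAR = "tear"
    HOLE = "hole"
    DECONSTRUCTION = "deconstruction"  # contents breaking out of their container
    SPILLAGE = "spillage"


@dataclass
class DamageAnnotation:
    """One manually annotated damage region in a captured image."""
    image_id: str
    damage_type: DamageType
    bbox: tuple  # hypothetical (x, y, width, height) region in pixels
```

A schema like this lets each annotated region carry both a category and a location, which is what a supervised detector would train on.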

The associates collected about 30,000 product images in this way, two-thirds of which were images of damaged items.

“We also collected images of non-damaged items because otherwise we cannot train our models to distinguish between the two,” says Hoefer. “Twenty thousand pictures of damage are not a lot in ‘big data’ terms, but it is a lot given the rarity of damage.”

With data in hand, the team first applied a supervised learning ML approach, a workhorse in computer vision. They used the data as a labelled training set that would allow the algorithm to build a generalizable model of what damage can look like. When put through its paces on images of products it had never seen before, the model’s early results were promising.

When analyzing a previously unseen image of a product, the model would ascribe a damage confidence score. The higher the score, the more confident it was that the item was damaged.

The researchers had to tune the sensitivity of the model by deciding upon the confidence threshold at which the model would declare a product unfit for sending to a customer. Set that threshold too high, and modest but significant damage could be missed. Set it too low, and the model would declare some undamaged items to be damaged, a false positive.
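The article doesn't say how the team tunes this threshold; purely as an illustration, the trade-off above can be sketched in a few lines of Python. Given damage-confidence scores and ground-truth labels for a validation set (both hypothetical here), one simple policy is to pick the lowest threshold whose false-positive rate stays within a budget:

```python
def choose_threshold(scores, labels, max_false_positive_rate=0.005):
    """Pick a confidence threshold (flag items with score > threshold)
    whose false-positive rate stays within budget.

    scores: model damage-confidence per item, in [0, 1]
    labels: 1 = actually damaged, 0 = undamaged
    The 0.5% default budget is an assumption, not a figure from the article.
    """
    # Confidence scores assigned to undamaged items, highest first.
    negatives = sorted((s for s, y in zip(scores, labels) if y == 0), reverse=True)
    # How many undamaged items we can afford to flag by mistake.
    budget = int(max_false_positive_rate * len(negatives))
    # Thresholding at the (budget+1)-th highest negative score flags at
    # most `budget` undamaged items, while keeping sensitivity as high
    # as the budget allows.
    return negatives[budget] if budget < len(negatives) else 0.0
```

Raising the budget lowers the threshold and catches more damage at the cost of more false alarms, which is exactly the sensitivity trade-off described above.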

“We did a back-of-the-envelope calculation and found that if we're sidelining more than a tiny fraction of all items going through this process, then we're going to overwhelm with false positives,” says Hoefer.

Since those preliminary results in late 2021, the team has made significant improvements.

“We’re now optimizing the model to reduce its false positive rate, and our accuracy is increasing week to week,” says Hoefer.

Different types of damage

However, the supervised learning approach alone, while promising, suffers some drawbacks.

For example, what is the model to make of the packaging of a phone protector kit that shows a smashed screen? What is it to make of a cleaning product whose box is awash with apparent spills? What about a blister pack that is entirely undamaged and should hold three razor blades but for some reason contains just two — the “broken set” problem? What about a bag of ground coffee that appears uncompromised but is sitting next to a little puddle of brown powder?

Again, for humans, making sense of such situations is second nature. We not only know what damage looks like, but also quickly learn what undamaged products should look like. We learn to spot anomalies.

Hoefer’s team decided to incorporate this ability into their damage detection system, to create a more rounded and accurate model. Again, more data was needed, because if you want to know what an item should look like, you need standardized imagery of it. This is where recent work pioneered by Amazon’s Multimodal Identification (MMID) team, part of Berlin's Robotics AI group, came in.

The MMID team has developed a computer vision tool that enables the identification of a product purely from images of it. This is useful in cases where the all-important product barcode is smudged, missing, or wrong.

In fact, it was largely the MMID team that developed the sensor-laden photo booth hardware now being put to use by Hoefer’s team. The MMID team needed it to create a gallery of standardized reference images of pristine products.


“Damage detection could also exploit the same approach by identifying discrepancies between a product image and a gallery of reference images,” says Anton Milan, an Amazon senior applied scientist who is working across MMID and damage detection in Berlin. “In fact, our previous work on MMID allowed us to quickly take off exploring this direction in damage detection by evaluating and tweaking existing solutions.”

By incorporating the MMID team’s product image data and adapting that team’s techniques and models to sharpen their own, the damage-detection system now has a fighting chance of spotting broken sets. It is also much less likely to be fooled by damage-like images printed on the packaging of products, because it can check product imagery taken during the fulfillment process against the image of a pristine version of that product.
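The article does not disclose how the comparison against reference images is implemented; a common pattern for this kind of gallery matching, shown here as a hypothetical sketch, is to embed images into a vector space and treat a poor best match against the pristine references as an anomaly signal:

```python
import numpy as np


def anomaly_score(item_embedding, reference_embeddings):
    """Compare an item's image embedding against a gallery of pristine
    reference embeddings for the same product.

    A low best-match similarity suggests the item deviates from its
    expected appearance (possible damage, or a broken set). Vectors are
    assumed to be L2-normalized, so dot products are cosine similarities.
    The embedding model itself is outside this sketch.
    """
    sims = reference_embeddings @ item_embedding  # cosine similarity per reference
    return 1.0 - float(np.max(sims))  # 0 = perfect match; higher = more anomalous
```

Because the comparison is against imagery of the genuine product, a damage-like picture printed on the packaging matches the references well and scores low, while real deviations score high.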

“Essentially, we are developing the model’s ability to say ‘something is amiss here’, and that’s a very useful signal,” says Gordon. “It's also problematic, though, because sometimes products change their design. So, the model has to be ‘alive’, continuously learning and updating in accordance with new packaging styles.”

The team is currently exploring how to combine the contributions of both discriminative and anomaly-based ML approaches to give the most accurate assessment of product damage. At the same time, they are developing hardware for trial deployment in an FC, and also collecting more data on damaged items.
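How the team will actually combine the two signals is still open, per the paragraph above; the simplest possible baseline, shown purely as an assumption-laden sketch, is a weighted blend of the discriminative model's confidence and the anomaly score:

```python
def fused_damage_score(supervised_score, anomaly_score, weight=0.5):
    """Blend the discriminative model's damage confidence with the
    anomaly score from reference-image comparison.

    Both inputs are assumed to lie in [0, 1]. The weight is a tunable
    hyperparameter; in practice it would be fit on validation data
    rather than hand-set, and a real system might use a learned fusion
    model instead of a linear blend.
    """
    return weight * supervised_score + (1.0 - weight) * anomaly_score
```

The fused score can then be thresholded just like a single model's confidence, with the same false-positive budget considerations as before.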

The whole enterprise has come together fast, says Hoefer. “We pitched the idea just 18 months ago, and already we have an array of hardware and a team of 15 people making it a reality. As a scientist, this is super rewarding. And if it works as well as we hope, it could be in operation across the network of Amazon fulfillment centers within a couple of years.”

Hoefer anticipates that the project will ultimately improve customer experience while also reducing waste.


“Once the technology matures, we expect to see a decrease in customer returns due to damage, because we will be able to identify and fix damaged products before dispatching them to customers. Not only that, by identifying damage early in the fulfillment chain, we will be able to work with vendors to build more robust products. This will again result in reducing damage overall — an important long-term goal of the project,” says Hoefer.

Also looking to the future, Lapin imagines this technology beyond warehousing.

“We are building these capabilities for the highly controlled environments of Amazon fulfillment centers, but I can see some future version of it being deployed in the wild, so to speak, in more chaotic bricks-and-mortar stores, where customers interact with products in unpredictable ways,” says Lapin.

Related content

  • Staff writer
    October 21, 2025
    Initiative will fund over 100 doctoral students researching machine learning, computer vision, and natural-language processing at nine universities.
  • Staff writer
    December 29, 2025
    From foundation model safety frameworks and formal verification at cloud scale to advanced robotics and multimodal AI reasoning, these are the most viewed publications from Amazon scientists and collaborators in 2025.
  • Staff writer
    December 29, 2025
    From quantum computing breakthroughs and foundation models for robotics to the evolution of Amazon Aurora and advances in agentic AI, these are the posts that captured readers' attention in 2025.
US, WA, Bellevue
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
GB, London
As a STRUC Economist Intern, you'll specialize in structural econometric analysis to estimate fundamental preferences and strategic effects in complex business environments. Your responsibilities include: Analyze large-scale datasets using structural econometric techniques to solve complex business challenges Applying discrete choice models and methods, including logistic regression family models (such as BLP, nested logit) and models with alternative distributional assumptions Utilizing advanced structural methods including dynamic models of customer or firm decisions over time, applied game theory (entry and exit of firms), auction models, and labor market models Building datasets and performing data analysis at scale Collaborating with economists, scientists, and business leaders to develop data-driven insights and strategic recommendations Tackling diverse challenges including pricing analysis, competition modeling, strategic behavior estimation, contract design, and marketing strategy optimization Helping business partners formalize and estimate business objectives to drive optimal decision-making and customer value Build and refine comprehensive datasets for in-depth structural economic analysis Present complex analytical findings to business leaders and stakeholders
US, WA, Seattle
At Amazon Selection and Catalog Systems (ASCS), our mission is to power the online buying experience for customers worldwide so they can find, discover, and buy any product they want. We innovate on behalf of our customers to ensure uniqueness and consistency of product identity and to infer relationships between products in the Amazon Catalog to drive the selection gateway for the search and browse experiences on the website.

We're solving a fundamental AI challenge: establishing product identity and relationships at unprecedented scale. Using Generative AI, Visual Language Models (VLMs), and multimodal reasoning, we determine what makes each product unique and how products relate to one another across Amazon's catalog. The scale is staggering: billions of products, petabytes of multimodal data, millions of sellers, dozens of languages, and near-infinite product diversity, from electronics to groceries to digital content.

The research challenges are immense. GenAI and VLMs hold transformative promise for catalog understanding, but we operate where traditional methods fail: ambiguous problem spaces, incomplete and noisy data, inherent uncertainty, reasoning across both images and textual data, and explaining decisions at scale. Establishing product identities and groupings requires sophisticated models that reason across text, images, and structured data while maintaining accuracy and trust for high-stakes business decisions affecting millions of customers daily.

Amazon's Item and Relationship Platform group is looking for an innovative and customer-focused applied scientist to help us make the world's best product catalog even better. In this role, you will partner with technology and business leaders to build new state-of-the-art algorithms, models, and services to infer product-to-product relationships that matter to our customers.
You will pioneer advanced GenAI solutions that power next-generation agentic shopping experiences, working in a collaborative environment where you can experiment with massive data from the world's largest product catalog, tackle problems at the frontier of AI research, and rapidly implement and deploy your algorithmic ideas at scale across millions of customers.

Key job responsibilities
* Formulate open research problems at the intersection of GenAI, multimodal reasoning, and large-scale information retrieval, defining the scientific questions that transform ambiguous, real-world catalog challenges into publishable, high-impact research
* Push the boundaries of VLMs, foundation models, and agentic architectures by designing novel approaches to product identity, relationship inference, and catalog understanding, where the problem complexity (billions of products, multimodal signals, inherent ambiguity) demands methods that don't yet exist
* Advance the science of efficient model deployment, developing distillation, compression, and LLM/VLM serving optimization strategies that preserve frontier-level multimodal reasoning in compact, production-grade architectures while dramatically reducing latency, cost, and infrastructure footprint at billion-product scale
* Make frontier models reliable by advancing uncertainty calibration, confidence estimation, and interpretability methods so that frontier-scale GenAI systems can be trusted for autonomous catalog decisions impacting millions of customers daily
* Own the full research lifecycle from problem formulation through production deployment: designing rigorous experiments over petabytes of multimodal data, iterating on ideas rapidly, and seeing your research directly improve the shopping experience for hundreds of millions of customers
* Shape the team's research vision by defining technical roadmaps that balance foundational scientific inquiry with measurable product impact
* Mentor scientists and engineers on advanced ML techniques, experimental design, and scientific rigor, building deep organizational capability in GenAI and multimodal AI
* Represent the team in the broader science community by publishing findings, delivering tech talks, and staying at the forefront of GenAI, VLM, and agentic system research
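The relationship-inference task described in this posting reduces, in its simplest form, to embedding items and thresholding pairwise similarity. The sketch below uses toy bag-of-words vectors over product titles purely for illustration; the production systems described above reason over multimodal signals with VLMs, and all data and function names here are invented.

```python
# Illustrative relationship inference between catalog items via embedding
# similarity. Toy bag-of-words vectors stand in for real multimodal embeddings.
import numpy as np

def bow_embed(titles):
    """Toy bag-of-words embeddings over a vocabulary shared by all titles."""
    vocab = sorted({w for t in titles for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    E = np.zeros((len(titles), len(vocab)))
    for r, t in enumerate(titles):
        for w in t.lower().split():
            E[r, index[w]] += 1.0
    return E

def related_pairs(titles, threshold=0.6):
    """Return index pairs whose cosine similarity exceeds the threshold."""
    E = bow_embed(titles)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalize rows
    sim = E @ E.T                                     # cosine similarity matrix
    return [(i, j) for i in range(len(titles))
            for j in range(i + 1, len(titles)) if sim[i, j] > threshold]

titles = [
    "acme wireless mouse black",
    "acme wireless mouse white",
    "garden hose 25 ft",
]
print(related_pairs(titles))  # [(0, 1)] -- the two mouse variants pair up
```

At billion-product scale the all-pairs comparison is replaced by approximate nearest-neighbor search, and the threshold decision is where the calibration and uncertainty-estimation work listed in the responsibilities becomes essential.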