Although there are hundreds of millions of products stored in Amazon fulfillment centers, it’s very rare for customers to report shipped products as damaged. However, Amazon’s culture of customer obsession means that teams are actively working to find and remove even that relatively small number of imperfect products before they’re delivered to customers.
One of those teams includes scientists who are using generative AI and computer vision, powered by AWS services such as Amazon Bedrock and Amazon SageMaker, to help spot, isolate, and remove imperfect items.
Inside Amazon fulfillment centers across North America, products ranging from dog food and phone cases to T-shirts and books pass through imaging tunnels for a wide variety of uses, including sorting products based on their intended destination. Those use cases have been extended to include the use of artificial intelligence to inspect individual items for defects.
For example, optical character recognition (OCR) — the process that converts an image of text into a machine-readable text format — checks expiration dates on product packaging to ensure expired items are not sent to customers. Computer vision (CV) models — trained with reference images from the product catalog and actual images of products sent to customers — pore over color and monochrome images for signs of product damage such as bent book covers.
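To make the OCR step concrete, below is a minimal sketch of an expiration-date check. It assumes the open-source pytesseract/Tesseract stack and a simple MM/DD/YYYY date pattern as stand-ins for Amazon’s internal OCR models, which are not public.

```python
# Minimal sketch of an OCR-based expiration-date check.
# pytesseract/Tesseract and the date format are illustrative stand-ins;
# Amazon's internal OCR models are not public.
import re
from datetime import datetime

from PIL import Image
import pytesseract

DATE_PATTERN = re.compile(r"\b(\d{2})/(\d{2})/(\d{4})\b")  # assumes MM/DD/YYYY labels

def extract_printed_date(image_path: str) -> datetime | None:
    """Run OCR on a package image and return the first date found, if any."""
    text = pytesseract.image_to_string(Image.open(image_path))
    match = DATE_PATTERN.search(text)
    if match is None:
        return None
    month, day, year = (int(g) for g in match.groups())
    return datetime(year, month, day)

def is_expired(image_path: str) -> bool:
    """Flag an item whose printed date has already passed."""
    printed = extract_printed_date(image_path)
    return printed is not None and printed < datetime.now()
```

In production, the extracted date would also be compared against catalog data, as the pallet example below describes.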
Additionally, a recent breakthrough solution leverages generative AI’s ability to process multimodal information, synthesizing evidence from images captured during the Amazon fulfillment process with written customer feedback to trigger even faster corrective actions.
This effort, referred to collectively as Project P.I., which stands for “private investigator,” encompasses the team’s vision of using a detective-like toolset to uncover both defects and, wherever possible, their cause — to address the issue at its root before a product reaches the customer.
"We want to equip ourselves with the most powerful, scalable tools and levers to help us protect our customers’ trust,” said Pingping Shan, director of perfect order experience at Amazon.
Defect detection
Project P.I. is an outgrowth of Amazon’s product quality program, and the tools and systems developed by the team’s scientists include machine learning models that assist selling partners with listing products with accurate information.
“The product quality team is constantly looking for ways to both reduce the burden on the sellers and to proactively verify the condition of inventory in fulfillment centers,” Shan said.
An early solution was an OCR model that checks the labeling information when inventory arrives and compares it to the information in Amazon’s database. If a mismatch occurs — such as a pallet of dog food with an earlier sell-by date than the date in the database — the team can isolate and inspect the pallet and prevent any expired products from reaching the customer.
When an item-level defect is detected, Amazon takes several steps to resolve the issue, including investigating whether the item is one in a defective batch and, if so, isolating the batch from the rest of the items, explained Angela Ke, a senior product manager.
“We want to make sure that customers don’t have to experience issues with product quality. That’s really the vision of Project P.I.,” she said. “We want to get it right for customers the first time, so we want to inspect the products before they leave our fulfillment center, and we incorporate AI to streamline the workflow.”
Customer feedback aids model training
Despite the team’s best efforts, sometimes product quality issues only become known after an item has been delivered, noted Mark Ma, a principal product manager. Those issues typically surface when a customer files a return describing the problem. In those instances, the team tracks down the batch the product came from, verifies the issue, removes those items from fulfillment center shelves, issues refunds, and communicates the issue to the seller.
“We know that correcting defects after they happen is not the best way to protect and improve the customer experience. That’s why we started exploring what kind of data we can gather further upstream,” he said. Those discussions eventually led to leveraging the tunnel images to better identify products with defects and take surgical, proactive action to address them — before they’re packaged and shipped.
One of the early challenges with that approach entailed training CV models to correctly identify defects, noted Vincent Gao, a senior science manager on the product quality team.
“It’s like finding a needle in a haystack,” he said. “We needed a model that could accurately identify those rare defective items among all the normal products. Otherwise, we could be finding a lot of false positives, making the fulfillment process inefficient.”
Gao’s team turned to an ensemble approach that combines self-supervised models with supervised transformer models — a neural-network architecture that uses attention mechanisms to improve performance on machine learning tasks — to spot the difference between normal and defective items. By learning what the “correct” product looks like from fulfillment center images associated with normal orders, the model can compare an item on its way to be packaged against its “normal” image and provide a measurement of how much it differs.
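The “measure how much it differs” step can be illustrated with a short sketch. Here, a publicly available self-supervised vision transformer (DINOv2, via the Hugging Face transformers library) embeds both the item image and a reference “normal” image, and the cosine distance between the embeddings serves as an anomaly score; the model choice and threshold are illustrative assumptions, not Amazon’s actual setup.

```python
# Minimal sketch of comparing an item image to its "normal" reference.
# DINOv2 is a publicly available self-supervised vision transformer used
# here as a stand-in for Amazon's internal models.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
model = AutoModel.from_pretrained("facebook/dinov2-base").eval()

def embed(image: Image.Image) -> torch.Tensor:
    """Map an image to a single feature vector (mean-pooled token features)."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        features = model(**inputs).last_hidden_state  # shape: (1, tokens, dim)
    return features.mean(dim=1).squeeze(0)

def anomaly_score(item_image: Image.Image, reference_image: Image.Image) -> float:
    """Cosine distance from the 'normal' reference; higher means more different."""
    a, b = embed(item_image), embed(reference_image)
    return 1.0 - torch.nn.functional.cosine_similarity(a, b, dim=0).item()

# An item whose score exceeds a tuned threshold would be routed for
# inspection rather than packaging. The value below is illustrative.
IS_DEFECT_THRESHOLD = 0.35
```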
This approach allowed the team to more reliably spot obvious product defects, such as a book with a torn cover or an empty canister of tennis balls, yet it still couldn’t account for some of the finer-grained details, like a mislabeled T-shirt size or a bent box.
To capture those details, the team turned to customer feedback, which provided more detailed, labeled data that was used to refine the models to detect the types of defects customers actually notice.
“Using that, we are able to be more targeted on the areas that we want to identify so that we can enable the models to learn more on those finer details,” Gao said.
Leveraging generative AI
Today, the science team is leveraging breakthroughs in generative AI to make product defect detection more scalable and robust. For example, the team launched a multimodal large language model (MLLM) that’s been trained to identify damage such as broken seals, torn boxes, and bent book covers, and report in plain language the damage it detects.
“We use the MLLM to ingest and understand the images from fulfillment centers to identify damage patterns with zero-shot learning capability — meaning the model can recognize something it has not seen in training. That is a significant plus when it comes to identifying damage patterns given their vast variation,” Ma explained. “Then we use the model to summarize common damage patterns, which enables us to work more upstream with our selling partners and manufacturers to proactively address these issues.”
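Because the article names Amazon Bedrock among the services powering this work, one plausible shape for the zero-shot step is a single multimodal call through Bedrock’s Converse API. The model ID and prompt below are illustrative assumptions, not Amazon’s actual configuration.

```python
# Minimal sketch of zero-shot damage description with a multimodal model
# via the Amazon Bedrock Converse API. The model ID and prompt are
# illustrative assumptions.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def describe_damage(image_bytes: bytes) -> str:
    """Ask the model, zero-shot, to report visible damage in plain language."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative choice
        messages=[{
            "role": "user",
            "content": [
                {"image": {"format": "jpeg", "source": {"bytes": image_bytes}}},
                {"text": "Inspect this product photo. Report any damage "
                         "(broken seal, torn box, bent cover, etc.) in one "
                         "sentence, or reply 'no damage visible'."},
            ],
        }],
    )
    return response["output"]["message"]["content"][0]["text"]
```

Because the model is prompted rather than trained per damage type, the same call covers seals, boxes, covers, and damage patterns it was never explicitly shown.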
With traditional CV technologies, a model would be trained for each damage scenario (broken seal, torn box, and so on), Gao said, resulting in an unscalable ensemble of dozens to hundreds of models. The MLLM, on the other hand, is a single, scalable, unified solution.
“That’s the new power we now have on top of the classic computer vision,” Shan said.
The Project P.I. team has also recently put into production a generative AI system that uses an MLLM to investigate the root cause of negative customer experiences. The system first reviews customer feedback about the issue and then analyzes product images collected by the tunnels and other data sources to confirm the root cause.
For example, if a customer contacts Amazon because they ordered twin-size sheets but received king-size, the generative AI system cross-references that feedback with fulfillment center images. The system will ask questions such as, “Is the product label visible in the image?” “Does the label read king or twin?”
The system’s vision-language model in turn looks at the images, extracts the text from the label, and answers the questions. The LLM converts the answers into a plainspoken summary of the investigation.
“The LLM is working side-by-side with the visual language model to analyze data from different sources and modalities to help us make a decision,” said Gao. “We can actually have the LLM trigger the vision-language model to finish all the different verification tasks.”
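A minimal sketch of that orchestration, again assuming the Bedrock Converse API with an illustrative model ID: a text-only pass derives verification questions from the customer’s feedback, a multimodal pass answers them against the fulfillment center image, and a final pass produces the plainspoken summary.

```python
# Minimal sketch of the LLM/vision-language orchestration described above,
# using the Amazon Bedrock Converse API. Model ID and prompts are
# illustrative assumptions.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # illustrative choice

def _converse(content: list) -> str:
    """Single-turn Converse call; content may mix text and image blocks."""
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": content}],
    )
    return response["output"]["message"]["content"][0]["text"]

def investigate(feedback: str, image_bytes: bytes) -> str:
    # Step 1: turn the complaint into concrete questions, e.g.
    # "Is the product label visible?" "Does the label read king or twin?"
    questions = _converse([{
        "text": f"A customer reported: '{feedback}'. List the short questions "
                "an inspector should answer from a photo of the shipped item."
    }])
    # Step 2: the vision-language pass answers them from the tunnel image.
    answers = _converse([
        {"image": {"format": "jpeg", "source": {"bytes": image_bytes}}},
        {"text": f"Answer these questions about the product shown:\n{questions}"},
    ])
    # Step 3: a plainspoken summary of the investigation.
    return _converse([{
        "text": f"Customer feedback: {feedback}\nImage findings: {answers}\n"
                "Summarize the likely root cause in plain language."
    }])
```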
Proof of concept in the fulfillment center
Since May 2022, the product quality team has been rolling out their item-level product defect detection solutions using imaging tunnels at several fulfillment centers in North America.
The results have been promising. The system has proven itself adept at sorting through the millions of items that pass through the tunnels each month and accurately identifying both expired items and issues such as wrong color or size.
In the future, the team aims to implement near real-time product defect detection with local image processing. In this scenario, defective items could be pulled off the conveyor belt and a replacement item automatically ordered, thus eliminating disruptions to the fulfillment process.
“Ultimately, we want to be behind the scenes. We don’t need our customers to know this is going on,” said Keiko Akashi, a senior manager of product management at Amazon. “The customer should be getting a perfect order and not even know that the expired or damaged item existed.”
Sidelining defective items will also result in fewer returns, which has an added sustainability benefit, noted Gao.
“We want to intercept the wrong items or defective items,” he said. “That translates to less back and forth shipping overhead, while also delivering a better customer experience.”
New avenues for investigation
Seamless integration of these solutions across the Amazon fulfillment center network will require refinements to the AI models, such as the ability to distinguish a perceived defect from an actual one. For example, a “manufactured on” date might be mistaken for an expiration date, or sneakers that arrive without a shoebox might be flagged as the wrong item when the missing box is actually a step to reduce packaging, noted Ke.
What’s more, there are challenges adapting CV models to the unique nuances of each fulfillment center and region, such as the size and color of the totes used to convey items around fulfillment centers, and the ability to extract data across a multitude of languages.
“There’s a lot of information that’s written in words,” Ke explained. “So how do we make sure that the model is picking up the right language and translating it correctly? That’s another challenge our science team is trying to solve.”
As the team has gone down this road, it has amassed data showing that defects are sometimes the result of what happens outside of Amazon’s fulfillment centers.
“It could have been a carrier issue,” noted Akashi. “When customers say, ‘Hey, it came damaged,’ we can look into our outbound images and see that nothing has gone wrong. Then we can go figure out what else is going on.”
The team also plans to make data on defects more easily accessible to selling partners, Akashi added. For example, if Amazon discovered a seller accidentally put stickers with the wrong size on a product, Amazon would communicate the issue to help prevent the error from happening again.
“There’s an opportunity to get this information in front of our selling partners so they have visibility to their own inventory, and they can also have more succinct root causes to why these returns are happening,” she explained. “We’re excited that the data that we’re gathering and the AI models we are creating will benefit our customers and selling partners.”