The articles below, presented in chronological order, are ones that appealed most to our science- and technology-curious readers.
Learn how a combination of deep learning, natural language processing, and computer vision enables Amazon to home in on the right amount of packaging for each product. Over the past six years this effort has reduced per-shipment packaging weight by 36% and eliminated more than a million tons of packaging, equivalent to more than 2 billion shipping boxes.
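As an illustration of how those modalities might combine, here is a minimal sketch of a packaging chooser that fuses text- and image-derived features and scores a handful of packaging options. The embeddings, options, and linear scoring head are hypothetical stand-ins, not Amazon’s production model:

```python
import numpy as np

# Hypothetical packaging options, ordered from lightest to most protective.
PACKAGING = ["paper bag", "padded mailer", "corrugated box"]

def choose_packaging(text_features, image_features, weights, bias):
    """Score each packaging option from fused product features.

    text_features  -- e.g., an NLP embedding of the product title and reviews
    image_features -- e.g., a vision embedding capturing size/fragility cues
    weights, bias  -- parameters of a trained linear classification head
    """
    x = np.concatenate([text_features, image_features])  # simple fusion
    logits = weights @ x + bias                          # one score per option
    return PACKAGING[int(np.argmax(logits))]

# Toy usage with random stand-ins for real embeddings and trained weights.
rng = np.random.default_rng(0)
text_emb, img_emb = rng.normal(size=64), rng.normal(size=64)
W, b = rng.normal(size=(len(PACKAGING), 128)), rng.normal(size=len(PACKAGING))
print(choose_packaging(text_emb, img_emb, W, b))
```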
“Nobody designs a car to come in second,” observed Pat Symonds, chief technical officer at FORMULA 1. “But for this project, we were looking at how cars perform in the wake of another car, as opposed to running in clean air.”
Instead of relying on time-consuming and costly physical tests, F1 used computational fluid dynamics, which provides a virtual environment to study the flow of fluids (in this case the air around the F1 car) without ever having to manufacture a single part.
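At its core, computational fluid dynamics means numerically solving the governing equations of fluid motion; for low-speed air flow of this kind those are typically the incompressible Navier-Stokes equations:

```latex
% Incompressible Navier-Stokes equations: mass and momentum conservation
\nabla \cdot \mathbf{u} = 0, \qquad
\rho \left( \frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u} \right)
  = -\nabla p + \mu \nabla^{2} \mathbf{u}
```

where u is the air’s velocity field, p its pressure, ρ its density, and μ its viscosity. A CFD solver discretizes these equations on a mesh around the car’s geometry and iterates toward a solution, which is what lets engineers study one car running in another’s wake without building anything.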
Learn how the F1 engineering team collaborated with Amazon Web Services to develop new design specifications to help make races more competitive.
Since 2018, Amazon Music customers in the US who aren’t sure what to choose have been able to converse with Alexa. The technical complexity of this challenge is hard to overstate, but progress in machine learning (ML) at Amazon has recently made the Alexa music recommender experience even more successful and satisfying for customers.
To achieve that, the Amazon Music Conversations team developed a next-generation, conversation-based music recommender, one that harnesses ML to bring the Alexa music recommender closer to being a genuine, responsive conversation.
Learn how the Amazon Music Conversations team is using pioneering machine learning to make Alexa's discernment better than ever.
Earlier this year, Amazon Web Services expanded its widely popular Machine Learning University (MLU) course offerings with MLU Explain, a public website containing visual essays that incorporate fun animations and “scrolly-telling” to explain machine learning concepts in an accessible manner.
“MLU Explain is a series of interactive articles covering core machine learning concepts, and they’re meant to provide supplementary material that’s educational in a light but still informative format,” said Jared Wilber, a data scientist who both teaches some of the MLU courses and develops fascinating visual explainers for those courses.
“There are so many people who have very strong technical skills, but who don’t know a ton about machine learning,” he said. “So, our goals for MLU are twofold: the first is to teach machine learning to people who have no experience with how it works and how they can use it, and the second is to help people who already have some experience and want to sharpen their skills.”
Learn how the MLU Explain articles are helping Wilber and his team meet those goals.
The AI stack at the center of the Zoox driving system broadly consists of three processes, which occur in order: perception, prediction, and planning. These equate to seeing the world and how everything around the vehicle is currently moving, predicting how everything will move next, and deciding how to move from A to B given those predictions.
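In schematic form, that three-stage flow might look like the sketch below. The function names are invented, and the constant-velocity forecast is a deliberately crude stand-in for the learned prediction models the Zoox team actually uses:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    position: tuple   # (x, y) in meters
    velocity: tuple   # (vx, vy) in meters per second

def perceive(sensor_frame):
    """Perception: detect and track every agent around the vehicle."""
    ...  # placeholder for detection/tracking over camera, lidar, radar

def predict(agents, horizon_s=8.0, dt=0.5):
    """Prediction: forecast each agent's path over the horizon.

    A constant-velocity baseline stands in for the learned models; the
    8-second horizon matches the one described in the article.
    """
    steps = int(horizon_s / dt)
    return [[(a.position[0] + a.velocity[0] * dt * k,
              a.position[1] + a.velocity[1] * dt * k)
             for k in range(1, steps + 1)]
            for a in agents]

def plan(predictions):
    """Planning: choose a motion from A to B that avoids predicted paths."""
    ...  # placeholder for trajectory selection against those predictions

# Toy usage: a pedestrian walking at 1.5 m/s, forecast 8 seconds out.
print(predict([Agent(position=(0.0, 0.0), velocity=(1.5, 0.0))])[0][-1])
```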
“Predicting the future — the intentions and movements of other agents in the scene — is a core component of safe, autonomous driving,” says Kai Wang, director of the Zoox Prediction team.
Learn how the combination of cutting-edge hardware, sensor technology, and bespoke machine learning approaches can predict trajectories of vehicles, people, and even animals, as far as eight seconds into the future.
“This project uses a combination of various techniques,” said Andrea Qualizza, a senior principal scientist with Amazon’s Supply Chain Optimization Technologies (SCOT) organization. “There is mathematical optimization, local search, capacitated vehicle routing problem solvers — all of that came together because these techniques considered various aspects of the problem and linked them very naturally with the way our systems work.”
That project — the Customer Order and Network Density OptimizeR, or CONDOR — is notable because of its ability to determine the right tradeoff between the levels of complexity and optimality.
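To make the vehicle-routing ingredient concrete, here is a toy capacitated vehicle routing problem solved with the open-source OR-Tools library. The distance matrix, demands, and capacities are invented, and this illustrates the general problem class rather than CONDOR itself:

```python
# Toy CVRP with Google OR-Tools (pip install ortools): two vehicles of
# limited capacity must cover four delivery stops from a depot at node 0.
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

distance = [           # symmetric travel costs between depot and stops
    [0, 4, 6, 8, 5],
    [4, 0, 3, 7, 6],
    [6, 3, 0, 2, 4],
    [8, 7, 2, 0, 3],
    [5, 6, 4, 3, 0],
]
demands = [0, 2, 3, 4, 2]   # packages per stop; the depot demands none
vehicle_caps = [6, 6]       # each vehicle carries at most six packages

manager = pywrapcp.RoutingIndexManager(len(distance), len(vehicle_caps), 0)
routing = pywrapcp.RoutingModel(manager)

transit = routing.RegisterTransitCallback(
    lambda i, j: distance[manager.IndexToNode(i)][manager.IndexToNode(j)])
routing.SetArcCostEvaluatorOfAllVehicles(transit)

demand_cb = routing.RegisterUnaryTransitCallback(
    lambda i: demands[manager.IndexToNode(i)])
routing.AddDimensionWithVehicleCapacity(
    demand_cb, 0, vehicle_caps, True, "Capacity")

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = (
    routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
# Local search then refines the initial routes, echoing the techniques
# Qualizza lists above.
params.local_search_metaheuristic = (
    routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH)
params.time_limit.FromSeconds(1)

solution = routing.SolveWithParameters(params)
if solution:
    for v in range(len(vehicle_caps)):
        idx, route = routing.Start(v), []
        while not routing.IsEnd(idx):
            route.append(manager.IndexToNode(idx))
            idx = solution.Value(routing.NextVar(idx))
        print(f"vehicle {v}: {route} -> depot")
```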
“We can enable carriers to deliver more packages to more customers on time, while reducing miles driven and carbon emissions from fuel,” Qualizza said. “That is the essence of CONDOR; it revisits all those decisions and finds those opportunities for us to further delight customers.”
For humans, finding and fetching a bottle of ketchup from a cluttered refrigerator without toppling the milk carton is a routine task. For robots, this remains a challenge of epic complexity.
At Amazon, scientists are addressing this challenge by teaching robots to understand cluttered environments in three dimensions, locate specific items, and safely retrieve them using a move called the pinch grasp — that unique thumb-and-finger hold that many people take for granted.
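As a toy illustration of what ranking pinch-grasp candidates over a 3D view of an item could involve, the sketch below scores a two-finger grasp by how well the item fits between the fingers along the closing axis. The scoring rule is invented for illustration and is not Amazon’s method:

```python
import numpy as np

def pinch_grasp_score(points, center, approach, max_width=0.08):
    """Toy score for a two-finger (pinch) grasp candidate.

    points   -- Nx3 point cloud of the target item, in meters
    center   -- 3-vector where the fingertips would close
    approach -- unit 3-vector along which the fingers close
    Higher scores mean the item is narrow along the closing axis and
    its material sits centered between the fingertips.
    """
    spans = (points - center) @ approach       # extent along closing axis
    width = spans.max() - spans.min()
    if width > max_width:
        return 0.0                              # wider than the jaw opens
    balance = max(1.0 - abs(spans.mean()) / (max_width / 2), 0.0)
    return balance * (1.0 - width / max_width)

# Toy usage: a box-shaped cloud graded along its narrow and wide axes.
rng = np.random.default_rng(1)
cloud = rng.uniform([-0.02, -0.05, 0.0], [0.02, 0.05, 0.1], size=(500, 3))
c = cloud.mean(axis=0)
print(pinch_grasp_score(cloud, c, np.array([1.0, 0.0, 0.0])))  # fits: > 0
print(pinch_grasp_score(cloud, c, np.array([0.0, 1.0, 0.0])))  # too wide: 0
```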
“In robotics, we don’t have the mechanical ability of a five-finger dexterous hand,” said Aaron Parness, a senior manager for applied science at Amazon Robotics AI. “But we are starting to get some of the ability to reason and think about how to grasp. We’re starting to catch up. Where pinch-grasping is really interesting is taking something mechanically simple and making it highly functional.”
Learn how the pinch grasping robot achieved a ten-fold reduction in damage to items such as books and boxes in tests.
Amazon’s Supply Chain Optimization Technologies (SCOT) organization is responsible for computing the delivery promises Amazon Store customers see when ordering, forecasting demand for its hundreds of millions of products, deciding which products to stock and in what quantities, allocating stock to warehouses and fulfillment centers (FCs) in anticipation of regional customer needs, offering markdown pricing when necessary, working out how to consolidate customer orders for maximum efficiency, coordinating inbound and inventory management from millions of sellers worldwide, and so much more.
“At SCOT, using science and technology to optimize the supply chain is not just an enabler, it's our core focus,” says Ashish Agiwal, vice president, Fulfillment Optimization.
Learn how the SCOT team has evolved over time to meet a challenge of staggering complexity.
The rate of innovation in machine learning is simply off the charts — what is possible today was barely on the drawing board even a handful of years ago. At Amazon, this has manifested in a robotic system that can not only identify potential space in a cluttered storage bin, but also sensitively manipulate that bin’s contents to create that space before successfully placing additional items inside.
“Robots and people work together in a hybrid system,” said Aaron Parness, Robotics AI senior manager of applied science. “Robots handle repetitive tasks and easily reach the high and low shelves. Humans handle more complex items that require intuition and dexterity. The net effect will be more efficient operations that are also safer for our workers.” Robots and humans working side by side, he added, is key to the long-term expansion of this technology beyond retail.
“Think of robots loading delicate groceries or, longer term, loading dishwashers or helping people with tasks around the house. Robots with a sense of force in their control loop is a new paradigm in compliant-robotics applications.”
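A minimal sketch of what putting a sense of force in the control loop can mean, assuming a hypothetical Gripper interface: the fingers close until they feel a target force rather than closing to a fixed width:

```python
import time

class Gripper:
    """Hypothetical hardware interface; real drivers expose similar calls."""
    def read_force(self) -> float: ...     # measured fingertip force, newtons
    def step_closed(self, mm: float): ...  # close the fingers a tiny increment

def compliant_close(gripper, target_force_n=2.0, step_mm=0.5, rate_hz=100.0):
    """Close until the fingertips feel the target force, not a fixed width.

    Because contact, not geometry, terminates the motion, the same loop
    can pick up a carton of eggs or a hardcover book without crushing
    either, which is the point of force-aware, compliant control.
    """
    period = 1.0 / rate_hz
    while gripper.read_force() < target_force_n:
        gripper.step_closed(step_mm)
        time.sleep(period)   # hold the loop to a fixed control rate
```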
Learn how Amazon Robotics researchers achieved a result that, until recently, was impossible.
When an item comes into an Amazon fulfillment center, employees use barcodes to verify its identity at several different points along its journey to a delivery vehicle. Each time, the item has to be picked up and the barcode located and scanned. Sometimes, the barcode is damaged or even missing.
That process is repeated millions of times across a massive catalogue of items of varying shapes and sizes, and it can’t easily be automated. Right now, there isn’t a robot versatile enough to manipulate any item that may come into a warehouse and then scan it.
The solution? Augment, or even do away with, the barcode. Better still, eliminate the reliance on awkward and inefficient manual item identification altogether.
That’s what Amazon is researching using multimodal identification, or MMID. This process uses multiple modalities of information — for example, extracting the appearance and dimensions of an item from an image of that item — to automate identification.
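A toy sketch of the idea, with an invented two-item catalog: the item’s measured dimensions cheaply prune implausible candidates, then similarity between visual embeddings picks the best surviving match. None of the names or thresholds come from Amazon’s system:

```python
import numpy as np

# Invented catalog: each entry pairs a visual embedding with known dimensions.
CATALOG = {
    "item-A": {"embedding": np.array([0.9, 0.1, 0.2]), "dims_cm": (20, 13, 3)},
    "item-B": {"embedding": np.array([0.1, 0.8, 0.3]), "dims_cm": (30, 22, 25)},
}

def identify(item_embedding, item_dims_cm, dim_tol_cm=2.0):
    """Fuse two modalities: appearance (embedding) plus measured dimensions."""
    best, best_sim = None, -1.0
    for name, entry in CATALOG.items():
        if any(abs(a - b) > dim_tol_cm
               for a, b in zip(item_dims_cm, entry["dims_cm"])):
            continue  # dimensions alone rule this candidate out
        sim = float(item_embedding @ entry["embedding"]
                    / (np.linalg.norm(item_embedding)
                       * np.linalg.norm(entry["embedding"])))
        if sim > best_sim:
            best, best_sim = name, sim
    return best

print(identify(np.array([0.88, 0.15, 0.18]), (20, 12, 3)))  # -> item-A
```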
Learn how Amazon researchers are working to eliminate the need for barcodes.