Introduced in January 2019, Amazon’s Scout delivery robot is now slowly shuttling around four areas in the United States: Snohomish County, Wash.; Irvine, Calif.; Franklin, Tenn.; and Atlanta, Ga. The electrically powered, cooler-sized delivery system is designed to find its way along sidewalks and navigate around pets, people, and a wide variety of other things it encounters while delivering packages to customers’ homes.
To deploy a fleet of fully autonomous delivery robots, Scout must manage changing weather conditions, variations in terrain, unexpected obstacles — a nearly infinite range of variables.
To better understand how Amazon Scout is working to meet those challenges, Amazon Science recently spoke with three scientists who are currently — or were formerly — professors in the robotics field, and now are working on critical components of the service. They are focusing on giving Amazon Scout the tools it needs to navigate to customers by helping the delivery robot see and understand what’s going on around it and giving it an accurate picture of the physical world.
Navigation: Where should Scout go?
Paul Reverdy, an applied scientist, is a relative newcomer to the Scout project, joining Amazon in July 2020. His background in helping automated systems such as robots work with people is extensive, including earning his PhD from Princeton University, his postdoctoral fellowship at the University of Pennsylvania, and his tenure as an assistant professor in aerospace and mechanical engineering at the University of Arizona.
As a key contributor to Scout’s ability to find its way around a neighborhood, Reverdy has a big task. Traditional methods, such as relying on GPS signals, are not adequate to guide Scout, he says. They simply don’t offer enough detail, nor are they always available.
“Scout has to make a lot of decisions,” Reverdy said. “Some are pretty high level, such as deciding whether it should cross a street or not. Then there are very discrete decisions it must make, such as ‘Can I get through the gap between the hedge and the trash can?’”
That’s where navigation plays a role. Rather than sending a device into territory it doesn’t fully comprehend, Reverdy is creating detailed maps of the world Scout travels within to make sure Scout has the information it needs to plan and react to the world.
“There might be bumps on a sidewalk, or it might be raining, and the sidewalk looks different,” says Reverdy. “Or it could be a higher-level decision: ‘OK, the sidewalk is blocked. Do I try to maneuver into the street? Do I try to navigate around the obstacle?’”
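The article doesn’t describe Scout’s planner, but a decision like “the sidewalk is blocked — do I go around?” can be framed as a search over a map grid. Below is a minimal, illustrative sketch using breadth-first search; the grid, cell values, and function names are all invented for this example, not Amazon’s actual representation:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a toy sidewalk grid.
    0 = free cell, 1 = blocked (say, a trash can). Returns the
    path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # sidewalk fully blocked: a real robot would replan or wait

# A narrow sidewalk with an obstacle blocking the middle lane.
sidewalk = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
path = shortest_path(sidewalk, (1, 0), (1, 4))  # detours around (1, 2)
```

If `shortest_path` returns `None`, the “higher-level decision” Reverdy describes kicks in: the robot must consider options outside the current map, such as waiting or choosing a different route entirely.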
Scout also needs to figure these things out with a modest sensor array. “We have real-world constraints,” says Reverdy. “We need to be intelligent with our sensor data to make sure we perform.”
For Reverdy, the work with Amazon has been an interesting contrast to academia. “The thing that’s really different is working on large-scale software problems,” he says. “In academia you’re often working on your own. At Amazon, things are much more collaborative. Plus, the scale of problems we can look at is substantially larger.”
Perception: Giving Scout a view of the world
Another scientist playing a key role in giving Scout true independence is Hamed Pirsiavash, an Amazon visiting scientist and an assistant professor at the University of Maryland, Baltimore County, who works on computer vision and machine learning. His job is to help Scout see the world around it and understand what it is seeing or sensing.
“Scout needs to understand what a drivable area is, or what it means when it comes to a stoplight,” says Pirsiavash. “The goal is similar to self-driving cars, with the main difference that Scout mostly travels slowly on sidewalks.”
In some ways, that makes it easier for Scout to understand its environment. In other ways, the task of traversing neighborhood sidewalks is more difficult. Roads are somewhat more predictable — after all, they’re designed for cars. But sidewalks have more varied uses. “It’s a different environment from a street,” says Pirsiavash, “as we’re likely to encounter a variety of obstacles, from lawn and garden tools and skateboard ramps to outdoor furniture and toys.”
What makes Scout possible today are the big advances in computer vision and machine learning that have occurred in the past decade. “The field is advancing every day,” says Pirsiavash. “With large-scale data sets and vast computation now available, we’re able to build a robot that understands the world in a much more sophisticated way.”
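Scout’s actual perception stack uses learned models far beyond this, but the core idea of labeling what the robot sees — “drivable sidewalk here, obstacle there” — can be illustrated with a toy classifier that assigns each pixel to its nearest class prototype. The prototypes, colors, and class names below are invented for illustration:

```python
# Toy "semantic labeling": assign each pixel to the nearest class
# prototype in RGB space. Real systems learn these boundaries from
# large data sets; the prototypes here are hand-picked stand-ins.

PROTOTYPES = {
    "sidewalk": (128, 128, 128),  # grey concrete
    "grass":    (60, 140, 60),    # green lawn
    "obstacle": (30, 30, 30),     # dark bin or furniture
}

def classify_pixel(rgb):
    """Return the class whose prototype color is closest to rgb."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PROTOTYPES, key=lambda label: dist2(rgb, PROTOTYPES[label]))

def label_image(pixels):
    """pixels: 2D grid of RGB tuples -> 2D grid of class labels."""
    return [[classify_pixel(p) for p in row] for row in pixels]

# A 2x2 "camera frame": concrete, lawn, concrete, dark obstacle.
frame = [
    [(125, 130, 126), (58, 142, 61)],
    [(131, 127, 129), (28, 33, 29)],
]
labels = label_image(frame)
```

The hard part, as Pirsiavash notes, is that real sidewalk scenes don’t separate this cleanly — which is why large data sets and learned models replaced hand-built rules like these.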
For Pirsiavash, Amazon offers a chance to work on real-world, applied-science problems together with more theoretical academic challenges. “Scout has to manage some challenging situations,” Pirsiavash says. “We’ve had cases where a Scout has encountered a basketball hoop that fell across the sidewalk. And of course, people always put their trash bins in different places, and Scout must understand what is happening.”
“I’m really enjoying the work. It’s great to see the results of our work in the field and see how it can benefit people.”
Simulation: Building a virtual world for Scout
Airlines train pilots in simulators so they can learn in a digital jetliner before taking the controls of a real aircraft. Giving Scout the tools it needs to succeed is no different: Detailed simulators give Scout the chance to test its skills in a digital environment.
Benjamin Kunsberg calls it a “digital sandbox” for the robot. “We can give Scout a world with tremendous detail, down to individual blades of grass,” he says.
Kunsberg is an Amazon applied scientist who joined the Scout team in 2019, following four years as an assistant professor of applied mathematics at Brown University in Rhode Island. Previously, he earned his PhD in applied mathematics from Yale University, and a master’s degree in mathematics from Stanford University.
Creating a digital world is a challenging task. It must be accurate enough for Scout to really get a sense of the world, and even small shifts in daylight can have an impact on that. “Small differences not taken into account can make a big difference,” says Kunsberg. “There’s dust in the air, or sun glare.”
In a way, it’s a problem from the movie “The Matrix.” There, computers designed a virtual world. But how did they know if they got it right? “For some objects, you have no idea how accurate your digital simulation is,” says Kunsberg. “You have to work very hard to come up with benchmarks.”
In some cases, the simulation includes digital scenery similar to a video game. Engineers can add October leaves to a sidewalk, for instance, so Scout can learn that things have changed compared to April. In other cases, the Scout team uses actual photography for training, with team members then outlining and identifying key features to guide the robot’s decisions. That’s slow, but accurate, and can be combined with fully digital simulation to create an accurate view of the world.
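Varying a simulated world this way — different seasons, lighting, and weather for each training run — is commonly called domain randomization. The article doesn’t name Scout’s technique, so the sketch below is only a guess at the general pattern; every parameter name and range is invented:

```python
import random

# Sketch of domain randomization for a delivery-robot simulator:
# each generated scene varies conditions like those the article
# mentions (sun glare, dust, seasonal leaves, rain). All parameter
# names and ranges here are illustrative assumptions.

def random_scene(rng):
    return {
        "sun_angle_deg": rng.uniform(0, 180),     # time of day
        "glare":         rng.random() < 0.2,      # occasional sun glare
        "dust_density":  rng.uniform(0.0, 0.1),   # dust in the air
        "leaf_cover":    rng.choice([0.0, 0.0, 0.3, 0.8]),  # April vs. October
        "rain":          rng.random() < 0.15,     # wet, different-looking sidewalk
    }

rng = random.Random(42)  # fixed seed so training runs are reproducible
scenes = [random_scene(rng) for _ in range(1000)]
```

A model trained across a thousand such variations is less likely to be thrown off by the “small differences” Kunsberg describes, because it has never seen only one version of the world.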
Once that world is designed, Scout needs to be trained to understand it. That’s accomplished in part using neural networks — computer systems that recognize relationships among data through a process that, in part, mimics the human brain — an approach not available at this scale 10 years ago.
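At its smallest, a neural network is just layered “neurons,” each taking a weighted sum of its inputs and passing it through a nonlinearity. The toy network below has its weights set by hand to compute XOR — a relationship no single neuron can capture — purely to show the mechanism; in practice, weights like these are learned from data, not written down:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum, then sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def xor_net(x1, x2):
    """Two hidden neurons feeding one output neuron.
    Weights are hand-set for illustration, not learned."""
    h1 = neuron([x1, x2], [20, 20], -10)   # fires if x1 OR x2
    h2 = neuron([x1, x2], [-20, -20], 30)  # fires unless x1 AND x2
    out = neuron([h1, h2], [20, 20], -30)  # fires if both h1 AND h2
    return round(out)

results = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# results == [0, 1, 1, 0]: the XOR relationship among the inputs
```

Scout’s networks are vastly larger and are trained rather than hand-wired, but the building block — and the ability to capture relationships simpler models miss — is the same.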
Kunsberg has enjoyed the jump from academia to industry.
“This project involves a lot of ideas I had already been thinking about.
“I’ve been really impressed by the graphical engineers and software developers on our team. There’s really no equal in academia.”
What’s next for Scout?
It’s still Day One for Amazon Scout. The team is excited about the positive feedback from customers and results from field tests. The team expects to apply its learnings to keep moving forward on this new delivery system and on Amazon’s path to net zero carbon by 2040.
You can find out more about the team and available jobs here.