Swarm robotics involves scores of individual mobile robots that mimic the collective behavior demonstrated by animals. Certain robots, like the Bluebot pictured here, perform some of the same behaviors as a school of fish, such as aggregation, dispersion, and searching.
Courtesy of Radhika Nagpal, Harvard University

Schooling robots to behave like fish

Radhika Nagpal has created robots that can build towers without anyone in charge. Now she’s turned her focus to fulfillment center robots.

When Radhika Nagpal was starting graduate school in 1994, she and her future husband went snorkeling in the Caribbean. Nagpal, who grew up in a landlocked region of India, had never swum in the ocean before. It blew her away.

“The reef was super healthy and colorful, like being in a National Geographic television show,” she recalled. “As soon as I put my face in the water, this whole swarm of fish came towards me and then swerved to the right.”

Meet the Blueswarm
Blueswarm comprises seven identical miniature Bluebots that combine autonomous 3D multi-fin locomotion with 3D camera-based visual perception.

The fish fascinated her. As she watched, large schools of fish would suddenly stop or switch direction as if they were guided by a single mind. A series of questions occurred to her. How did they communicate with one another? What rules — think of them as algorithms — produced such complex group behaviors? What environmental prompts triggered their actions? And most importantly, what made collectives so much smarter and more successful than their individual members?

Radhika Nagpal is a professor of computer science at Harvard University’s Wyss Institute for Biologically Inspired Engineering and an Amazon Scholar.

Since then, Nagpal, a professor of computer science at Harvard University’s Wyss Institute for Biologically Inspired Engineering and an Amazon Scholar, has gone on to build swarming robots. Swarm robotics involves scores of individual mobile robots that mimic the collective behavior demonstrated by animals, e.g. how flocks of birds or schools of fish move together to achieve some end. The robots act as if they, too, were guided by a single mind, or, more precisely, a single computer. Yet they are not.

Instead, they follow a relatively simple set of behavioral rules. Without any external orders or directions, Nagpal’s swarms organize themselves to carry out surprisingly complex tasks, like spontaneously synchronizing their behavior, creating patterns, and even building a tower.

More recently, her lab developed swimming robots that performed some of the same behaviors as a school of fish, such as aggregation, dispersion, and searching. All without a leader.

Nagpal’s work demonstrates both how far we have come in creating self-organizing robot swarms that can perform tasks — and how far we still must go to emulate the complex tapestries woven by nature. It is a gap that Nagpal hopes to close by uncovering the secrets of swarm intelligence to make swarm robots far more useful.

Amorphous computing

The Caribbean fish sparked Nagpal’s imagination because she was already interested in distributed computing, where multiple computers collaborate to solve problems or transfer information without any single computer running the show. At MIT, where she had begun her PhD program, she was drawn to an offshoot of the field called amorphous computing. It investigates how limited, unreliable individuals — from cells to ants to fish — organize themselves to perform often complex tasks consistently without any hierarchies.

Amorphous computing was “hardware agnostic”: it sought the rules that guide such behavior in both living organisms and computer systems. It asked, for example, how identical cells in an embryo form all the organs of an animal, how ants find the most direct route to food, or how fish coordinate their movements. By studying nature, these computer scientists hoped to build computer networks that operated on the same principles.

After completing her doctoral work on self-folding materials inspired by how cells form tissues, Nagpal began teaching at Harvard. While there, she was visited by her friend James McLurkin, a pioneer in swarm robotics at MIT and iRobot.

“James is the one that got me into robot swarms by introducing me to all the things that ant and termite colonies do,” Nagpal said. “I got excited about how nature makes these complicated, distributed, mobile networks. James was developing robots that used similar principles to move around and work together. Those multi-robot systems became a new direction of my research.”

She was particularly taken by Namibian termites, which build large-scale nest mounds with multiple chambers and complex ventilation systems, often as high as 8 feet tall.

“As far as we know, there isn’t a blueprint or an a priori distribution between who’s doing the building and who is not. We know the queen does not set the agenda,” she explained. “These colonies start with hundreds of termites and expand their structure as they grow.”

The question fascinated her. “I have no idea how that works,” she said. “I mean, how do you create systems that are so adaptive?”

Finding the rules

Researchers have spent decades answering that question. One way, they found, is to act locally. Take, for example, a flock of geese at a pond. If one or two birds on the outside of the flock see a predator, they grow agitated and fly off, alerting the next nearest birds. The message percolates through the flock. Once a certain number of birds have “voted” to fly off, the rest follow without any hesitation. They are not following a leader, only reacting to the birds next to them.

How dynamic circle formation works

The same type of local behaviors could be used to make driverless vehicles safer. An autonomous vehicle, Nagpal explains, does not have to reason about all the other cars on the road, only the ones around it. By focusing on nearby vehicles, these distributed systems use less processing power without losing the ability to react to changes very quickly.

Such systems are highly scalable. “Instead of having to reason about everybody, your car only has to reason about its five neighbors,” Nagpal said. “I can make the system very large, but each individual’s reasoning space remains constant. That’s a traditional notion of scalable: the amount of processing per vehicle stays constant, but we’re allowed to increase the size of the system.”
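
That constant per-vehicle reasoning can be made concrete with a small sketch. This is a toy illustration, not an autonomous-driving algorithm: the decision rule consumes only a car's few nearest neighbors, so the reasoning per car stays fixed however large the fleet grows (in practice the neighbor lookup itself would use onboard sensing or spatial indexing).

```python
import math

def k_nearest(me, others, k=5):
    """Only the k closest vehicles enter this car's decision, no matter how big the fleet is."""
    return sorted(others, key=lambda p: math.dist(me, p))[:k]

def should_brake(me, others, safety_radius=2.0):
    """Local rule: slow down if any of the k nearest neighbors is inside the safety radius."""
    return any(math.dist(me, p) < safety_radius for p in k_nearest(me, others))

print(should_brake((0.0, 0.0), [(1.5, 0.5), (30.0, 2.0), (8.0, -4.0)]))  # True: one car is about 1.6 m away
```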

Another key to swarm behavior involves embodied intelligence, the idea that brains interact with the world through bodies that can see, hear, touch, smell, and taste. This is a type of intelligence, too, Nagpal argues.

“When you think of an ant, there is not a concentrated set of neurons there,” she said, referring to the ant’s 20-microgram brain. “Instead, there is a huge amount of awareness in the body itself. I may wonder how an ant solves a problem, but I have to realize that somehow having a physical body full of sensors makes that easier. We do not really understand how to think about that still.”

Local actions, scalable behavior, and embodied intelligence are among the factors that make swarms successful. In fact, researchers have shown that the larger a school of fish, the more successful it is at evading predators, finding food, and not getting lost.

“It’s almost like each individual fish acts like a distributed sensor,” Nagpal said. “Instead of me doing all the work, somebody on the left can say, ‘Hey, I saw something.’ When the group divides the labor so that some of us look out for predators while the rest of us eat, it costs less in terms of energy and resources than trying to eat and look out for predators all by yourself.

“What’s really interesting about large insect colonies and fish schools is that they do really complicated things in a decentralized way, whereas people have a tendency to build hierarchies as soon as we have to work together,” she continued. “There is a cost to that, and if we try to do that with robots, we replicate the whole management structure and cost of a hierarchy.”

So Nagpal set out to build robot swarms that worked without top-down organization.

Animal behavior

A typical process in Nagpal’s group starts by identifying an interesting natural behavior and trying to discover the rules that generate those actions. Sometimes, they are surprisingly simple.

Take, for example, some behaviors exhibited by Nagpal’s colony of 1,000 interactive robots, each the size of a quarter and each communicating wirelessly with its nearest neighbors. The robots will self-assemble into a simple line with a repeating color pattern based on only two rules: a motion rule that allows them to move around any stationary robots, and a pattern rule that tells them to take on the color of their two nearest neighbors.
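
One reading of those two rules can be sketched in a few lines of code. This is a toy under assumed details (a three-color palette, one-dimensional positions, robots joining the line one at a time), not the actual robot firmware.

```python
PALETTE = ["red", "green", "blue"]  # assumed repeating color pattern

def pattern_rule(nearest_neighbor_colors):
    """Adopt the color that continues the repeating pattern set by the nearest stopped robots."""
    if not nearest_neighbor_colors:
        return PALETTE[0]
    last = PALETTE.index(nearest_neighbor_colors[-1])
    return PALETTE[(last + 1) % len(PALETTE)]

def motion_rule(my_position, stopped_positions, step=1.0):
    """Keep moving along the line of stopped robots; halt just past its open end."""
    open_end = max(stopped_positions, default=0.0)
    return my_position + step if my_position <= open_end else my_position

print(pattern_rule(["green", "blue"]))  # "red": the cycle continues after blue
```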

Other combinations of simple rules spontaneously synchronize the blinking of robot lights, guide migrations, and get the robots to form the letter “K.”

Most impressively, Nagpal and her lab used a behavior found in termites, called stigmergy, to prompt self-organized robot swarms to build a tower. Stigmergy involves leaving a mark on the environment that triggers a specific behavior by another member of the group.

Stigmergy plays a role in how termites build their huge nests. One termite may sense that a spot would make a good place to build, so it puts down its equivalent of a mud brick. When a second termite comes along, the brick triggers it to place its brick there. As the number of bricks increases, the trigger grows stronger and other termites begin building pillars nearby. When they grow high enough, something triggers the termites to begin connecting them with roofs.

“The building environment has become a physical memory of what should happen next,” Nagpal said.
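
The idea that the structure itself carries the instructions can be sketched roughly as below. The grid, threshold, and actions are illustrative assumptions, not measurements of real termites or of Nagpal's robots; the point is only that each builder decides its next move by reading the marks earlier builders left behind.

```python
from collections import defaultdict

bricks = defaultdict(int)   # bricks placed at each grid cell: the shared "physical memory"
PILLAR_TRIGGER = 3          # assumed threshold at which a spot starts recruiting more builders

def builder_step(cell):
    """Decide what to do at a cell purely from what earlier builders left there."""
    x, y = cell
    if bricks[cell] >= PILLAR_TRIGGER:
        # Strong mark: reinforce a neighboring cell, so pillars rise next to one another.
        bricks[(x + 1, y)] += 1
        return "extend the pillar"
    # Weak or no mark: add a brick here, strengthening the trigger for the next builder.
    bricks[cell] += 1
    return "place a brick"
```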

Nagpal used that type of structural memory to prompt her robotic swarm to build a ziggurat tower. The instructions included a motion rule about how to move through the tower and a pattern rule about where to place the blocks. She then built small, block-carrying robots that constructed a smaller but no less impressive structure.

Her lab developed a compiler that could generate algorithms that would enable the robots to build specific types of structures — perhaps towers with minarets — by interacting with stigmergic physical memories. One day, algorithm-driven robots could move sandbags to shore up a levee in a hurricane or buttress a collapsed building. They could even monitor coral reefs, underwater infrastructure, and pipelines — if they could swim.

Schooling robofish

From the start, Nagpal wanted to build her own school of robotic fish, but the hardware was simply too clunky to make them practical. That changed with the advent of smartphones, with their low-cost, low-power processors, sensors, and batteries.

In 2018, she got her chance when she received an Amazon Machine Learning Research Award. This allowed her to build Blueswarm, a group of robotic fish that performed tasks like those she observed in the Caribbean years ago.

Each Bluebot is just four inches long, but it packs a small Raspberry Pi computer, two fish-eye cameras, and three blue LED lights. It also has a tail (caudal) fin for thrust, a dorsal fin to move up or down, and side fins (pectoral fins) to turn, stop, or swim backward.

Bluebots do not rely on Wi-Fi, GPS, or external cameras to share error-free position information. Instead, Nagpal wants to explore what behaviors are possible relying only on onboard cameras and local perception of one’s schoolmates.

How multi-behavior search works

Researchers, she explained, find it difficult to rely only upon local perception. It has been hard to tackle fundamental questions, such as how a robot visually detects other members of the swarm, how it parses that information, and what happens when one member moves in front of another. Limiting Bluebot sensing to local perception forces Nagpal and her team to think more deeply about what robots really need to know about their neighbors, especially when data is limited and imprecise.

Bluebots can mimic several fish-school behaviors by tracking the LED lights on neighboring robots. Using 3D cameras and simple algorithms, they estimate how far away each neighbor is from the apparent spacing of its lights. (The closer together the lights appear, the farther away the fish.)
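
That parenthetical can be unpacked with a simple pinhole-camera relation. The focal length and LED spacing below are made-up illustrative numbers, not Bluebot specifications; the sketch only shows why a smaller apparent separation between a neighbor's lights implies a more distant neighbor.

```python
def distance_from_led_spacing(pixel_separation, led_spacing_m=0.05, focal_length_px=800.0):
    """Pinhole model: apparent separation (px) = focal length (px) * LED spacing (m) / distance (m)."""
    return focal_length_px * led_spacing_m / pixel_separation

# The closer together the lights appear, the farther away the neighbor:
print(distance_from_led_spacing(100.0))  # 0.4 m
print(distance_from_led_spacing(25.0))   # 1.6 m
```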

Nagpal’s seven Bluebots form a circle (called milling) by turning right if there is a robot in front of them. If there is no robot, they turn left. After a few moments, the school will be swimming in a circle, a formation fish use to trap prey.
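
Stated as pseudocode, the milling rule is a single decision repeated by every robot. The field of view, turn size, and speed below are assumptions for illustration, not the published Blueswarm controller.

```python
import math
from dataclasses import dataclass

TURN = math.radians(15)   # assumed turning increment per decision step
SPEED = 0.05              # assumed forward step per decision (meters)

@dataclass
class Fish:
    x: float
    y: float
    heading: float  # radians

def robot_ahead(me, others, fov=math.radians(60), rng=1.0):
    """True if any neighbor lies inside an assumed forward-facing cone of view."""
    for o in others:
        dx, dy = o.x - me.x, o.y - me.y
        dist = math.hypot(dx, dy)
        if 0 < dist < rng:
            bearing = (math.atan2(dy, dx) - me.heading + math.pi) % (2 * math.pi) - math.pi
            if abs(bearing) < fov / 2:
                return True
    return False

def milling_step(me, others):
    """Turn right if a robot is ahead, otherwise turn left, then swim forward."""
    me.heading += -TURN if robot_ahead(me, others) else TURN
    me.x += SPEED * math.cos(me.heading)
    me.y += SPEED * math.sin(me.heading)
```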

They can also search for a flashing red target light. First, the school disperses within the tank. When a Bluebot finds the red LED, it begins to flash its own lights. This signals the nearest Bluebots to aggregate, followed by the rest. A single robot conducting the same search on its own would take significantly longer.
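
The search behavior reads naturally as a tiny state machine. The states and actions below are a schematic reading of the paragraph above, with assumed names, not the actual Blueswarm code.

```python
def search_step(state, sees_red_target, sees_flashing_neighbor):
    """Return (next_state, action) for one robot in the collective search."""
    if sees_red_target:
        return "found", "flash own LEDs to recruit the rest of the school"
    if state == "disperse" and sees_flashing_neighbor:
        return "aggregate", "swim toward the flashing neighbor"
    if state == "aggregate":
        return "aggregate", "keep swimming toward the flashing neighbor"
    return "disperse", "keep spreading out through the tank"

# A robot that has not found the target but sees a flashing schoolmate joins the aggregation:
print(search_step("disperse", sees_red_target=False, sees_flashing_neighbor=True))
```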

These behaviors are impressive for robots, but represent a small subset of fish school behaviors. They also take place in a static fish tank populated by only one school of robot fish. To go further, Nagpal wants to improve their sensors and perhaps use machine learning to discover new rules that could be combined to produce the aquatic equivalent of a tower.

In the end, though, Nagpal does not want to build a better fish. Instead, she wants to apply the lessons she has learned to real-world robots. She is doing just that during a sabbatical working at Amazon, which operates the largest fleet of robots — more than 200,000 units — in the world.

Practical uses

Nagpal had little previous experience working in industry, but she jumped at the chance to work with Amazon.

“There are few others with hundreds of robots moving around safely in a facility space,” she said. “And the opportunity to work on algorithms in a deployed system was very exciting."

“The other factor is that Amazon’s robots do a mix of centralized and decentralized decision-making," she continued. "The robots plan their own paths, but they also use the cloud to know more. That lets us ask: Is it better to know everything about all your neighbors all the time? Or is it better to only know about the neighbors that are closer to you?”

Her current focus is on sortation centers, where robots help route packages to shipping stations sorted by ZIP code. Not surprisingly, robots setting out from multiple points to dozens of different destinations require a degree of coordination. Amazon’s robots are already aware of other robots: if they see one, they will choose an alternate route. But what path, Nagpal asks, should they take? She wants to make sure those robots are making the most effective choices possible.

Cities already manage this. They limit access to some roads, change speed limits, and add one-way streets. Computer networks do it as well, rerouting traffic when packet delivery slows down.

Some of those concepts, such as one-way travel lanes, also work in sortation centers. They could act as stigmergic signals to guide robot behavior. She also believes there might be a way to create simple swarm behaviors that enable robots to react to advance data about incoming packages.

Once her sabbatical is over, Nagpal plans to return to the lab. She wants to keep working on her Bluebots, improving their vision, and turning them loose in environments that look more like the coral reef she went snorkeling in 25 years ago.

She is also dreaming of swarms of bigger robots for use in construction or trash collection.

“Maybe we could do what Amazon is doing, but do it outside,” she said. “We could have swarms of robots that actually do some sort of practical task. At Amazon, that task is delivery. But given Boston’s snowstorms, I think shoveling the sidewalks would be nice.”
