How a universal model is helping one generation of Amazon robots train the next

A new approach can cut the setup time required to develop vision-based machine learning solutions from six to twelve months down to just one or two.

A fundamental theme at Amazon is movement: obtaining a product ordered by a customer and moving it as quickly and efficiently as possible from its source to the customer’s doorstep.

This video shows robots moving packages around an Amazon fulfillment center.

That journey will often take a package through multiple warehouses and include loadings, unloadings, sortings, and routings. Human associates are crucial to this process and so, increasingly, are robotic manipulators. A rising star in this department is the Robin robotic arm, together with the computer vision system that makes it possible.

Robin’s visual-perception algorithms can identify and locate packages on a conveyor belt below it, for example, and even distinguish individual packages and their type within a cluttered pile.

This perceptive ability is known as segmentation, and it is central to the development of flexible and adaptive robotic processes for Amazon fulfillment centers. That’s because packages vary enormously in their dimensions and physical characteristics, moving amid an ever-changing mix of packages and against varying backdrops.
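
To make segmentation concrete, here is a minimal sketch of the task using an off-the-shelf pretrained model (torchvision’s Mask R-CNN). Amazon’s production models are not public, so the model choice, input filename, and confidence threshold below are illustrative assumptions, not Robin’s actual system.

```python
# Illustrative instance segmentation with a publicly available model.
# Not Amazon's system; it simply shows what segmentation produces: a
# per-pixel mask, a class label, and a confidence score for each object.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn,
    MaskRCNN_ResNet50_FPN_Weights,
)

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

image = read_image("conveyor_scene.jpg")  # hypothetical image of a cluttered pile
with torch.no_grad():
    prediction = model([preprocess(image)])[0]

# Each detection separates one instance from its neighbors in the pile.
for label, score, mask in zip(
    prediction["labels"], prediction["scores"], prediction["masks"]
):
    if score > 0.8:
        print(weights.meta["categories"][label], float(score), tuple(mask.shape))
```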

Robin is a maturing technology, but there is a constant simmering of new ideas just below the surface at Amazon, with teams of scientists and engineers across the Amazon Robotics AI group and beyond collaborating to develop AI-powered robotic solutions to improve warehouse efficiency. A new modeling approach aims to serve them all.

An abundance of packages — but not data

The initial challenge for these early-stage collaborations is often the same.

“The biggest problem that new project teams usually face is data scarcity,” says Cassie Meeker, an Amazon Robotics AI applied scientist, based in Seattle. Obtaining images relevant to a warehouse process of interest takes time and resources, but that’s just the beginning.

Cassie Meeker, an Amazon Robotics AI applied scientist, says she and her team started their quest to develop universal models by utilizing publicly available datasets to give their model basic classification skills.

“For some machine learning models, you must annotate each training image manually by drawing multiple polygons around the various packages in the picture,” Meeker explains. “It can take five minutes to annotate just one image if it’s cluttered.”
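
For readers unfamiliar with this style of annotation, here is what a single hand-drawn polygon might look like in the widely used COCO format, written as a Python dict; the coordinates, IDs, and class name are invented for illustration.

```python
# One COCO-style instance annotation (all values illustrative).
# "segmentation" holds flattened [x1, y1, x2, y2, ...] pixel coordinates
# traced around one package; a cluttered image needs at least one such
# polygon per package, which is why annotation takes minutes per image.
annotation = {
    "image_id": 42,
    "category_id": 1,  # e.g., a hypothetical "cardboard_box" class
    "segmentation": [
        [310.5, 120.0, 480.0, 118.5, 492.0, 260.0, 305.0, 265.5]
    ],
    "iscrowd": 0,
}
```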

The lack of task-specific training data means teams might base their perceptual models on just a few hundred images, says Meeker: “If they're lucky, they have a thousand. But even a thousand images aren’t a lot for training a model.”

Even when projects do have enough images, insufficient variety in the training data poses a challenge of its own.

“The production environment is typically very different to a prototyping environment, so when they go into the production phase on the warehouse floor, they will suddenly see all these things they've never seen before and that their perception system can’t identify,” says Meeker. “They could be setting themselves up for failure.”

This difficulty in obtaining data to train segmentation models is partly due to the very specific subject matter: packages. Many computer vision models are trained on enormous, publicly available datasets full of annotated imagery, including everything from aardvarks to zabaglione. A social media company might want to segment faces, dogs, or cats, because that’s what people have lots of pictures of.

“Many publicly available datasets are perfect for that,” says Meeker. “But at Amazon, we have such a specific application and annotation requirements. It just doesn’t translate well from cat pics.”

A ‘universal model’ for packages

In short, building a dataset big enough to train a demanding machine learning model requires time and resources, with no guarantee that the novel robotic process you are working toward will prove successful. This became a recurring issue for Amazon Robotics AI. So this year, work began in earnest to address the data scarcity problem. The solution: a “universal model” able to generalize to virtually any package segmentation task.

To develop the model, Meeker and her colleagues first used publicly available datasets to give their model basic classification skills — being able to distinguish boxes or packages from other things, for example. Next, they honed the model, teaching it to distinguish between many types of packaging in warehouse settings — from plastic bags to padded mailers to cardboard boxes of varying appearance — using a trove of training data compiled by the Robin program and half a dozen other Amazon teams over the last few years. This dataset comprised almost half a million annotated images.
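
The article does not disclose the universal model’s architecture or training code, but the two-stage recipe it describes (pretrain on public data for basic skills, then fine-tune on pooled warehouse imagery with package-specific classes) is a standard transfer-learning pattern. A minimal sketch, assuming a torchvision Mask R-CNN and a hypothetical label set:

```python
# A sketch of the two-stage recipe, not Amazon's actual training code.
import torch
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn,
    MaskRCNN_ResNet50_FPN_Weights,
)
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Stage 1: basic skills from a public dataset (COCO-pretrained weights).
model = maskrcnn_resnet50_fpn(weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT)

# Stage 2: swap the prediction heads for package-specific classes
# (hypothetical label set) and fine-tune on the pooled warehouse data.
num_classes = 4  # background, cardboard box, plastic bag, padded mailer
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
mask_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(mask_channels, 256, num_classes)

warehouse_loader = []  # stand-in for a DataLoader over the ~500k annotated images
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
model.train()
for images, targets in warehouse_loader:
    loss = sum(model(images, targets).values())  # dict of component losses
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```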

Crucially, these images of packages were snapped from a variety of angles — not only straight down from above a conveyor belt — and against a variety of backgrounds. The sheer number and variation of images make the dataset useful in virtually any warehouse location that may benefit from robotic perception and manipulation.

Meeker estimates that starting a project with the universal model can slash the setup time required to develop vision-based ML solutions from six to twelve months to just one or two. And it has been made available to other Amazon teams in a user-friendly form, so extensive machine learning expertise is not required.

The universal model has already demonstrated its prowess, courtesy of a project run by Amazon Robotics, called Cardinal. Cardinal is a prototype robotic arm-based system that perceives and picks up packages and places them neatly into large containers ready for transport on delivery trucks. Cardinal’s perception system was developed before the universal model was available, so the team spent a lot of time creating a bespoke training dataset for it, says Cardinal’s perception lead, Jeroen van Baar, an Amazon Robotics senior applied scientist, based in North Reading, Massachusetts.

This video shows Cardinal training itself to distinguish between package types.

“We trained the system using 25,000 annotated training images that we created ourselves. But those early training images were taken using a setup with a different appearance to our prototype Cardinal workstation,” van Baar says. “To achieve the performance that we initially desired, we had to fine-tune our model using a thousand new training images taken from that prototype setting.”

Updated with only those thousand new images, the universal model proved as accurate at Cardinal’s task as the bespoke model the team had built for its workstation.

“Had it been available sooner, I would only have captured data specific to our setup and fine-tuned the universal model from there,” says van Baar. “Being able to shorten training time so significantly is a major benefit.”
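
In code terms, the workflow van Baar describes might look like the sketch below: load the universal model’s weights, freeze the heavy backbone, and fine-tune only the lighter prediction heads on the small, setup-specific dataset. The checkpoint path and model structure are assumptions carried over from the earlier sketch, not a published interface.

```python
# Hypothetical adaptation of a universal checkpoint to a new workstation.
import torch

# Assumes a Mask R-CNN-style model saved as a whole object (illustrative path).
model = torch.load("universal_package_model.pt", weights_only=False)

# With only ~1,000 setup-specific images, freeze the backbone and update
# just the prediction heads; skipping most of the from-scratch training
# is what shortens setup so dramatically.
for param in model.backbone.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
# ...then run a short training loop over the new images, as in the
# earlier fine-tuning sketch.
```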

And that’s the point. The universal model can quickly capitalize on any training data produced by a new-project team. This means that when new ideas are tested on the warehouse floor, or existing methods are transplanted to a new Amazon region where things are done slightly differently, the model will have enough data diversity to handle the differences.

Siddhartha Srinivasa, director of Robotics AI, thinks of the universal model as a supportive scaffold that you can use to build your house.

“We're not advocating that everybody live in the same house,” he says. “We're advocating that Amazon teams leverage the scaffolding we're providing to build whatever house they want, because it’s already very powerful, and it is getting better every day.”

Tipping point

Only recently has all this become possible.

“The Robotics AI program is young,” says Meeker. “In the beginning, there was no reason to use other teams’ data, because no one had very much.” But a tipping point has arrived. “We now have enough mature teams in production that we are seeing a real diversity and scaling of data. It is finally generalizable.”

Indeed, while the immediate focus of the universal model is identifying and localizing various package types, diverse image data is now accumulating across a range of Amazon programs that cover more aspects of fulfillment centers.

The universal model now includes images of unpackaged items, too, allowing it to perform segmentation across a greater diversity of warehouse processes. Initiatives such as multimodal identification, which aims to identify items visually without needing to see a barcode, and the automated damage detection program are accruing product-specific data that could be fed into the universal model, as could the images captured on the fulfillment center floor by the autonomous robots that carry crates of products.

“We’re moving towards a situation in which even data collected by small projects run by interns can be fed into the universal base model, incrementally improving the productivity of the entire robot fleet,” says Srinivasa.

This diversity of data, and its aggregation, is particularly important for robotic perception at Amazon, given customers’ shifting needs, frequently changing Amazon packaging, and a commitment to sustainability that means shipping more items in their own unique packaging.

All of this increases the visual variety of products and packages, making it harder for robots to identify from an image where one package ends and another begins.

Feeding the universal model in this way and having it available to new teams will accelerate the experimentation and deployment of future robotic processes. The use of the universal model is factored into Amazon’s immediate operational plans.

“We’re not doing this because it's cool — though it really is cool — but because it is inevitable,” says Srinivasa.
