AWS VP of AI and data on computer vision research at Amazon

In his keynote address at CVPR, Swami Sivasubramanian considers the many ways that Amazon incorporates computer vision technology into its products and makes it directly available to Amazon Web Services’ customers.

At this year’s Computer Vision and Pattern Recognition Conference (CVPR) — the premier computer vision conference — Amazon Web Services’ vice president for AI and data, Swami Sivasubramanian, gave a keynote address titled “Computer vision at scale: Driving customer innovation and industry adoption”. What follows is an edited version of that talk.

Amazon has been working on AI for more than 25 years, and that includes our ongoing innovations in computer vision. Computer vision is part of Amazon’s heritage, ethos, and future — and today, we’re using it in many parts of the company.

Computer vision technology helps power our e-commerce recommendations engine on Amazon.com, as well as the customer reviews you see on our product pages. Our Prime Air drones use computer vision and deep learning, and the Echo Show uses computer vision to streamline customer interactions with Alexa. Every day, more than half a million vision-enabled robots assist with stocking inventory, filling orders, and sorting packages for delivery.

I’d like to take a closer look at a few such applications, starting with Amazon Ads.

Amazon Ads Image Generator

Advertisers often struggle to create visually appealing and effective ads, especially when it comes to generating multiple variations and optimizing for different placements and audiences. That’s why we developed an AI-powered image generation tool called Amazon Ads Image Generator.

With this tool, advertisers can input product images, logos, and text prompts, and an AI model will generate multiple versions of visually appealing ads tailored to their brands and messaging. The tool aims to simplify and streamline the ad creation process for advertisers, allowing them to produce engaging visuals more efficiently and cost effectively.

Examples of the types of ad variations generated by the Amazon Ads Image Generator.

To build the Image Generator, we used Amazon machine learning services such as Amazon SageMaker and Amazon SageMaker JumpStart, together with human-in-the-loop workflows that ensure high-quality and appropriate images. The architecture consists of modular microservices: separate components for model development, model registry, model lifecycle management, model selection, and job tracking across the service, as well as a customer-facing API.

Amazon One

In the retail setting, we’re reimagining identification, entry, and payment with Amazon One, a fast, convenient, and contactless experience that lets customers leave their wallets — and even their phones — at home. Instead, they can use the palms of their hands to enter a facility, identify themselves, pay, present loyalty cards or event tickets, and even verify their ages.

Amazon One is able to recognize the unique lines, grooves, and ridges of your palm and the pattern of veins just under the skin using infrared light. At registration, proprietary algorithms capture and encrypt your palm image within seconds. The Amazon One device uses this information to create your palm signature and connect it to your credit card or your Amazon account.

To ensure Amazon One’s accuracy, we trained it on millions of synthetically generated images with subtle variations, such as illumination conditions and hand poses. We also trained our system to detect fake hands, such as a highly detailed silicone hand replica, and reject them.
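
The idea of varying illumination across synthetic training images can be sketched in a few lines. This is an illustrative toy, not Amazon One's actual data pipeline: it treats a "palm image" as a grid of grayscale intensities and produces one variant per illumination gain.

```python
# Toy sketch (not Amazon's actual pipeline): generating synthetic
# training variants of a grayscale "palm image" by varying illumination.
def adjust_illumination(image, gain):
    """Scale pixel intensities by `gain`, clamping to the 0-255 range."""
    return [[min(255, max(0, round(p * gain))) for p in row] for row in image]

def make_variants(image, gains=(0.7, 1.0, 1.3)):
    """Produce one variant per illumination gain."""
    return [adjust_illumination(image, g) for g in gains]

palm = [[100, 150], [200, 250]]
variants = make_variants(palm)  # darkened, original, and brightened copies
```

A real pipeline would also vary hand pose, sensor noise, and occlusion, but the principle is the same: many controlled perturbations of one underlying identity.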

Examples of the types of synthetic images used to train the Amazon One model.

Protecting customer data and safeguarding privacy are foundational design principles with Amazon One. Palm images are never stored on-device. Rather, the images are immediately encrypted and sent to a highly secure zone in the Amazon Web Services (AWS) cloud, custom-built for Amazon One, where the customer’s palm signature is created.

Customers like Crunch Fitness are taking advantage of Amazon One and features like the membership linking capability, which addresses a traditional pain point for both customers and the fitness industry. Crunch Fitness announced that it was the first fitness brand to introduce Amazon One as an entry option for its members at select locations nationwide.

NFL Next Gen Stats

Twenty-five years ago, the height of innovation in NFL broadcasts was the superimposition of a yellow line on the field to mark the first-down distance. These types of on-screen fan experiences have come a long way since then, thanks in large part to AI and machine learning (ML) technologies.

For example, as part of our ongoing partnership with the NFL, we’re delivering Prime Vision with Next Gen Stats during Thursday Night Football to provide insights gleaned by tracking RFID chips embedded in players’ shoulder pads.

One of our most recent innovations is the Defensive Alerts feature shown below, which tracks the movements of defensive players before the snap and uses an ML model to identify “players of interest” most likely to rush the quarterback (circled in red). This unique capability came out of a collaboration between the Thursday Night Football producers, engineers, and our computer vision team.
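
As a rough intuition for how such a feature might work, consider a toy scorer over pre-snap tracking features. The real Next Gen Stats model, its features, and its weights are not public; everything below is invented for illustration.

```python
import math

# Illustrative only: a toy "player of interest" score from pre-snap
# tracking features. The real Next Gen Stats model is not public;
# the features and weights here are made up.
def rush_probability(speed_toward_los, distance_to_los):
    """Higher speed toward the line of scrimmage (LOS) and a shorter
    distance to it raise the score, squashed to (0, 1) with a logistic."""
    z = 0.8 * speed_toward_los - 0.5 * distance_to_los
    return 1.0 / (1.0 + math.exp(-z))

def players_of_interest(players, threshold=0.5):
    """Return the IDs of players whose pre-snap score exceeds the threshold."""
    return [pid for pid, speed, dist in players
            if rush_probability(speed, dist) > threshold]

# A linebacker creeping toward the line scores high; a deep corner does not.
alerts = players_of_interest([("LB1", 3.0, 1.0), ("CB2", 0.0, 5.0)])
```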

The new defensive-alert feature from NFL Next Gen Stats.

In recent months, Amazon Science has profiled a range of other Amazon computer vision projects, from Project P.I., a fulfillment center technology that uses generative AI and computer vision to help spot, isolate, and remove imperfect products before they’re delivered to customers, to Virtual Try-All, which enables customers to visualize any product in any personal setting.

But for now, I’d like to turn from Amazon products and services that rely on computer vision to the ways in which AWS puts computer vision technologies directly into our customers’ hands.

The AWS ML stack

At AWS, our mission is to make it easy for every developer, data scientist, and researcher to build intelligent applications and leverage AI-enabled services that unlock new value from their data. We do this with the industry’s most comprehensive set of ML tools, which we think of as constituting a three-layer stack.

At the top of the stack are applications that rely on large language models (LLMs), like Amazon Q, our generative-AI-powered assistant for accelerating software development and helping customers extract useful information from their data.

At the middle layer, we offer a wide variety of services that enable developers to build powerful AI applications, from our computer vision services and devices to Amazon Bedrock, a secure and easy way to build generative-AI apps with the latest and greatest foundation models and the broadest set of capabilities for security, privacy, and responsible AI.

And at the bottom layer, we provide high-performance, cost-effective infrastructure that is purpose-built for ML.

Let’s look at a few examples in more detail, starting with one of our most popular vision services: Amazon Rekognition.

Amazon Rekognition

Amazon Rekognition is a fully managed service that uses ML to automatically extract information from images and video files so that customers can build computer vision models and apps more quickly, at lower cost, and with customization for different business needs.

This includes support for a variety of use cases, from content moderation, which enables the detection of unsafe or inappropriate content across images and videos, to custom labels that enable customers to detect objects like brand logos. And most recently we introduced an anti-spoofing feature to help customers verify that only real users, and not spoofs or bad actors, can access their services.
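
In practice, a content moderation call returns a list of labels with confidence scores that an application filters against its own threshold. The sketch below works offline on a fabricated sample; the field names follow the documented shape of a Rekognition `DetectModerationLabels` response, but the values are invented.

```python
# Offline sketch: filtering a Rekognition DetectModerationLabels-style
# response by confidence. The sample below is fabricated; field names
# follow the documented API response shape.
def flagged_labels(response, min_confidence=80.0):
    """Return (name, confidence) pairs at or above the threshold."""
    return [(label["Name"], label["Confidence"])
            for label in response.get("ModerationLabels", [])
            if label["Confidence"] >= min_confidence]

sample = {"ModerationLabels": [
    {"Name": "Alcohol", "ParentName": "", "Confidence": 92.1},
    {"Name": "Tobacco", "ParentName": "", "Confidence": 54.3},
]}
high_confidence = flagged_labels(sample)  # only "Alcohol" clears the bar
```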

Amazon Textract

Amazon Textract uses optical character recognition (OCR) to convert images of text — whether from a scanned document, a PDF, or a photo of a document — into machine-encoded text. But it goes beyond traditional OCR by identifying not only each character and word but also the contents of fields in forms and information stored in tables.

For example, when presented with queries like the ones below, Textract can create specialized response objects by leveraging a combination of visual, spatial, and language cues. Each object assigns its query a short label, or “alias”. It then provides an answer to the query, the confidence it has in that answer, and the location of the answer on the page.
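
A simplified, offline sketch shows how an application might read those response objects. In a real `AnalyzeDocument` response, QUERY blocks link to QUERY_RESULT blocks through relationship IDs; the sample below is fabricated and keeps only the fields mentioned above (alias, answer, confidence).

```python
# Simplified, offline sketch of reading Textract query results.
# The sample blocks are fabricated; the structure (QUERY blocks linked
# to QUERY_RESULT blocks via relationship IDs) mirrors the documented
# response format.
def summarize_queries(blocks):
    """Map each query alias to its (answer text, confidence) pair."""
    queries = {b["Id"]: b for b in blocks if b["BlockType"] == "QUERY"}
    results = {b["Id"]: b for b in blocks if b["BlockType"] == "QUERY_RESULT"}
    out = {}
    for q in queries.values():
        for rel in q.get("Relationships", []):
            for rid in rel["Ids"]:
                r = results[rid]
                out[q["Query"]["Alias"]] = (r["Text"], r["Confidence"])
    return out

sample = [
    {"Id": "q1", "BlockType": "QUERY",
     "Query": {"Text": "What is the patient's name?", "Alias": "NAME"},
     "Relationships": [{"Type": "ANSWER", "Ids": ["r1"]}]},
    {"Id": "r1", "BlockType": "QUERY_RESULT",
     "Text": "Jane Doe", "Confidence": 98.2},
]
answers = summarize_queries(sample)
```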

An example of the outputs of a specialized Textract response object.

Amazon Bedrock

Finally, let’s look at how we’re enabling computer vision technologies with Amazon Bedrock, a fully managed service that makes it easy for customers to build and scale generative-AI applications. Tens of thousands of customers have already selected Amazon Bedrock as the foundation for their generative-AI strategies because it gives them access to the broadest selection of first- and third-party LLMs and foundation models. This includes models from AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Stability AI, as well as our own Titan family of models.

One of those models is the Titan Image Generator, which enables customers to produce high-quality, realistic images or enhance existing images using natural-language prompts. Amazon Science reported on the Titan Image Generator when we launched it last year at our re:Invent conference.

Responsible AI

We remain committed to the responsible development and deployment of AI technology, around which we made a series of voluntary commitments at the White House last year. To that end, we’ve launched new features and techniques such as invisible watermarks and a new method for assessing “hallucinations” in generative models.

By default, all Titan-generated images contain invisible watermarks, which are designed to help reduce the spread of misinformation by providing a discreet mechanism for identifying AI-generated images. AWS is among the first model providers to widely release built-in invisible watermarks that are integrated into the image outputs and are designed to be tamper-resistant.

Hallucination occurs when the data generated by a generative model do not align with reality, as represented by a knowledge base of “facts”. The alignment between representation and fact is referred to as grounding. In the case of vision-language models, the knowledge base to which generated text must align is the evidence provided in images. There is a considerable amount of work ongoing at Amazon on visual grounding, some of which was presented at CVPR.

One of the necessary elements of controlling hallucinations is to be able to measure them. Consider, for example, the following image-prompt pair and the output generated by a vision-language (VL) model. If the model extends its output with the highest-probability next word, it will hallucinate a fridge where the image includes none:

Input image, prompt, and output probabilities from a vision-language model.
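
The failure mode is easy to reproduce in miniature: a greedy decoder always emits the highest-probability next word, whether or not that word is grounded in the image. The probabilities below are invented for illustration.

```python
# Toy illustration of the failure mode above: greedy decoding picks the
# highest-probability next word even when it is not grounded in the image.
# The probabilities are invented for illustration.
def greedy_next_word(next_word_probs):
    """Pick the argmax token, as a greedy decoder would."""
    return max(next_word_probs, key=next_word_probs.get)

# Suppose the model has generated "a kitchen with a sink, a stove, and a"
# and the image contains no fridge:
probs = {"fridge": 0.41, "window": 0.33, "table": 0.26}
choice = greedy_next_word(probs)  # the ungrounded word wins
```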

Existing datasets for evaluating hallucinations typically consist of specific questions like “Is there a refrigerator in this image?” But at CVPR, our team presented a paper describing a new benchmark called THRONE, which leverages LLMs themselves to evaluate hallucinations in response to free-form, open-ended prompts such as “Describe what you see”.
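
The core idea behind evaluating free-form responses can be sketched as follows. THRONE itself uses LLMs to extract the objects a model mentions; simple keyword matching stands in for that step here, and the example inputs are invented.

```python
# Hedged sketch of free-form hallucination evaluation: extract the
# objects a model mentions in an open-ended description, then count
# those absent from the image's ground-truth annotations. THRONE uses
# LLMs for the extraction step; naive keyword matching stands in here.
def hallucination_rate(description, ground_truth_objects, vocabulary):
    """Fraction of mentioned objects that are not actually in the image."""
    words = [w.strip(".,") for w in description.lower().split()]
    mentioned = [obj for obj in vocabulary if obj in words]
    if not mentioned:
        return 0.0
    hallucinated = [obj for obj in mentioned
                    if obj not in ground_truth_objects]
    return len(hallucinated) / len(mentioned)

# Three objects mentioned, one (the fridge) absent from the image.
rate = hallucination_rate(
    "A kitchen with a sink, a stove, and a fridge",
    ground_truth_objects={"sink", "stove"},
    vocabulary={"sink", "stove", "fridge", "table"},
)
```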

In other work, AWS researchers have found that one reason modern transformer-based vision-language models hallucinate is that they cannot retain information about the input image: they progressively “forget” it as more tokens are generated and the context grows longer.

Recently, state space models have resurfaced ideas from the ’70s in a modern key, stacking dynamical models into modular architectures with arbitrarily long memory residing in their state. But that memory, much like human memory, grows lossier over time, so it cannot be used effectively for grounding. Hybrid models that combine state space models with attention-based networks (such as transformers) are also gaining popularity, given their high recall over longer contexts; new variants appear in the literature almost weekly.
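
"Fading memory" has a simple numerical interpretation. In the minimal scalar sketch below (a deliberately stripped-down stand-in for a real state space layer), the state evolves as x_t = a·x_{t-1} + u_t, so an input's trace in the state shrinks geometrically, by a factor of |a| per step, until it is effectively forgotten.

```python
# Minimal numerical sketch of "fading memory" in a linear state space
# model: the state is x_t = a * x_{t-1} + u_t, so an input's influence
# decays geometrically and early inputs are eventually forgotten.
def run_ssm(inputs, a=0.5):
    """Run the scalar recurrence over a sequence and return the final state."""
    x = 0.0
    for u in inputs:
        x = a * x + u
    return x

# An impulse at t=0 followed by zeros: its trace in the state shrinks
# by a factor of `a` at every subsequent step.
trace = run_ssm([1, 0, 0, 0])  # 0.5 ** 3 = 0.125
```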

At Amazon, we want to not only make the existing models available for builders to use but also empower researchers to explore and expand the current set of hybrid models. For this reason, we plan to open-source a class of modular hybrid architectures that are designed to make both memory and inference computation more efficient.

To enable efficient memory, these architectures use a more general elementary module that seamlessly integrates both eidetic (exact) and fading (lossy) memory, so the model can learn the optimal tradeoff. To make inference more efficient, we optimize core modules to run on the most efficient hardware — specifically, AWS Trainium, our purpose-built chip for training machine learning models.
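
One way to picture an "eidetic plus fading" module is a buffer that keeps the most recent values exactly while compressing everything older into a decaying running summary. The class below is a hedged toy under that interpretation; the actual open-sourced architectures are not described at this level of detail in the talk.

```python
# Hedged sketch of the "eidetic plus fading" idea: keep the last k
# values exactly (eidetic memory) and fold everything older into a
# geometrically decaying summary (fading memory). Purely illustrative.
class HybridMemory:
    def __init__(self, k=3, decay=0.5):
        self.k = k              # size of the exact (eidetic) window
        self.decay = decay      # fading rate for the lossy summary
        self.window = []        # exact recent values
        self.summary = 0.0      # lossy trace of older values

    def add(self, value):
        self.window.append(value)
        if len(self.window) > self.k:
            oldest = self.window.pop(0)
            # Values leaving the exact window fade into the summary.
            self.summary = self.decay * self.summary + oldest

mem = HybridMemory(k=2, decay=0.5)
for v in [1.0, 2.0, 3.0, 4.0]:
    mem.add(v)
# The two most recent values survive exactly; 1.0 and 2.0 persist only
# as a decayed trace in the summary.
```

In a trained model, the tradeoff between the exact window and the fading summary would be learned rather than fixed, which is the "optimal tradeoff" the text refers to.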

It's an exciting time for AI research, with innovations emerging at a breakneck pace. Amazon is committed to making those innovations available to our customers, both indirectly, in the AI-enabled products and services we offer, and directly, through AWS’s commitment to democratize AI.

Research areas

Related content

GB, London
Are you a MS or PhD student interested in a 2026 internship in the field of machine learning, deep learning, generative AI, large language models and speech technology, robotics, computer vision, optimization, operations research, quantum computing, automated reasoning, or formal methods? If so, we want to hear from you! We are looking for students interested in using a variety of domain expertise to invent, design and implement state-of-the-art solutions for never-before-solved problems. You can find more information about the Amazon Science community as well as our interview process via the links below; https://www.amazon.science/ https://amazon.jobs/content/en/career-programs/university/science https://amazon.jobs/content/en/how-we-hire/university-roles/applied-science Key job responsibilities As an Applied Science Intern, you will own the design and development of end-to-end systems. You’ll have the opportunity to write technical white papers, create roadmaps and drive production level projects that will support Amazon Science. You will work closely with Amazon scientists and other science interns to develop solutions and deploy them into production. You will have the opportunity to design new algorithms, models, or other technical solutions whilst experiencing Amazon’s customer focused culture. The ideal intern must have the ability to work with diverse groups of people and cross-functional teams to solve complex business problems. A day in the life At Amazon, you will grow into the high impact person you know you’re ready to be. Every day will be filled with developing new skills and achieving personal growth. How often can you say that your work changes the world? At Amazon, you’ll say it often. Join us and define tomorrow. 
Some more benefits of an Amazon Science internship include; • All of our internships offer a competitive stipend/salary • Interns are paired with an experienced manager and mentor(s) • Interns receive invitations to different events such as intern program initiatives or site events • Interns can build their professional and personal network with other Amazon Scientists • Interns can potentially publish work at top tier conferences each year About the team Applicants will be reviewed on a rolling basis and are assigned to teams aligned with their research interests and experience prior to interviews. Start dates are available throughout the year and durations can vary in length from 3-6 months for full time internships. This role may available across multiple locations in the EMEA region (Austria, Estonia, France, Germany, Ireland, Israel, Italy, Jordan, Luxembourg, Netherlands, Poland, Romania, Spain, South Africa, UAE, and UK). Please note these are not remote internships.
US, CA, Sunnyvale
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background, to build Generative Artificial Intelligence (GenAI) technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As an Applied Scientist with the AGI team, you will work with talented peers to support the development of GenAI algorithms and modeling techniques, to advance the state of the art with LLMs. Your work will directly impact our customers in the form of products and services that make use of speech and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate advances in GenAI. About the team The AGI team has a mission to push the envelope with GenAI in LLMs and multimodal systems, in order to provide the best-possible experience for our customers.
US, CA, Pasadena
The Amazon Web Services (AWS) Center for Quantum Computing (CQC) is a multi-disciplinary team of theoretical and experimental physicists, materials scientists, and hardware and software engineers on a mission to develop a fault-tolerant quantum computer. Throughout your internship journey, you'll have access to unparalleled resources, including state-of-the-art computing infrastructure, cutting-edge research papers, and mentorship from industry luminaries. This immersive experience will not only sharpen your technical skills but also cultivate your ability to think critically, communicate effectively, and thrive in a fast-paced, innovative environment where bold ideas are celebrated. Join us at the forefront of applied science, where your contributions will shape the future of Quantum Computing and propel humanity forward. Seize this extraordinary opportunity to learn, grow, and leave an indelible mark on the world of technology. Amazon has positions available for Quantum Research Science and Applied Science Internships in Santa Clara, CA and Pasadena, CA. We are particularly interested in candidates with expertise in any of the following areas: superconducting qubits, cavity/circuit QED, quantum optics, open quantum systems, superconductivity, electromagnetic simulations of superconducting circuits, microwave engineering, benchmarking, quantum error correction, etc. In this role, you will work alongside global experts to develop and implement novel, scalable solutions that advance the state-of-the-art in the areas of quantum computing. You will tackle challenging, groundbreaking research problems, work with leading edge technology, focus on highly targeted customer use-cases, and launch products that solve problems for Amazon customers. The ideal candidate should possess the ability to work collaboratively with diverse groups and cross-functional teams to solve complex business problems. 
A successful candidate will be a self-starter, comfortable with ambiguity, with strong attention to detail and the ability to thrive in a fast-paced, ever-changing environment. About the team Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Hybrid Work We value innovation and recognize this sometimes requires uninterrupted time to focus on a build. We also value in-person collaboration and time spent face-to-face. 
Our team affords employees options to work in the office every day or in a flexible, hybrid work model near one of our U.S. Amazon offices.
US, MA, N.reading
Amazon Industrial Robotics is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine cutting-edge AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at an unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic dexterous manipulation, locomotion, and human-robot interaction. This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. At Amazon Industrial Robotics we leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at an unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. We are pioneering the development of dexterous manipulation system that: - Enables unprecedented generalization across diverse tasks - Enables contact-rich manipulation in different environments - Seamlessly integrates low-level skills and high-level behaviors - Leverage mechanical intelligence, multi-modal sensor feedback and advanced control techniques. The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration. 
Key job responsibilities - Design and implement methods for dexterous manipulation with single and dual arm manipulation - Leverage simulation and real-world data collection to create large datasets for model development - Develop a hierarchical system that combines low-level control with high-level planning - Utilize state-of-the-art manipulation models and optimal control techniques - Collaborate effectively with multi-disciplinary teams to co-design hardware and algorithms for dexterous manipulation
US, MA, N.reading
Amazon Industrial Robotics is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine cutting-edge AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at an unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic dexterous manipulation, locomotion, and human-robot interaction. This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. At Amazon Industrial Robotics we leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at an unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. - We are pioneering the development of robotics dexterous hands that: - Enable unprecedented generalization across diverse tasks - Are compliant but at the same time impact resistant - Can enable power grasps with the same reliability as fine dexterity and nonprehensile manipulation - Can naturally cope with the uncertainty of the environment - Leverage mechanical intelligence, multi-modal sensor feedback and advanced control techniques. The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration. 
Key job responsibilities - Design and implement novel highly dexterous and reliable robotic dexterous hand morphologies - Develop parallel paths for rapid finger design and prototyping combining different actuation and sensing technologies as well as different finger morphologies - Develop new testing and validation strategies to support fast continuous integration and modularity - Build and test full hand prototypes to validate the performance of the solution - Create hybrid approaches combining different actuation technologies, under-actuation, active and passive compliance - Hand integration into rest of the embodiment - Partner with cross-functional teams to rapidly create new concepts and prototypes - Work with Amazon's robotics engineering and operations teams to grasp their requirements and develop tailored solutions - Document the designs, performance, and validation of the final system
US, CA, San Francisco
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Member of Technical Staff with a strong deep learning background, to build industry-leading Generative Artificial Intelligence (GenAI) technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As a Member of Technical Staff with the AGI team, you will lead the development of algorithms and modeling techniques, to advance the state of the art with LLMs. You will lead the foundational model development in an applied research role, including model training, dataset design, and pre- and post-training optimization. Your work will directly impact our customers in the form of products and services that make use of GenAI technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate advances in LLMs. About the team The AGI team has a mission to push the envelope in GenAI with LLMs and multimodal systems, in order to provide the best-possible experience for our customers.
GB, London
Are you a MS or PhD student interested in a 2026 Research Science Internship, where you would be using your experience to initiate the design, development, execution and implementation of scientific research projects? If so, we want to hear from you! Is your research in machine learning, deep learning, automated reasoning, speech, robotics, computer vision, optimization, or quantum computing? If so, we want to hear from you! We are looking for motivated students with research interests in a variety of science domains to build state-of-the-art solutions for never before solved problems You can find more information about the Amazon Science community as well as our interview process via the links below; https://www.amazon.science/ https://amazon.jobs/content/en/career-programs/university/science Key job responsibilities As a Research Science Intern, you will have following key job responsibilities; • Work closely with scientists and engineering teams (position-dependent) • Work on an interdisciplinary team on customer-obsessed research • Design new algorithms, models, or other technical solutions • Experience Amazon's customer-focused culture A day in the life At Amazon, you will grow into the high impact person you know you’re ready to be. Every day will be filled with developing new skills and achieving personal growth. How often can you say that your work changes the world? At Amazon, you’ll say it often. Join us and define tomorrow. 
Some more benefits of an Amazon Science internship include; • All of our internships offer a competitive stipend/salary • Interns are paired with an experienced manager and mentor(s) • Interns receive invitations to different events such as intern program initiatives or site events • Interns can build their professional and personal network with other Amazon Scientists • Interns can potentially publish work at top tier conferences each year About the team Applicants will be reviewed on a rolling basis and are assigned to teams aligned with their research interests and experience prior to interviews. Start dates are available throughout the year and durations can vary in length from 3-6 months. This role may available across multiple locations in the EMEA region (Austria, Estonia, France, Germany, Ireland, Israel, Italy, Luxembourg, Netherlands, Poland, Romania, Spain, UAE, and UK). Please note these are not remote internships.
IT, Turin
Are you a MS or PhD student interested in a 2026 internship in the field of machine learning, deep learning, generative AI, large language models, speech technology, robotics, computer vision, optimization, operations research, quantum computing, automated reasoning, or formal methods? If so, we want to hear from you! We are looking for students interested in using a variety of domain expertise to invent, design and implement state-of-the-art solutions for never-before-solved problems. You can find more information about the Amazon Science community as well as our interview process via the links below; https://www.amazon.science/ https://amazon.jobs/content/en/career-programs/university/science https://amazon.jobs/content/en/how-we-hire/university-roles/applied-science Key job responsibilities As an Applied Science Intern, you will own the design and development of end-to-end systems. You’ll have the opportunity to write technical white papers, create roadmaps and drive production level projects that will support Amazon Science. You will work closely with Amazon scientists and other science interns to develop solutions and deploy them into production. You will have the opportunity to design new algorithms, models, or other technical solutions whilst experiencing Amazon’s customer focused culture. The ideal intern must have the ability to work with diverse groups of people and cross-functional teams to solve complex business problems. A day in the life At Amazon, you will grow into the high impact person you know you’re ready to be. Every day will be filled with developing new skills and achieving personal growth. How often can you say that your work changes the world? At Amazon, you’ll say it often. Join us and define tomorrow. 
Some more benefits of an Amazon Science internship include:
• All of our internships offer a competitive stipend/salary
• Interns are paired with an experienced manager and mentor(s)
• Interns receive invitations to events such as intern program initiatives and site events
• Interns can build their professional and personal networks with other Amazon scientists
• Interns can potentially publish work at top-tier conferences each year
About the team
Applicants are reviewed on a rolling basis and are assigned to teams aligned with their research interests and experience prior to interviews. Start dates are available throughout the year, and durations vary from 3 to 6 months for full-time internships. This role may be available across multiple locations in the EMEA region (Austria, Estonia, France, Germany, Ireland, Israel, Italy, Jordan, Luxembourg, Netherlands, Poland, Romania, Spain, South Africa, UAE, and UK). Please note these are not remote internships.
US, WA, Redmond
Amazon Leo is Amazon’s low Earth orbit satellite broadband network. Its mission is to deliver fast, reliable internet to customers and communities around the world, and we’ve designed the system with the capacity, flexibility, and performance to serve a wide range of customers, from individual households to schools, hospitals, businesses, government agencies, and other organizations operating in locations without reliable connectivity.
Export Control Requirement: Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum.
We are searching for a senior manager with expertise in the spaceflight aerospace engineering domain of flight dynamics, including mission design of LEO constellations, trajectory, maneuver planning, and navigation. This role will be responsible for the research and development of the core spaceflight algorithms that enable the Amazon Leo mission, and will manage the team responsible for designing and developing flight dynamics innovations for evolving constellation mission needs.
Key job responsibilities
This position requires expertise in the simulation and analysis of astrodynamics models and spaceflight trajectories, as well as demonstrated achievement in managing technology research portfolios. A strong candidate will have demonstrated achievement in managing spaceflight Guidance, Navigation, and Control (GNC) engineering teams across the full mission lifecycle, including design, prototype development and deployment, and operations. Working with the Leo Flight Dynamics Research Science team, you will manage, guide, and direct staff to:
• Implement high-fidelity modeling techniques for analysis and simulation of large constellation concepts.
• Develop algorithms for station-keeping and constellation maintenance.
• Perform analysis in support of multidisciplinary trades within the Amazon Leo team.
• Formulate solutions to address collision avoidance and conjunction assessment challenges.
• Develop the functional requirements for the Leo ground system’s evolving Flight Dynamics System.
• Work closely with GNC engineers to manage on-orbit performance and develop flight dynamics operations processes.
About the team
The Flight Dynamics Research Science team is staffed with subject matter experts in various areas of the flight dynamics domain. It also includes a growing Position, Navigation, and Timing (PNT) team.
LU, Luxembourg
Are you an MS student interested in a 2026 internship in the field of machine learning, deep learning, generative AI, large language models, speech technology, robotics, computer vision, optimization, operations research, quantum computing, automated reasoning, or formal methods? If so, we want to hear from you! We are looking for a customer-obsessed Data Scientist Intern who can innovate in a business environment, building and deploying machine learning models to drive step-change innovation and scaling it to the EU/worldwide. If this describes you, come and join our data science teams at Amazon for an exciting internship opportunity. If you are insatiably curious and always want to learn more, then you’ve come to the right place. You can find more information about the Amazon Science community, as well as our interview process, via the links below:
https://www.amazon.science/
https://amazon.jobs/content/en/career-programs/university/science
Key job responsibilities
As a Data Science Intern, you will have the following key job responsibilities:
• Work closely with scientists and engineers to architect and develop new algorithms that implement scientific solutions to Amazon problems
• Work on an interdisciplinary team on customer-obsessed research
• Experience Amazon’s customer-focused culture
• Create and deliver machine learning projects that can be quickly applied, starting locally and scaling to the EU/worldwide
• Build and deploy machine learning models using large datasets and cloud technology
• Create technical papers and presentations and share them with audiences of varying levels
• Define metrics and design algorithms to estimate customer satisfaction and engagement
A day in the life
At Amazon, you will grow into the high-impact person you know you’re ready to be. Every day will be filled with developing new skills and achieving personal growth. How often can you say that your work changes the world? At Amazon, you’ll say it often. Join us and define tomorrow.
Some more benefits of an Amazon Science internship include:
• All of our internships offer a competitive stipend/salary
• Interns are paired with an experienced manager and mentor(s)
• Interns receive invitations to events such as intern program initiatives and site events
• Interns can build their professional and personal networks with other Amazon scientists
• Interns can potentially publish work at top-tier conferences each year
About the team
Applicants are reviewed on a rolling basis and are assigned to teams aligned with their research interests and experience prior to interviews. Start dates are available throughout the year, and durations vary from 3 to 6 months for full-time internships. This role may be available across multiple locations in the EMEA region (Austria, France, Germany, Ireland, Israel, Italy, Luxembourg, Netherlands, Poland, Romania, Spain, and the UK). Please note these are not remote internships.