
The intersection of design and science

How a team of designers, scientists, developers, and engineers worked together to create a truly unique device in Echo Show 10.

During the prototyping stages of the journey that brought Echo Show 10 to life, the design, engineering, and science teams behind it encountered a surprise: one of their early assumptions was proving to be wrong.

The feature that most distinguishes the current generation from its predecessors is the way the device utilizes motion to automatically face users as they move around a room and interact with Alexa. This allows users to move around in the kitchen while consulting a recipe, or to move freely when engaging in a video call, with the screen staying in view.

Naturally, or so the team thought, users would want the device to remain facing them, matching where they were at all times. “You walk from the sink to the fridge, say, while you're using the device for a recipe, and the device moves with you,” said David Rowell, principal UX designer. Because no hardware existed, the team had to create a method of prototyping, so they turned to virtual reality (VR). That approach enabled Echo Show 10 teams to work together to test assumptions — including their assumption about how the screen should behave. In this case, what they experienced in VR made them change course.


“We had a paradigm that we thought worked really well, but once we tested it, we quickly discovered that we don't want to be one-to-one accurate,” said David Jara, senior UX motion designer. In fact, he said, the feedback led them to a somewhat unexpected conclusion: the device should actually lag behind the user. “Even though, from a pragmatic standpoint, you would think, ‘Well, this thing is too slow. Why can't it keep up?’, once you experienced it, the slowed down version was so much more pleasant.”

This was just one instance of the assumption-changing feedback and research that required a team of designers, engineers, software developers, and scientists to constantly iterate and adapt. Those teams spent many months hypothesizing, experimenting, learning, iterating, and ultimately creating Echo Show 10, which was released Thursday. Amazon Science talked to some of those team members to find out how they collaborated to tackle the challenges of developing a motorized smart display that pairs sound localization technology with computer vision models.

From idea to iteration

“The idea came from the product team about ways we could differentiate Echo Show,” Rowell said. “The idea came up about this rotating device, but we didn't really know what we wanted to use it for, which is when design came in and started creating use cases for how we could take advantage of motion.”

The design team envisioned a device that moved with users in a way that was both smooth and useful.

Adding motion to Echo Show was a really big undertaking. There were a lot of challenges, including how do we make sure that the experience is natural.
Dinesh Nair, applied science manager

That presented some significant challenges for the scientists involved in the project. “Adding motion to Echo Show was a really big undertaking,” said Dinesh Nair, an applied science manager in Emerging Devices. “There were a lot of challenges, including how do we make sure that the experience is natural, and not perceived as creepy by the user.”

Not only did the team have to create a motion experience that felt natural, they had to do it all on a relatively small device. “Building state-of-the-art computer vision algorithms that were processed locally on the device was the greatest challenge we faced,” said Varsha Hedau, applied science manager.

The multi-faceted nature of the project also prompted the teams to test the device in a fairly new way. “When the project came along, we decided that VR would be a great way to actually demonstrate Echo Show 10, particularly with motion,” Rowell noted. “How could it move with you? How does it frame you? How do we fine tune all the ways we want machine learning to move with the correct person?”

Behind each of those questions lay challenges for the design, science, and engineering teams. To identify and address those challenges, the far-flung teams collaborated regularly, even in the midst of a pandemic. “It was interesting because we’re spread over many different locations in the US,” Rowell said. “We had a lot of video calls and VR meant teams could very quickly iterate. There was a lot of sharing and VR was great for that.”

Clearing the hurdles

One of the first hurdles the teams had to clear was how to accurately and consistently locate a person.

“The way we initially thought about doing this was to use spatial cues from your voice to estimate where you are,” Nair said. “Using the direction given by Echo’s chosen beam, the idea was to move the device to face you, and then computer vision algorithms would kick in.”

The science behind Echo Show 10

A combination of audio and visual signals guide the device’s movement, so the screen is always in view. Learn more about the science that empowers that intelligent motion.

That approach presented dual challenges. Current Echo devices form beams in multiple directions and then choose the best beam for speech recognition. “One of the issues with beam selection is that the accuracy is plus or minus 30 degrees for our traditional Echo devices,” Nair observed. “Another is interference from noise and sound reflections, for example if you place the device in a corner or there is noise near the person.” The acoustic reflections were particularly vexing because they interfere with the direct sound from the person speaking, especially when the device is playing music. Traditional sound source localization algorithms are also susceptible to these problems.

The Audio Technology team addressed these challenges to determine the direction of sound by developing a new sound localization algorithm. “By breaking down sound waves into their fundamental components and training a model to detect the direct sound, we can accurately determine the direction that sound is coming from,” said Phil Hilmes, director of audio technology. That, along with other algorithm developments, led the team to deliver a sound direction algorithm that was more robust to reflections and interference from noise or music playback, even when it is louder than the person’s voice.
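The article doesn't detail the team's algorithm beyond this description. For intuition, a classical baseline for estimating delay between two microphones is generalized cross-correlation with phase transform (GCC-PHAT), which whitens the spectrum so that the direct-path peak stands out against reflections and playback. The sketch below is purely illustrative; the signal setup and constants are assumptions, not Amazon's implementation.

```python
import numpy as np

def gcc_phat_delay(sig, ref):
    """Estimate how many samples `sig` lags `ref` using GCC-PHAT.

    The phase transform divides out the magnitude spectrum, so strong
    reflections or loud music contribute less to the correlation peak
    than they would with plain cross-correlation.
    """
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-12                      # keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return int(np.argmax(np.abs(cc)) - max_shift)

# Simulate one microphone hearing the same noise-like signal 8 samples later.
rng = np.random.default_rng(0)
base = rng.standard_normal(2000)
ref = base[100:1700]   # signal at microphone A
sig = base[92:1692]    # same signal, delayed by 8 samples, at microphone B
print(gcc_phat_delay(sig, ref))  # → 8
```

Given the microphone spacing, a delay like this maps to an angle of arrival; the production system instead learns to isolate the direct sound, which is what makes it robust even when playback is louder than the voice.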

Rowell said, “When we originally conceived of the device, we envisioned it being placed in open space, like a kitchen island so you could use the device effectively from multiple rooms.” Customer feedback during beta testing showed this assumption ran into literal walls. “We found that people actually put the device closer to walls so the device had to work well in these positions.” In some of these more challenging positions, using only audio to find the direction is still insufficient for accurate localization and extra clues from other sensors are needed.

Echo Show 10, Charcoal, Living room.jpg
Echo Show 10 designers initially thought it would be placed in open space, like a kitchen island. Feedback during beta testing showed customers placed it closer to walls, so the teams adjusted.

The design team worked with the science teams so the device relied not just on sound, but also on computer vision. Computer vision algorithms allow the device to locate humans within its field of view, helping it improve accuracy and distinguish people from sounds reflecting off walls, or coming from other sources. The teams also developed fusion algorithms for combining computer vision and sound direction into a model that optimized the final movement.
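As a toy illustration of such fusion (an assumption for exposition, not the team's published model), one simple scheme is confidence-weighted averaging of unit vectors, which correctly handles the wraparound at 0°/360° that a naive average of angles gets wrong:

```python
import math

def fuse_directions(audio_deg, audio_conf, vision_deg, vision_conf):
    """Confidence-weighted fusion of two bearings, in degrees.

    Averaging unit vectors instead of raw angles avoids the wraparound
    bug where the mean of 350 and 10 degrees comes out as 180.
    Confidence values here are illustrative placeholders.
    """
    x = (audio_conf * math.cos(math.radians(audio_deg))
         + vision_conf * math.cos(math.radians(vision_deg)))
    y = (audio_conf * math.sin(math.radians(audio_deg))
         + vision_conf * math.sin(math.radians(vision_deg)))
    return math.degrees(math.atan2(y, x)) % 360.0

# Equal confidence: the fused bearing lands halfway between the estimates.
print(fuse_directions(0, 1.0, 90, 1.0))   # ≈ 45.0
# A confident camera detection pulls the result toward the vision bearing.
print(fuse_directions(0, 1.0, 90, 3.0))   # ≈ 71.6
```

A real fusion model would also weight by sensor reliability in context, for example trusting vision less in low light and audio less during music playback.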

That collaboration enabled the design team to work with the device engineers to limit the device’s rotation. “That approach prevented the device from turning and basically looking away from you or looking at the wall or never looking at you straight on,” Rowell said. “It really tuned in the algorithms and got better at working out where you were.”

The teams undertook a thorough review of every assumption made in the design phase and adapted based on actual customer interactions. That included the realization that the device’s tracking speed didn’t need to be slow so much as it needed to be intelligent.

“The biggest challenge with Echo Show 10 was to make motion work intelligently,” said Meeta Mishra, principal technical program manager for Echo Devices. “The science behind the device movement is based on fusion of various inputs like sound source, user presence, device placement, and lighting conditions, to name a few. The internal dog-fooding, coupled with the work from home situation, brought forward the real user environment for our testing and iterations. This gave us wider exposure of varied home conditions needed to formulate the right user experience that will work in typical households and also strengthened our science models to make this device a delight.”

Frame rates and bounding boxes

Responding to the user feedback about the preference for intelligent motion meant the science and design teams also had to navigate issues around detection. “Video calls often run at 24 frames a second,” Nair observed. “But a deep learning network that accurately detects where you are, those don't run as fast, they’re typically running at 10 frames per second on the device.”

That latency meant several teams had to find a way to bridge the difference between the frame rates. “We had to work with not just the design team, but also the team that worked on the framing software,” Nair said. “We had to figure out how we could give intermediate results between detections by tracking the person.”

By breaking down sound waves into their fundamental components and training a model ... we can accurately determine the direction that sound is coming from.
Phil Hilmes, director of audio technology

Hedau and her team helped deliver the answer in the form of bounding boxes and Kalman filtering, an algorithm that provides estimates of some unknown variables given the measurements observed over time. That approach allows the device to, essentially, make informed guesses about a user’s movement.
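A minimal version of that idea, assuming a constant-velocity model over a single bounding-box coordinate (the variable names and noise values below are illustrative, not the production filter):

```python
import numpy as np

class BoxTracker:
    """Constant-velocity Kalman filter over one bounding-box coordinate.

    The detector reports a measurement only every few frames; predict()
    fills in the frames between detections so motion stays smooth.
    """
    def __init__(self, x0, dt):
        self.x = np.array([x0, 0.0])                 # [position, velocity]
        self.P = np.eye(2) * 10.0                    # state uncertainty
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.Q = np.eye(2) * 0.01                    # process noise
        self.H = np.array([[1.0, 0.0]])              # we only measure position
        self.R = np.array([[1.0]])                   # detector noise

    def predict(self):
        """Advance the state one frame; use between detections."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def update(self, z):
        """Correct the state with a fresh detection `z`."""
        y = np.array([z]) - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P

# A person walking at a steady 5 px/frame: after a handful of detections,
# the filter's velocity estimate converges toward 5, so predict() can
# supply plausible positions on the frames the detector skips.
tracker = BoxTracker(0.0, dt=1.0)
for t in range(1, 11):
    tracker.predict()
    tracker.update(5.0 * t)
print(tracker.x[1])  # ≈ 5.0
```

In the 24 fps / 10 fps scenario described above, the filter would run predict() every video frame and update() only when a new detection arrives.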

During testing, the teams also discovered that the device would need to account for the manner in which a person interacted with it. “We found that when people are on a call, there are two use cases,” Rowell observed. “They're either very engaged with the call, where they’re close to the device and looking at the device and the other person on the other end, or they're multitasking.”

The solution was born, yet again, from collaboration. “We went through a lot of experiments to model which user experience really works the best,” Hedau said. Those experiments resulted in using the device’s computer vision to determine the distance between a person and Echo Show 10.

“We have settings based on the distance that the customer is from the device, which is a way to roughly measure how engaged a customer is,” Rowell said. “When a person is really up close, we don't want the device to move too much because the screen just feels like it's fidgety. But if somebody is on a call and multitasking, they're moving a lot. In this instance, we want smoother transitions.”
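One simple way to realize that behavior (with purely illustrative constants and names, not the shipped tuning) is to map viewer distance to a per-frame follow gain and apply it along the shortest rotation arc:

```python
def follow_gain(distance_m, near=0.5, far=2.0, g_min=0.05, g_max=0.3):
    """Map viewer distance to a per-frame follow gain.

    Up close, a small gain keeps the screen from feeling fidgety; farther
    away, a larger gain lets the device keep up with someone multitasking.
    All constants here are illustrative assumptions.
    """
    t = min(max((distance_m - near) / (far - near), 0.0), 1.0)
    return g_min + t * (g_max - g_min)

def step_toward(current_deg, target_deg, distance_m):
    """Rotate one frame toward the target bearing along the shortest arc."""
    err = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    return (current_deg + follow_gain(distance_m) * err) % 360.0

# A close viewer gets a gentle nudge; a distant one a larger correction.
print(step_toward(0.0, 90.0, 0.5))   # ≈ 4.5
print(step_toward(0.0, 90.0, 2.0))   # ≈ 27.0
```

Applied every frame, this converges smoothly on the target and naturally reproduces the lag-behind feel the team found more pleasant than one-to-one tracking.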

Looking to the future

The teams behind the Echo Show 10 are, unsurprisingly, already pondering what’s next. Rowell suggested that, in the future, the Echo Show might show a bit of personality. "We can make the device more playful," Rowell said. "We could start to express a lot of personality with the hardware." [Editor’s note: Some of this is currently enabled via APIs; certain games can “take on new personality through the ability to make the device shake in concert with sound effects and on-screen animations.”]

Nair said his team will also focus on making the on-device processing even faster. “A significant portion of the overall on-device processing is CV and deep learning,” he noted. “Deep networks are always evolving, and we will keep pushing that frontier.”

“Our teams are working continuously to further push the performance of our deep learning models in corner cases such as multiple people, low lighting, fast motions, and more,” added Hedau.

Whatever route Echo Show goes next, the teams behind it already know one thing for certain: they can collaborate their way through just about anything. “With Echo Show 10, there were a lot of assumptions we had when we started, but we didn’t know which would prove true until we got there,” Jara said. “We were kind of building the plane as we were flying it.”

US, CA, San Francisco
Are you interested in a unique opportunity to advance the accuracy and efficiency of Artificial General Intelligence (AGI) systems? If so, you're at the right place! We are the AGI Autonomy organization, and we are looking for a driven and talented Member of Technical Staff to join us to build state-of-the art agents. As an MTS on our team, you will design, build, and maintain a Spark-based infrastructure to process and manage large datasets critical for machine learning research. You’ll work closely with our researchers to develop data workflows and tools that streamline the preparation and analysis of massive multimodal datasets, ensuring efficiency and scalability. We operate at Amazon's large scale with the energy of a nimble start-up. If you have a learner's mindset, enjoy solving challenging problems and value an inclusive and collaborative team culture, you will thrive in this role, and we hope to hear from you. Key job responsibilities * Develop and maintain reliable infrastructure to enable large-scale data extraction and transformation. * Work closely with researchers to create tooling for emerging data-related needs. * Manage project prioritization, deliverables, timelines, and stakeholder communication. * Illuminate trade-offs, educate the team on best practices, and influence technical strategy. * Operate in a dynamic environment to deliver high quality software.
US, WA, Bellevue
This is currently a 12 month temporary contract opportunity with the possibility to extend to 24 months based on business needs. The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. 
A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, CA, Sunnyvale
Are you passionate about robotics and research? Do you want to solve real customer problems through innovative technology? Do you enjoy working on scalable research and projects in a collaborative team environment? Do you want to see your science solutions directly impact millions of customers worldwide? At Amazon, we hire the best minds in technology to innovate and build on behalf of our customers. Customer obsession is part of our company DNA, which has made us one of the world's most beloved brands. We’re looking for current PhD students with a passion for robotic research and applications to join us as Robotics Applied Scientist II Intern/Co-ops in 2026 to shape the future of robotics and automation at an unprecedented scale across. For these positions, our Robotics teams at Amazon are looking for students with a specialization in one or more of the research areas in robotics such as: robotics, robotics manipulation (e.g., robot arm, grasping, dexterous manipulation, end of arm tools/end effector), autonomous mobile robots, mobile manipulation, movement, autonomous navigation, locomotion, motion/path planning, controls, perception, sensing, robot learning, artificial intelligence, machine learning, computer vision, large language models, human-robot interaction, robotics simulation, optimization, and more! We're looking for curious minds who think big and want to define tomorrow's technology. At Amazon, you'll grow into the high-impact engineer you know you can be, supported by a culture of learning and mentorship. Every day brings exciting new challenges and opportunities for personal growth. By applying to this role, you will be considered for Robotics Applied Scientist II Intern/Co-op (2026) opportunities across various Robotics teams at Amazon with different robotics research focus, with internship positions available for multiple locations, durations (3 to 6+ months), and year-round start dates (winter, spring, summer, fall). 
Amazon intern and co-op roles follow the same internship structure. "Intern/Internship" wording refers to both interns and co-ops. Amazon internships across all seasons are full-time positions, and interns should expect to work in office, Monday-Friday, up to 40 hours per week typically between 8am-5pm. Specific team norms around working hours will be communicated by your manager. Interns should not have conflicts such as classes or other employment during the Amazon work-day. Applicants should have a minimum of one quarter/semester/trimester remaining in their studies after their internship concludes. The robotics internship join dates, length, location, and prospective team will be finalized at the time of any applicable job offers. In your application, you will be able to provide your preference of research interests, start dates, internship duration, and location. While your preference will be taken into consideration, we cannot guarantee that we can meet your selection based on several factors including but not limited to the internship availability and business needs of this role. About the team The Personal Robotics Group is pioneering intelligent robotic products that deliver meaningful customer experiences. We're the team behind Amazon Astro, and we're building the next generation of robotic systems that will redefine how customers interact with technology. Our work spans the full spectrum from advanced hardware design to sophisticated software and control systems, combining mechanical innovation, software engineering, dynamic systems modeling, and intelligent algorithms to create robots that are not just functional, but delightful. This is a unique opportunity to shape the future of personal robotics working with world-class teams pushing the boundaries of what's possible in robotic manipulation, locomotion, and human-robot interaction. 
Join us if you're passionate about creating the future of personal robotics, solving complex challenges at the intersection of hardware and software, and seeing your innovations deliver transformative customer experiences.