The science behind Echo Show 10

A combination of audio and visual signals guides the device’s movement, so the screen is always in view.

The first Echo Show represented an entirely new way to interact with Alexa; she could show you things on a screen controlled by voice. Being able to easily see your favorite recipe, watch your flash briefing, or video call with a friend is delightful — but we thought we could add even more to the experience. Our screens are stationary, but we are not. So with Echo Show 10, we asked ourselves: how can we keep the screen in view, no matter where you are in the room? The answer: it has to move.

Creating a device that can move intelligently in a way that improves the Alexa experience and is not distracting was no easy task. We had to consider when, where, and how to incorporate motion into Echo Show to make it feel like a natural extension of how customers experience Alexa.

Combining audio and computer vision algorithms

When you say “Alexa” to any Echo Show device today, you’ll see a blue light bar on screen. The lighter part of that bar approximates the direction on which the device chooses to focus; we call this beam selection. Echo devices try to select the beam that gives the best accuracy for recognizing what was said.
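
To make the idea concrete, here is a minimal sketch of beam selection, assuming the device scores a handful of fixed beamforming directions and picks the one with the highest speech-confidence score. The function name, the scores, and the eight-beam layout are illustrative assumptions, not the device's actual implementation.

```python
# Hypothetical sketch: pick the beam whose speech-confidence score is highest.
# Angles and scores are placeholders, not real device metrics.

def select_beam(beam_scores: dict[float, float]) -> float:
    """beam_scores maps a beam's steering angle (degrees) to a
    speech-confidence score; return the angle with the best score."""
    return max(beam_scores, key=beam_scores.get)

# Example: eight fixed beams around the device
scores = {0.0: 0.12, 45.0: 0.31, 90.0: 0.77, 135.0: 0.40,
          180.0: 0.08, 225.0: 0.05, 270.0: 0.22, 315.0: 0.10}
print(select_beam(scores))  # -> 90.0, roughly where the light bar brightens
```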

A cutaway view of Echo Show 10's motor (brass disc at bottom).

However, what works for beam selection doesn’t work as well for guiding motion. Noise, multiple talkers, or sound reflections from walls and other surfaces can prevent these algorithms from selecting the beam that best represents the direction of the talker. With audio-only output, that doesn’t matter: even if Echo’s input system has selected a different beam, the user still hears Alexa’s response. But a screen that constantly swiveled around in response to these echoes and noises would be a severe distraction.

With Echo Show 10, we solve this problem by combining sound source localization (SSL) with computer vision (CV). Our implementation of SSL uses acoustic-wave-decomposition and machine-learning techniques to determine the direction in which the user is most probably located. Then, the raw SSL measurements are fused with our CV algorithms.
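
As a rough illustration of what such a fusion could look like, the sketch below snaps an acoustic direction estimate to the nearest person the camera sees and rejects it when no person is nearby (for example, a reflection off a wall). The tolerance value, data structures, and function name are assumptions for illustration; the actual fusion is more sophisticated than this.

```python
# Illustrative fusion of a sound-source-localization (SSL) estimate with
# computer-vision (CV) person detections: trust the acoustic angle only if a
# person is seen near it; otherwise stay put. All values are assumptions.

def fuse_direction(ssl_angle_deg: float,
                   person_angles_deg: list[float],
                   tolerance_deg: float = 20.0) -> float | None:
    """Return the angle of the detected person closest to the SSL estimate,
    or None if no person is within tolerance (e.g., a wall reflection)."""
    if not person_angles_deg:
        return None
    closest = min(person_angles_deg, key=lambda a: abs(a - ssl_angle_deg))
    return closest if abs(closest - ssl_angle_deg) <= tolerance_deg else None

print(fuse_direction(95.0, [30.0, 100.0]))   # -> 100.0 (snap to the person)
print(fuse_direction(250.0, [30.0, 100.0]))  # -> None (likely a reflection)
```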

The CV algorithms can identify objects and humans in the field of view, enabling the device to differentiate between sounds coming from people and those coming from other sources and reflections off walls. Sometimes audio can reflect from behind the device, so we added a setup step in which customers set the device’s range of motion. If the device can ignore sounds originating outside its range of motion, it’s better able to avoid reflections and narrow down the direction of the wake word.
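
Here is a minimal sketch of that range-of-motion check, assuming directions are expressed as angles around the device and that the arc configured at setup may wrap past zero degrees. The numbers are hypothetical.

```python
# Sketch of the range-of-motion filter: wake-word directions outside the arc
# the customer configured at setup are discarded, which helps reject
# reflections from behind the device. All values are hypothetical.

def within_range_of_motion(angle_deg: float,
                           range_start_deg: float,
                           range_end_deg: float) -> bool:
    """True if the angle lies inside the configured arc (handles wrap past 0)."""
    angle = angle_deg % 360
    start, end = range_start_deg % 360, range_end_deg % 360
    if start <= end:
        return start <= angle <= end
    return angle >= start or angle <= end  # arc crosses 0 degrees

candidates = [20.0, 170.0, 340.0]
allowed = [a for a in candidates if within_range_of_motion(a, 300, 120)]
print(allowed)  # -> [20.0, 340.0]; 170.0 falls behind the device and is ignored
```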

The CV algorithms turn the camera image into hundreds of data points representing shapes, edges, facial landmarks, and general coloring; then the image is permanently deleted. These data points cannot be reverse-engineered to reconstruct the original image, and no facial-recognition technology is used. All of this processing happens in a matter of milliseconds, entirely on-device.

A visualization of the non-reversible process Echo Show 10 uses to convert images into a higher-level abstraction to support motion.
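
As a toy illustration of that abstraction step, the sketch below reduces a frame to a short list of numbers and then discards the frame, so only the abstraction remains. The real pipeline extracts far richer features (shapes, edges, facial landmarks); the coarse brightness grid here stands in for the idea of a non-reversible summary only.

```python
# Toy illustration: collapse a raw grayscale frame into a handful of numbers
# (coarse average brightness per region), then delete the frame itself.
# This is not the device's feature extractor; it only illustrates the idea.

def frame_to_datapoints(frame: list[list[int]], grid: int = 4) -> list[float]:
    """Collapse a 2-D grayscale frame into grid*grid average-brightness values."""
    h, w = len(frame), len(frame[0])
    points = []
    for gy in range(grid):
        for gx in range(grid):
            cells = [frame[y][x]
                     for y in range(gy * h // grid, (gy + 1) * h // grid)
                     for x in range(gx * w // grid, (gx + 1) * w // grid)]
            points.append(sum(cells) / len(cells))
    return points

frame = [[(x + y) % 256 for x in range(64)] for y in range(48)]
datapoints = frame_to_datapoints(frame)
del frame  # the original image is not kept; only the abstraction remains
print(len(datapoints), datapoints[:3])
```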

The device’s computer vision service (CVS) can dynamically vary the frame rate (the number of frames per second), and it operates with over 95% precision at distances of up to 10 feet. The CVS uses spatiotemporal filtering to suppress ephemeral false positives caused by camera motion and blur. In a multiuser environment, engagement detection — determining which user is facing the device — helps us further target the screen to the relevant user or users.
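
One simple form of temporal filtering is sketched below, under the assumption that a detection must persist across several consecutive frames before it counts; the window length and threshold are invented for illustration.

```python
# A hedged sketch of temporal filtering: a detection only "counts" once it has
# persisted for most of a short window of frames, which suppresses one-frame
# false positives caused by motion blur. Window and threshold are illustrative.

from collections import deque

class PersistenceFilter:
    def __init__(self, window: int = 5, required: int = 4):
        self.history = deque(maxlen=window)
        self.required = required

    def update(self, detected_this_frame: bool) -> bool:
        """Feed one frame's raw detection; return the filtered decision."""
        self.history.append(detected_this_frame)
        return sum(self.history) >= self.required

filt = PersistenceFilter()
raw = [False, True, False, True, True, True, True, False, True]
print([filt.update(d) for d in raw])
# -> the one-frame dropout near the end no longer toggles the filtered output
```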

Defining the experience

With our algorithms built, the next step was to orchestrate the ideal customer experience. We started by capturing data from internal beta participants and product teams. Amazon employees tested Echo Show 10 in their homes, and before the hardware was even ready, we used virtual reality to gather early input on which movements felt most natural, what speed of motion people preferred, and so on. What we learned was invaluable.

First, knowing when not to move is just as important as knowing when to move. We wanted customers to be able to manually redirect the screen. But that meant distinguishing between the pressure applied by someone scrolling through a recipe while making dinner and the force of someone physically trying to turn the device. The device also needed to know that if it turned in one direction and hit something — a wall, a cabinet, etc. — it should not keep pushing in that direction.

This required a motor resistance — or “back drive” — that could kick in, or not, depending on the user’s movement. A lot of fine-tuning went into getting that distinction and timing right.
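
A simplified sketch of that distinction follows, assuming the device can measure the torque on the motor and how long it is applied; the thresholds below are invented for illustration, and the real tuning is considerably more involved.

```python
# Illustrative classification of force on the screen: light, brief force
# (scrolling a recipe) is ignored, a sustained push is treated as a manual
# redirect, and resistance while the motor is driving is treated as an
# obstruction. The thresholds are hypothetical.

def classify_force(torque_nm: float, duration_s: float, motor_driving: bool) -> str:
    if motor_driving and torque_nm > 0.15:
        return "obstruction: stop and do not continue in this direction"
    if torque_nm > 0.30 and duration_s > 0.25:
        return "manual redirect: let the user turn the screen"
    return "touch interaction: hold position"

print(classify_force(0.05, 0.10, motor_driving=False))  # scrolling a recipe
print(classify_force(0.50, 0.60, motor_driving=False))  # user turning the device
print(classify_force(0.20, 0.05, motor_driving=True))   # bumped a cabinet mid-turn
```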

We also had to determine a speed and acceleration that felt natural. The motor allows us to accelerate at up to 360 degrees/second² to a speed of up to 180 degrees/second. However, at that speed, in a typical in-home environment, you risk knocking over a glass or a picture frame that might be near the device. Move too slowly, on the other hand, and you might try the customer’s patience — and even risk spurious stall detection. We settled on a speed that was quick but also allowed the device to stop short if it bumped an object.
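
To put some numbers on that trade-off, here is a small worked example that uses the hardware limits quoted above (up to 360 degrees/second² of acceleration and up to 180 degrees/second of speed) to compute how long a turn takes under a simple trapezoidal velocity profile. The profile itself is an assumption for illustration; the speed we actually shipped is lower and isn't specified here.

```python
# Worked sketch of motion-profile timing under the quoted hardware limits.
# Assumes a symmetric trapezoidal velocity profile, which is an illustration
# rather than the device's actual motion controller.

def turn_time(angle_deg: float, max_speed: float = 180.0,
              max_accel: float = 360.0) -> float:
    """Seconds to rotate angle_deg with capped acceleration and speed."""
    accel_angle = max_speed ** 2 / (2 * max_accel)  # degrees covered speeding up
    if angle_deg < 2 * accel_angle:                 # never reaches top speed
        return 2 * (angle_deg / max_accel) ** 0.5
    cruise = angle_deg - 2 * accel_angle
    return 2 * (max_speed / max_accel) + cruise / max_speed

print(round(turn_time(90.0), 2))                  # -> 1.0 s at the hardware limits
print(round(turn_time(90.0, max_speed=90.0), 2))  # -> 1.25 s at a gentler speed cap
```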

Lastly, we needed to define the types of movements that Echo Show 10 would make. As humans, we have an innate ability to know when to respond with just our eyes versus a full turn of the head. Echo Show 10, while not quite as adaptive as a human, approximates this distinction with three zones of perception, defined by the camera’s field of view.

Within the “dead” zone, the center of the field of view, the device doesn’t move, even if the customer does. Within the “holding” zone, the regions of the field of view outside the center, the device turns only if the customer settles into a new position for long enough. And when the customer enters the “motion” zone, at the edges of the field of view, the device moves to ensure that the screen remains visible.
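
A minimal sketch of those three zones, expressed as the user's horizontal offset from the center of the camera's field of view (0 at the center, 1 at the edge). The zone boundaries and the dwell time are invented for illustration; the shipped values come from the tuning described below.

```python
# Illustrative zone logic: stay put near the center, wait for the user to
# settle in the middle region, and turn immediately near the edges.
# Boundaries and the two-second dwell time are hypothetical.

DEAD_ZONE = 0.25      # inner region: never move
HOLDING_ZONE = 0.70   # middle region: move only after the user settles

def decide_motion(offset: float, seconds_settled: float) -> str:
    offset = abs(offset)
    if offset <= DEAD_ZONE:
        return "stay"                      # user near center of frame
    if offset <= HOLDING_ZONE:
        return "turn" if seconds_settled >= 2.0 else "wait"
    return "turn"                          # motion zone: keep the screen in view

print(decide_motion(0.10, 0.0))  # -> stay
print(decide_motion(0.50, 0.5))  # -> wait
print(decide_motion(0.50, 3.0))  # -> turn
print(decide_motion(0.90, 0.0))  # -> turn
```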

The range of these zones, their dependency on your distance from the device, and the device’s speed and acceleration are tuned based on thousands of hours of lab and user testing. There are also certain situations where Echo Show 10 will not move — for instance, if the built-in camera shutter is closed or if SSL cannot differentiate between sounds in two very different directions.

Applications

Echo Show stationed on a kitchen counter.

After solving these scientific challenges came the fun part: what are some of the first features that will use motion? Video calling is a hugely popular feature for Echo Show customers, so the use of auto-framing and motion in calling was obvious. Customers also tend to place Echo Show devices in kitchens and use Alexa for recipes, so not requiring a busy cook to strain to see a recipe on-screen was also top of mind.

And because customers love Alexa Guard for helping keep their homes safe while they are away, remote access to the camera was high on the list as well. When Away Mode is turned on, Echo Show 10 will periodically pan the room and send a Smart Alert if someone is detected in its field of view. You can also remotely check in on your home for added peace of mind if you are on a trip or to see if your dog has snuck onto the couch while you’re at the grocery store.
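
A hedged sketch of that Away Mode behavior: periodically sweep the configured range of motion and raise an alert when a person is detected. The pan schedule, step size, and the placeholder detector below are stand-ins for illustration, not the real service.

```python
# Illustrative Away Mode sweep: pan across the configured arc and raise an
# alert when the (placeholder) person detector fires. Timing and step size
# are assumptions; the real detector runs on-device as described above.

import random
import time

def detect_person_at(angle_deg: float) -> bool:
    """Placeholder for the on-device person detector."""
    return random.random() < 0.02

def away_mode_sweep(range_start: float = 0.0, range_end: float = 180.0,
                    step: float = 30.0) -> None:
    angle = range_start
    while angle <= range_end:
        if detect_person_at(angle):
            print(f"Smart Alert: person detected near {angle:.0f} degrees")
        angle += step
        time.sleep(0.1)  # stand-in for the real pan/settle timing

away_mode_sweep()  # in the product, this runs periodically while Away Mode is on
```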

In developing Echo Show 10, I have come to appreciate how complex, evolved, and adaptive we are as a species; the things we communicate with nonverbal cues are incredibly complex yet somehow globally understood. We believe that the potential of motion as a response modality is enormous, and we’re just scratching the surface of all the ways we can delight customers with Echo Show 10. For that reason, we’re inviting developers to build experiences for Echo Show 10, with motion APIs that they can use to unleash their creativity. To learn more about these new APIs, visit our developer blog.
