Amazon Halo Rise advances the future of sleep

Built-in radar technology, deep domain adaptation for sleep stage classification, and low-latency incremental sleep tracking enable Halo Rise to deliver a seamless, no-contact way to help customers improve sleep.

The benefits of quality sleep are well documented, and sleep affects nearly every aspect of our physical and emotional well-being. Yet one in three adults doesn’t get enough sleep. Given Amazon’s expertise in machine learning and radar technology innovation, we wanted to invent a device that would help customers improve their sleep by looking holistically at the factors that contribute to a good night’s rest.

That’s why we’re excited to announce that Amazon has unveiled its first dedicated sleep device — Halo Rise, a combined bedside sleep tracker, wake-up light, and smart alarm. Powered by custom machine learning algorithms and a suite of built-in sensors, Halo Rise accurately determines users’ sleep stages and provides valuable insights that can be used to optimize their sleep, including information about their sleep environments. Halo Rise has no sensors to wear, batteries to charge, or apps to open. And since a good wake-up experience is core to good sleep, Halo Rise features a wake-up light and smart alarm, designed to help customers start the day feeling rested and alert.

Halo Rise in action
A built-in radar sensor uses ultralow-power radio signals to sense respiration and movement patterns and determine sleep stages.

Designing with customer trust as our foundation

Customer privacy and safety are foundational to Halo Rise, and that's evident in both the hardware design and the technologies used to power the experience. Halo Rise features neither a camera nor a microphone and instead relies on ambient radar technology and machine learning to accurately determine sleep stages: deep, light, REM (rapid eye movement), and awake.

The technology at the core of Halo Rise is a built-in radar sensor that safely emits and receives an ultralow-power radio signal. The sensor uses phase differences between reflected signals at different antennas to measure movement and distance. Through on-chip signal processing, Halo Rise produces a discrete waveform corresponding to the user’s respiration. The device cannot detect noise or visual identifiers associated with an individual user, such as body images.
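For intuition, the relationship between a change in reflected-signal phase and the motion that caused it is the standard radar round-trip formula, d = λ·Δφ/(4π). The sketch below uses an assumed 5 mm wavelength purely for illustration; it is not a statement about Halo Rise's actual operating band.

```python
import math

def displacement_from_phase(delta_phi_rad, wavelength_m=0.005):
    """Convert a phase change in the reflected radar signal into displacement.

    d = wavelength * delta_phi / (4 * pi); the factor of 4 * pi accounts for
    the signal's round trip to the target and back. The 5 mm default
    wavelength is an illustrative assumption, not a Halo Rise specification.
    """
    return wavelength_m * delta_phi_rad / (4 * math.pi)
```

A full phase rotation of 4π corresponds to one wavelength of displacement, which is why millimeter-wave radars can resolve the sub-millimeter chest motion produced by breathing.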

Using built-in radar technology enables us to prioritize customer privacy while still delivering accurate measurements and useful results. Customers have the option to manually put Halo Rise into Standby mode, which turns off the device’s ability to detect someone’s presence or track sleep.

Halo Rise hardware design
Halo Rise features a suite of sensors to accurately track your sleep and measure your room’s temperature, humidity, and light levels. 

Intuitive and accurate experience

To design the sleep-tracking algorithm that powers Halo Rise, we thought about the most common bedtime behaviors and the ways in which customers and their families (pets included) might engage with the bedroom. This led us to innovate on five main technological fronts:

  • Presence detection: Halo Rise activates its sleep detection only when someone is in range of the sensor. Otherwise, the device remains in a monitoring mode, where no data is transmitted to the cloud.
  • Primary-user tracking: Halo Rise distinguishes the sleep of the primary user (the user closest to the device) from that of other people or pets in the same bed, even though the respiration signal cannot be associated with individual users.
  • Sleep intent detection: Halo Rise detects when the user first starts trying to sleep and distinguishes that attempt from other in-bed activities — such as reading or watching TV — to accurately measure the time it takes to fall asleep, an important indicator of sleep health.
  • Sleep stage classification: Halo Rise reliably correlates respiration-driven movement signals with sleep stages.
  • Smart-alarm integration: During the user’s alarm window, the Halo Rise smart alarm checks the user’s sleep stage every few minutes to detect light sleep, while also maximizing sleep duration.
A combination of breathing and movement patterns enables Halo Rise to determine the primary user for the sleep session and to measure that person’s sleep throughout the night.

Presence detection

Halo Rise has an easy setup process. To get started, a customer places Halo Rise on their bedside table facing their chest and notes in the Amazon Halo app which side of the bed they sleep on. That's it: Halo Rise is ready to go. The radar sensor detects motion within a 3-D geometric volume that fans out from the sensor, an area called the detection zone. Within this zone, the presence detection algorithm estimates the location of the bed and an “out-of-bed” area between the bed and the device.

On-chip algorithms detect the motion and location of respiration events within the detection zone. In both cases — motion and respiration — the algorithm evaluates the quality of the signals. On that basis, it computes a score indicating its confidence that the readings are reliable and a user is present. Only if the confidence score crosses a reliability threshold does Halo Rise begin streaming sensor data to the cloud, where it is processed by the primary-user-tracking algorithm.
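As a simplified sketch of how such a confidence gate might work (the weights and threshold below are invented for illustration; the production scoring logic is not public):

```python
def presence_confidence(motion_quality, resp_quality, w_motion=0.4, w_resp=0.6):
    """Blend motion and respiration signal quality (each in [0, 1]) into a
    single confidence score. The weights are illustrative assumptions."""
    return w_motion * motion_quality + w_resp * resp_quality

def should_stream(motion_quality, resp_quality, threshold=0.7):
    """Stream sensor data to the cloud only when the confidence score
    crosses the reliability threshold; otherwise stay in monitoring mode."""
    return presence_confidence(motion_quality, resp_quality) >= threshold
```

Below the threshold, nothing leaves the device; only once both signals look reliable does downstream processing begin.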

The Halo Rise detection zone is the region within which the radar sensor senses motion and location.

Primary-user tracking

We know that many of our customers share their beds, be it with other people or with pets, so our algorithms are designed to track the sleep of only the primary user. Halo Rise starts a sleep session after it detects someone’s presence within the detection zone for longer than five minutes. From there, the primary-user-tracking algorithm runs continuously in the background, sensing the closest user’s sleep stages. As long as the user sleeps on their side of the bed, and their partner sleeps on the other side, Halo Rise will track the primary user’s sleep quality irrespective of who comes to bed first and who leaves the bed last.

During the sleep session, Halo Rise dynamically monitors changes in the user’s distance from the sensor, the respiration signal quality, and abrupt changes in respiration patterns that indicate another person’s presence. These changes cause the algorithm to reassess whether it’s actually sensing the intended user and to ignore the data unrelated to the primary user. For instance, if the user gets into bed after their partner has already fallen asleep, or if they use the restroom in the middle of the night, Halo Rise detects that and adjusts the sleep results accordingly.
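A toy version of that reassessment trigger could look like the following; the field names and thresholds are hypothetical stand-ins for whatever signals the device actually monitors.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    distance_m: float       # estimated distance of the breathing source from the sensor
    resp_quality: float     # respiration signal quality in [0, 1]
    resp_rate_bpm: float    # breaths per minute

def needs_reassessment(prev: Reading, cur: Reading,
                       dist_jump=0.5, quality_floor=0.3, rate_jump=6.0):
    """Flag abrupt changes suggesting the sensor may no longer be tracking
    the primary user. All thresholds here are illustrative assumptions."""
    return (abs(cur.distance_m - prev.distance_m) > dist_jump
            or cur.resp_quality < quality_floor
            or abs(cur.resp_rate_bpm - prev.resp_rate_bpm) > rate_jump)
```

When the flag fires, the tracker would re-run its primary-user logic and discard any data attributed to the wrong person.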

Sleep intent detection

Another big algorithmic challenge we faced was determining when a user is quietly sitting in bed reading their Kindle or watching TV rather than trying to fall asleep. The time it takes to fall asleep (also known as sleep latency) is an important indicator of sleep health: an unusually short latency can be a sign of sleep deprivation, while an unusually long one may reflect difficulty winding down.

To address this problem, we used a combination of presence and primary-user tracking along with a machine-learning model trained and evaluated on tens of thousands of hours of sleep diaries to accurately identify when the user is trying to sleep. The model uses sensor data streamed from the device — including respiration, movement, and distance — to generate a sleep intent score. The score is then post-processed by a regularized change-point detection algorithm to determine when the user is trying to fall asleep or wake up.
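A minimal stand-in for this post-processing step is single change-point detection with a penalized squared-error cost. The production algorithm is more sophisticated, but the core idea of trading goodness of fit against a regularization penalty is the same:

```python
def best_change_point(scores, penalty=1.0):
    """Find the single index that best splits a sleep-intent score series,
    i.e. the split that most reduces squared error net of a penalty.
    Returns None if no split beats the penalty. A simplified stand-in for
    the regularized change-point detection described above."""
    def sse(xs):
        if not xs:
            return 0.0
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)

    best_t, best_cost = None, sse(scores)
    for t in range(1, len(scores)):
        cost = sse(scores[:t]) + sse(scores[t:]) + penalty
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```

On a score series that jumps from low to high, the detected index marks the moment the user plausibly started trying to sleep; on a flat series, the penalty suppresses spurious splits.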

A machine learning model trained on tens of thousands of hours of sleep data uses respiration, movement, and distance signals to generate a sleep intent score.

Sleep stage classification

Wearable health trackers like Halo Band and Halo View use heart rate and motion signals to determine sleep stages during the night, but Halo Rise uses respiration. To learn how to reliably recognize those stages, we needed to develop new machine learning models.

We pretrained a deep-learning model to predict sleep stages using a rich and diverse clinical dataset that included tens of thousands of hours of sleep collected by academic and research sources. The research included sleep data measured using the clinical gold standard, polysomnography (PSG). PSG studies use a large array of sensors attached to the body to measure sleep, including respiratory inductance plethysmography (RIP) sensors, whose output is analogous to the respiration data measured by Halo Rise.

Pretraining the model to predict sleep stages from RIP sensors enabled it to develop meaningful representations of the relationship between respiration and sleep prior to additional training on radar datasets collected alongside PSG. To collect radar training data for the models, we partnered with sleep clinics to conduct thousands of hours of PSG studies. Ultimately, this enables our models to classify sleep stages using just a built-in radar in the comfort of a customer’s home.
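The weight-reuse idea behind pretraining and fine-tuning can be shown with a toy model. Here a logistic-regression "model" is first trained on a stand-in for the RIP data, and training then continues from those pretrained weights on a smaller stand-in for the radar data. The actual system uses deep networks over respiration waveforms; this sketch only illustrates the recipe.

```python
import math

def train_logreg(X, y, w=None, lr=0.5, epochs=200):
    """Minimal SGD logistic regression. Passing pretrained weights `w` and
    continuing training is the essence of pretrain-then-fine-tune; the data
    and model here are toys, not anything the device actually uses."""
    n = len(X[0])
    w = list(w) if w is not None else [0.0] * n
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(n):
                w[j] += lr * (yi - p) * xi[j]
    return w

# "Pretrain" on a larger RIP-like dataset (features are [bias, signal]).
rip_X = [[1, 0.2], [1, 0.8], [1, 0.3], [1, 0.9]]
rip_y = [0, 1, 0, 1]
w_pre = train_logreg(rip_X, rip_y)

# "Fine-tune" briefly on a smaller radar-like dataset, starting from w_pre.
radar_X = [[1, 0.25], [1, 0.85]]
radar_y = [0, 1]
w_ft = train_logreg(radar_X, radar_y, w=w_pre, epochs=20)
```

Because the fine-tuning run starts from representations learned on the larger dataset, it needs far fewer examples and iterations to perform well on the new sensor.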

In the morning, customers can access a sleep hypnogram that provides a detailed breakdown of time spent in each sleep stage throughout the night.

A smarter wake-up experience

When woken naturally during a light sleep stage, people are most likely to feel rested, refreshed, and ready to tackle the day. Consequently, Halo Rise features a wake-up light, which gently simulates the colors and gradual brightening of a sunrise, and a smart alarm. Customers can set an audible smart alarm that is integrated with our sleep stage classification algorithms: ahead of the scheduled wake-up time, it monitors the user's sleep stages and wakes them during light sleep, at an ideal moment for getting up. This combination of wake-up light and smart alarm has been shown to increase cognitive and physical performance throughout the day.

The smart-alarm algorithms are designed around two factors: sensing when the user is in light sleep and maximizing the user's sleep duration. For the first component, Halo Rise continuously monitors sleep stages during the alarm window (the 30 minutes before a user's scheduled alarm) to identify when the user has entered a light sleep stage, known as the “wake window.”

In this phase, our algorithms work to sense “wakeable events,” such as a change in motion or breathing. This requires incrementally computing sleep stages so the alarm can be triggered with low latency. Unlike many sleep algorithms, Halo Rise does not require data from the entire sleep session to classify sleep stages, so predictions can be used to trigger the alarm directly as data is streamed.

For the second component, the system’s models are trained to predict the latest moment to trigger the alarm during the wake window. This ensures that as the user drifts between sleep stages, they are getting those crucial minutes of additional sleep before the alarm goes off.
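A greedy, streamable version of the wake-window logic might look like the sketch below. Note that it fires at the first light-sleep minute it sees, whereas the production models predict the latest safe moment in order to maximize sleep; only the stage labels and the 30-minute window come from the description above.

```python
def alarm_minute(stages, scheduled_idx, window=30):
    """Pick the minute to fire the alarm. `stages` holds one label per
    minute (e.g. "light", "deep", "rem", "awake"). Fires at the first
    light-sleep minute inside the alarm window, falling back to the
    scheduled time. Illustrative only; the real system predicts the
    latest safe light-sleep moment rather than taking the first."""
    start = max(0, scheduled_idx - window)
    for i in range(start, scheduled_idx + 1):
        if i < len(stages) and stages[i] == "light":
            return i
    return scheduled_idx
```

The fallback matters: if no light sleep is observed during the window, the alarm still fires at the scheduled time rather than letting the user oversleep.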

The Halo Rise wake-up light
Halo Rise identifies a “wake window” when the user is in light sleep, while also maximizing sleep duration before activating an audible smart alarm.

A solution you can trust

To evaluate our machine learning algorithms, we conducted thousands of hours of sleep studies, developed with input from leading sleep labs, comparing Halo Rise to PSG across more than a hundred sleepers. While sleep studies are typically conducted in dedicated labs, we performed PSG studies at participants' homes, under the supervision of registered PSG technologists, to test the device in naturalistic settings.

We used three different registered PSG technologists to reliably annotate ground truth sleep stages per the American Academy of Sleep Medicine’s scoring rules. We then compared Halo Rise’s outputs to the ground truth sleep data across 14 different sleep metrics — including time asleep, time awake, time to fall asleep, and accuracy for every 30 seconds — following analysis guidelines from a standardized framework for sleep stage classification assessment. This evaluation was supplemented by thousands of sleep diaries from our beta trials, expanding our evaluation to a diverse population of adults to account for variations in preferred sleep postures, age, body shapes, and other background conditions.
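The epoch-by-epoch comparison can be sketched as follows. Cohen's kappa is included because it is a common chance-corrected companion to raw agreement in sleep-staging studies, though the article does not enumerate all 14 of its metrics.

```python
from collections import Counter

def epoch_accuracy(pred, truth):
    """Fraction of 30-second epochs where the device and the PSG
    ground-truth annotation agree."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def cohens_kappa(pred, truth):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e), where p_e
    is the agreement expected from the two label distributions alone."""
    n = len(truth)
    po = epoch_accuracy(pred, truth)
    cp, ct = Counter(pred), Counter(truth)
    pe = sum(cp[k] * ct[k] for k in set(cp) | set(ct)) / (n * n)
    return (po - pe) / (1 - pe)
```

Kappa penalizes a classifier that achieves high raw accuracy simply by predicting the most common stage, which matters when stages like light sleep dominate the night.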

What’s next?

As we look to invent new products that help our customers live better longer, Halo Rise is an important step in giving our customers greater agency over their health and well-being. By looking holistically at the end-to-end sleep experience — not just going to sleep but also getting up in the morning — Halo Rise unlocks an entirely new way for customers to understand and manage sleep. We’re excited to help them make sense of valuable sleep data, from the quality and quantity of their sleep to their room’s environment, and deliver actionable insights and resources to improve it in the future. Halo Rise is just getting started, and we are going to learn from our customers how this technology can continue to evolve and become even more personalized to better meet their needs.

Research areas

Related content

  • Staff writer
    December 24, 2024
    From cloud databases and anomaly detection on graphs to recession prediction and Amazon's new Nova foundation models, these are the most viewed publications authored by Amazon scientists and collaborators in 2024.
  • Staff writer
    December 24, 2024
    Large language models remained a hot topic, but posts about cryptography and automated reasoning also drew readers.
  • Amazon Research Awards team
    December 20, 2024
    Awardees, who represent 10 universities, have access to Amazon public datasets, along with AWS AI/ML services and tools.
GB, London
We are looking for an Economist to work on exciting and challenging business problems related to Amazon Retail’s worldwide product assortment. You will build innovative solutions based on econometrics, machine learning, and experimentation. You will be part of a interdisciplinary team of economists, product managers, engineers, and scientists, and your work will influence finance and business decisions affecting Amazon’s vast product assortment globally. If you have an entrepreneurial spirit, you know how to deliver results fast, and you have a deeply quantitative, highly innovative approach to solving problems, and long for the opportunity to build pioneering solutions to challenging problems, we want to talk to you. Key job responsibilities * Work on a challenging problem that has the potential to significantly impact Amazon’s business position * Develop econometric models and experiments to measure the customer and financial impact of Amazon’s product assortment * Collaborate with other scientists at Amazon to deliver measurable progress and change * Influence business leaders based on empirical findings
US, NY, New York
As part of the AWS Solutions organization, we have a vision to provide business applications, leveraging Amazon’s unique experience and expertise, that are used by millions of companies worldwide to manage day-to-day operations. We will accomplish this by accelerating our customers’ businesses through delivery of intuitive and differentiated technology solutions that solve enduring business challenges. We blend vision with curiosity and Amazon’s real-world experience to build opinionated, turnkey solutions. Where customers prefer to buy over build, we become their trusted partner with solutions that are no-brainers to buy and easy to use. The Team: Amazon Go is a new kind of store with no lines and no checkout—you just grab and go! Customers simply use the Amazon Go app to enter the store, take what they want from our selection of fresh, delicious meals and grocery essentials, and go! Our checkout-free shopping experience is made possible by our Just Walk Out Technology, which automatically detects when products are taken from or returned to the shelves and keeps track of them in a virtual cart. When you’re done shopping, you can just leave the store. Shortly after, we’ll charge your Amazon account and send you a receipt. Check it out at amazon.com/go. Designed and custom-built by Amazonians, our Just Walk Out Technology uses a variety of technologies including computer vision, sensor fusion, and advanced machine learning. Innovation is part of our DNA! Our goal is to be Earths’ most customer centric company and we are just getting started. We need people who want to join an ambitious program that continues to push the state of the art in computer vision, machine learning, distributed systems and hardware design. The Role: Everyone on the team needs to be entrepreneurial, wear many hats and work in a highly collaborative environment that’s more startup than big company. 
We’ll need to tackle problems that span a variety of domains: computer vision, image recognition, machine learning, real-time and distributed systems. As a Machine Learning or Computer Vision Research Scientist, you will help solve a variety of technical challenges and mentor other engineers. You will tackle challenging, novel situations every day and given the size of this initiative, you’ll have the opportunity to work with multiple technical teams at Amazon in different locations. You should be comfortable with a degree of ambiguity that’s higher than most projects and relish the idea of solving problems that, frankly, haven’t been solved at scale before - anywhere. Along the way, we guarantee that you’ll learn a ton, have fun and make a positive impact on millions of people. About the team Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. 
Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship and Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
IN, KA, Bengaluru
Customer addresses, Geospatial information and Road-network play a crucial role in Amazon Logistics' Delivery Planning systems. We own exciting science problems in the areas of Address Normalization, Geocode learning, Maps learning, Time estimations including route-time, delivery-time, transit-time predictions which are key inputs in delivery planning. As part of the Geospatial science team within Last Mile, you will partner closely with other scientists and engineers in a collegial environment to develop enterprise ML solutions with a clear path to business impact. The setting also gives you an opportunity to think about a complex large-scale problem for multiple years and building increasingly sophisticated solutions year over year. In the process there will be opportunity to innovate, explore SOTA and publish the research in internal and external ML conferences. Successful candidates will have deep knowledge of competing machine learning methods for large scale predictive modelling, natural language processing, semi-supervised & graph based learning. We also look for the experience to graduate prototype models to production and the communication skills to explain complex technical approaches to the stakeholders of varied technical expertise. Here is a glimpse of the problem spaces and technologies that we deal with on a regular basis: 1. De-duping and organizing addresses into hierarchy while handling noisy, inconsistent, localized and multi-lingual user inputs. We do this at the scale of millions of customers for established (US, EU) as well as emerging geographies (IN, MX). We make use of technologies like LLMs, Weak supervision, Graph-based clustering & Entity matching. We also use additional modalities such as building outlines in maps, street view images and 3P datasets, gazetteers. 2. 
Build a generic ML framework which leverages relationship between places to improve delivery experience by learning precise delivery locations and propagating attributes, such as business hours and safe places. 3. (Work done in sister teams) Developing systems to consume inputs from areal imagery and optimize our maps to enable efficient delivery planning. Also building models to estimate travel time, turn costs, optimal route and defect propensities. Key job responsibilities As an Applied Scientist I, your responsibility will be to deliver on a well defined but complex business problem, explore SOTA technologies including GenAI and customize the large models as suitable for the application. Your job will be to work on a end-to-end business problem from design to experimentation and implementation. There is also an opportunity to work on open ended ML directions within the space and publish the work in prestigious ML conferences. About the team Last Mile Address Intelligence (LMAI) team owns WW charter for address and location learning solutions which are crucial for efficient Last Mile delivery planning. The team works out of Hyderabad and Bangalore offices in India. LMAI is a part of Geospatial science team, which also owns problems in the space of maps learning and travel time estimations. The rest of the Geospatial science team and senior leadership of Last Mile org works out of Bellevue office.
IL, Haifa
Come build the future of entertainment with us. Are you interested in helping shape the future of movies and television? Do you want to help define the next generation of how and what Amazon customers are watching? Prime Video is a premium streaming service that offers customers a vast collection of TV shows and movies - all with the ease of finding what they love to watch in one place. We offer customers thousands of popular movies and TV shows from Originals and Exclusive content to exciting live sports events. We also offer our members the opportunity to subscribe to add-on channels which they can cancel at any time and to rent or buy new release movies and TV box sets on the Prime Video Store. Prime Video is a fast-paced, growth business - available in over 240 countries and territories worldwide. The team works in a dynamic environment where innovating on behalf of our customers is at the heart of everything we do. If this sounds exciting to you, please read on As a Prime Video technologist, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career Prime Video Tech takes you! Key job responsibilities As an Applied Scientist at Prime Video, you will have end-to-end ownership of the product, related research and experimentation, applying advanced machine learning techniques in computer vision (CV), natural language processing (NLP), multimedia understanding and so on. 
You’ll work on diverse projects that enhance Prime Video’s recommendation systems, image/video understanding, and content personalization, driving impactful innovations for our global audience. Other responsibilities include: - Lead cutting-edge research in computer vision and natural language processing, applying it to video-centric media challenges. - Develop scalable machine learning models to enhance media asset generation, content discovery, and personalization. - Collaborate closely with engineering teams to integrate your models into production systems at scale, ensuring optimal performance and reliability. - Actively participate in publishing your research in leading conferences and journals. - Lead a team of skilled applied scientists, you will shape the research strategy, create forward-looking roadmaps, and effectively communicate progress and insights to senior leadership - Stay up-to-date with the latest advancements in AI and machine learning to drive future research initiatives. About the team At Prime Video, we strive to deliver the best-in-class entertainment experiences across devices for millions of customers. Whether it’s developing new personalization algorithms, improving video content discovery, or building robust media processing systems, our scientists and engineers tackle real-world challenges daily. You’ll be part of a fast-paced environment where experimentation, risk-taking, and innovation are encouraged.
US, MA, Boston
The Automated Reasoning Group is looking for a Applied Scientist with expertise in programming language semantics and deductive verification techniques (e.g. Lean, Dafny) to deliver novel code reasoning capabilities at scale. You will be part of a larger organization that develops a spectrum of formal software analysis tools and applies them to software at all levels of abstraction from assembler through high-level programming languages. You will work with a team of world class automated reasoning experts to deliver code reasoning technology that is accessible to all developers.
US, WA, Bellevue
Welcome to the Worldwide Returns & ReCommerce team (WWR&R) at Amazon.com. WWR&R is an agile, innovative organization dedicated to ‘making zero happen’ to benefit our customers, our company, and the environment. Our goal is to achieve the three zeroes: zero cost of returns, zero waste, and zero defects. We do this by developing groundbreaking products and driving truly innovative operational excellence to help customers keep what they buy, recover returned and damaged product value, keep thousands of tons of waste from landfills, and create the best customer returns experience in the world. We have an eye to the future – we create long-term value at Amazon by focusing not just on the bottom line, but on the planet. We are building the most sustainable re-use channel we can by driving multiple aspects of the Circular Economy for Amazon – Returns & ReCommerce. Amazon WWR&R is comprised of business, product, operational, program, software engineering and data teams that manage the life of a returned or damaged product from a customer to the warehouse and on to its next best use. Our work is broad and deep: we train machine learning models to automate routing and find signals to optimize re-use; we invent new channels to give products a second life; we develop highly respected product support to help customers love what they buy; we pilot smarter product evaluations; we work from the customer backward to find ways to make the return experience remarkably delightful and easy; and we do it all while scrutinizing our business with laser focus. You will help create everything from customer-facing and vendor-facing websites to the internal software and tools behind the reverse-logistics process. You can develop scalable, high-availability solutions to solve complex and broad business problems. We are a group that has fun at work while driving incredible customer, business, and environmental impact. 
We are backed by a strong leadership group dedicated to operational excellence that empowers a reasonable work-life balance. As an established, experienced team, we offer the scope and support needed for substantial career growth. Amazon is earth’s most customer-centric company and through WWR&R, the earth is our customer too. Come join us and innovate with the Amazon Worldwide Returns & ReCommerce team! Key job responsibilities * Design, develop, and evaluate highly innovative models for Natural Language Programming (NLP), Large Language Model (LLM), or Large Computer Vision Models. * Use SQL to query and analyze the data. * Use Python, Jupyter notebook, and Pytorch to train/test/deploy ML models. * Use machine learning and analytical techniques to create scalable solutions for business problems. * Research and implement novel machine learning and statistical approaches. * Mentor interns. * Work closely with data & software engineering teams to build model implementations and integrate successful models and algorithms in production systems at very large scale. A day in the life If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you’re passionate about this role and want to make an impact on a global scale, please apply! Benefits: Amazon offers a full range of benefits that support you and eligible family members, including domestic partners and their children. Benefits can vary by location, the number of regularly scheduled hours you work, length of employment, and job status such as seasonal or temporary employment. The benefits that generally apply to regular, full-time employees include: 1. Medical, Dental, and Vision Coverage 2. Maternity and Parental Leave Options 3. Paid Time Off (PTO) 4. 
401(k) Plan Learn more about our benefits here: https://amazon.jobs/en/internal/benefits/us-benefits-and-stock About the team When a customer returns a package to Amazon, the request and package will be passed through our WWRR machine learning (ML) systems so that we could improve the customer experience, identify return root cause, optimize re-use, and evaluate the returned package. Our problems touch multiple modalities spanning from: textual, categorical, image, to speech data. We operate at large scale and rely on state-of-the-art modeling techniques to power our ML models: XGBoost, BERT, Vision Transformers, Large Language Models.
US, CA, Santa Clara
Amazon CloudWatch is the native AWS monitoring and observability service for cloud resources and applications. We are seeking a talented Senior Applied Scientist to develop next-generation scientific methods and infrastructure to support a core AWS business that delivers critical services to millions of customers operating at scale. This is a high visibility and high impact role that work on highly strategic projects in the AI/ML and Analytics space, will interact with all levels of AWS leadership. We are developing solutions that not only surface the “what” but also the “why” and “how to fix it”, without requiring operators to have extensive domain knowledge and technical expertise to efficiently troubleshoot and remediate incidents. Using decades of AWS operational excellence coupled with the advances in LLMs and Gen-AI technologies, we are transforming the very core of how customers can effortlessly interact with our offerings to build and operate their applications in the cloud. We are hiring to grow our team, and are looking for well-rounded applied scientists with backgrounds in machine learning, foundation models, and natural language processing. You'll be working with talented scientists, engineers, and product managers to innovate on behalf of our customers. If you're fired up about being part of a dynamic, mission driven team, then this is your moment to join us on this exciting journey! 
Key job responsibilities
As an Applied Scientist II, you will be responsible for:
* Researching and developing algorithms that improve training of foundation models across pre-training, multitask learning, supervised finetuning, and reinforcement learning from human feedback
* Researching and developing novel approaches for anomaly detection and root cause analysis, and providing intelligent insights from vast amounts of monitoring and observability data
* Collaborating with scientists, engineers, and product managers across the CloudWatch team, as well as directly with customers
* Leading key science initiatives in strategic investment areas of AI/ML/LLM Ops and observability
* Serving as an industry thought leader representing Amazon at top-tier scientific conferences
* Engaging in the hiring process and developing, growing, and mentoring junior scientists

A day in the life
Working closely with and across agile teams, you will see and feel the impact of your work on our customers. Our ideal candidate is excited about the incredible opportunity that cloud monitoring represents and is deeply passionate about delivering the highest-quality services leveraging AI/ML/LLMs. You’re naturally customer centric and thrive in a fast-paced environment that requires strong technical and business judgment and solid communication skills.

About the team
Amazon CloudWatch Logs is a core monitoring service used by millions of AWS customers. We move fast and have delivered remarkable products and features over the last few years to streamline how AWS customers troubleshoot their critical applications. Our mission is to be the most cost-effective, integrated, fast, and secure log management and analytics platform for AWS customers. 
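As an illustration of the anomaly detection work described in the responsibilities above, one of the simplest baselines for flagging outliers in a stream of monitoring metrics is a trailing-window z-score. This is a generic, stdlib-only sketch with assumed defaults (a 20-point window and a 3-sigma threshold), not CloudWatch's production method:

```python
import statistics
from collections import deque

def rolling_zscore_anomalies(series, window=20, threshold=3.0):
    """Return indices of points whose z-score against a trailing
    window of `window` observations exceeds `threshold`."""
    anomalies = []
    history = deque(maxlen=window)
    for i, x in enumerate(series):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            # Skip perfectly flat windows to avoid dividing by zero
            if stdev > 0 and abs(x - mean) / stdev > threshold:
                anomalies.append(i)
        history.append(x)
    return anomalies

# A steady latency signal with one spike at index 30
latencies = [10 + (i % 3) for i in range(30)] + [100] + [10 + (i % 3) for i in range(10)]
spikes = rolling_zscore_anomalies(latencies)
```

Real observability workloads need more (seasonality, multi-metric correlation, change-point handling), but this captures the core idea of scoring each point against recent history.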
We are a diverse group of product and engineering professionals who are passionate about delivering logging features that delight customers operating at any scale. Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Utility Computing (UC) AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Work/Life Balance We value work-life harmony. Success at work should never come at the expense of your life at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Mentorship and Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. 
That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.
US, WA, Seattle
"We see our customers as invited guests to a party, and we are the hosts. It's our job every day to make every important aspect of the customer experience a little bit better." - Jeff Bezos, Founder & CEO. We didn’t make Amazon a trillion-dollar company, our customers did and we want to ensure that our customers always have a positive experience that keeps them coming back to Amazon. To help achieve this, the Worldwide Defect Elimination (WWDE) team, within Amazon Customer Service (CS), relentlessly focuses on maintaining customer trust by building products that offer appropriate resolutions to resolve issues faced by our customers. WWDE scientists solve complex problems and build scalable solutions to help our customers navigate through issues and eliminate systemic defects to prevent future issues. As a Research Scientist, your role is pivotal in leveraging advanced Artificial Intelligence (AI) and Machine Learning (ML) techniques to address customer issues at scale. You'll develop innovative solutions that summarize and detect issues, organize them using taxonomy, and pinpoint root causes within Amazon systems. Your expertise will drive the identification of responsible stakeholders and enable swift resolution. Utilizing the latest techniques, you will build an AI ecosystem that can efficiently comb over our billions of customer interactions (using a combination of media). As a part of this role, you will collaborate with a large team of experts in the field and move the state of defect elimination research forward. You should have a knack for leveraging AI to translate complex data insights into actionable strategies and can communicate these effectively to both technical and non-technical audiences. 
Key job responsibilities
- Develop ML/GenAI-powered solutions for automating defect elimination workflows
- Design and implement robust metrics to evaluate the effectiveness of ML/AI models
- Perform statistical analyses and statistical tests, including hypothesis testing and A/B testing
- Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation

A day in the life
Amazon offers a full range of benefits that support you and eligible family members, including domestic partners and their children. Benefits can vary by location, the number of regularly scheduled hours you work, length of employment, and job status such as seasonal or temporary employment. The benefits that generally apply to regular, full-time employees include:
- Medical, Dental, and Vision Coverage
- Maternity and Parental Leave Options
- Paid Time Off (PTO)
- 401(k) Plan

If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you’re passionate about this role and want to make an impact on a global scale, please apply.
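The hypothesis testing and A/B testing called out in the responsibilities above can be illustrated with a standard two-proportion z-test, computed here with only the standard library. This is a textbook sketch rather than a description of WWDE's tooling, and the sample counts in the usage example are made up:

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    between variant A and variant B."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled rate under the null hypothesis that both variants are equal
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 100/1000 resolutions for A vs 150/1000 for B
z, p = two_proportion_ztest(100, 1000, 150, 1000)
```

In practice a library such as `scipy.stats` or `statsmodels` would be used, but the arithmetic is exactly this.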
US, CA, San Francisco
We are hiring an Economist with the ability to untangle very challenging structural problems in two-sided and multi-sided markets. The right hire will be able to get their hands dirty with the data to establish stylized facts, build reduced-form models that motivate structural assumptions, and build up to more complex structural models. The main use case is understanding how the incremental effects of subsidies in a two-sided market relate to sales motions characterized by principal-agent problems.

Key job responsibilities
This role will interface directly with product owners, scientists/economists, and leadership to create multi-year research agendas that drive step-change growth for the business. The role will also be an important collaborator with other science teams at AWS.

A day in the life
Our team takes big swings and works on hard cross-organizational problems where the optimal success rate is not 100%. We also ask people to stretch and grow their skills, and we make sure to do it in a supportive and fun environment. It’s about empirically measured impact, advancement, and fun on our team. We work hard during work hours, but we don’t encourage working at night or on weekends except in very rare, high-stakes cases. Burnout isn’t a successful long-run strategy. Because we invest in the long-run success of our group, it’s important to have hobbies, relax, and then come to work refreshed and excited. That makes for bigger impact, faster skill accrual, and thus career advancement.

About the team
Our group is technically rigorous and encourages ongoing academic conference participation and publication. Our leaders are here for you and to enable your success. We believe in being servant leaders focused on influence: good data work has little value if it doesn’t translate into actionable insights that are rolled out and impact the real economy. We are communication centric, since being able to explain what we do ensures high success rates and lowers administrative churn. 
Also: we laugh a lot. If it’s not fun, what’s the point?
US, WA, Seattle
This is a unique opportunity to join a small, high-impact team working on AI agents for health initiatives. You will lead the crucial data foundation of our project, managing health data acquisition, processing, and model evaluation, while also contributing to machine learning model development. Your work will directly influence the creation and improvement of AI solutions that could significantly impact how individuals manage their daily health and long-term wellness goals. If you're passionate about leveraging data and developing ML models to solve meaningful problems in healthcare through AI, this role is for you. You'll work on large-scale data processing, design annotation workflows, develop evaluation metrics, and contribute to the machine learning algorithms that drive the performance of our health AI agents. You'll have the chance to innovate alongside healthcare experts and data scientists. In this early-stage initiative, you'll have significant influence on our data strategies and ML approaches, shaping how they drive our AI solutions. This is an excellent opportunity for a high-judgment data scientist with ML expertise to demonstrate impact and make key decisions that will form the backbone of our health AI initiatives.

Key job responsibilities
- Be the complete owner of health data acquisition, processing, and quality assurance
- Design and oversee data annotation workflows
- Collaborate on data sourcing strategies
- Lead health data acquisition and processing initiatives
- Manage AI agent example annotation processes
- Develop and implement data evaluation metrics
- Design, implement, and evaluate machine learning models for AI agents, with a focus on improving natural language understanding and generation in health contexts

A day in the life
You'll work with a cross-disciplinary team to source, evaluate, and leverage health data for AI agent development. 
You'll shape data acquisition strategies, annotation workflows, and machine learning models to enhance our AI's health knowledge. Expect to dive deep into complex health datasets, challenge conventional data evaluation metrics, and continuously refine our AI agents' ability to understand and respond to health-related queries.
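One common evaluation metric for annotation workflows like those described above is Cohen's kappa, which measures agreement between two annotators corrected for the agreement expected by chance. This is a minimal, stdlib-only sketch assuming two annotators labeling the same items; the labels shown are illustrative, not real health data:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Inter-annotator agreement between two raters, corrected for chance.
    Returns 1.0 for perfect agreement, ~0.0 for chance-level agreement."""
    assert len(labels_a) == len(labels_b), "raters must label the same items"
    n = len(labels_a)
    # Observed agreement: fraction of items labeled identically
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each rater labeled independently at their own rates
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators rating agent responses
rater_1 = ["ok", "ok", "bad", "ok", "bad"]
rater_2 = ["ok", "ok", "bad", "bad", "bad"]
kappa = cohens_kappa(rater_1, rater_2)
```

A low kappa on a pilot batch is a useful signal that the annotation guidelines need tightening before scaling the workflow.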