The science of price experiments in the Amazon Store

The requirement that, at any given time, all customers see the same prices for the same products necessitates innovation in the design of A/B experiments.

The prices of products in the Amazon Store reflect a range of factors, such as demand, seasonality, and general economic trends. Pricing policies typically involve formulas that take such factors into account; newer pricing policies usually rely on machine learning models.

With the Amazon Pricing Labs, we can conduct a range of online A/B experiments to evaluate new pricing policies. Because we practice nondiscriminatory pricing — all visitors to the Amazon Store at the same time see the same prices for all products — we need to apply experimental treatments to product prices over time, rather than testing different price points simultaneously on different customers. This complicates the experimental design.


In a paper published in the Journal of Business Economics in March and presented at the American Economic Association's (AEA) annual conference in January, we described some of the experimental designs we use to prevent spillovers, improve precision, and control for demand trends and differences between treatment groups when evaluating new pricing policies.

The simplest type of experiment we can perform is a time-bound experiment, in which we apply a treatment to some products in a particular class, while leaving other products in the class untreated, as controls.

A time-bound experiment, which begins at day eight, with treatments in red and controls in white.
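To make the setup concrete, here is a minimal Python sketch of a time-bound design. It assumes a pandas DataFrame of daily product outcomes; the column names, the 50/50 split, and the simple difference in means are illustrative assumptions, not the production system.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

def assign_time_bound(product_ids, treated_fraction=0.5):
    """Assign each product to treatment or control for the whole experiment."""
    product_ids = np.asarray(product_ids)
    treated = rng.random(len(product_ids)) < treated_fraction
    return pd.DataFrame({"product_id": product_ids, "treated": treated})

def estimate_effect(daily_outcomes, assignment, start_day=8):
    """Compare mean outcomes of treated vs. control products from start_day on."""
    post = daily_outcomes[daily_outcomes["day"] >= start_day]
    post = post.merge(assignment, on="product_id")
    treated_mean = post.loc[post["treated"], "outcome"].mean()
    control_mean = post.loc[~post["treated"], "outcome"].mean()
    return treated_mean - control_mean
```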

One potential source of noise in this type of experiment is that an external event — say, a temporary discount on the same product at a different store — can influence treatment effects. If we can define these types of events in advance, we can conduct triggered interventions, in which we time the starts of our treatment and control periods to the occurrence of the events. This can result in staggered start times for experiments on different products.

The design of a triggered experiment. Red indicates treatment groups, and green indicates control groups. The start of each experiment is triggered by an external event.

If the demand curves for the products are similar enough, and the difference in results between the treatment group and the control group is dramatic enough, time-bound and triggered experiments may be adequate. But for more precise evaluation of a pricing policy, it may be necessary to run treatment and control experiments on the same product, as would be the case with typical A/B testing. That requires a switchback experiment.


The most straightforward switchback experiment is the random-days experiment, in which, each day, each product is randomly assigned to either the control group or the treatment group. Our analyses indicate that random days can reduce the standard error of our experimental results — that is, the extent to which the statistics of our observations differ, on average, from the true statistics of the intervention — by 60%.

A random-days experiment. The experiment begins on day 8; red represents treatment, white control.
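A minimal sketch of the random-days assignment, assuming integer day indices and an independent 50/50 coin flip per product per day; the function and column names are illustrative.

```python
import numpy as np
import pandas as pd

def random_days_schedule(product_ids, start_day=8, end_day=28, seed=0):
    """Each day from start_day on, independently assign every product to
    treatment (True) or control (False)."""
    rng = np.random.default_rng(seed)
    days = np.arange(start_day, end_day + 1)
    schedule = pd.DataFrame(
        [(pid, day) for pid in product_ids for day in days],
        columns=["product_id", "day"],
    )
    schedule["treated"] = rng.integers(0, 2, size=len(schedule)).astype(bool)
    return schedule
```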

One of the drawbacks of any switchback experiment, however, is the risk of carryover, in which the effects of a treatment carry over from the treatment phase of the experiment to the control phase. For instance, if treatment increases a product’s sales, recommendation algorithms may recommend that product more often. That could artificially boost the product’s sales even during control periods.


We can combat carryover by instituting blackout periods during transitions to treatment and control phases. In a crossover experiment, for instance, we might apply a treatment to some products in a group, leaving the others as controls, but toss out the first week’s data for both groups. Then, after collecting enough data — say, two weeks’ worth — we remove the treatment from the former treatment group and apply it to the former control group. Once again, we throw out the first week’s data, to let the carryover effect die down.

A crossover experiment, with blackout periods at the beginning of each phase of the experiment. In week 7, the treatment (red) has been applied to products A, D, F, G, and J, but the data is thrown out. In week 10, the first treatment and control groups switch roles, but again, the first week’s data is thrown out.
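The schedule below is a minimal sketch of the crossover design in the figure, assuming three-week phases starting in week 7 with a one-week blackout after each switch; the names and the 50/50 initial split are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def crossover_schedule(product_ids, start_week=7, phase_weeks=3,
                       blackout_weeks=1, seed=0):
    """Treat group A first, then swap groups; flag blackout weeks so their
    data can be excluded from the analysis."""
    rng = np.random.default_rng(seed)
    group_a = {pid for pid in product_ids if rng.random() < 0.5}
    group_b = set(product_ids) - group_a
    rows = []
    for phase, treated_group in enumerate([group_a, group_b]):
        phase_start = start_week + phase * phase_weeks
        for week in range(phase_start, phase_start + phase_weeks):
            in_blackout = week < phase_start + blackout_weeks
            for pid in product_ids:
                rows.append({"product_id": pid, "week": week,
                             "treated": pid in treated_group,
                             "use_in_analysis": not in_blackout})
    return pd.DataFrame(rows)
```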

Crossover experiments can reduce the standard error of our results measurements by 40% to 50%. That’s not quite as good as random days, but carryover effects are mitigated.

Heterogeneous panel treatment effect

The Amazon Pricing Labs also offers two more sophisticated means of evaluating pricing policies. The first of these is the heterogeneous panel treatment effect, or HPTE.

HPTE is a four-step process:

  1. Estimate product-level first difference from detrended data.
  2. Filter outliers.
  3. Estimate second difference from grouped products using causal forest.
  4. Bootstrap data to estimate noise.

Estimate product-level first difference from detrended data. In a standard difference-in-difference (DID) analysis, the first difference is the difference between the results for a single product before and after the experiment begins.


Rather than simply subtracting the results before treatment from the results after treatment, however, we analyze historical trends to predict what would have happened if products were left untreated during the treatment period. We then subtract that prediction from the observed results.
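As an illustration, the sketch below computes a product-level first difference by fitting a simple linear time trend to pre-treatment data and using it as the counterfactual forecast; the actual detrending model used in HPTE may be more sophisticated.

```python
import numpy as np

def detrended_first_difference(days, outcomes, treatment_start):
    """Fit a linear trend on pre-treatment data, forecast the treatment window,
    and return the average gap between observed and predicted outcomes."""
    days = np.asarray(days, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    pre = days < treatment_start
    slope, intercept = np.polyfit(days[pre], outcomes[pre], deg=1)
    predicted = slope * days[~pre] + intercept  # counterfactual for treated days
    return float(np.mean(outcomes[~pre] - predicted))
```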

Filter outliers. In pricing experiments, there are frequently unobserved factors that can cause extreme swings in our outcome measurements. We define a cutoff point for outliers as a percentage (quantile) of the results distribution that is inversely proportional to the number of products in the data. This approach has been used previously, but we validated it in simulations.
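A minimal sketch of that filter, assuming the tail quantile is a constant divided by the number of products; the constant and the cap are illustrative choices, not the validated values from the paper.

```python
import numpy as np

def filter_outliers(first_differences, c=5.0):
    """Keep products whose first difference lies between the q and 1-q
    quantiles, where the tail mass q shrinks as the number of products grows."""
    x = np.asarray(first_differences, dtype=float)
    q = min(0.25, c / len(x))  # tail quantile inversely proportional to n (assumed form)
    lo, hi = np.quantile(x, [q, 1.0 - q])
    keep = (x >= lo) & (x <= hi)
    return x[keep], keep
```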

Estimate second difference from grouped products using causal forest. In DID analysis, the second difference is the difference between the treatment and control groups’ first differences. Because we’re considering groups of heterogeneous products, we calculate the second difference only for products that have strong enough affinities with each other to make the comparison informative. Then we average the second difference across products.

To compute affinity scores, we use a variation on decision trees called causal forests. A typical decision tree is a connected acyclic graph — a tree — each of whose nodes represents a question. In our case, those questions regard product characteristics — say, “Does it require replaceable batteries?”, or “Is its width greater than three inches?”. The answer to the question determines which branch of the tree to follow.


A causal forest consists of many such trees. The questions are learned from the data, and they define the axes along which the data shows the greatest variance. Consequently, the data used to train the trees requires no labeling.

After training our causal forest, we use it to evaluate the products in our experiment. Products from the treatment and control groups that end up at the same terminal node, or leaf, of a tree are deemed similar enough that their second difference should be calculated.
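The sketch below uses a single scikit-learn regression tree as a simplified stand-in for one tree of a causal forest (dedicated libraries such as econml implement causal forests): products that land in the same leaf are compared, and the per-leaf second differences are averaged. The feature matrix, group flags, and first differences are assumed inputs.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def second_difference_by_leaf(X, treated, first_diff, max_leaf_nodes=16):
    """Group products by the leaf they reach in a fitted tree, compute the
    treatment-minus-control gap within each leaf, and average across leaves."""
    treated = np.asarray(treated, dtype=bool)
    first_diff = np.asarray(first_diff, dtype=float)
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes, random_state=0)
    tree.fit(X, first_diff)
    leaves = tree.apply(X)  # leaf index for every product
    gaps = []
    for leaf in np.unique(leaves):
        mask = leaves == leaf
        t = first_diff[mask & treated]
        c = first_diff[mask & ~treated]
        if len(t) > 0 and len(c) > 0:  # only compare products that share a leaf
            gaps.append(t.mean() - c.mean())
    return float(np.mean(gaps)) if gaps else float("nan")
```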

Bootstrap data to estimate noise. To compute the standard error, we randomly sample products from our dataset and calculate their average treatment effect, then return them to the dataset and randomly sample again. Multiple resampling allows us to compute the variance in our outcome measures.
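A minimal sketch of the bootstrap, assuming one effect estimate per product; products are resampled with replacement and the average treatment effect is recomputed on each resample.

```python
import numpy as np

def bootstrap_standard_error(product_effects, n_resamples=1000, seed=0):
    """Standard deviation of the average treatment effect across resamples."""
    rng = np.random.default_rng(seed)
    effects = np.asarray(product_effects, dtype=float)
    n = len(effects)
    averages = np.array([effects[rng.integers(0, n, size=n)].mean()
                         for _ in range(n_resamples)])
    return float(averages.std(ddof=1))
```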

Spillover effect

At the Amazon Pricing Labs, we have also investigated ways to gauge the spillover effect, which occurs when treatment of one product causes a change in demand for another, similar product. This can throw off our measurements of treatment effect.

For instance, if a new pricing policy increases demand for, say, a particular kitchen chair, more customers will view that chair’s product page. Some fraction of those customers, however, may buy a different chair listed on the page’s “Discover similar items” section.

If the second chair is in the control group, its sales may be artificially inflated by the treatment of the first chair, leading to an underestimation of the treatment effect. If the second chair is in the treatment group, the inflation of its sales may lead to an overestimation of the treatment effect.

To correct for the spillover effect, we need to measure it. The first step in that process is to build a graph of products with correlated demand.


We begin with a list of products that are related to each other according to criteria such as their fine-grained classifications in the Amazon Store catalogue. For each pair of related items, we then look at a year’s worth of data to determine whether a change in the price of one affects demand for the other. If those connections are strong enough, we join the products by an edge in our substitutable-items graph.
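A minimal sketch of that graph-construction step, using an absolute cross price–demand correlation as the strength measure and networkx for the graph; the threshold, the correlation test, and the input structures are illustrative assumptions.

```python
import networkx as nx
import numpy as np

def build_substitutable_graph(related_pairs, prices, demand, threshold=0.3):
    """Join two related products with an edge when the price series of one is
    sufficiently correlated with the demand series of the other."""
    graph = nx.Graph()
    for a, b in related_pairs:
        corr_ab = abs(np.corrcoef(prices[a], demand[b])[0, 1])
        corr_ba = abs(np.corrcoef(prices[b], demand[a])[0, 1])
        if max(corr_ab, corr_ba) >= threshold:  # connection strong enough
            graph.add_edge(a, b)
    return graph
```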

From the graph, we compute the probability that any given pair of substitutable products will find themselves included in the same experiment and which group, treatment or control, they’ll be assigned to. From those probabilities, we can use an inverse probability-weighting schema to estimate the effect of spillover on our observed outcomes.
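For intuition, here is one common form of inverse probability weighting under network interference (a Horvitz–Thompson style contrast), not necessarily the exact schema in the paper; the independent assignment probability p and the "no treated neighbors" exposure condition are assumptions.

```python
def ipw_isolated_contrast(graph, treated, outcomes, p=0.5):
    """Contrast mean outcomes of treated vs. control products that have no
    treated neighbors in the substitutable-items graph, weighting each product
    by the inverse probability of landing in that condition under independent
    assignment with probability p."""
    n = graph.number_of_nodes()
    treated_term, control_term = 0.0, 0.0
    for node in graph.nodes:
        if any(treated[nbr] for nbr in graph.neighbors(node)):
            continue  # exposed to a treated substitute; excluded from both conditions
        inv_no_treated_nbrs = (1.0 - p) ** (-graph.degree(node))
        if treated[node]:
            treated_term += outcomes[node] * inv_no_treated_nbrs / (p * n)
        else:
            control_term += outcomes[node] * inv_no_treated_nbrs / ((1.0 - p) * n)
    return treated_term - control_term
```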

Estimating spillover effect, however, is not as good as eliminating it. One way to do that is to treat substitutable products as a single product class and assign them to treatment or control groups en masse. This does reduce the power of our experiments, but it gives our business partners confidence that the results aren’t tainted by spillover.

To determine which products to include in each of our product classes, we use a clustering algorithm that searches the substitutable-product graph for regions of dense interconnection and severs those regions’ connections to the rest of the graph. In an iterative process, this partitions the graph into clusters of closely related products.

In simulations, we found that this clustering process can reduce spillover bias by 37%.
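As a sketch of cluster-level randomization, the code below uses networkx’s greedy modularity communities as a stand-in for the clustering procedure described above, then assigns each cluster of substitutable products to treatment or control as a single unit; the community-detection algorithm and the 50/50 split are illustrative choices.

```python
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities

def cluster_level_assignment(substitutable_graph, treated_fraction=0.5, seed=0):
    """Detect dense clusters of substitutable products and randomize each
    cluster to treatment or control en masse."""
    rng = np.random.default_rng(seed)
    clusters = greedy_modularity_communities(substitutable_graph)
    assignment = {}
    for cluster in clusters:
        cluster_treated = bool(rng.random() < treated_fraction)  # one arm per cluster
        for product in cluster:
            assignment[product] = cluster_treated
    return assignment
```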

As part of Alexa CAS team, our mission is to provide scalable and reliable evaluation of the state-of-the-art Conversational AI. We are looking for a passionate, talented, and resourceful Applied Scientist in the field of LLM, Artificial Intelligence (AI), Natural Language Processing (NLP), to invent and build end-to-end evaluation of how customers perceive state-of-the-art context-aware conversational AI assistants. A successful candidate will have strong machine learning background and a desire to push the envelope in one or more of the above areas. The ideal candidate would also have hands-on experiences in building Generative AI solutions with LLMs, including Supervised Fine-Tuning (SFT), In-Context Learning (ICL), Learning from Human Feedback (LHF), etc. As an Applied Scientist, you will leverage your technical expertise and experience to collaborate with other talented applied scientists and engineers to research and develop novel methods for evaluating conversational assistants. You will analyze and understand user experiences by leveraging Amazon’s heterogeneous data sources and build evaluation models using machine learning methods. Key job responsibilities - Design, build, test and release predictive ML models using LLMs - Ensure data quality throughout all stages of acquisition and processing, including such areas as data sourcing/collection, ground truth generation, normalization, and transformation. - Collaborate with colleagues from science, engineering and business backgrounds. - Present proposals and results to partner teams in a clear manner backed by data and coupled with actionable conclusions - Work with engineers to develop efficient data querying and inference infrastructure for both offline and online use cases About the team Central Analytics and Research Science (CARS) is an analytics, software, and science team within Amazon's Conversational Assistant Services (CAS) organization. Our mission is to provide an end-to-end understanding of how customers perceive the assistants they interact with – from the metrics themselves to software applications to deep dive on those metrics – allowing assistant developers to improve their services. Learn more about Amazon’s approach to customer-obsessed science on the Amazon Science website, which features the latest news and research from scientists across the company. For the latest updates, subscribe to the monthly newsletter, and follow the @AmazonScience handle and #AmazonScience hashtag on LinkedIn, Twitter, Facebook, Instagram, and YouTube.