Improving forecasting by learning quantile functions

Learning the complete quantile function, which maps probabilities to variable values, rather than building separate models for each quantile level, enables better optimization of resource trade-offs.

The quantile function is a mathematical function that takes a quantile (a probability level between 0 and 1, corresponding to a percentage of a distribution) as input and outputs the value of a variable at or below which that fraction of the distribution falls. It can answer questions like, “If I want to guarantee that 95% of my customers receive their orders within 24 hours, how much inventory do I need to keep on hand?” As such, the quantile function is commonly used in forecasting.
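
As a concrete illustration (ours, not from the papers), if daily demand were modeled with, say, a normal distribution, an off-the-shelf quantile function answers the inventory question directly; the demand figures below are purely hypothetical:

```python
from scipy.stats import norm

# Hypothetical example: daily demand modeled as a normal distribution
# with mean 1,000 units and standard deviation 150 units.
mean_demand, std_demand = 1000.0, 150.0

# The quantile function (scipy calls it ppf, the "percent point function")
# maps a probability level to a demand value: stocking this many units
# covers demand on 95% of days under the assumed model.
stock_level = norm.ppf(0.95, loc=mean_demand, scale=std_demand)
print(f"Stock needed for a 95% service level: {stock_level:.0f} units")
```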

In practical cases, however, we rarely have a tidy formula for computing the quantile function. Instead, statisticians usually use regression analysis to approximate it for a single quantile level at a time. That means that if you decide you want to compute it for a different quantile, you have to build a new regression model — which, today, often means retraining a neural network.
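
For reference, fitting a single quantile level in this way usually means minimizing the pinball (quantile) loss for that level. Here is a minimal NumPy sketch of that loss, written for illustration rather than taken from the papers:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Pinball (quantile) loss for quantile level tau in (0, 1).

    Penalizes under-prediction with weight tau and over-prediction with
    weight (1 - tau), so minimizing it targets the tau-quantile.
    """
    error = y_true - y_pred
    return np.mean(np.maximum(tau * error, (tau - 1) * error))

# Example: the 0.9-quantile loss penalizes under-prediction more heavily.
y_true = np.array([10.0, 12.0, 9.0])
y_pred = np.array([11.0, 10.0, 9.5])
print(pinball_loss(y_true, y_pred, tau=0.9))
```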

In a pair of papers we’re presenting at this year’s International Conference on Artificial Intelligence and Statistics (AISTATS), we describe an approach to learning an approximation of the entire quantile function at once, rather than approximating it separately for each quantile level.

This means that users can query the function at different points, to optimize the trade-offs between performance criteria. For instance, it could be that lowering the guarantee of 24-hour delivery from 95% to 94% enables a much larger reduction in inventory, which might be a trade-off worth making. Or, conversely, it could be that raising the guarantee threshold — and thus increasing customer satisfaction — requires very little additional inventory.

Our approach is agnostic as to the shape of the distribution underlying the quantile function. The distribution could be Gaussian (the bell curve, or normal distribution); it could be uniform; or it could be anything else. Not locking ourselves into any assumptions about distribution shape allows our approach to follow the data wherever it leads, which increases the accuracy of our approximations.

In the first of our AISTATS papers, we present an approach to learning the quantile function in the univariate case, where there’s a one-to-one correspondence between probabilities and variable values. In the second paper, we consider the multivariate case.

The quantile function

Any probability distribution — say, the distribution of heights in a population — can be represented as a function, called the probability density function (PDF). The input to the function is a value of the variable (a particular height), and the output is a non-negative number representing how likely that input is (roughly, the fraction of people in that population who have that height).

The graph of a probability density function (blue line) and its associated cumulative distribution function (orange line).

A useful related function is the cumulative distribution function (CDF), which is the probability that the variable will take a value at or below a particular value — for instance, the fraction of the population that is 5’6” or shorter. The CDF’s values are between 0 (no one is shorter than 0’0”) and 1 (100% of the population is shorter than 500’0”).

Technically, the CDF is the integral of the PDF, so it computes the area under the probability curve up to the target point. At low input values, the CDF’s output can even be lower than the PDF’s, since little of the distribution’s mass has accumulated yet. But because the CDF is cumulative, it is monotonically non-decreasing: the higher the input value, the higher (or at least no lower) the output value.

If the CDF has an inverse, the quantile function is simply that inverse. The quantile function’s graph can be produced by flipping the CDF graph over — that is, rotating it 180 degrees around a diagonal axis that extends from the lower left to the upper right of the graph.

The quantile function is simply the inverse of the cumulative distribution function (when that inverse exists). Its graph can be produced by flipping the cumulative distribution function's graph over.

Like the CDF, the quantile function is monotonically non-decreasing. That’s the fundamental observation on which our method rests.
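
To make the relationship concrete, here is a small numerical sketch (ours, not from the papers) that builds a CDF from a PDF on a grid and then reads the quantile function off as its inverse by interpolation; the height distribution is hypothetical:

```python
import numpy as np
from scipy.stats import norm

# Evaluate a PDF on a grid (heights in inches, as in the example above).
heights = np.linspace(40.0, 90.0, 1001)
pdf = norm.pdf(heights, loc=66.0, scale=4.0)  # hypothetical height distribution

# The CDF is the running integral of the PDF (area under the curve so far),
# so it is monotonically non-decreasing by construction.
cdf = np.cumsum(pdf) * (heights[1] - heights[0])
cdf /= cdf[-1]  # normalize so the CDF tops out at exactly 1

# The quantile function inverts the CDF: swap the roles of input and output
# and interpolate. For example, the height that 95% of the population
# falls at or below:
def quantile(p):
    return np.interp(p, cdf, heights)

print(quantile(0.95))  # close to norm.ppf(0.95, loc=66.0, scale=4.0)
```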

The univariate case

The architecture of our quantile function estimator (the incremental quantile function, or IQF), which enforces the monotonicity of the quantile function by representing the value of each quantile as an incremental increase in the value of the previous quantile.

One of the drawbacks of the conventional approach to approximating the quantile function — estimating it only at specific points — is that it can lead to quantile crossing. That is, because each quantile level is predicted by a separate, independently optimized model, the predicted variable value for a given probability could be lower than the value predicted for a lower probability. This violates the requirement that the quantile function be monotonically non-decreasing.
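
Quantile crossing is easy to reproduce with off-the-shelf tools. The sketch below (illustrative only, not the setup used in the papers) fits two independent quantile regressors on synthetic data and then checks whether their predictions ever cross:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=500)

# Fit two quantile levels with completely separate models,
# as in the conventional one-model-per-quantile approach.
q50 = GradientBoostingRegressor(loss="quantile", alpha=0.50).fit(X, y)
q55 = GradientBoostingRegressor(loss="quantile", alpha=0.55).fit(X, y)

X_test = np.linspace(0, 10, 200).reshape(-1, 1)
pred50, pred55 = q50.predict(X_test), q55.predict(X_test)

# Nothing ties the two models together, so the estimated 55th percentile
# can dip below the estimated 50th percentile at some inputs.
crossings = np.sum(pred55 < pred50)
print(f"Quantile crossing at {crossings} of {len(X_test)} test points")
```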

An approximation of the quantile function with five knots (anchor points), using (mostly) linear interpolation.
An approximation of the quantile function with 20 knots (anchor points).

To avoid quantile crossing, our method learns a predictive model for several different input values — quantiles — at once, spaced at regular intervals between 0 and 1. The model is a neural network designed so that the prediction for each successive quantile is an incremental increase of the prediction for the preceding quantile.
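
Below is a minimal sketch of that idea in PyTorch (ours, not the exact IQF architecture from the paper): the network outputs one free value for the lowest quantile level plus non-negative increments for each subsequent level, so the predictions are non-decreasing by construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneQuantileHead(nn.Module):
    """Predicts values at several quantile levels, sorted by construction.

    Illustrative sketch only: the first output is unconstrained, and each
    later quantile is the previous one plus a softplus-ed (hence positive)
    increment, which rules out quantile crossing.
    """

    def __init__(self, in_features: int, num_quantiles: int):
        super().__init__()
        self.proj = nn.Linear(in_features, num_quantiles)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        raw = self.proj(features)                      # (batch, num_quantiles)
        base = raw[..., :1]                            # value at the lowest level
        increments = F.softplus(raw[..., 1:])          # strictly positive steps
        return torch.cat([base, base + increments.cumsum(dim=-1)], dim=-1)

head = MonotoneQuantileHead(in_features=16, num_quantiles=9)
preds = head(torch.randn(4, 16))
assert torch.all(preds[..., 1:] >= preds[..., :-1])    # monotone in the quantile level
```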

Once our model has learned estimates for several anchor points that enforce the monotonicity of the quantile function, we can estimate the function through simple linear interpolation between the anchor points (called “knots” in the literature), with nonlinear extrapolation to handle the tails of the function.

Where training data is plentiful enough to enable a denser concentration of anchor points (knots), linear interpolation provides a more accurate approximation.
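
Given the knot levels and the (monotone) values predicted for them, evaluating the approximate quantile function between knots is just one-dimensional linear interpolation. The NumPy sketch below uses made-up knot values and leaves out the nonlinear tail handling described above:

```python
import numpy as np

# Hypothetical knot levels and the monotone values a model predicted for them.
knot_levels = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
knot_values = np.array([12.0, 18.0, 22.0, 27.0, 35.0])

def approx_quantile(p):
    """Piecewise-linear approximation of the quantile function between knots.

    Levels below the first knot or above the last would need the separate
    nonlinear tail extrapolation mentioned in the text; np.interp simply
    clamps to the end values here.
    """
    return np.interp(p, knot_levels, knot_values)

print(approx_quantile(0.5))   # hits a knot exactly: 22.0
print(approx_quantile(0.6))   # halfway between the 0.5 and 0.7 knots: 24.5
```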

To test our method, we applied it to a toy distribution with three arbitrary peaks, to demonstrate that we don’t need to make any assumptions about distribution shape.

The true distribution (red, left), with three arbitrary peaks; our method's approximation using five knots (center); and our method's approximation using 20 knots (right).

The multivariate case

So far, we’ve been considering the case in which our distribution applies to a single variable. But in many practical forecasting use cases, we want to consider multivariate distributions.

For instance, if a particular product uses a rare battery that doesn’t come included, a forecast of the demand for that battery will probably be correlated with the forecast of the demand for that product.

Similarly, if we want to predict demand over several different time horizons, we would expect there to be some correlation between consecutive predictions: demand shouldn’t undulate too wildly. A multivariate probability distribution over time horizons should capture that correlation better than a separate univariate prediction for each horizon.

The problem is that the notion of a multivariate quantile function is not well defined. The CDF maps a combination of variable values to a single probability; when you try to perform that mapping in reverse, which combination of values should a given probability map to?

This is the problem we address in our second AISTATS paper. Again, the core observation is that the quantile function must be monotonically non-decreasing. So we define the multivariate quantile function as the derivative — in the multivariate case, the gradient — of a convex function.

A convex function is one that tends everywhere toward a single global minimum: in two dimensions, it looks like a U-shaped curve. The derivative of a function computes the slope of its graph: again in the two-dimensional case, the slope of a convex function is negative but flattening as it approaches the global minimum, zero at the minimum, and increasingly positive on the other side. Hence, the derivative is monotonically increasing.
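
In symbols (our notation, not necessarily the papers’), the definition and the monotonicity it buys can be written as follows, where the convex potential is denoted ψ:

```latex
% Multivariate quantile function as the gradient of a convex potential \psi
Q(\mathbf{u}) = \nabla_{\mathbf{u}}\, \psi(\mathbf{u}), \qquad
\psi : [0,1]^d \to \mathbb{R} \ \text{convex}.

% Gradients of convex functions are monotone operators, the multivariate
% analogue of a non-decreasing quantile function:
\bigl( Q(\mathbf{u}_1) - Q(\mathbf{u}_2) \bigr)^{\top} (\mathbf{u}_1 - \mathbf{u}_2) \;\ge\; 0
\quad \text{for all } \mathbf{u}_1, \mathbf{u}_2 \in [0,1]^d.
```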

A convex function (blue) and its monotonically increasing derivative (green).

This two-dimensional picture generalizes readily to higher dimensions. In our paper, we describe a method for training a neural network to learn a quantile function that is the derivative of a convex function. The architecture of the network enforces convexity, and, essentially, the model learns the convex function using its derivative as a training signal.
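
As a rough sketch of the idea (not the authors’ exact architecture), one can build a small input-convex network by constraining certain weights to be non-negative and using convex, non-decreasing activations, then take its gradient with autograd to obtain a monotone map:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexPotential(nn.Module):
    """A small input-convex network: psi(u) is convex in u.

    Convexity holds because the hidden-to-hidden weights are constrained to
    be non-negative (via softplus) and the activation (softplus) is convex
    and non-decreasing. Illustrative sketch only.
    """

    def __init__(self, dim: int, hidden: int = 32):
        super().__init__()
        self.in1 = nn.Linear(dim, hidden)   # direct input connections (unconstrained)
        self.in2 = nn.Linear(dim, hidden)
        self.hid = nn.Parameter(torch.randn(hidden, hidden) * 0.1)  # made non-negative below
        self.out = nn.Parameter(torch.randn(hidden) * 0.1)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        z = F.softplus(self.in1(u))
        z = F.softplus(z @ F.softplus(self.hid) + self.in2(u))
        return z @ F.softplus(self.out)     # scalar potential per example

def multivariate_quantile(potential: ConvexPotential, u: torch.Tensor) -> torch.Tensor:
    """Candidate multivariate quantile map: the gradient of the convex potential."""
    u = u.clone().requires_grad_(True)
    psi = potential(u).sum()
    return torch.autograd.grad(psi, u, create_graph=True)[0]

potential = ConvexPotential(dim=3)
u = torch.rand(5, 3)
print(multivariate_quantile(potential, u).shape)   # (5, 3): one value per variable
```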

In addition to real-world datasets, we tested our approach on the problem of simultaneous prediction across multiple time horizons, using a dataset that follows a multivariate Gaussian distribution. Our experiments showed that, indeed, our approach captures the correlations between successive time horizons better than a univariate approach does.

Three self-correlation graphs that map a time series against itself. At left is the ground truth. In the center is the forecast produced by a standard univariate quantile function, in which each time step correlates only with itself. At right is the forecast produced using our method, which better captures correlations between successive time steps.

This work continues a line of research at Amazon combining quantile regression and deep learning to solve forecasting problems at a massive scale. In particular, it builds upon work on the MQ-CNN model proposed by a group of Amazon scientists in 2017, extensions of which are currently powering Amazon’s demand forecasting system. The current work is also closely related to spline quantile function RNNs, which — like the multivariate quantile forecaster — started as an internship project.

Code for all these methods is available in the open source GluonTS probabilistic time series modeling library.

Acknowledgements

This work would not have been possible without the help of our awesome co-authors, whom we would like to thank for their contributions to these two papers: Kelvin Kan, Danielle Maddix, Tim Januschowski, Konstantinos Benidis, Lars Ruthotto, Yuyang Wang, and Jan Gasthaus.
