Animation: dots representing historical data flow through a CloudTune forecasting icon to generate forecasts, including detailed views of mock peak-event forecasts for the US and India.
CloudTune Forecasting, which uses historical data to generate demand forecasts, was initially intended to help US service teams determine how much computational capacity they needed for peak events. Since then, improvements have focused on differentiating forecasts across teams and regions around the world.

How CloudTune generates forecasts for the Amazon Store

The system has expanded from generating peak computation-load forecasts one year in advance to a series of forecasts that include per-minute forecasts several months into the future.

On what are known as game days to teams inside Amazon, millions of virtual “customers” log on to the Amazon Store to search for items, browse product pages, load shopping carts, and check out as if they were real customers hunting for bargains during a sale such as Prime Day.


“It’s like a fire drill, a planned practice,” said Molly McElheny, a principal technical program manager in Central Reliability Engineering at Amazon. McElheny is responsible for helping to oversee those game days, which her organization runs at strategically chosen times in advance of big sales. Their goal? Make sure the Amazon Store and the many teams who help it run smoothly are ready ahead of time for potentially massive spikes in traffic.

That planned practice draws on forecasts of traffic and loads on Amazon services generated by CloudTune, a system that serves as a communications vehicle between the teams who plan events such as Prime Day and service teams that own infrastructure components and help run the Amazon Store.


CloudTune Forecasting originated in Amazon’s central economics team in 2015 as an improved methodology for capacity planning to handle major events such as Prime Day and Black Friday, explained Oleksiy Mnyshenko, a senior manager and economist at Amazon.

“These events have large peak-to-mean spreads,” he noted. “This means we need to proactively model the expected peak load and continuously assess our AWS capacity needs to support it.”
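The peak-to-mean spread can be made concrete with a toy calculation. All numbers below are invented for illustration, not actual Amazon figures: the point is that capacity must be provisioned against the peak, so a high peak-to-mean ratio means much of that capacity sits idle outside the event.

```python
def peak_to_mean(load):
    """Ratio of peak load to mean load over a series of observations."""
    return max(load) / (sum(load) / len(load))

# Hourly load (requests/s) for an ordinary day, and for a hypothetical
# event day on which a four-hour sale quadruples traffic.
ordinary_day = [1_000.0] * 24
event_day = [1_000.0] * 24
for hour in range(18, 22):
    event_day[hour] = 4_000.0

print(f"ordinary day: {peak_to_mean(ordinary_day):.2f}")  # 1.00
print(f"event day:    {peak_to_mean(event_day):.2f}")     # 2.67
```

Sizing to the mean on the event day would leave the four-hour peak badly under-provisioned, which is why the expected peak itself has to be modeled.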

Demand forecasting

The CloudTune Forecasting system has expanded over the years from generating peak computation-load forecasts one year in advance in the United States to a series of forecasts that range from per-week forecasts up to two years out to per-minute forecasts several months into the future. In addition, those forecasts — which are continually refreshed with new data — are now also generated for a wide variety of Amazon teams and regions around the world.

While the need for region-specific forecasts may be obvious (a Mother’s Day sale forecast in the United States will not be relevant for a Diwali sale in India), many distinct service teams that support the Amazon Store also rely on these forecasts.

When you go to the Amazon Store, ... in the background, there are thousands of software systems that together constitute what the experience is, and all of these systems and teams owning them need to be ready for these peak events.
Oleksiy Mnyshenko

One team may be responsible for the home page in a specific region, whereas another team is responsible for the shopping cart experience there, and yet another handles the checkout process. Each team experiences traffic differently and, necessarily, consumes AWS computing power differently. Over time, teams at Amazon have collaborated to improve CloudTune forecasts to be useful for each of those teams and their specific concerns.

“When you go to the Amazon Store, it feels very seamless as you go from searching for something to navigating to details about the product to then checking out, but in the background, there are thousands of software systems that together constitute what the experience is, and all of these systems and teams owning them need to be ready for these peak events,” Mnyshenko said.

In the early years, CloudTune forecasts were geared primarily to help service teams know how much computational capacity they needed for peak events. Since then, improvements have focused on differentiating across teams and regions. As the Amazon Store continued to grow, it became important to extend demand outlook to a two-years-out aggregate forecast per region to help inform decisions for AWS related to computing power, networking, and data center planning.


“A data center is not built in a day,” noted Chunpeng Wang, a senior applied scientist at Amazon who works on the CloudTune forecast team. “Our forecasts are an important input into long-term capacity planning for AWS.”

What’s more, the Amazon Store is not alone in contending with peak events, noted Ben Mildenhall, a senior manager in cloud computing and auto scaling.

“Many AWS external customers have Black Friday and Cyber Monday events as well,” Mildenhall said. “So it’s important we optimize to give all of our customers a great experience.”

CloudTune forecasts provide inputs to AWS to help size infrastructure in a way that maximizes utilization efficiency, noted Mnyshenko. “The way CloudTune specifically helps here is continuously getting better at anticipating the mix of capacity we’re using by generation, by type, by location, so that we can have those conversations and provide this feedback to AWS,” he said.

Granular, flexible, and explainable

Like many demand-forecasting applications, CloudTune is a time-series forecasting system. What sets it apart is its ability to predict demand at one-minute granularity, noted Mnyshenko. That level of granularity provides insight into patterns such as short-duration spikes in website traffic. Teams use the forecasts as inputs to determine their computing capacity not just for peak events like back-to-school sales but also for peak times during any given day, week, or month.

“Our comparative advantage is intra-day load predictions at one-minute granularity, allowing us to track actuals during peak events, highlighting these sharp edges where checkout spikes way beyond the natural peak for the period,” Mnyshenko said.
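A toy example, with invented numbers, of why that granularity matters: a short checkout spike that is obvious in a one-minute series can disappear almost entirely when the same data is averaged into hourly buckets.

```python
# Hypothetical per-minute checkout load (requests/s) over two hours:
# a flat baseline of 100 with a three-minute flash-sale spike to 900.
load = [100.0] * 120
load[45:48] = [900.0] * 3  # minutes 45-47

# Aggregating to hourly means averages the spike away...
hourly = [sum(load[i:i + 60]) / 60 for i in range(0, 120, 60)]
hourly_peak = max(hourly)

# ...while the one-minute series preserves the sharp edge.
minute_peak = max(load)

print(f"peak at hourly grain: {hourly_peak:.0f}")  # 140
print(f"peak at 1-min grain:  {minute_peak:.0f}")  # 900
```

A team provisioning against the hourly number would see a peak of 140 requests per second and miss the 900-requests-per-second edge entirely.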

In addition, CloudTune forecasts need to be flexible to accommodate changes in the day and duration of events, such as the evolution of Prime Day from a 24-hour event to a 48-hour event on different days each year.


At other times, CloudTune needs to make forecasts for special events such as the launch of popular gaming consoles, which may sell out in a matter of minutes.

“That can create a huge spike, and we have to predict the traffic spike and the order spike,” explained Ebrahim Nasrabadi, a senior manager of applied science who leads the CloudTune Forecasting science team.

The team responsible for CloudTune Forecasting has developed modular and configurable models to address these and other challenges, he noted.

For example, built-in functionality allows outliers, such as spikes in robot traffic that can unexpectedly inflate or deflate actual website traffic and order rates, to be separated from predictable seasonal behavior and known calendar events. Because these interruptions do not occur regularly, the tool allows forecasting teams to exclude them from the data used to build the forecast.

“Our models are simple and quite flexible to include additional variables and seasonality,” noted Nasrabadi. The models also take into account significant changes in a trend within a dataset, also known as a slope break.
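A minimal sketch of both ideas on invented data: a weekly demand series containing one bot-traffic outlier and a slope break at a known point. The filter threshold and break point here are assumptions chosen for the example, not CloudTune’s actual method.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Demand grows 5 units/week, then 12 units/week after week 10 (a slope
# break); week 7 carries a bot-traffic spike (an outlier).
demand = {w: 100 + 5 * w if w < 10 else 150 + 12 * (w - 10)
          for w in range(20)}
demand[7] = 400

# Exclude the outlier before fitting (here: anything far above trend).
clean = {w: d for w, d in demand.items() if not (w < 10 and d > 300)}

# Fit separate trends on either side of the known break point.
pre = [(w, d) for w, d in clean.items() if w < 10]
post = [(w, d) for w, d in clean.items() if w >= 10]
_, slope_pre = fit_line(*zip(*pre))
_, slope_post = fit_line(*zip(*post))
print(f"slope before break: {slope_pre:.1f}, after: {slope_post:.1f}")
```

Fitting a single line through all 20 weeks would both absorb the bot spike and blur the two growth rates together; excluding the outlier and splitting at the break recovers each underlying trend.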

The CloudTune team also emphasizes forecast models that are explainable.

“We have to be very crisp about what we are doing, very transparent about our expectations,” said Wang.

Hundreds of Amazon Store software teams use these forecasts to help determine their AWS capacity needs for peak events. The better these teams understand the forecasts, the more trust they have in them, noted Mnyshenko.

“We need to be able to explain what goes into the ingredients and, more importantly, what we are doing to reduce the spread in errors,” he said.

Continuous automation

Currently, service teams not yet using automation enhancements take the CloudTune forecasts and translate them into capacity orders for servers through the Amazon Elastic Compute Cloud (Amazon EC2) using many different manual tools and processes, said Doug Smith, a senior technical program manager responsible for delivering improvements and features to the CloudTune toolset.

A key future direction for CloudTune is to continuously enhance these tools and automate as many manual processes as possible, Smith noted.
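The translation step Smith describes can be sketched in a few lines. This is a hypothetical illustration: the per-instance throughput and headroom figures are invented, and the real tools involve many more inputs than a single peak number.

```python
import math

def instances_needed(peak_rps: float, rps_per_instance: float,
                     headroom: float = 0.30) -> int:
    """EC2 instances required to serve a forecast peak with spare headroom."""
    required = peak_rps * (1.0 + headroom)
    return math.ceil(required / rps_per_instance)

# e.g. a forecast peak of 48,000 requests/s on instances that each
# handle 500 requests/s, with 30% headroom:
print(instances_needed(48_000, 500))  # 125
```

Automating this kind of calculation, fed directly by the forecasts, is the hands-off experience the team is building toward.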

The world we’re envisioning between our team and CloudTune is one where services teams don’t have to worry about scaling at all.
Molly McElheny

“We’re moving into automation so that we can take our CloudTune forecasts as inputs into these new products that we’re building to provide a hands-off experience,” he said.

And while the game days McElheny’s team runs in advance of these major events will continue apace, she has a vision for the future there as well. Today, she said, the forecasts enable simulations of high-level customer journeys. She’d like to get to a forecast that allows her team to simulate an event down to which products customers are ordering, when, and where.

“This matters because different services get called depending on a lot of different factors. The closer we can simulate the real traffic the better, because we’re actually hitting services with the traffic they expect to see during the event,” McElheny said.

To get there, McElheny, Smith, and their colleagues work together to make sure the forecasts provide the best data for the most realistic simulations.

“The world we’re envisioning between our team and CloudTune is one where services teams don’t have to worry about scaling at all,” McElheny said. “CloudTune does it for them, and then we run a game day, and as we find issues during game day, CloudTune goes and places orders to scale things up for those customers.”
