Making deep learning practical for Earth system forecasting

Novel “cuboid attention” helps transformers handle large-scale multidimensional data, while diffusion models enable probabilistic prediction.

The Earth is a complex system. Variabilities ranging from regular events like temperature fluctuations to extreme events like drought, hailstorms, and the El Niño–Southern Oscillation (ENSO) phenomenon can influence crop yields, delay airline flights, and cause floods and forest fires. Precise and timely forecasting of these variabilities can help people take necessary precautions to avoid crises or better utilize natural resources such as wind and solar energy.

The success of transformer-based models in other AI domains has led researchers to attempt applying them to Earth system forecasting, too. But these efforts have encountered several major challenges. Foremost among these is the high dimensionality of Earth system data: naively applying the transformer’s quadratic-complexity attention mechanism is too computationally expensive.

Most existing machine-learning-based Earth system models also output single point forecasts, which are often averages across wide ranges of possible outcomes. Sometimes, however, it may be more important to know that there’s a 10% chance of an extreme weather event than to know the average across a range of possible outcomes. And finally, typical machine learning models have no guardrails imposed by physical laws or historical precedents and can produce outputs that are unlikely or even impossible.

In recent work, our team at Amazon Web Services has tackled all these challenges. Our paper “Earthformer: Exploring space-time transformers for Earth system forecasting”, published at NeurIPS 2022, introduces a novel attention mechanism we call cuboid attention, which enables transformers to process large-scale, multidimensional data much more efficiently.

And in “PreDiff: Precipitation nowcasting with latent diffusion models”, to appear at NeurIPS 2023, we show that diffusion models can both enable probabilistic forecasts and impose constraints on model outputs, making them much more consistent with both the historical record and the laws of physics.

Earthformer and cuboid attention

The heart of the transformer model is its “attention mechanism”, which enables it to weigh the importance of different parts of an input sequence when processing each element of the output sequence. This mechanism allows transformers to capture spatiotemporally long-range dependencies and relationships in the data, which conventional convolutional- or recurrent-neural-network-based architectures have not modeled well.

Earth system data, however, is inherently high-dimensional and spatiotemporally complex. In the SEVIR dataset studied in our NeurIPS 2022 paper, for instance, each data sequence consists of 25 frames of data captured at five-minute intervals, each frame having a spatial resolution of 384 x 384 pixels. Using the conventional transformer attention mechanism to process such high-dimensional data would be extremely expensive.
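To make that concrete, here is a back-of-the-envelope count (a sketch using the SEVIR dimensions quoted above) of the attention matrix that full space-time attention would need, treating every pixel of every frame as a token:

```python
# Cost of full space-time attention on one SEVIR sequence: if every pixel
# of every frame is a token, the attention matrix alone has L^2 entries,
# where L = T * H * W.

T, H, W = 25, 384, 384      # 25 frames at 384 x 384 resolution
L = T * H * W               # tokens when the whole tensor is flattened

print(f"tokens per sequence: {L:,}")       # 3,686,400
print(f"attention entries:   {L**2:,}")    # ~1.4e13, per head, per layer
```

Materializing (or even streaming) a matrix of roughly 10^13 entries per attention layer is what makes the naive approach impractical.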

In our NeurIPS 2022 paper, we proposed a novel attention mechanism we call cuboid attention, which decomposes input tensors into cuboids, or higher-dimensional analogues of cubes, and applies attention at the level of each cuboid. Since the computational cost of attention scales quadratically with the tensor size, applying attention locally in each cuboid is much more computationally tractable than trying to compute attention weights across the entire tensor at once. For instance, decomposing along the temporal axis reduces the cost by a factor of 384², since each SEVIR frame has a spatial resolution of 384 x 384 pixels.

Of course, such decomposition introduces a limitation: attention functions independently within each cuboid, with no communication between cuboids. To address this issue, we also compute global vectors that summarize the cuboids’ attention weights. Other cuboids can factor the global vectors into their own attention weight computations.
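As a concrete sketch (not the paper’s implementation), the following numpy code applies attention independently inside local cuboids of a (T, H, W, C) tensor. Identity projections stand in for learned Q/K/V weights, and the global vectors are omitted for brevity:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cuboid_attention(x, cuboid=(3, 2, 2)):
    """Attention applied independently inside each local cuboid.

    x: array of shape (T, H, W, C); the cuboid sizes must divide (T, H, W).
    Identity projections stand in for learned Q/K/V weights, and the
    global vectors are omitted.
    """
    T, H, W, C = x.shape
    t, h, w = cuboid
    # (T, H, W, C) -> (num_cuboids, t*h*w, C): the unshifted "local" pattern
    blocks = (x.reshape(T // t, t, H // h, h, W // w, w, C)
               .transpose(0, 2, 4, 1, 3, 5, 6)
               .reshape(-1, t * h * w, C))
    q = k = v = blocks
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(C)   # quadratic only in t*h*w
    out = softmax(scores) @ v                        # no cross-cuboid mixing
    # invert the reshapes to recover (T, H, W, C)
    return (out.reshape(T // t, H // h, W // w, t, h, w, C)
               .transpose(0, 3, 1, 4, 2, 5, 6)
               .reshape(T, H, W, C))

x = np.random.randn(6, 4, 4, 8)            # toy (T, H, W, C) tensor
y = cuboid_attention(x, cuboid=(3, 2, 2))
print(y.shape)                             # (6, 4, 4, 8)
```

The attention matrix here is only (t·h·w)² per cuboid rather than (T·H·W)² for the whole tensor, which is the source of the savings.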

Cuboid attention layer processing an input tensor (X) with global vectors (G).

We call our transformer-based model with cuboid attention Earthformer. Earthformer adopts a hierarchical encoder-decoder architecture, which gradually encodes the input sequence into multiple levels of representation and generates the prediction via a coarse-to-fine procedure. Each level of the hierarchy includes a stack of cuboid attention blocks. By stacking multiple cuboid attention layers with different configurations, we can efficiently explore effective space-time attention patterns.

The Earthformer architecture is a hierarchical transformer encoder-decoder with cuboid attention. In this diagram, “×D” means to stack D cuboid attention blocks with residual connections, while “×M” means to have M layers of hierarchies.

We experimented with multiple methods for decomposing an input tensor into cuboids. Our empirical studies show that the “axial” pattern, which stacks three unshifted local decompositions along the temporal, height, and width axes, is both effective and efficient: it achieves the best performance while avoiding the quadratic blowup of computing vanilla attention over the full tensor.

Illustration of cuboid decomposition strategies when the input shape is (T, H, W) = (6, 4, 4), and cuboid size is (3, 2, 2). Elements that have the same color belong to the same cuboid and will attend to each other. Local decompositions aggregate contiguous elements of the tensor, and dilated decompositions aggregate elements according to a step function determined by the cuboid size. Both local and dilated decompositions, however, can be shifted by some number of elements along any of the tensor’s axes.
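For a SEVIR-sized tensor, the saving from the axial pattern can be estimated by counting attention-matrix entries (a rough count that ignores channels and constant factors):

```python
# Attention-matrix entries: one global layer vs. the three axial layers,
# for a SEVIR-sized tensor (25 frames of 384 x 384).
T, H, W = 25, 384, 384

full_attention = (T * H * W) ** 2    # every element attends to every element

temporal = (H * W) * T ** 2          # H*W cuboids of shape (T, 1, 1)
height   = (T * W) * H ** 2          # T*W cuboids of shape (1, H, 1)
width    = (T * H) * W ** 2          # T*H cuboids of shape (1, 1, W)
axial_total = temporal + height + width

print(f"full / axial cost ratio: ~{full_attention / axial_total:,.0f}x")
```

Even after stacking three layers, the axial pattern needs several thousand times fewer attention entries than a single layer of full space-time attention.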

Experimental results

To evaluate Earthformer, we compared it to six state-of-the-art spatiotemporal forecasting models on two real-world datasets: SEVIR, for the task of continuously predicting precipitation probability in the near future (“nowcasting”), and ICAR-ENSO, for forecasting sea surface temperature (SST) anomalies.

On SEVIR, the evaluation metrics we used were standard mean squared error (MSE) and the critical success index (CSI), a standard metric in precipitation nowcasting. CSI is also known as intersection over union (IoU): computed at a given intensity threshold, it is denoted CSI-thresh, and the mean over thresholds is denoted CSI-M.
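CSI can be sketched as follows (a simplified scalar version of the pixelwise computation; the intensity values here are hypothetical):

```python
def csi(pred, target, thresh):
    """Critical success index (= IoU) at one intensity threshold:
    hits / (hits + misses + false alarms) after thresholding."""
    p = [x >= thresh for x in pred]
    t = [x >= thresh for x in target]
    hits         = sum(a and b for a, b in zip(p, t))
    misses       = sum(b and not a for a, b in zip(p, t))
    false_alarms = sum(a and not b for a, b in zip(p, t))
    denom = hits + misses + false_alarms
    return hits / denom if denom else 1.0

# One hit (both exceed 181) and one false alarm (only the prediction does):
print(csi([200, 230, 100], [220, 150, 90], thresh=181))  # 0.5
```

CSI-M would then be the mean of this score over the chosen set of thresholds (e.g., 181 and 219).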

On both MSE and CSI, Earthformer outperformed all six baseline models across the board. Earthformer with global vectors also uniformly outperformed the version without global vectors.

| Model | #Params. (M) | GFLOPS | CSI-M↑ | CSI-219↑ | CSI-181↑ | MSE (10⁻³)↓ |
| --- | --- | --- | --- | --- | --- | --- |
| Persistence | - | - | 0.2613 | 0.0526 | 0.0969 | 11.5338 |
| UNet | 16.6 | 33 | 0.3593 | 0.0577 | 0.1580 | 4.1119 |
| ConvLSTM | 14.0 | 527 | 0.4185 | 0.1288 | 0.2482 | 3.7532 |
| PredRNN | 46.6 | 328 | 0.4080 | 0.1312 | 0.2324 | 3.9014 |
| PhyDNet | 13.7 | 701 | 0.3940 | 0.1288 | 0.2309 | 4.8165 |
| E3D-LSTM | 35.6 | 523 | 0.4038 | 0.1239 | 0.2270 | 4.1702 |
| Rainformer | 184.0 | 170 | 0.3661 | 0.0831 | 0.1670 | 4.0272 |
| Earthformer w/o global | 13.1 | 257 | 0.4356 | 0.1572 | 0.2716 | 3.7002 |
| Earthformer | 15.1 | 257 | 0.4419 | 0.1791 | 0.2848 | 3.6957 |

On ICAR-ENSO, we report the correlation skill of the three-month-moving-averaged Nino3.4 index, which evaluates the accuracy of SST anomaly prediction over a region of the Pacific (170°W–120°W, 5°S–5°N). Earthformer consistently outperforms the baselines on all of the evaluation metrics, and the version using global vectors further improves performance.

| Model | #Params. (M) | GFLOPS | C-Nino3.4-M↑ | C-Nino3.4-WM↑ | MSE (10⁻⁴)↓ |
| --- | --- | --- | --- | --- | --- |
| Persistence | - | - | 0.3221 | 0.447 | 4.581 |
| UNet | 12.1 | 0.4 | 0.6926 | 2.102 | 2.868 |
| ConvLSTM | 14.0 | 11.1 | 0.6955 | 2.107 | 2.657 |
| PredRNN | 23.8 | 85.8 | 0.6492 | 1.910 | 3.044 |
| PhyDNet | 3.1 | 5.7 | 0.6646 | 1.965 | 2.708 |
| E3D-LSTM | 12.9 | 99.8 | 0.7040 | 2.125 | 3.095 |
| Rainformer | 19.2 | 1.3 | 0.7106 | 2.153 | 3.043 |
| Earthformer w/o global | 6.6 | 23.6 | 0.7239 | 2.214 | 2.550 |
| Earthformer | 7.6 | 23.9 | 0.7329 | 2.259 | 2.546 |

PreDiff

Diffusion models have recently emerged as a leading approach to many AI tasks. They are generative models that define a forward process of iteratively adding Gaussian noise to training samples; the model then learns to incrementally remove that noise in a reverse diffusion process, gradually reducing the noise level and ultimately producing clear, high-quality output.
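The forward process can be sketched in a few lines (a generic DDPM-style linear noise schedule on a scalar sample, not PreDiff’s actual hyperparameters):

```python
import math

# Forward diffusion: repeatedly add Gaussian noise under a linear beta
# schedule. The closed form lets us noise x_0 straight to any step t:
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,  eps ~ N(0, 1)

STEPS = 1000
betas = [1e-4 + (0.02 - 1e-4) * i / (STEPS - 1) for i in range(STEPS)]

alpha_bars, prod = [], 1.0
for b in betas:
    prod *= 1.0 - b                 # alpha_bar_t = prod of (1 - beta_i)
    alpha_bars.append(prod)

def q_sample(x0, t, eps):
    """Noise a (scalar) sample x0 directly to diffusion step t."""
    ab = alpha_bars[t]
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps

print(alpha_bars[0], alpha_bars[-1])  # ~0.9999 vs. close to 0: x_T is nearly pure noise
```

The reverse process learns to undo these steps one at a time, which is what the Earthformer-based denoiser in PreDiff is trained to do in latent space.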

During training, the model learns a sequence of transition probabilities between each of the denoising steps it incrementally learns to perform. It is therefore an intrinsically probabilistic model, which is well suited for probabilistic forecasting.

A recent variation on diffusion models is the latent diffusion model: the input is first fed to an autoencoder whose bottleneck layer produces a compressed embedding (data representation), and the diffusion process is then applied in that compressed space.
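The payoff is easy to quantify; the 8x downsampling factor below is illustrative, not PreDiff’s actual autoencoder configuration:

```python
# Why diffuse in latent space: the denoiser operates on far fewer positions.
frame_pixels = 384 * 384                 # one SEVIR frame in pixel space
downsample   = 8                         # illustrative autoencoder factor
latent_cells = (384 // downsample) ** 2  # 48 x 48 latent grid

print(frame_pixels // latent_cells)      # 64: each denoising step is ~64x smaller
```

Since the reverse process runs for hundreds of denoising steps, this per-step saving compounds into a large reduction in total sampling cost.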

In our forthcoming NeurIPS paper, “PreDiff: Precipitation nowcasting with latent diffusion models”, we present PreDiff, a latent diffusion model that uses Earthformer as its core neural-network architecture.

By modifying the transition probabilities of the trained model, we can impose constraints on the model output, making it more likely to conform to some prior knowledge. We achieve this by simply shifting the mean of the learned distribution until it complies better with the constraint we wish to impose.
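In simplified one-dimensional form (a scalar stand-in for the latent; the real model shifts the mean of a high-dimensional Gaussian transition), the idea looks like this:

```python
import random

def guided_sample(mu, sigma, constraint_mean, weight=0.5):
    """Sample one denoising step after shifting the learned mean toward a
    value that better satisfies the imposed constraint (e.g., a target
    average precipitation intensity). `weight` trades off fidelity to the
    learned distribution against constraint satisfaction."""
    shifted_mu = mu + weight * (constraint_mean - mu)
    return random.gauss(shifted_mu, sigma)

random.seed(0)
# The learned step proposes mu=0.0; the constraint asks for higher intensity:
s = guided_sample(mu=0.0, sigma=0.1, constraint_mean=2.0)
print(round(s, 2))  # near 1.0: pulled halfway toward the constraint
```

Because only the mean is shifted, the sample remains a draw from a distribution close to the learned one, which is how the output stays consistent with the historical record while respecting the constraint.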

An overview of PreDiff. The autoencoder (e) encodes the input as a latent vector (zcond). The latent diffusion model, which adopts the Earthformer architecture, then incrementally denoises (steps zt+1 to z0) the noisy version of the input (zT). In the knowledge control step, the transition distributions between denoising steps are modified to accord with prior knowledge.

Results

We evaluated PreDiff on the task of predicting precipitation intensity in the near future (“nowcasting”) on SEVIR. We use anticipated precipitation intensity as a knowledge control to simulate possible extreme weather events like rainstorms and droughts.

We found that knowledge control with anticipated future precipitation intensity effectively guides generation while maintaining fidelity and adherence to the true data distribution. For example, the third row of the following figure simulates how weather unfolds in an extreme case (with probability around 0.35%) where the future average intensity exceeds μτ + 4στ. Such simulation can be valuable for estimating potential damage in extreme-rainstorm cases.

A set of example forecasts from PreDiff with knowledge control (PreDiff-KC), i.e., PreDiff under the guidance of anticipated average intensity. From top to bottom: context sequence y, target sequence x, and forecasts from PreDiff-KC showcasing different levels of anticipated future intensity (μτ + nστ), where n takes the values −4, −2, 0, 2, and 4.

Related content

RO, Bucharest
Amazon's Compliance and Safety Services (CoSS) Team is looking for a smart and creative Applied Scientist to apply and extend state-of-the-art research in NLP, multi-modal modeling, domain adaptation, continuous learning and large language model to join the Applied Science team. At Amazon, we are working to be the most customer-centric company on earth. Millions of customers trust us to ensure a safe shopping experience. This is an exciting and challenging position to drive research that will shape new ML solutions for product compliance and safety around the globe in order to achieve best-in-class, company-wide standards around product assurance. You will research on large amounts of tabular, textual, and product image data from product detail pages, selling partner details and customer feedback, evaluate state-of-the-art algorithms and frameworks, and develop new algorithms to improve safety and compliance mechanisms. You will partner with engineers, technical program managers and product managers to design new ML solutions implemented across the entire Amazon product catalog. Key job responsibilities As an Applied Scientist on our team, you will: - Research and Evaluate state-of-the-art algorithms in NLP, multi-modal modeling, domain adaptation, continuous learning and large language model. - Design new algorithms that improve on the state-of-the-art to drive business impact, such as synthetic data generation, active learning, grounding LLMs for business use cases - Design and plan collection of new labels and audit mechanisms to develop better approaches that will further improve product assurance and customer trust. - Analyze and convey results to stakeholders and contribute to the research and product roadmap. 
- Collaborate with other scientists, engineers, product managers, and business teams to creatively solve problems, measure and estimate risks, and constructively critique peer research - Consult with engineering teams to design data and modeling pipelines which successfully interface with new and existing software - Publish research publications at internal and external venues. About the team The science team delivers custom state-of-the-art algorithms for image and document understanding. The team specializes in developing machine learning solutions to advance compliance capabilities. Their research contributions span multiple domains including multi-modal modeling, unstructured data matching, text extraction from visual documents, and anomaly detection, with findings regularly published in academic venues.
US, WA, Seattle
At Amazon Selection and Catalog Systems (ASCS), our mission is to power the online buying experience for customers worldwide so they can find, discover, and buy any product they want. We innovate on behalf of our customers to ensure uniqueness and consistency of product identity and to infer relationships between products in Amazon Catalog to drive the selection gateway for the search and browse experiences on the website. We're solving a fundamental AI challenge: establishing product identity and relationships at unprecedented scale. Using Generative AI, Visual Language Models (VLMs), and multimodal reasoning, we determine what makes each product unique and how products relate to one another across Amazon's catalog. The scale is staggering: billions of products, petabytes of multimodal data, millions of sellers, dozens of languages, and infinite product diversity—from electronics to groceries to digital content. The research challenges are immense. GenAI and VLMs hold transformative promise for catalog understanding, but we operate where traditional methods fail: ambiguous problem spaces, incomplete and noisy data, inherent uncertainty, reasoning across both images and textual data, and explaining decisions at scale. Establishing product identities and groupings requires sophisticated models that reason across text, images, and structured data—while maintaining accuracy and trust for high-stakes business decisions affecting millions of customers daily. Amazon's Item and Relationship Platform group is looking for an innovative and customer-focused applied scientist to help us make the world's best product catalog even better. In this role, you will partner with technology and business leaders to build new state-of-the-art algorithms, models, and services to infer product-to-product relationships that matter to our customers. 
You will pioneer advanced GenAI solutions that power next-generation agentic shopping experiences, working in a collaborative environment where you can experiment with massive data from the world's largest product catalog, tackle problems at the frontier of AI research, rapidly implement and deploy your algorithmic ideas at scale, across millions of customers. Key job responsibilities Key job responsibilities include: * Formulate novel research problems at the intersection of GenAI, multimodal learning, and large-scale information retrieval—translating ambiguous business challenges into tractable scientific frameworks * Design and implement leading models leveraging VLMs, foundation models, and agentic architectures to solve product identity, relationship inference, and catalog understanding at billion-product scale * Pioneer explainable AI methodologies that balance model performance with scalability requirements for production systems impacting millions of daily customer decisions * Own end-to-end ML pipelines from research ideation to production deployment—processing petabytes of multimodal data with rigorous evaluation frameworks * Define research roadmaps aligned with business priorities, balancing foundational research with incremental product improvements * Mentor peer scientists and engineers on advanced ML techniques, experimental design, and scientific rigor—building organizational capability in GenAI and multimodal AI * Represent the team in the broader science community—publishing findings, delivering tech talks, and staying at the forefront of GenAI, VLM, and agentic system research
US, CA, Pasadena
The Amazon Web Services (AWS) Center for Quantum Computing in Pasadena, CA, is looking to hire a Quantum Research Scientist in the Fabrication group. You will join a multi-disciplinary team of theoretical and experimental physicists, materials scientists, and hardware and software engineers working at the forefront of quantum computing. You should have a deep and broad knowledge of device fabrication techniques. Candidates with a track record of original scientific contributions will be preferred. We are looking for candidates with strong engineering principles, resourcefulness and a bias for action, superior problem solving, and excellent communication skills. Working effectively within a team environment is essential. As a research scientist you will be expected to work on new ideas and stay abreast of the field of experimental quantum computation. Key job responsibilities In this role, you will drive improvements in qubit performance by characterizing the impact of environmental and material noise on qubit dynamics. This will require designing experiments to assess the role of specific noise sources, ensuring the collection of statistically significant data through automation, analyzing the results, and preparing clear summaries for the team. Finally, you will work with hardware engineers, material scientists, and circuit designers to implement changes which mitigate the impact of the most significant noise sources. About the team Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. 
AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (Iot), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services. Inclusive Team Culture AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. 
Export Control Requirement: Due to applicable export control laws and regulations, candidates must be either a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum, or be able to obtain a US export license. If you are unsure if you meet these requirements, please apply and Amazon will review your application for eligibility.
US, VA, Herndon
AWS Infrastructure Services owns the design, planning, delivery, and operation of all AWS global infrastructure. In other words, we’re the people who keep the cloud running. We support all AWS data centers and all of the servers, storage, networking, power, and cooling equipment that ensure our customers have continual access to the innovation they rely on. We work on the most challenging problems, with thousands of variables impacting the supply chain — and we’re looking for talented people who want to help. You’ll join a diverse team of software, hardware, and network engineers, supply chain specialists, security experts, operations managers, and other vital roles. You’ll collaborate with people across AWS to help us deliver the highest standards for safety and security while providing seemingly infinite capacity at the lowest possible cost for our customers. And you’ll experience an inclusive culture that welcomes bold ideas and empowers you to own them to completion. AWS Infrastructure Services Science (AISS) researches and builds machine learning models that influence the power utilization at our data centers to ensure the health of our thermal and electrical infrastructure at high infrastructure utilization. As a Data Scientist, you will work on our Science team and partner closely with other scientists and data engineers as well as Business Intelligence, Technical Program Management, and Software teams to accurately model and optimize our power infrastructure. Outputs from your models will directly influence our data center topology and will drive exceptional cost savings. You will be responsible for building data science prototypes that optimize our power and thermal infrastructure, working across AWS to solve data mapping and quality issues (e.g. predicting when we might have bad sensor readings), and contribute to our Science team vision. You are skeptical. 
When someone gives you a data source, you pepper them with questions about sampling biases, accuracy, and coverage. When you’re told a model can make assumptions, you actively try to break those assumptions. You have passion for excellence. The wrong choice of data could cost the business dearly. You maintain rigorous standards and take ownership of the outcome of your data pipelines and code. You do whatever it takes to add value. You don’t care whether you’re building complex ML models, writing blazing fast code, integrating multiple disparate data-sets, or creating baseline models - you care passionately about stakeholders and know that as a curator of data insight you can unlock massive cost savings and preserve customer availability. You have a limitless curiosity. You constantly ask questions about the technologies and approaches we are taking and are constantly learning about industry best practices you can bring to our team. You have excellent business and communication skills to be able to work with product owners to understand key business questions and earn the trust of senior leaders. You will need to learn Data Center architecture and components of electrical engineering to build your models. You are comfortable juggling competing priorities and handling ambiguity. You thrive in an agile and fast-paced environment on highly visible projects and initiatives. The tradeoffs of cost savings and customer availability are constantly up for debate among senior leadership - you will help drive this conversation. Key job responsibilities - Proactively seek to identify opportunities and insights through analysis and provide solutions to automate and optimize power utilization based on a broad and deep knowledge of AWS data center systems and infrastructure. - Apply a range of data science techniques and tools combined with subject matter expertise to solve difficult customer or business problems and cases in which the solution approach is unclear. 
- Collaborate with Engineering teams to obtain useful data by accessing data sources and building the necessary SQL/ETL queries or scripts. - Build models and automated tools using statistical modeling, econometric modeling, network modeling, machine learning algorithms and neural networks. - Validate these models against alternative approaches, expected and observed outcome, and other business defined key performance indicators. - Collaborate with Engineering teams to implement these models in a manner which complies with evaluations of the computational demands, accuracy, and reliability of the relevant ETL processes at various stages of production. About the team Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. *Why AWS* Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. *Diverse Experiences* Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. *Work/Life Balance* We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. 
*Inclusive Team Culture* Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) conferences, inspire us to never stop embracing our uniqueness. *Mentorship and Career Growth* We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
US, CA, San Francisco
Join the next revolution in robotics at Amazon's Frontier AI & Robotics team, where you'll work alongside world-renowned AI pioneers to push the boundaries of what's possible in robotic intelligence. As an Applied Scientist, you'll be at the forefront of developing breakthrough foundation models that enable robots to perceive, understand, and interact with the world in unprecedented ways. You'll drive independent research initiatives in areas such as perception, manipulation, science understanding, locomotion, manipulation, sim2real transfer, multi-modal foundation models and multi-task robot learning, designing novel frameworks that bridge the gap between state-of-the-art research and real-world deployment at Amazon scale. In this role, you'll balance innovative technical exploration with practical implementation, collaborating with platform teams to ensure your models and algorithms perform robustly in dynamic real-world environments. You'll have access to Amazon's vast computational resources, enabling you to tackle ambitious problems in areas like very large multi-modal robotic foundation models and efficient, promptable model architectures that can scale across diverse robotic applications. 
Key job responsibilities - Drive independent research initiatives across the robotics stack, including robotics foundation models, focusing on breakthrough approaches in perception, and manipulation, for example open-vocabulary panoptic scene understanding, scaling up multi-modal LLMs, sim2real/real2sim techniques, end-to-end vision-language-action models, efficient model inference, video tokenization - Design and implement novel deep learning architectures that push the boundaries of what robots can understand and accomplish - Lead full-stack robotics projects from conceptualization through deployment, taking a system-level approach that integrates hardware considerations with algorithmic development, ensuring robust performance in production environments - Collaborate with platform and hardware teams to ensure seamless integration across the entire robotics stack, optimizing and scaling models for real-world applications - Contribute to the team's technical strategy and help shape our approach to next-generation robotics challenges A day in the life - Design and implement novel foundation model architectures and innovative systems and algorithms, leveraging our extensive infrastructure to prototype and evaluate at scale - Collaborate with our world-class research team to solve complex technical challenges - Lead technical initiatives from conception to deployment, working closely with robotics engineers to integrate your solutions into production systems - Participate in technical discussions and brainstorming sessions with team leaders and fellow scientists - Leverage our massive compute cluster and extensive robotics infrastructure to rapidly prototype and validate new ideas - Transform theoretical insights into practical solutions that can handle the complexities of real-world robotics applications About the team At Frontier AI & Robotics, we're not just advancing robotics – we're reimagining it from the ground up. 
Our team is building the future of intelligent robotics through innovative foundation models and end-to-end learned systems. We tackle some of the most challenging problems in AI and robotics, from developing sophisticated perception systems to creating adaptive manipulation strategies that work in complex, real-world scenarios. What sets us apart is our unique combination of ambitious research vision and practical impact. We leverage Amazon's massive computational infrastructure and rich real-world datasets to train and deploy state-of-the-art foundation models. Our work spans the full spectrum of robotics intelligence – from multimodal perception using images, videos, and sensor data, to sophisticated manipulation strategies that can handle diverse real-world scenarios. We're building systems that don't just work in the lab, but scale to meet the demands of Amazon's global operations. Join us if you're excited about pushing the boundaries of what's possible in robotics, working with world-class researchers, and seeing your innovations deployed at unprecedented scale.
US, WA, Seattle
Innovators wanted! Are you an entrepreneur? A builder? A dreamer? This role is part of an Amazon Special Projects team that takes the company’s Think Big leadership principle to the next level. We focus on creating entirely new products and services with a goal of positively impacting the lives of our customers. No industries or subject areas are out of bounds. If you’re interested in innovating at scale to address big challenges in the world, this is the team for you. As a Senior Research Scientist, you will work with a unique and gifted team developing exciting products for consumers and collaborate with cross-functional teams. Our team rewards intellectual curiosity while maintaining a laser focus on bringing products to market. Competitive candidates are responsive, flexible, and able to succeed within an open, collaborative, entrepreneurial, startup-like environment. At the intersection of both academic and applied research in this product area, you will have the opportunity to work together with some of the most talented scientists, engineers, and product managers. Here at Amazon, we embrace our differences. We are committed to furthering our culture of inclusion. We have thirteen employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We are constantly learning through programs that are local, regional, and global. Amazon’s culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust. Our team highly values work-life balance, mentorship, and career growth. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We care about your career growth and strive to assign projects and offer training that will challenge you to become your best.
CA, BC, Vancouver
Join our Amazon Private Brands Selection Guidance organization in building science and tech solutions at scale to delight our customers with products across our leading private brands such as Amazon Basics, Amazon Essentials, and by Amazon. The Selection Guidance team applies generative AI, machine learning, statistics, and economics to drive our private brands product assortment, strategic business decisions, and product inputs such as title, price, merchandising, and ordering. We are an interdisciplinary team of scientists, economists, engineers, and product managers incubating and building day one solutions using novel technology to solve some of the toughest business problems at Amazon. As a Data Scientist you will investigate business problems using data, invent novel solutions and prototypes, and directly contribute to bringing your ideas to life through production implementation. Current research areas include named entity recognition, product substitutes, pricing optimization, agentic AI, and large language models. You will review and guide scientists across the team on their designs and implementations, raising the team's bar for science research and prototypes. This is a unique, high-visibility opportunity for someone who wants to develop ambitious science solutions and have direct business and customer impact.

Key job responsibilities
- Partner with business stakeholders to deeply understand APB business problems and frame ambiguous business problems as science problems and solutions
- Perform data analysis and build data pipelines to drive business decisions
- Invent novel science solutions, develop prototypes, and deploy production software to solve business problems
- Review and guide science solutions across the team
- Publish and socialize your and the team's research across Amazon and external avenues as appropriate
- Leverage industry best practices to establish repeatable applied science practices, principles, and processes
US, VA, Arlington
This position requires that the candidate selected be a US Citizen and currently possess and maintain an active Top Secret security clearance. The Amazon Web Services Professional Services (ProServe) team seeks an experienced Principal Data Scientist to join our ProServe Shared Delivery Team (SDT). In this role, you will serve as a technical leader and strategic advisor to AWS enterprise customers, partners, and internal AWS teams on transformative AI/ML projects. You will leverage your deep technical expertise to architect and implement innovative machine learning and generative AI solutions that drive significant business outcomes. As a Principal Data Scientist, you will lead complex, high-impact AI/ML initiatives across multiple customer engagements. You will collaborate with Director and C-level executives to translate business challenges into technical solutions. You will drive innovation through thought leadership, establish technical standards, and develop reusable solution frameworks that accelerate customer adoption of AWS AI/ML services. Your work will directly influence the strategic direction of AWS Professional Services AI/ML offerings and delivery approaches. Your extensive experience in designing and implementing sophisticated AI/ML solutions will enable you to tackle the most challenging customer problems. You will provide technical mentorship to other data scientists, establish best practices, and represent AWS as a subject matter expert in customer-facing engagements. You will build trusted advisor relationships with customers and partners, helping them achieve their business outcomes through innovative applications of AWS AI/ML services. The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. 
Our team provides a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key job responsibilities
- Architecting and implementing complex, enterprise-scale AI/ML solutions that solve critical customer business challenges
- Providing technical leadership across multiple customer engagements, establishing best practices and driving innovation
- Collaborating with Delivery Consultants, Engagement Managers, Account Executives, and Cloud Architects to design and deploy AI/ML solutions
- Developing reusable solution frameworks, reference architectures, and technical assets that accelerate customer adoption of AWS AI/ML services
- Representing AWS as a subject matter expert in customer-facing engagements, including executive briefings and technical workshops
- Identifying and driving new business opportunities through technical innovation and thought leadership
- Mentoring junior data scientists and contributing to the growth of AI/ML capabilities within AWS Professional Services
IN, KA, Bengaluru
The Amazon Alexa AI team in India is seeking a talented, self-driven Applied Scientist to work on prototyping, optimizing, and deploying ML algorithms within the realm of generative AI.

Key responsibilities include:
- Research, experiment, and build proofs of concept advancing the state of the art in AI and ML for GenAI
- Collaborate with cross-functional teams to architect and execute technically rigorous AI projects
- Thrive in dynamic environments, adapting quickly to evolving technical requirements and deadlines
- Engage in effective technical communication (written and spoken) with coordination across teams
- Conduct thorough documentation of algorithms, methodologies, and findings for transparency and reproducibility
- Publish research papers in internal and external venues of repute
- Support on-call activities for critical issues