Economics Nobelist on causal inference

In a keynote address at the latest Amazon Machine Learning Conference, Amazon academic research consultant, Stanford professor, and recent Nobel laureate Guido Imbens offered insights on the estimation of causal effects in “panel data” settings.

Since 2013, Amazon has held an annual internal conference, the Amazon Machine Learning Conference (AMLC), where machine learning practitioners from around the company come together to share their work, teach and learn new techniques, and discuss best practices.

At the third AMLC, in 2015, Guido Imbens, a professor of economics at the Stanford University Graduate School of Business, gave a popular tutorial on causality and machine learning. Nine years and one Nobel Prize for economics later, Imbens — now in his tenth year as an Amazon academic research consultant — was one of the keynote speakers at the 2024 AMLC, held in October.

Guido Imbens, Nobel laureate, professor of economics at the Stanford University Graduate School of Business, and an Amazon academic research consultant for the past 10 years.

In his talk, Imbens discussed causal inference, a mainstay of his research for more than 30 years and the topic that the Nobel committee highlighted in its prize citation. In particular, he considered so-called panel data, in which multiple units — say, products, customers, or geographic regions — and outcomes — say, sales or clicks — are observed at discrete points in time.

Over particular time spans, some units receive a treatment — say, a special product promotion or new environmental regulation — whose effects are reflected in the outcome measurements. Causal inference is the process of determining how much of the change in outcomes over time can be attributed to the treatment. This means adjusting for spurious correlations that result from general trends in the data, which can be inferred from trends among the untreated (control) units.

Imbens began by discussing the value of his work at Amazon. “I started working with people here at Amazon in 2014, and it's been a real pleasure and a real source of inspiration for my research, interacting with the people here and seeing what kind of problems they're working on, what kind of questions they have,” he said. “I've always found it very useful in my econometric, in my statistics, in my methodological research to talk to people who are using these methods in practice, who are actually working with these things on the ground. So it's been a real privilege for the last 10 years doing that with the people here at Amazon.”

Panel data

Then, with no further ado, he launched into the substance of his talk. Panel data, he explained, is generally represented by a pair of matrices whose rows represent units and whose columns represent points in time. In one matrix, the entries represent measurements made on particular units at particular times; the other matrix takes only binary values, which indicate whether a given unit was subject to treatment during the corresponding time span.
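This pair of matrices can be sketched with a couple of NumPy arrays (the numbers here are purely hypothetical):

```python
import numpy as np

# Hypothetical panel: rows are units, columns are time periods.
# Y holds outcome measurements (say, sales); W flags treated cells.
Y = np.array([
    [10.0, 11.0, 12.0, 13.0, 14.0],   # unit 0 (always control)
    [ 9.0, 10.0, 11.0, 12.0, 13.0],   # unit 1 (always control)
    [11.0, 12.0, 13.0, 14.0, 15.0],   # unit 2 (always control)
    [10.5, 11.5, 12.5, 13.5, 18.0],   # unit 3, treated in the last period
])
W = np.zeros(Y.shape, dtype=int)
W[3, 4] = 1   # the single treated unit-time cell in Imbens's simplified case

print("treated cells:", list(zip(*np.nonzero(W))))
```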


Ideally, for a given unit and a given time span, we would run an experiment in which the unit went untreated; then we would back time up and run the experiment again, with the treatment. But of course, time can’t be backed up. So instead, for each treated cell in the matrix, we estimate what the relevant measurement would have been if the treatment hadn’t been applied, and we base that estimate on the outcomes for other units and time periods.

For ease of explanation, Imbens said, he considered the case in which only one unit was treated, for only one time interval: “Once I have methods that work effectively for that case, the particular methods I'm going to suggest extend very naturally to the more-general assignment mechanism,” he said. “This is a very common setup.”

Control estimates

Imbens described five standard methods for estimating what the outcome for a treated unit would have been had it gone untreated during the same time period. The first method, which is very common in empirical work in economics, is known as difference in differences. It involves a regression analysis of all the untreated data up to the treatment period; the regression function can then be used to estimate the outcome for the treated unit if it hadn’t been treated.
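In the single-treated-cell case, the difference-in-differences idea reduces to a simple two-by-two computation: take the treated unit's pre-treatment outcome and add the average change observed among the controls over the same interval. The numbers below are hypothetical:

```python
import numpy as np

# Two-by-two difference-in-differences sketch (hypothetical numbers).
y_treated_pre = 13.5
y_treated_post = 18.0                      # observed under treatment
controls_pre = np.array([13.0, 12.0, 14.0])
controls_post = np.array([14.0, 13.0, 15.0])

trend = controls_post.mean() - controls_pre.mean()   # common time trend
counterfactual = y_treated_pre + trend               # estimated untreated outcome
effect = y_treated_post - counterfactual
print(effect)  # 3.5
```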

The second method is called synthetic control, in which a control version of the treated unit is synthesized as a weighted average of the other control units.

“One of the canonical examples is one where he [Alberto Abadie, an Amazon Scholar, pioneer of synthetic control, and long-time collaborator of Imbens] is interested in estimating the effect of an anti-smoking regulation in California that went into effect in 1989,” Imbens explained. “So he tries to find the convex combination of the other states such that smoking rates for that convex combination match the actual smoking rates in California prior to 1989 — say, 40% Arizona, 30% Utah, 10% Washington and 20% New York. Once he has those weights, he then estimates the counterfactual smoking rate in California.”

A synthetic control estimates a counterfactual control for a treated unit by synthesizing outcomes for untreated units. For instance, smoking rates in California might be synthesized as a convex combination of smoking rates in other states.
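A minimal sketch of the weight-fitting step is a constrained least-squares problem: find nonnegative weights summing to one whose combination of control outcome paths best matches the treated unit's pre-treatment path. The projected-gradient approach, the data, and the step sizes below are our own illustrative choices, not Abadie's actual implementation:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1.0) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Rows: pre-treatment outcome paths of three hypothetical control states.
X = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [3.0, 5.0, 4.0, 6.0],
    [2.0, 1.0, 3.0, 2.0],
])
# The treated state's pre-treatment path; by construction it is an equal
# mix of the first two controls, so the fit should recover those weights.
x_treated = 0.5 * X[0] + 0.5 * X[1]

w = np.full(3, 1.0 / 3.0)
for _ in range(5000):
    grad = X @ (X.T @ w - x_treated)        # least-squares gradient
    w = project_to_simplex(w - 0.01 * grad)  # keep weights convex

counterfactual_path = X.T @ w   # the "synthetic California", in the analogy
```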

The third method, which Imbens and a colleague had proposed in 2016, adds an intercept to the synthetic-control equation; that is, it specifies an output value for the function when all the unit measurements are zero.

The final two methods were variations on difference in differences that added another term to the function to be optimized: a low-rank matrix, which approximates the outcomes matrix at a lower resolution. The first of these variations — the matrix completion method — simply adds the matrix, with a weighting factor, to the standard difference-in-differences function.


The second variation — synthetic difference in differences — weights the distances between the unit-time measurements and the regression curve according to the control units’ similarities to the unit that received the intervention.

“In the context of the smoking example,” Imbens said, “you assign more weight to units that are similar to California, that match California better. So rather than pretending that Delaware or Alaska is very similar to California — other than in their level — you only put weight on states that are very similar to California.”

Drawbacks

Having presented these five methods, Imbens went on to explain what he found wrong with them. The first problem, he said, is that they treat the outcome and treatment matrices as both row (units) and column (points in time) exchangeable. That is, the methods produce the same results whatever the ordering of rows and columns in the matrices.

“The unit exchangeability here seems very reasonable,” Imbens said. “We may have some other covariates, but in principle, there's nothing that distinguishes these units or suggests treating them in a way that's different from exchangeable.


“But for the time dimension, it's different. You would think that if we're trying to predict outcomes in 2020, having outcomes measured in 2019 is going to be much more useful than having outcomes measured in 1983. We think that there's going to be correlation over time that makes predictions based on values from 2019 much more likely to be accurate than predictions based on values from 1983.”

The second problem, Imbens said, is that while the methods work well in the special case he considered, where only a single unit-time pair is treated — and indeed, they work well under any conditions in which the treatment assignments have a clearly discernible structure — they struggle in cases where the treatment assignments are more random. That’s because, with random assignment, units drop in and out of the control group from one time period to the next, making accurate regression analysis difficult.

A new estimator

So Imbens proposed a new estimator, one based on the matrix completion method, but with additional terms that apply two sets of weights to each control unit’s contribution to the regression analysis. The first weight reduces the contribution of a unit measurement according to its distance in time from the measurement of the treated unit — that is, it privileges more recent measurements.


The second weight reduces the contributions of control unit measurements according to their absolute distance from the measurement of the treated unit. There, the idea is to limit the influence of outliers in sparse datasets — that is, datasets in which control units are constantly dropping in and out.
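The role of the two weightings can be illustrated with a toy computation. The exponential-decay kernels and the decay rates below are our own illustrative assumptions, not the estimator's actual functional form:

```python
import numpy as np

# Each control observation: (outcome, time index). The treated unit's
# last pre-treatment outcome is 13.5, and treatment occurs at t = 5.
controls = np.array([
    [13.0, 4.0],    # recent and similar: sizable weight
    [12.0, 1.0],    # similar but long ago: shrunk by the time kernel
    [14.0, 5.0],    # contemporaneous and similar: largest weight
    [30.0, 5.0],    # contemporaneous outlier: crushed by the outcome kernel
])
y_ref, t_ref = 13.5, 5.0
alpha, beta = 0.5, 0.3   # illustrative decay rates (assumptions)

w = np.exp(-alpha * np.abs(controls[:, 1] - t_ref)) \
  * np.exp(-beta * np.abs(controls[:, 0] - y_ref))
w /= w.sum()
counterfactual = float(w @ controls[:, 0])
```

Despite the outlier at 30.0, the weighted estimate stays near the cluster of similar, recent controls, which is exactly the behavior the two weightings are meant to produce.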

Imbens then compared the performance of his new estimator to those of the other five, on nine existing datasets that had been chosen to test the accuracy of prior estimators. On eight of the nine datasets, Imbens’s estimator outperformed all five of its predecessors, sometimes by a large margin; on the ninth dataset, it finished a close second to the difference-in-differences approach — which, however, was the last-place finisher on several other datasets.

Root-mean-squared error of six estimators on nine datasets, normalized to the best-performing estimator on each dataset. Imbens’s new estimator, the doubly weighted causal panel (DWCP) estimator, outperforms its predecessors, often by a large margin.

“I don't want to push this as a particular estimator that you should use in all settings,” Imbens explained. “I want to mainly show that even simple changes to existing classes of estimators can actually do substantially better than the previous estimators by incorporating the time dimension in a more satisfactory way.”

For purposes of causal inference, however, the accuracy of an estimator is not the only consideration. The reliability of the estimator — its power, in the statistical sense — also depends on its variance, the degree to which its estimates scatter around their mean from sample to sample. The lower the variance, the more likely the estimator is to provide accurate estimates.

Variance of variance

For the rest of his talk, Imbens discussed methods of estimating the variance of counterfactual estimators. Here things get a little confusing, because the variance estimators themselves display variance. Imbens advocated the use of conditional variance estimators, which hold some variables fixed — in the case of panel data, unit, time, or both — and estimate the variance of the free variables. Counterintuitively, higher-variance variance estimators, Imbens said, offer more power.


“In general, you should prefer the conditional variance because it adapts more to the particular dataset you're analyzing,” Imbens explained. “It's going to give you more power to find the treatment effects. Whereas the marginal variance” — an alternative and widely used method for estimating variance — “has the lowest variance itself, and it's going to have the lowest power in general for detecting treatment effects.”

Imbens then presented some experimental results using synthetic panel data that indicated that, indeed, in cases where data is heteroskedastic — meaning that the variance of one variable changes with the value of another — variance estimators that themselves use conditional variance have greater statistical power than other estimators.
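A small simulation (our own toy setup, not the synthetic data from the talk) shows why a conditional variance estimate can adapt where a marginal one cannot:

```python
import numpy as np

rng = np.random.default_rng(0)

# Heteroskedastic toy data: the noise scale depends on a covariate x.
x = np.repeat([1.0, 4.0], 5000)   # two groups of observations
y = rng.normal(0.0, x)            # sd = 1 in group one, sd = 4 in group two

marginal = y.var()                                      # one pooled number
conditional = {v: y[x == v].var() for v in (1.0, 4.0)}  # adapts per group
```

The marginal estimate averages over both regimes, overstating the noise in the quiet group and understating it in the noisy one; the conditional estimates recover each group's true variance, which is what gives conditional-variance methods their extra power.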

“There's clearly more to be done, both in terms of estimation, despite all the work that's been done in the last couple of years in this area, and in terms of variance estimation,” Imbens concluded. “And where I think the future lies for these models is a combination of the outcome modeling by having something flexible in terms of both factor models as well as weights that ensure that you're doing the estimation only locally. And we need to do more on variance estimation, keeping in mind both power and validity, with some key role for modeling some of the heteroskedasticity.”

US, WA, Seattle
Amazon Economics is seeking Reduced Form Causal Analysis (RFCA) Economist Interns who are passionate about applying econometric methods to solve real-world business challenges. RFCA represents the largest group of economists at Amazon, and these core econometric methods are fundamental to economic analysis across the company. In this full-time internship (40 hours per week, with hourly compensation), you'll work with large-scale datasets to analyze causal relationships and inform strategic business decisions, gaining hands-on experience that's directly applicable to dissertation writing and future career placement. Key job responsibilities As an RFCA Economist Intern, you'll specialize in econometric analysis to determine causal relationships in complex business environments. Your responsibilities include: - Analyze large-scale datasets using advanced econometric techniques to solve complex business challenges - Applying econometric techniques such as regression analysis, binary variable models, cross-section and panel data analysis, instrumental variables, and treatment effects estimation - Utilizing advanced methods including differences-in-differences, propensity score matching, synthetic controls, and experimental design - Building datasets and performing data analysis at scale - Collaborating with economists, scientists, and business leaders to develop data-driven insights and strategic recommendations - Tackling diverse challenges including program evaluation, elasticity estimation, customer behavior analysis, and predictive modeling that accounts for seasonality and time trends - Build and refine comprehensive datasets for in-depth economic analysis - Present complex analytical findings to business leaders and stakeholders
US, WA, Seattle
Amazon Economics is seeking Forecasting, Macroeconomics and Finance (FMF) Economist Interns who are passionate about applying time-series econometric methods to solve real-world business challenges. FMF economists interpret and forecast Amazon business dynamics by combining advanced time-series statistical methods with strong economic analysis and intuition. In this full-time internship (40 hours per week, with hourly compensation), you'll work with large-scale datasets to forecast business trends and inform strategic decisions, gaining hands-on experience that's directly applicable to dissertation writing and future career placement. Key job responsibilities As an FMF Economist Intern, you'll specialize in time-series econometric analysis to understand, predict, and optimize Amazon's business dynamics. Your responsibilities include: - Analyze large-scale datasets using advanced time-series econometric techniques to solve complex business challenges - Applying frontier methods in time series econometrics, including forecasting models, dynamic systems analysis, and econometric models that combine macro and micro data - Developing formal models to understand past and present business dynamics, predict future trends, and identify relevant risks and opportunities - Building datasets and performing data analysis at scale using world-class data tools - Collaborating with economists, scientists, and business leaders to develop data-driven insights and strategic recommendations - Tackling diverse challenges including analyzing drivers of growth and profitability, forecasting business metrics, understanding how customer experience interacts with external conditions, and evaluating short, medium, and long-term business dynamics - Build and refine comprehensive datasets for in-depth time-series economic analysis - Present complex analytical findings to business leaders and stakeholders