David Card (left), an Amazon Scholar, a professor of economics at the University of California, Berkeley, and the outgoing president of the AEA, and Guido Imbens (right), an academic research consultant at Amazon and a professor at the Stanford Graduate School of Business.

A conversation with economics Nobelists

Amazon Scholar David Card and academic research consultant Guido Imbens on the past and future of empirical economics.

The annual meeting of the American Economic Association (AEA) took place Jan. 7–9, and as it approached, Amazon Science had the chance to interview two of the three recipients of the 2021 Nobel Prize in economics — who also happen to be Amazon-affiliated economists.

David Card, an Amazon Scholar, a professor of economics at the University of California, Berkeley, and the outgoing president of the AEA, won half the prize “for his empirical contributions to labor economics”.

Guido Imbens, an academic research consultant at Amazon and a professor at the Stanford Graduate School of Business, shared the other half of the prize with MIT’s Josh Angrist for “methodological contributions to the analysis of causal relationships”.

Amazon Science: The empirical approach to economics has been recognized by the Nobel Prize committee several times in the last few years, but it wasn't always as popular as it is today. I'm curious how you both first became interested in empirical approaches to economics.

David Card: The heroes of economics for many, many decades were the theorists, and in the postwar era especially, there was a recognition that economic modeling was underdeveloped — the math was underdeveloped — and there was a need to formalize things and understand better what the models really delivered.

That need really proceeded through the ’60s, and Arrow and Debreu were these famous mathematical economists who developed some very elegant theoretical models of how the market works in an idealized economy.

What happened in my time was people started to realize that we had the data to better look at real labor market phenomena and possibly make economics something different than just a kind of a branch of philosophy. Arrow-Debreu is basically mathematical philosophy.

Guido Imbens: I came from a very different tradition. I grew up in the Netherlands, and there was a strong tradition of econometrics started by people like Tinbergen. Tinbergen had been very broad — he did econometrics, but he also did empirical work and was very heavily involved in policy analysis. But over time, the program he had started was becoming much more focused on technical econometrics.

So as undergraduates, we didn't really do any empirical work. We really just did a lot of mathematical statistics and some operations research and some economic theory. My thesis was a theoretical econometrics study.

When I presented that at Harvard, Josh Angrist wasn't really all that impressed with it, and he actually opposed the department hiring me there because he thought the paper was boring. And he was probably right! But luckily, the more senior people there at the time thought I was at least somewhat promising. And so I got hired at Harvard. But then it was really Josh and Larry Katz, one of the labor economists there, who got me interested in going to the labor seminar and got me exposed to the modern empirical work.

The context in which Josh and I started talking was really this paper that I think came up in all three of the Nobel lectures, Ed Leamer's “Let's Take the Con Out of Econometrics”, where Leamer says, “Hardly anyone takes data analysis seriously. Or perhaps more accurately, hardly anyone takes anyone else’s data analysis seriously.”

And I think Leamer was right: people did these very elaborate things, and it was all showing off complicated technical things, but it wasn't really very credible. In fact, Leamer presented a lecture based on that work at Harvard. And I remember Josh getting up at some point and saying, “Well, you talk about all this old stuff, but look at the work Card does. Look at the work Krueger does. Look at the work I do. It's very different.”

And that felt right to me. It felt that the work was qualitatively very different from the work that Ed Leamer was describing and that he was complaining about.

AS: So that's when you first became aware of Professor Card’s work. Professor Card, when did you first become aware of Professor Imbens’s work?

Card: One of his early papers was pretty interesting. He was trying to combine data from micro survey evidence with benchmark numbers that you would get from a population, and it's actually a version of a kind of problem that arises at Amazon all the time, which is, we've got noisy estimates of something, and we've got probably reliable estimates of some other aggregates, and there are often ways to try and combine those. I saw that and I thought that was very interesting.
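That idea is easy to illustrate in miniature. Below is a minimal sketch of the simplest case, with all numbers hypothetical: a noisy survey estimate and a precise aggregate benchmark of the same quantity are pooled by inverse-variance (precision) weighting, yielding a better estimate than either alone. The actual research handles far richer settings; this is just the intuition.

```python
import numpy as np

# Hypothetical setup: a small survey gives a noisy estimate of mean
# household spending, while an administrative aggregate pins down the
# same quantity much more precisely.
rng = np.random.default_rng(0)

survey = rng.normal(100.0, 20.0, size=200)      # noisy micro data
survey_est = survey.mean()
survey_var = survey.var(ddof=1) / len(survey)   # sampling variance of the mean

bench_est, bench_var = 101.0, 0.25              # precise aggregate benchmark

# Inverse-variance (precision) weighting: each estimate is weighted by
# 1/variance, so the precise benchmark dominates but the survey still
# contributes information.
w_s, w_b = 1 / survey_var, 1 / bench_var
combined = (w_s * survey_est + w_b * bench_est) / (w_s + w_b)
combined_var = 1 / (w_s + w_b)

print(f"survey:   {survey_est:6.2f} (var {survey_var:.3f})")
print(f"combined: {combined:6.2f} (var {combined_var:.3f})")
```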

Then there’s the problem that Josh and Guido worked on that was most impactful and that was cited by the Nobel Prize committee. I had worked on an experiment, a real experiment [as opposed to a natural experiment], in welfare analysis in Canada, and it was providing an economic incentive to try and get single mothers off of welfare and into work. And we noticed that the group of mothers who complied or followed on with the experiment was a reasonable size, but it wasn't 100%.

We did some analysis of it trying to characterize them. Around the same time, I became aware of Imbens’s and Angrist’s paper, which basically formalized that a lot better and described what exactly was going on with this group. That framework just instantly took off, and everyone within a few years was thinking about problems that way.

This morning I was talking to another Amazon person about a problem. It was a difference analysis. I was saying we should try and characterize the compliers for this difference intervention. So it's exactly this problem.
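For readers unfamiliar with that framework: when compliance with random assignment is imperfect, the assignment acts as an instrument for actual take-up, and the intent-to-treat effect scaled by the complier share recovers the local average treatment effect (LATE) for compliers. A minimal sketch with simulated data (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Simulated experiment with imperfect compliance: z is the random offer,
# d is actual take-up. "Compliers" take the treatment only if assigned;
# for simplicity there are no always-takers.
z = rng.integers(0, 2, n)
complier = rng.random(n) < 0.6               # 60% would comply if assigned
d = z * complier                             # observed take-up
y = 1.0 + 2.0 * d + rng.normal(0, 1, n)      # outcome; true effect 2.0 on takers

# First stage: the effect of assignment on take-up estimates the complier share.
share = d[z == 1].mean() - d[z == 0].mean()

# Wald / IV estimator: the intent-to-treat effect divided by the complier
# share recovers the local average treatment effect (LATE) for compliers.
itt = y[z == 1].mean() - y[z == 0].mean()
late = itt / share

print(f"complier share ~ {share:.2f}, LATE ~ {late:.2f} (truth: 0.60, 2.00)")
```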

The Nobel committee’s press release announcing Card, Imbens, and Angrist’s prize emphasizes their use of natural experiments, which it defines as “situations in which chance events or policy changes result in groups of people being treated differently, in a way that resembles clinical trials in medicine.” A seminal instance was Card’s 1993 paper with his Princeton colleague Alan Krueger, which compared fast-food restaurants in two demographically similar communities on either side of the New Jersey–Pennsylvania border, one of which had recently seen a minimum-wage hike and one of which hadn’t.

AS: In the early days, there was skepticism about the empirical approach to economics. So every time you selected a new research project, you weren't just trying to answer an economics problem; you were also, in a sense, establishing the credibility of the approach. How did you select problems then? Was there a structure that you recognized as possibly lending itself to a natural experiment?

Card: I think that the natural-experiment thing — there was really a brief period where that was novel, to tell you the truth. Maybe 1989 to 1992 or 3. I did this paper on the Mariel boatlift, which was cited by the committee. But to tell you the truth, that was a very modest paper. I never presented it anywhere, and it's in a very modest journal. So I never thought of that paper as going anywhere [laughs].

What happened was, it became more and more well understood that in order to make a claim of causality even from a natural-experiment setting, you had to have a fair amount of information from before the experiment took place to validate or verify that the group that you were calling the treatment group and the group that you were calling the control group actually were behaving the same.

That was a weakness of the project that Alan Krueger and I did. We had restaurants in New Jersey and Pennsylvania. We knew the minimum wage was going to increase — or we thought we knew that; it wasn't entirely clear at the time — so we surveyed the restaurants before, and then the minimum wage went up, and we surveyed them after, and that was good.

But we didn't really have multiple surveys from before to show that in the absence of the minimum wage, New Jersey and Pennsylvania restaurants had tracked each other for a long time. And these days, that's better understood. At Amazon, for instance, people doing intervention analyses of this type would normally do what they call a pre-trend analysis, making sure that the treatment group and the control group are trending the same beforehand.
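A minimal sketch of such a pre-trend check, using simulated group means (all numbers hypothetical): if the gap between treated and control outcomes is flat before the intervention, the common-trends assumption behind a difference-in-differences estimate looks plausible, and the jump in the gap identifies the effect.

```python
import numpy as np

rng = np.random.default_rng(2)
periods = np.arange(-4, 3)     # four pre-intervention periods, three post

# Simulated group means sharing a common trend, with a treatment effect
# of +1.5 appearing in the treated group from period 0 onward.
control = 10 + 0.5 * periods + rng.normal(0, 0.1, len(periods))
treated = 12 + 0.5 * periods + 1.5 * (periods >= 0) + rng.normal(0, 0.1, len(periods))

gap = treated - control
pre, post = gap[periods < 0], gap[periods >= 0]

# A flat pre-period gap supports the common-trends assumption; the jump
# in the gap at the intervention is the difference-in-differences estimate.
print("pre-period gap (should be roughly constant):", np.round(pre, 2))
print(f"DiD estimate: {post.mean() - pre.mean():.2f} (truth: 1.50)")
```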

I think there are 1,000 questions in economics that have been open forever. Sometimes new datasets come along. That's been happening a lot in labor economics: huge administrative datasets have become available, richer and richer, and now we're getting datasets that are created by these tech firms. So my usual thing is to think, “That's a dataset that maybe we can answer this old question with.” That’s more my approach.

Imbens: I come from a slightly different perspective. Most of my work has come from listening to people like David and Josh and seeing what type of problems they're working on, what type of methods they're using, and seeing if there's something to be added there — if there’s some way of improving the methods or places where maybe they're stuck, but listening to the people actually doing the empirical work rather than starting with the substantive questions.

That's why being at Amazon has been great, from my perspective. A lot of people have substantive questions they're trying to analyze with data, and they're kind of stuck in places, so there's a need for new methodologies. It's been a very fertile environment for me to come up with new research.

AS: Methodologically, what are some of the outstanding questions that interest you both?

Imbens: Well, one of the things is experimental design in complex environments. A lot of the experimental designs we’re using at the moment still come fairly directly from biomedical settings. We have a population, we randomize them into a treatment group and a control group, and then we compare outcomes for the two groups.

But in a lot of the settings we’re interested in at Amazon, there are very complex interactions between the units and their experiences, and dealing with that is very challenging. There are lots of special cases where we more or less know what to do, but there are lots of cases where we don't know exactly what to do, and we need to do more complex experiments to get the answers to the questions we're interested in.

An example of what Imbens calls “experimental design in complex environments”. In this illustration, each of five viewers is shown promotions for eight different Prime Video shows. Some of those promotions contain extra information, indicated in the image by star ratings (the “treatment”). This design helps determine whether the treatment affects viewing habits (the viewer experiment) but also helps identify spillover effects, in which participation in the viewer experiment influences the viewer’s behavior in other contexts.
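A rough simulation of that double-randomization logic (scale and effect sizes hypothetical): viewers are randomized into the experiment and, independently, shows are randomized into treatment, so a show-level contrast estimates the direct effect of the treated promotion while a viewer-level contrast on untreated shows picks up the spillover.

```python
import numpy as np

rng = np.random.default_rng(3)
n_viewers, n_shows = 1_000, 8

# Double randomization: viewers are randomized into the experiment, and,
# independently, shows are randomized into treatment (the star ratings).
in_exp = rng.integers(0, 2, n_viewers)
treated_show = rng.permutation(np.repeat([0, 1], 4))   # half the shows treated

# Simulated viewing probabilities: a direct effect of a treated promotion
# (+0.10) plus a spillover from simply being in the experiment (+0.03).
p = (0.20
     + 0.10 * np.outer(in_exp, treated_show)   # direct effect
     + 0.03 * in_exp[:, None])                 # spillover to all shows
views = rng.random((n_viewers, n_shows)) < p

# Show-level contrast within the experiment -> direct effect;
# viewer-level contrast on untreated shows -> spillover effect.
direct = (views[in_exp == 1][:, treated_show == 1].mean()
          - views[in_exp == 1][:, treated_show == 0].mean())
spill = (views[in_exp == 1][:, treated_show == 0].mean()
         - views[in_exp == 0][:, treated_show == 0].mean())
print(f"direct ~ {direct:.3f} (truth 0.10), spillover ~ {spill:.3f} (truth 0.03)")
```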

The second thing is, we do a lot of these experiments, but often the experiments are relatively small. They’re short in duration, and they’re small in size relative to the overall population. You know, it goes back to the paper we mentioned before, combining observational-study data with experimental data. That raises a lot of interesting methodological challenges that I spend a lot of time thinking about these days.

AS: I wondered if in the same way that in that early paper you were looking at survey data and population data, there's a way that natural experiments and economic field experiments can reinforce each other or give you a more reliable signal than you can get from either alone.

Card: There's one thing that people do; I've done a few of these myself. It's called meta-analysis. It's a technique where you take results from different studies and try and put them into a statistical model. In a way it's comparable to work Guido has done at Amazon, where you take a series of actual experiments, A/B experiments done in Weblab, and basically combine them and say, “Okay, these aren't exactly the same products and the same conditions, but there's enough comparability that maybe I can build a model and use the information from the whole set to help inform what we're learning from any given one.”

And you can do that in studies in economics. For example, I’ve done one on training programs. There are many of these training programs. Each of them — exactly as Guido was saying — is often quite small. And there are weird conditions: sometimes it's only young males or young females that are in the experiment, or they don't have very long follow-up, or sometimes the labor market is really strong, and other times it's really weak. So you can try and build a model of the outcome you get from any given study and then try and see if there are any systematic patterns there.
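The mechanics at the core of such a meta-analysis can be sketched briefly (all estimates and standard errors hypothetical): each study's effect is weighted by its precision, producing a pooled estimate with a smaller standard error than any single small study.

```python
import numpy as np

# Hypothetical effect estimates and standard errors from five small
# training-program evaluations run under different conditions.
effects = np.array([0.12, 0.30, -0.05, 0.22, 0.15])
ses     = np.array([0.10, 0.15,  0.20, 0.08, 0.12])

# Fixed-effect meta-analysis: weight each study by its precision
# (1/variance), so more precise studies count for more.
w = 1 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

print(f"pooled effect: {pooled:.3f} (se {pooled_se:.3f})")
```

A random-effects variant would add a between-study variance term to each weight, which is closer in spirit to modeling the systematic differences across programs that Card describes.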

Imbens: We do all these experiments, but often we kind of do them once, and then we put them aside. There's a lot of information over the years built up in all these experiments we've done, and finding more of these meta-analysis-type ways of combining them and exploiting all the information we have collected there — I think it's a very promising way to go.

AS: How can empirical methods complement theoretical approaches — model building of the kind that, in some sense, the early empirical research was reacting against?

Card: Normally, if you're building a model, there are a few key parameters, like you need to get some kind of an elasticity of what a customer will do if faced with a higher price or if offered faster versus slower delivery. And if you have those elasticities, then you can start building up a model.

If you have even a fairly complicated dynamic model, normally there's a relatively small number of these parameters, and the value of the model is to take this set of parameters and try and tell a bit richer story — not just how the customer responds to an offer of a faster delivery today but how that affects their future purchases and whether they come back and buy other products or whatever. But you need credible estimates of those elasticities. It's not helpful to build a model and then just pull numbers out of the air [laughs]. And that's why A/B experiments are so important at Amazon.
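A minimal sketch of how such an elasticity might be estimated from randomized price variation (data simulated, numbers hypothetical): because assignment is random, the slope of a log-log regression of quantity on price is a credible estimate of the price elasticity.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5_000

# Simulated A/B test: prices randomized across customers; demand generated
# with a true price elasticity of -1.2.
log_price = np.log(rng.choice([8.0, 10.0, 12.0], size=n))
log_qty = 5.0 - 1.2 * log_price + rng.normal(0, 0.5, n)

# Because price is randomized, the OLS slope in logs is a credible
# elasticity estimate that a richer dynamic model can then build on.
X = np.column_stack([np.ones(n), log_price])
beta, *_ = np.linalg.lstsq(X, log_qty, rcond=None)
print(f"estimated price elasticity: {beta[1]:.2f} (truth: -1.20)")
```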

AS: I asked about outstanding methodological questions that you're interested in, but how about economic questions more broadly that you think could really benefit from an empirical approach?

Card: In my field [labor economics], we've begun to realize that different firms are setting different wages for the same kinds of workers. And we're starting to think about two issues related to that. One is, how do workers choose between jobs? Do they know about all the jobs out there? Do they just find out about some of the jobs? We're trying to figure out exactly why it's okay in the labor market for there to be multiple wages for a certain class of workers. Why don't all the workers immediately try to go to one job? This seems to be a very important phenomenon.

And on the other side of that, how do employers think about it? What are the benefits to employers of a higher wage or lower wage? Is it just the recruiting, or is it retention, or is it productivity? Is it longer-term goals? That's front and center in the research that I do outside of Amazon.

AS: I was curious if there were any cases where a problem presented itself, and at first you didn't think there was any way to get an empirical handle on it, and then you figured out that there was.

Card: I saw a really interesting paper that was done by a PhD student who was visiting my center at Berkeley. In European football, there are a lot of non-white players, and fan racism is pretty pervasive. This student noticed that during COVID, a lot of games were played with no fans. So he was able to compare the performance of non-white and white players in the pre-COVID era and the COVID era, with and without fans, and showed that the non-white players did a little bit better. That's the kind of question where you’re saying, “How are we ever going to study that?” But if you're thinking and looking around, there's always some angle that might be useful.

Imbens: That's a very clever idea. I agree with David. If you just pay attention, there are a lot of things happening that allow you to answer important questions. Maybe fan insults in sports aren't that big a deal in themselves, but clearly, racism in the labor market and having people treated differently is a big problem. And here you get a very clear handle on an aspect of it. And once you show it's a problem there, it's very likely that it shows up in settings that are arguably much more important substantively but much harder to study.

In the Netherlands for a long time, they had a limit on the number of students who could go to medical school. And it wasn't decided by the medical schools themselves; they couldn't choose whom to admit. It was partly based on a lottery. At some point, someone used that to figure out how much access to medical school is actually worth. So essentially, you have two people who are both qualified to go to medical school; one gets lucky in the lottery; one doesn't. And it turns out you're giving the person who wins the lottery basically a lot of money. Obviously, in many professions we can't just randomly assign people to different types of jobs. But here you get a handle on the value of rationing that type of education.

Card: I think that's really important. You know, we're supposed to be social scientists who are trying to see what people are doing and the problems they confront and trying to analyze them. In a way, that's different than this sort of old-fashioned Adam Smith view of the economy as a perfectly functioning tool that we're just supposed to admire. That is a difference, I think.
