David Card (left), an Amazon Scholar, a professor of economics at the University of California, Berkeley, and the outgoing president of the AEA, and Guido Imbens (right), an academic research consultant at Amazon and a professor at the Stanford Graduate School of Business.

A conversation with economics Nobelists

Amazon Scholar David Card and academic research consultant Guido Imbens on the past and future of empirical economics.

The annual meeting of the American Economic Association (AEA) took place Jan. 7-9, and as it approached, Amazon Science had the chance to interview two of the three recipients of the 2021 Nobel Prize in economics — who also happen to be Amazon-affiliated economists.

David Card, an Amazon Scholar, a professor of economics at the University of California, Berkeley, and the outgoing president of the AEA, won half the prize “for his empirical contributions to labor economics”.

Guido Imbens, an academic research consultant at Amazon and a professor at the Stanford Graduate School of Business, shared the other half of the prize with MIT’s Josh Angrist for “methodological contributions to the analysis of causal relationships”.

Amazon Science: The empirical approach to economics has been recognized by the Nobel Prize committee several times in the last few years, but it wasn't always as popular as it is today. I'm curious how you both first became interested in empirical approaches to economics.

David Card: The heroes of economics for many, many decades were the theorists, and in the postwar era especially, there was a recognition that economic modeling was underdeveloped — the math was underdeveloped — and there was a need to formalize things and understand better what the models really delivered.


That need really proceeded through the ’60s, and Arrow and Debreu were these famous mathematical economists who developed some very elegant theoretical models of how the market works in an idealized economy.

What happened in my time was people started to realize that we had the data to better look at real labor market phenomena and possibly make economics something different than just a kind of a branch of philosophy. Arrow-Debreu is basically mathematical philosophy.

Guido Imbens: I came from a very different tradition. I grew up in the Netherlands, and there was a strong tradition of econometrics started by people like Tinbergen. Tinbergen had been very broad — he did econometrics, but he also did empirical work and was very heavily involved in policy analysis. But over time, the program he had started was becoming much more focused on technical econometrics.

So as an undergraduate, we didn't really do any empirical work. We really just did a lot of mathematical statistics and some operations research and some economic theory. My thesis was a theoretical econometrics study.

When I presented that at Harvard, Josh Angrist wasn't really all that impressed with it, and he actually opposed the department hiring me there because he thought the paper was boring. And he was probably right! But luckily, the more senior people there at the time thought I was at least somewhat promising. And so I got hired at Harvard. But then it was really Josh and Larry Katz, one of the labor economists there, who got me interested in going to the labor seminar and got me exposed to the modern empirical work.

The context in which Josh and I started talking was really this paper that I think came up in all three of the Nobel lectures, Ed Leamer’s “Let’s Take the Con Out of Econometrics”, where Leamer says, “Hardly anyone takes data analysis seriously. Or perhaps more accurately, hardly anyone takes anyone else’s data analysis seriously.”

And I think Leamer was right: people did these very elaborate things, and it was all showing off complicated technical things, but it wasn't really very credible. In fact, Leamer presented a lecture based on that work at Harvard. And I remember Josh getting up at some point and saying, “Well, you talk about all this old stuff, but look at the work Card does. Look at the work Krueger does. Look at the work I do. It's very different.”

And that felt right to me. It felt that the work was qualitatively very different from the work that Ed Leamer was describing and that he was complaining about.

AS: So that's when you first became aware of Professor Card’s work. Professor Card, when did you first become aware of Professor Imbens’s work?

Card: One of his early papers was pretty interesting. He was trying to combine data from micro survey evidence with benchmark numbers that you would get from a population, and it's actually a version of a problem that arises at Amazon all the time, which is, we've got noisy estimates of something, and we've got probably reliable estimates of some other aggregates, and there's often ways to try and combine those. I saw that and I thought that was very interesting.

Then there’s the problem that Josh and Guido worked on that was most impactful and that was cited by the Nobel Prize committee. I had worked on an experiment, a real experiment [as opposed to a natural experiment], in welfare analysis in Canada, and it was providing an economic incentive to try and get single mothers off of welfare and into work. And we noticed that the group of mothers who complied or followed on with the experiment was a reasonable size, but it wasn't 100%.

We did some analysis of it trying to characterize them. Around the same time, I became aware of Imbens’s and Angrist’s paper, which basically formalized that a lot better and described what exactly was going on with this group. That framework just instantly took off, and everyone within a few years was thinking about problems that way.
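[The Imbens-Angrist framework Card refers to identifies the "local average treatment effect" (LATE): the effect of treatment on the compliers, estimated by dividing the effect of the random offer on outcomes by its effect on take-up. A minimal illustrative sketch with simulated data — the numbers and variable names here are invented for illustration, not drawn from the Canadian experiment:]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Randomized encouragement (the instrument): an offer of an economic incentive.
z = rng.integers(0, 2, n)

# Compliance is imperfect: 60% are compliers (take treatment iff offered),
# 40% never-takers; assume no defiers (the monotonicity condition).
complier = rng.random(n) < 0.6
d = np.where(complier, z, 0)  # actual treatment take-up

# Outcome: treatment raises the outcome by 2.0 for those actually treated.
y = 1.0 + 2.0 * d + rng.normal(0, 1, n)

# Wald / IV estimator: intent-to-treat effect on y over intent-to-treat effect on take-up.
itt_y = y[z == 1].mean() - y[z == 0].mean()
itt_d = d[z == 1].mean() - d[z == 0].mean()  # also estimates the complier share (~0.6)
late = itt_y / itt_d

print(f"complier share ≈ {itt_d:.2f}, LATE ≈ {late:.2f}")  # ≈ 0.60 and ≈ 2.00
```

[The ratio recovers the effect for compliers only; never-takers contribute nothing to either difference, which is exactly the characterization Card describes.]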

This morning I was talking to another Amazon person about a problem, a difference-in-differences analysis. I was saying we should try and characterize the compliers for the intervention. So it's exactly this problem.

The Nobel committee’s press release for Card, Imbens, and Angrist’s prize announcement emphasizes their use of natural experiments, which it defines as “situations in which chance events or policy changes result in groups of people being treated differently, in a way that resembles clinical trials in medicine.” A seminal instance of this was Card’s 1993 paper with his Princeton colleague Alan Krueger, which compared fast-food restaurants in two demographically similar communities on either side of the New Jersey-Pennsylvania border, one of which had recently seen a minimum-wage hike and one of which hadn’t.

AS: In the early days, there was skepticism about the empirical approach to economics. So every time you selected a new research project, you weren't just trying to answer an economics problem; you were also, in a sense, establishing the credibility of the approach. How did you select problems then? Was there a structure that you recognized as possibly lending itself to a natural experiment?

Card: I think that the natural-experiment thing — there was really a brief period where that was novel, to tell you the truth. Maybe 1989 to 1992 or '93. I did this paper on the Mariel boatlift, which was cited by the committee. But to tell you the truth, that was a very modest paper. I never presented it anywhere, and it's in a very modest journal. So I never thought of that paper as going anywhere [laughs].

What happened was, it became more and more well understood that in order to make a claim of causality even from a natural-experiment setting, you had to have a fair amount of information from before the experiment took place to validate or verify that the group that you were calling the treatment group and the group that you were calling the control group actually were behaving the same.

That was a weakness of the project that Alan Krueger and I did. We had restaurants in New Jersey and Pennsylvania. We knew the minimum wage was going to increase — or we thought we knew that; it wasn't entirely clear at the time — but we surveyed the restaurants before, and then the minimum wage went up, and we surveyed them after, and that was good.

But we didn't really have multiple surveys from before to show that in the absence of the minimum wage, New Jersey and Pennsylvania restaurants had tracked each other for a long time. And these days, that's better understood. At Amazon for instance, people are doing intervention analyses of this type. They would normally do what they call a pre-trend analysis to make sure that the treatment group and the control group are trending the same beforehand.
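[A pre-trend check of the kind Card describes can be sketched as follows. The setup is hypothetical — simulated per-period averages for a treated and a control group with parallel trends — not data from the New Jersey-Pennsylvania study:]

```python
import numpy as np

rng = np.random.default_rng(1)
periods = np.arange(-3, 2)  # three pre-periods; treatment turns on at t = 0

# Simulated group averages: common time trend, plus a +1.5 treatment effect after t = 0.
control = 20.0 + 0.5 * periods + rng.normal(0, 0.05, len(periods))
treated = 18.0 + 0.5 * periods + rng.normal(0, 0.05, len(periods))
treated[periods >= 0] += 1.5

# Pre-trend check: the treated-minus-control gap should be flat before treatment.
gap = treated - control
pre_gap = gap[periods < 0]
assert np.ptp(pre_gap) < 0.3, "pre-trends diverge; parallel-trends assumption suspect"

# Difference-in-differences: how the gap changes after treatment vs. before.
did = gap[periods >= 0].mean() - pre_gap.mean()
print(f"DiD estimate ≈ {did:.2f}")  # close to 1.5
```

[If the pre-period gap were trending rather than flat, the assertion would fail and the difference-in-differences estimate could not be read as causal — which is the weakness Card notes in the original two-wave design.]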

I think there are 1,000 questions in economics that have been open forever. Sometimes new datasets come along. That's been happening a lot in labor economics: huge administrative datasets have become available, richer and richer, and now we're getting datasets that are created by these tech firms. So my usual thing is, I think, that's a dataset that maybe we can answer this old question on. That’s more my approach.


Imbens: I come from a slightly different perspective. Most of my work has come from listening to people like David and Josh and seeing what type of problems they're working on, what type of methods they're using, and seeing if there's something to be added there — if there’s some way of improving the methods or places where maybe they're stuck, but listening to the people actually doing the empirical work rather than starting with the substantive questions.

That's why being at Amazon has been great, from my perspective. A lot of people have substantive questions they're trying to analyze with data, and they're kind of stuck in places, so there's a need for new methodologies. It's been a very fertile environment for me to come up with new research.

AS: Methodologically, what are some of the outstanding questions that interest you both?

Imbens: Well, one of the things is experimental design in complex environments. A lot of the experimental designs we’re using at the moment still come fairly directly from biomedical settings. We have a population, we randomize them into a treatment group and a control group, and then we compare outcomes for the two groups.

But in a lot of the settings we’re interested in at Amazon, there are very complex interactions between the units and their experiences, and dealing with that is very challenging. There are lots of special cases where we know somewhat what to do, but there are lots of cases where we don't know exactly what to do, and we need to do more complex experiments to get the answers to the questions we're interested in.

An example of what Imbens calls “experimental design in complex environments”. In this illustration, each of five viewers is shown promotions for eight different Prime Video shows. Some of those promotions contain extra information, indicated in the image by star ratings (the “treatment”). This design helps determine whether the treatment affects viewing habits (the viewer experiment) but also helps identify spillover effects, in which participation in the viewer experiment influences the viewer’s behavior in other contexts.

The second thing is, we do a lot of these experiments, but often the experiments are relatively small. They’re small in duration, and they’re small in size relative to the overall population. You know, it goes back to the paper we mentioned before, combining this observational-study data with experimental data. That raises a lot of interesting methodological challenges that I spend a lot of time thinking about these days.

AS: I wondered if in the same way that in that early paper you were looking at survey data and population data, there's a way that natural experiments and economic field experiments can reinforce each other or give you a more reliable signal than you can get from either alone.

Card: There's one thing that people do; I've done a few of these myself. It's called meta-analysis. It's a technique where you take results from different studies and try and put them into a statistical model. In a way it's comparable to work Guido has done at Amazon, where you take a series of actual experiments, A/B experiments done in Weblab, and basically combine them and say, “Okay, these aren't exactly the same products and the same conditions, but there's enough comparability that maybe I can build a model and use the information from the whole set to help inform what we're learning from any given one.”

And you can do that in studies in economics. For example, I’ve done one on training programs. There are many of these training programs. Each of them — exactly as Guido was saying — is often quite small. And there are weird conditions: sometimes it's only young males or young females that are in the experiment, or they don't have very long follow-up, or sometimes the labor market is really strong, and other times it's really weak. So you can try and build a model of the outcome you get from any given study and then try and see if there are any systematic patterns there.
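[The simplest version of the pooling Card describes is inverse-variance weighting: precise studies count for more. A minimal sketch with invented per-study numbers — the effects and standard errors below are hypothetical, not from any actual training-program evaluation:]

```python
import numpy as np

# Hypothetical per-study effect estimates (e.g., earnings gains) and standard errors.
effects = np.array([0.8, 1.5, 0.3, 1.1, 0.9])
ses     = np.array([0.6, 0.9, 0.4, 0.5, 0.7])

# Fixed-effect (inverse-variance) pooling: each study is weighted by its precision.
w = 1.0 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

print(f"pooled effect ≈ {pooled:.2f} (SE {pooled_se:.2f})")
```

[Richer meta-analytic models of the kind Card mentions would also regress the study-level effects on study characteristics — sample demographics, follow-up length, labor-market conditions — to look for systematic patterns.]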

Imbens: We do all these experiments, but often we kind of do them once, and then we put them aside. There's a lot of information over the years built up in all these experiments we've done, and finding more of these meta-analysis-type ways of combining them and exploiting all the information we have collected there — I think it's a very promising way to go.

AS: How can empirical methods complement theoretical approaches — model building of the kind that, in some sense, the early empirical research was reacting against?

Card: Normally, if you're building a model, there are a few key parameters, like you need to get some kind of an elasticity of what a customer will do if faced with a higher price or if offered a shorter, faster delivery speed versus slower delivery speed. And if you have those elasticities, then you can start building up a model.

If you have even a fairly complicated dynamic model, normally there's a relatively small number of these parameters, and the value of the model is to take this set of parameters and try and tell a bit richer story — not just how the customer responds to an offer of a faster delivery today but how that affects their future purchases and whether they come back and buy other products or whatever. But you need credible estimates of those elasticities. It's not helpful to build a model and then just pull numbers out of the air [laughs]. And that's why A/B experiments are so important at Amazon.
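[An elasticity of the kind Card mentions can be backed out of a simple randomized price test. A hedged sketch with simulated data — the prices, purchase rates, and the true elasticity of -1.5 are invented for illustration, not actual Amazon figures:]

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical A/B test: half of customers are randomly shown a 10% higher price.
price_a, price_b = 10.0, 11.0
arm = rng.integers(0, 2, n)

# Simulated demand with a true price elasticity of -1.5:
# purchase probability scales with (price / baseline) ** -1.5.
base_p = 0.10
prob = base_p * (np.where(arm == 1, price_b, price_a) / price_a) ** -1.5
bought = rng.random(n) < prob

# Estimate: log change in the purchase rate over log change in price.
rate_a = bought[arm == 0].mean()
rate_b = bought[arm == 1].mean()
elasticity = np.log(rate_b / rate_a) / np.log(price_b / price_a)
print(f"estimated elasticity ≈ {elasticity:.2f}")  # close to -1.5
```

[A credibly estimated parameter like this is what a richer dynamic model would then take as an input, rather than a number pulled out of the air.]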

AS: I asked about outstanding methodological questions that you're interested in, but how about economic questions more broadly that you think could really benefit from an empirical approach?

Card: In my field [labor economics], we've begun to realize that different firms are setting different wages for the same kinds of workers. And we're starting to think about two issues related to that. One is, how do workers choose between jobs? Do they know about all the jobs out there? Do they just find out about some of the jobs? We're trying to figure out exactly why it's okay in the labor market for there to be multiple wages for a certain class of workers. Why don't all the workers immediately try to go to one job? This seems to be a very important phenomenon.

And on the other side of that, how do employers think about it? What are the benefits to employers of a higher wage or lower wage? Is it just the recruiting, or is it retention, or is it productivity? Is it longer-term goals? That's front and center in the research that I do outside of Amazon.

AS: I was curious if there were any cases where a problem presented itself, and at first you didn't think there was any way to get an empirical handle on it, and then you figured out that there was.


Card: I saw a really interesting paper that was done by a PhD student who was visiting my center at Berkeley. In European football, there are a lot of non-white players, and fan racism is pretty pervasive. This guy noticed that during COVID, they played a lot of games with no fans. So he was able to compare the performance of the non-white and white players in the pre-COVID era and the COVID era, with and without fans, and showed that the non-white players did a little bit better. That's the kind of question where you’re saying, “How are we ever going to study that?” But if you're thinking and looking around, there's always some angle that might be useful.

Imbens: That's a very clever idea. I agree with David. If you just pay attention, there are a lot of things happening that allow you to answer important questions. Maybe fan insults in sports itself isn't that big a deal, but clearly, racism in the labor market and having people treated differently is a big problem. And here you get a very clear handle on an aspect of it. And once you show it's a problem there, it's very likely that it shows up in arguably substantively much more important settings where it's really hard to study.

In the Netherlands for a long time, they had a limit on the number of students who could go to medical school. And it wasn't decided by the medical schools themselves; they couldn't choose whom to admit. It was partly based on a lottery. At some point, someone used that to figure out how much access to medical school is actually worth. So essentially, you have two people who are both qualified to go to medical school; one gets lucky in the lottery; one doesn't. And it turns out you're giving the person who wins the lottery basically a lot of money. Obviously, in many professions we can't just randomly assign people to different types of jobs. But here you get a handle on the value of rationing that type of education.

Card: I think that's really important. You know, we're supposed to be social scientists who are trying to see what people are doing and the problems they confront and trying to analyze them. In a way, that's different than this sort of old-fashioned Adam Smith view of the economy as a perfectly functioning tool that we're just supposed to admire. That is a difference, I think.

Research areas

Related content

IN, KA, Bengaluru
RBS (Retail Business Services) Tech team works towards enhancing the customer experience (CX) and their trust in product data by providing technologies to find and fix Amazon CX defects at scale. Our platforms help in improving the CX in all phases of customer journey, including selection, discoverability & fulfilment, buying experience and post-buying experience (product quality and customer returns). The team also develops GenAI platforms for automation of Amazon Stores Operations. As a Sciences team in RBS Tech, we focus on foundational ML research and develop scalable state-of-the-art ML solutions to solve the problems covering customer experience (CX) and Selling partner experience (SPX). We work to solve problems related to multi-modal understanding (text and images), task automation through multi-modal LLM Agents, supervised and unsupervised techniques, multi-task learning, multi-label classification, aspect and topic extraction for Customer Anecdote Mining, image and text similarity and retrieval using NLP and Computer Vision for product groupings and identifying duplicate listings in product search results. Key job responsibilities As an Applied Scientist, you will be responsible to design and deploy scalable GenAI, NLP and Computer Vision solutions that will impact the content visible to millions of customer and solve key customer experience issues. You will develop novel LLM, deep learning and statistical techniques for task automation, text processing, image processing, pattern recognition, and anomaly detection problems. You will define the research and experiments strategy with an iterative execution approach to develop AI/ML models and progressively improve the results over time. You will partner with business and engineering teams to identify and solve large and significantly complex problems that require scientific innovation. You will independently file for patents and/or publish research work where opportunities arise. 
The RBS org deals with problems that are directly related to the selling partners and end customers and the ML team drives resolution to organization level problems. Therefore, the Applied Scientist role will impact the large product strategy, identifies new business opportunities and provides strategic direction which is very exciting.
IN, KA, Bengaluru
Selection Monitoring team is responsible for making the biggest catalog on the planet even bigger. In order to drive expansion of the Amazon catalog, we develop advanced ML/AI technologies to process billions of products and algorithmically find products not already sold on Amazon. We work with structured, semi-structured and Visually Rich Documents using deep learning, NLP and image processing. The role demands a high-performing and flexible candidate who can take responsibility for success of the system and drive solutions from research, prototype, design, coding and deployment. We are looking for Applied Scientists to tackle challenging problems in the areas of Information Extraction, Efficient crawling at internet scale, developing ML models for website comprehension and agents to take multi-step decisions. You should have depth and breadth of knowledge in text mining, information extraction from Visually Rich Documents, semi structured data (HTML) and advanced machine learning. You should also have programming and design skills to manipulate Semi-Structured and unstructured data and systems that work at internet scale. You will encounter many challenges, including: - Scale (build models to handle billions of pages), - Accuracy (requirements for precision and recall) - Speed (generate predictions for millions of new or changed pages with low latency) - Diversity (models need to work across different languages, market places and data sources) You will help us to - Build a scalable system which can algorithmically extract information from world wide web. - Intelligently cluster web pages, segment and classify regions, extract relevant information and structure the data available on semi-structured web. - Build systems that will use existing Knowledge Base to perform open information extraction at scale from visually rich documents. 
Key job responsibilities - Use AI, NLP and advances in LLMs/SLMs and agentic systems to create scalable solutions for business problems. - Efficiently Crawl web, Automate extraction of relevant information from large amounts of Visually Rich Documents and optimize key processes. - Design, develop, evaluate and deploy, innovative and highly scalable ML models, esp. leveraging latest advances in RL-based fine tuning methods like DPO, GRPO etc. - Work closely with software engineering teams to drive real-time model implementations. - Establish scalable, efficient, automated processes for large scale model development, model validation and model maintenance. - Lead projects and mentor other scientists, engineers in the use of ML techniques. - Publish innovation in research forums.
US, MA, Boston
MULTIPLE POSITIONS AVAILABLE Employer: AMAZON WEB SERVICES, INC. Offered Position: Data Scientist III Job Location: Boston, Massachusetts Job Number: AMZN9674163 Position Responsibilities: Own the data science elements of various products to help with data-based decision making, product performance optimization, and product performance tracking. Work directly with product managers to help drive the design of the product. Work with Technical Product Managers to help drive the build planning. Translate business problems and products into data requirements and metrics. Initiate the design, development, and implementation of scientific analysis projects or deliverables. Own the analysis, modelling, system design, and development of data science solutions for products. Write documents and make presentations that explain model/analysis results to the business. Bridge the degree of uncertainty in both problem definition and data scientific solution approaches. Build consensus on data, metrics, and analysis to drive business and system strategy. Position Requirements: Master's degree or foreign equivalent degree in Statistics, Applied Mathematics, Economics, Engineering, Computer Science or a related field and two years of experience in the job offered or a related occupation. Employer will accept a Bachelor's degree or foreign equivalent degree in Statistics, Applied Mathematics, Economics, Engineering, Computer Science, or a related field and five years of progressive post-baccalaureate experience in the job offered or a related occupation as equivalent to the Master's degree and two years of experience. 
Must have one year of experience in the following skills: (1) building statistical models and machine learning models using large datasets from multiple resources; (2) building complex data analyses by leveraging scripting languages including Python, Java, or related scripting language; and (3) communicating with users, technical teams, and management to collect requirements, evaluate alternatives, and develop processes and tools to support the organization. Amazon.com is an Equal Opportunity-Affirmative Action Employer – Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation. 40 hours / week, 8:00am-5:00pm, Salary Range $161,803/year to $215,300/year. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, visit: https://www.aboutamazon.com/workplace/employee-benefits.#0000
US, CA, Palo Alto
About Sponsored Products and Brands The Sponsored Products and Brands team at Amazon Ads is re-imagining the advertising landscape through generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. Key job responsibilities As a Machine Learning Applied Scientist, you will: * Conduct deep data analysis to derive insights to the business, and identify gaps and new opportunities * Develop scalable and effective machine-learning models and optimization strategies to solve business problems * Run regular A/B experiments, gather data, and perform statistical analysis * Work closely with software engineers to deliver end-to-end solutions into production * Improve the scalability, efficiency and automation of large-scale data analytics, model training, deployment and serving * Conduct research on new machine-learning modeling and Generative AI solutions to optimize all aspects of Sponsored Products and Brands business About the team The Ad Response Prediction team within Sponsored Products and Brands (SPB) drives personalized shopping experiences for SPB Ads across placements, pages, and devices worldwide. 
We achieve this through ML and GenAI solutions that include customized shopper response prediction and session-level understanding to optimize every stage of the ad-serving process, from sourcing and bidding to widget discovery and auctions. Our responsibilities include advancing response prediction through model and feature innovations and extending prediction beyond the auction stage to areas such as targeting, sourcing, and bidding.
US, NY, New York
Innovators wanted! Are you an entrepreneur? A builder? A dreamer? This role is part of an Amazon Special Projects team that takes the company’s Think Big leadership principle to the extreme. We focus on creating entirely new products and services with a goal of positively impacting the lives of our customers. No industries or subject areas are out of bounds. If you’re interested in innovating at scale to address big challenges in the world, this is the team for you. Here at Amazon, we embrace our differences. We are committed to furthering our culture of inclusion. We have thirteen employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We are constantly learning through programs that are local, regional, and global. Amazon’s culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust. Our team highly values work-life balance, mentorship and career growth. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We care about your career growth and strive to assign projects and offer training that will challenge you to become your best. Key job responsibilities - Lead and execute complex, ambiguous research projects from ideation to production deployment - Drive technical strategy and roadmap decisions for ML/AI initiatives - Collaborate cross-functionally with product, engineering, and business teams to translate research into scalable products - Publish research findings at top-tier conferences and contribute to the broader scientific community - Establish best practices for ML experimentation, evaluation, and deployment
US, CA, Santa Clara
We are seeking an Applied Scientist II to join Amazon Customer Service's Science team, where you will build AI-based automated customer service solutions using state-of-the-art techniques in retrieval-augmented generation (RAG), agentic AI, and post-training of large language models. You will work at the intersection of research and production, developing intelligent systems that directly impact millions of customers while collaborating with scientists, engineers, and product managers in a fast-paced, innovative environment.

Key job responsibilities
- Design, develop, and deploy information retrieval systems and RAG pipelines using embedding models, reranking algorithms, and generative models to improve customer service automation
- Conduct post-training of large language models using techniques such as Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Group Relative Policy Optimization (GRPO) to optimize model performance for customer service tasks
- Build and curate high-quality datasets for model training and evaluation, ensuring data quality and relevance for customer service applications
- Design and implement comprehensive evaluation frameworks, including data curation, metrics development, and methods such as LLM-as-a-judge to assess model performance
- Develop AI agents for automated customer service, understanding their advantages and common pitfalls, and implementing solutions that balance automation with customer satisfaction
- Independently perform research and development with minimal guidance, staying current with the latest advances in machine learning and AI
- Collaborate with cross-functional teams including engineering, product management, and operations to translate research into production systems
- Publish findings and contribute to the broader scientific community through papers, patents, and open-source contributions
- Monitor and improve deployed models based on real-world performance metrics and customer feedback

A day in the life
As an Applied Scientist II, you will start your day reviewing metrics from deployed models and identifying opportunities for improvement. You might spend your morning experimenting with new post-training techniques to improve model accuracy, then collaborate with engineers to integrate your latest model into production systems. You will participate in design reviews, share your findings with the team, and mentor junior scientists. You will balance research exploration with practical implementation, always keeping the customer experience at the forefront of your work. You will have the autonomy to drive your own research agenda while contributing to team goals and deliverables.

About the team
The Amazon Customer Service Science team is dedicated to revolutionizing customer support through advanced AI and machine learning. We are a diverse group of scientists and engineers working on some of the most challenging problems in natural language understanding and AI automation. Our team values innovation, collaboration, and a customer-obsessed mindset. We encourage experimentation, celebrate learning from failures, and are committed to maintaining Amazon's high bar for scientific rigor and operational excellence. You will have access to world-class computing resources, massive datasets, and the opportunity to work alongside some of the brightest minds in AI and machine learning.
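To make the core RAG pattern concrete for candidates, here is a minimal sketch in Python. It is purely illustrative and assumes nothing about Amazon's production systems: the bag-of-words `embed` function stands in for a real embedding model, and the string-formatting `answer` function stands in for a generative model grounded on retrieved context.

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding' (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def answer(query, docs):
    """Ground a (stubbed) generator in the retrieved context."""
    context = " ".join(retrieve(query, docs))
    return f"Based on: {context}"

docs = [
    "Refunds are issued within 5 business days.",
    "Prime members get free shipping.",
    "Returns require the original packaging.",
]
print(answer("how long do refunds take", docs))
```

A production pipeline would replace each stub with a learned component (an embedding model, a reranker, an LLM), but the retrieve-then-generate shape is the same.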
US, CA, Sunnyvale
Amazon is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine innovative AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic manipulation, locomotion, and human-robot interaction.

This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. We leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence.

We are pioneering the development of robotics foundation models that:
- Enable unprecedented generalization across diverse tasks
- Integrate multi-modal learning capabilities (visual, tactile, linguistic)
- Accelerate skill acquisition through demonstration learning
- Enhance robotic perception and environmental understanding
- Streamline development processes through reusable capabilities

The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration.
As a Senior Applied Scientist, you will develop and improve machine learning systems that help robots perceive, reason, and act in real-world environments. You will leverage state-of-the-art models (open source and internal research), evaluate them on representative tasks, and adapt/optimize them to meet robustness, safety, and performance needs. You will invent new algorithms where gaps exist. You’ll collaborate closely with research, controls, hardware, and product-facing teams, and your outputs will be used by downstream teams to further customize and deploy on specific robot embodiments.

Key job responsibilities
As a Senior Applied Scientist in the Foundations Model team, you will:
- Leverage state-of-the-art models for targeted tasks, environments, and robot embodiments through fine-tuning and optimization.
- Execute rapid, rigorous experimentation with reproducible results and solid engineering practices, closing the gap between sim and real environments.
- Build and run capability evaluations/benchmarks to clearly profile performance, generalization, and failure modes.
- Contribute to the data and training workflow: collection/curation, dataset quality/provenance, and repeatable training recipes.
- Write clean, maintainable, well-commented, and documented code; contribute to training infrastructure; create tools for model evaluation and testing; and implement necessary APIs.
- Stay current with the latest developments in foundation models and robotics; assist in literature reviews and research documentation; prepare technical reports and presentations; and contribute to research discussions and brainstorming sessions.
- Work closely with senior scientists, engineers, and leaders across multiple teams; participate in knowledge sharing; support integration efforts with robotics hardware teams; and help document best practices and methodologies.
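A capability evaluation of the kind described above reduces to running a model over labelled cases and profiling the failures. The sketch below is a hypothetical illustration (the `model` lambda and the cases are invented stand-ins for a real policy or skill and its benchmark suite):

```python
def evaluate(model, cases):
    """Run a model over labelled cases; report pass rate and failure modes."""
    failures = [(inp, model(inp), want)
                for inp, want in cases if model(inp) != want]
    return {"pass_rate": 1 - len(failures) / len(cases),
            "failures": failures}

# Toy 'model': map a command string to an action label (stand-in for a skill).
model = lambda s: s.split()[0].upper()
cases = [("pick item", "PICK"), ("place item", "PLACE"), ("move base", "STOP")]

report = evaluate(model, cases)
# report["failures"] lists (input, got, wanted) triples for error analysis
```

Keeping the failure triples, rather than just the scalar pass rate, is what makes it possible to characterize generalization gaps and failure modes rather than only track a number.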
US, CA, Sunnyvale
Amazon's AGI Information organization is seeking an exceptional Applied Scientist to drive science advancements in the Amazon Knowledge Graph (AKG) team. AKG is re-inventing knowledge graphs for the LLM era, optimizing for LLM grounding. At the same time, AKG is innovating with LLMs in the knowledge graph construction pipelines to overcome obstacles that traditional technologies could not. As a member of the AKG IR team, you will have the opportunity to work on interesting problems with immediate customer impact. The team is addressing challenges in web-scale knowledge mining, fact verification, multilingual information retrieval, and agent memory operating over graphs. You will also have the opportunity to work with scientists working on the other challenges, and with the engineering teams that deliver the science advancements to our customers. A successful candidate has a strong machine learning and agent background, is a master of state-of-the-art techniques, has a strong publication record, has a desire to push the envelope in one or more of the above areas, and has a track record of delivering to customers. The ideal candidate enjoys operating in dynamic environments, is self-motivated to take on new challenges, and enjoys working with customers, stakeholders, and engineering teams to deliver big customer impact, shipping solutions via rapid experimentation and then iterating on user feedback and interactions.

Key job responsibilities
As an Applied Scientist, you will leverage your technical expertise and experience to demonstrate leadership in tackling large, complex problems. You will collaborate with applied scientists and engineers to develop novel algorithms and modeling techniques to build a knowledge graph that delivers fresh factual knowledge to our customers, and to automate the knowledge graph construction pipelines so they scale to many billions of facts.
Your first responsibility will be to solve entity resolution, enabling facts from multiple sources to be conflated into a single graph entity for each real-world entity. You will develop generic solutions that work for all classes of data in AKG (e.g., people, places, movies, etc.), that cope with sparse, noisy data, that scale to hundreds of millions of entities, and that can handle streaming data. You will define a roadmap to make progress incrementally, and you will insist on scientific rigor, leading by example.
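At its core, entity resolution pairs a cheap blocking step, so that only plausible candidate records are compared, with a similarity match inside each block. The toy Python sketch below illustrates the shape of the problem; the `difflib` string ratio is a stand-in for a learned similarity model, and the records are invented examples:

```python
import re
from difflib import SequenceMatcher

def normalize(name):
    """Lowercase, strip punctuation, collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", name.lower())).strip()

def blocking_key(record):
    """Cheap key so only plausible pairs are ever compared (first token here)."""
    return normalize(record["name"]).split()[0]

def match(a, b, threshold=0.85):
    """Fuzzy string match (stand-in for a learned similarity model)."""
    sim = SequenceMatcher(None, normalize(a["name"]), normalize(b["name"]))
    return sim.ratio() >= threshold

def resolve(records):
    """Group records into entities: block first, then match within each block."""
    blocks = {}
    for r in records:
        blocks.setdefault(blocking_key(r), []).append(r)
    entities = []
    for block in blocks.values():
        clusters = []
        for r in block:
            for c in clusters:
                if match(c[0], r):  # compare against cluster representative
                    c.append(r)
                    break
            else:
                clusters.append([r])
        entities.extend(clusters)
    return entities

records = [
    {"name": "Guido Imbens"},
    {"name": "guido imbens."},
    {"name": "David Card"},
]
clusters = resolve(records)
# → two entities: the two Imbens spellings merge; Card stays separate
```

The hard parts named in the posting, sparse and noisy attributes, hundreds of millions of entities, and streaming updates, are exactly what this sketch omits: they require smarter blocking, learned matchers, and incremental clustering rather than a per-block pairwise loop.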
US, MA, North Reading
Amazon is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine cutting-edge AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at an unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic dexterous manipulation, locomotion, and human-robot interaction.

This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. At Amazon we leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at an unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration.

Key job responsibilities
- Design and implement whole-body control methods for balance, locomotion, and dexterous manipulation
- Utilize state-of-the-art methods in learned and model-based control
- Create robust and safe behaviors for different terrains and tasks
- Implement real-time controllers with stability guarantees
- Collaborate effectively with multi-disciplinary teams to co-design hardware and algorithms for loco-manipulation
- Mentor junior engineers and scientists
CN, 31, Shanghai
As a Sr. Applied Scientist, you will be responsible for bringing new product designs through to manufacturing. You will work closely with multi-disciplinary groups including Product Design, Industrial Design, Hardware Engineering, and Operations to drive key aspects of the engineering of consumer electronics products. In this role, you will use expertise in the physical sciences and theoretical, numerical, or empirical techniques to create scalable models representing the response of physical systems or devices, including:
* Applying domain scientific expertise toward developing innovative analyses and tests to study the viability of new materials, designs, or processes
* Working closely with engineering teams to drive validation, optimization, and implementation of hardware designs or software algorithmic solutions to mitigate product and customer risks
* Establishing scalable, efficient, automated processes to handle large-scale design and data analysis
* Conducting research into use conditions, materials, and analysis techniques
* Tracking general business activity, including device health in the field, and providing clear, compelling reports to management on a regular basis
* Developing and implementing guidelines to continually optimize design processes
* Using simulation tools like LS-DYNA and Abaqus for analysis and optimization of product designs
* Using programming languages like Python and MATLAB for analytical/statistical analyses and automation
* Demonstrating strong understanding across multiple physical science domains, e.g., structural, thermal, fluid dynamics, and materials
* Developing, analyzing, and testing structural solutions from concept design, feature development, and product architecture through system validation
* Supporting product development and optimization through the application of analysis and testing of complex electronic assemblies using advanced simulation and experimentation tools and techniques