Michael I. Jordan, Amazon scholar and professor at the University of California, Berkeley
Credit: Flavia Loreto

Artificial Intelligence—The revolution hasn’t happened yet

Michael I. Jordan, Amazon scholar and professor at the University of California, Berkeley, writes about the classical goals in human-imitative AI, and reflects on how in the current hubbub over the AI revolution it is easy to forget that these goals haven’t yet been achieved. This article is reprinted with permission from the Harvard Data Science Review, where it first appeared.

Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists, and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying use of the phrase. However, this is not the classical case of the public not understanding the scientists—here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us, enthralling us and frightening us in equal measure. And, unfortunately, it distracts us.

There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data, and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to one in 20.” She let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis, but amniocentesis was risky—the chance of killing the fetus during the procedure was roughly one in 300. Being a statistician, I was determined to find out where these numbers were coming from. In my research, I discovered that a statistical analysis had been done a decade previously in the UK in which these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I returned to tell the geneticist that I believed that the white spots were likely false positives, literal white noise.

She said, “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago. That’s when the new machine arrived.”

We didn’t do the amniocentesis, and my wife delivered a healthy girl a few months later, but the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. The problem that this episode revealed wasn’t about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other situations. The problem had to do not just with data analysis per se, but with what database researchers call provenance—broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.
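
To see why higher-resolution imaging can turn a once-useful marker into literal white noise, here is a minimal back-of-the-envelope sketch in Python. Only the one-in-20 and one-in-300 figures come from the story above; the prior, the marker’s sensitivity, and the two false-positive rates are hypothetical placeholders chosen purely for illustration.

```python
# Back-of-the-envelope Bayes calculation for the "white spot" marker.
# Only the 1-in-20 and 1-in-300 figures come from the anecdote above;
# every other number here is a hypothetical placeholder.

def posterior(prior, sensitivity, false_positive_rate):
    """P(Down syndrome | white spots observed), by Bayes' rule."""
    p_spots = sensitivity * prior + false_positive_rate * (1.0 - prior)
    return sensitivity * prior / p_spots

prior = 1.0 / 700        # hypothetical age-based prior probability
sensitivity = 0.30       # hypothetical: fraction of affected fetuses showing spots

# Calibrated to the older machine: a ~0.8% false-positive rate roughly
# reproduces the quoted post-test risk of about one in 20.
print(posterior(prior, sensitivity, 0.008))   # ~0.05, i.e., about 1 in 20

# If the newer, higher-resolution machine also picks up harmless calcium
# specks in, say, 10% of unaffected fetuses, the same finding means far less.
print(posterior(prior, sensitivity, 0.10))    # ~0.004, about 1 in 230

# For comparison, the quoted procedure risk of amniocentesis:
print(1.0 / 300)                              # ~0.0033
```

On hypothetical numbers like these, a positive finding on the newer machine would carry roughly the same risk as the procedure meant to resolve it, which is the intuition behind treating the spots as likely false positives.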

I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics and taking human utilities into account, were nowhere to be found in my education. It also occurred to me that the development of such principles, which will be needed not only in the medical domain but also in domains such as commerce, transportation, and education, was at least as important as the development of AI systems that can dazzle us with their game-playing or sensorimotor skills.

Whether or not we come to understand ‘intelligence’ any time soon, we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life. While some view this challenge as subservient to the creation of artificial intelligence, another more prosaic, but no less reverent, viewpoint is that it is the creation of a new branch of engineering. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and to do so safely. Whereas civil engineering and chemical engineering built upon physics and chemistry, this new engineering discipline will build on ideas that the preceding century gave substance to, such as information, algorithm, data, uncertainty, computing, inference, and optimization. Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities.

While the building blocks are in place, the principles for putting these blocks together are not, and so the blocks are currently being put together in ad-hoc ways. Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans, and the environment. Just as early buildings and bridges sometimes fell to the ground—in unforeseen ways and with tragic consequences—many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.

Unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What we’re missing is an engineering discipline with principles of analysis and design.

The current public dialog about these issues too often uses the term AI as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us consider more carefully what AI has been used to refer to, both recently and historically.

Most of what is labeled AI today, particularly in the public sphere, is actually machine learning (ML), a term in use for the past several decades. ML is an algorithmic field that blends ideas from statistics, computer science, and many other disciplines (see below) to design algorithms that process data, make predictions, and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. Indeed, that ML would grow into massive industrial relevance was already clear in the early 1990s, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical, back-end problems in fraud detection and supply-chain prediction, and building innovative consumer-facing services such as recommendation systems. As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data. New business models would emerge. The phrase ‘data science’ emerged to refer to this phenomenon, reflecting both the need for experts in ML algorithms to partner with database and distributed-systems experts to build scalable, robust ML systems, and the larger social and environmental scope of the resulting systems.

This confluence of ideas and technology trends has been rebranded as ‘AI’ over the past few years. This rebranding deserves some scrutiny.

Historically, the phrase “artificial intelligence” was coined in the mid-1950s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. I will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially intelligent entity should seem to be one of us, if not physically then at least mentally (whatever that might mean). This was largely an academic enterprise. While related academic fields such as operations research, statistics, pattern recognition, information theory, and control theory already existed, and often took inspiration from human or animal behavior, these fields were arguably focused on low-level signals and decisions. The ability of, say, a squirrel to perceive the three-dimensional structure of the forest it lives in, and to leap among its branches, was inspirational to these fields. AI was meant to focus on something different: the high-level or cognitive capability of humans to reason and to think. Sixty years later, however, high-level reasoning and thought remain elusive. The developments now being called AI arose mostly in the engineering fields associated with low-level pattern recognition and movement control, as well as in the field of statistics, the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses, and decisions.

Indeed, the famous backpropagation algorithm that David Rumelhart rediscovered in the early 1980s, and which is now considered at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon.

Since the 1960s, much progress has been made, but it has arguably not come about from the pursuit of human-imitative AI. Rather, as in the case of the Apollo spaceships, these ideas have often hidden behind the scenes, the handiwork of researchers focused on specific engineering challenges. Although not visible to the general public, research and systems-building in areas such as document retrieval, text classification, fraud detection, recommendation systems, personalized search, social network analysis, planning, diagnostics, and A/B testing have been a major success—these advances have powered companies such as Google, Netflix, Facebook, and Amazon.

One could simply refer to all of this as AI, and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization or statistics researchers, who find themselves suddenly called AI researchers, but labels aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play.

The past two decades have seen major progress—in industry and academia—in a complementary aspiration to human-imitative AI that is often referred to as “Intelligence Augmentation” (IA). Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA, as it augments human memory and factual knowledge, as can natural language translation, which augments the ability of a human to communicate. Computer-based generation of sounds and images serves as a palette and creativity enhancer for artists. While services of this kind could conceivably involve high-level reasoning and thought, currently they don’t; they mostly perform various kinds of string-matching and numerical operations that capture patterns that humans can make use of.

Hoping that the reader will tolerate one last acronym, let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data, and physical entities exists that makes human environments more supportive, interesting, and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce, and finance, with implications for individual humans and societies. This emergence sometimes arises in conversations about an Internet of Things, but that effort generally refers to the mere problem of getting ‘things’ onto the Internet, not to the far grander set of challenges associated with building systems that analyze those data streams to discover facts about the world and permit ‘things’ to interact with humans at a far higher level of abstraction than mere bits.

For example, returning to my personal anecdote, we might imagine living our lives in a societal-scale medical system that sets up data flows and data-analysis flows between doctors and devices positioned in and around human bodies, thereby able to aid human intelligence in making diagnoses and providing care. The system would incorporate information from cells in the body, DNA, blood tests, environment, population genetics, and the vast scientific literature on drugs and treatments. It would not just focus on a single patient and a doctor, but on relationships among all humans, just as current medical testing allows experiments done on one set of humans (or animals) to be brought to bear in the care of other humans. It would help maintain notions of relevance, provenance, and reliability, in the way that the current banking system focuses on such challenges in the domain of finance and payment. While one can foresee many problems arising in such a system—privacy issues, liability issues, security issues, etc.—these concerns should be viewed as challenges, not show-stoppers.

We now come to a critical issue: is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI—areas such as computer vision, speech recognition, game-playing, and robotics. Perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited; we are very far from realizing human-imitative AI aspirations. The thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that are not present in other areas of engineering.

Second, and more importantly, success in these domains is neither sufficient nor necessary to solve important IA and II problems. On the sufficiency side, consider self-driving cars. For such technology to be realized, a range of engineering problems will need to be solved that may have little relationship to human competencies (or human lack-of-competencies). The overall transportation system (an II system) will likely more closely resemble the current air-traffic control system than the current collection of loosely coupled, forward-facing, inattentive human drivers. It will be vastly more complex than the current air-traffic control system, specifically in its use of massive amounts of data and adaptive statistical modeling to inform fine-grained decisions. Those challenges need to be at the forefront, and a focus on human-imitative AI risks distracting from them.

As for the necessity argument, some say that the human-imitative AI aspiration subsumes IA and II aspirations, because a human-imitative AI system would not only be able to solve the classical problems of AI (e.g., as embodied in the Turing test), but it would also be our best bet for solving IA and II problems. Such an argument has little historical precedent. Did civil engineering develop by envisaging the creation of an artificial carpenter or bricklayer? Should chemical engineering have been framed in terms of creating an artificial chemist? Even more polemically: if our goal was to build chemical factories, should we have first created an artificial chemist who would have then worked out how to build a chemical factory?

A related argument is that human intelligence is the only kind of intelligence we know, thus we should aim to mimic it as a first step. However, humans are in fact not very good at some kinds of reasoning—we have our lapses, biases, and limitations. Moreover, critically, we did not evolve to perform the kinds of large-scale decision-making that modern II systems must face, nor to cope with the kinds of uncertainty that arise in II contexts. One could argue that an AI system would not only imitate human intelligence, but also correct it, and would also scale to arbitrarily large problems. Of course, we are now in the realm of science fiction—such speculative arguments, while entertaining in the setting of fiction, should not be our principal strategy going forward in the face of the critical IA and II problems that are beginning to emerge. We need to solve IA and II problems on their own merits, not as a mere corollary to a human-imitative AI agenda.

It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions, and they must deal with long-tail phenomena where there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature, and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical, and legal norms.
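
As one concrete illustration of the long-tail point (a standard statistical response, not something the article prescribes), consider partial pooling: shrink each individual’s noisy estimate toward a population-level estimate, trusting the individual’s own data more as it accumulates. A minimal sketch in Python, with entirely made-up numbers:

```python
import random

random.seed(0)

# Hypothetical long-tail data: two "head" individuals with many observations,
# twenty "tail" individuals with only two observations each.
counts = [500, 300] + [2] * 20
true_rates = [0.10, 0.20] + [0.15] * 20

def simulate(n, p):
    """Number of successes in n Bernoulli(p) trials."""
    return sum(random.random() < p for _ in range(n))

successes = [simulate(n, p) for n, p in zip(counts, true_rates)]

# Raw per-individual rates are wildly noisy in the tail
# (with n = 2, the only possible values are 0.0, 0.5, and 1.0).
raw = [s / n for s, n in zip(successes, counts)]

# Partial pooling: blend each raw rate with the global rate, trusting the
# raw rate more when the individual has more data.
global_rate = sum(successes) / sum(counts)
prior_strength = 20  # hypothetical pseudo-count controlling the blend
pooled = [(s + prior_strength * global_rate) / (n + prior_strength)
          for s, n in zip(successes, counts)]

for n, r, p in zip(counts, raw, pooled):
    print(f"n={n:4d}  raw={r:.2f}  pooled={p:.2f}")
```

This is only one narrow slice of the list above; the broader point is that such statistical machinery must operate continuously, at scale, and within the distributed, economic, and normative constraints the paragraph describes.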

Of course, classical human-imitative AI problems remain of great interest as well. However, the current focus on doing AI research via the gathering of data, the deployment of deep learning infrastructure, and the demonstration of systems that mimic certain narrowly defined human skills—with little in the way of emerging explanatory principles—tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally tractable representations of uncertainty, and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the AI revolution it is easy to forget that they are not yet solved.

IA will also remain quite essential, because for the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations. We will need well-thought-out interactions of humans and computers to solve our most pressing problems. And we will want computers to trigger new levels of human creativity, not replace human creativity (whatever that might mean).

It was John McCarthy (while a professor at Dartmouth, and soon to take a position at MIT) who coined the term AI, apparently to distinguish his budding research agenda from that of Norbert Wiener (then an older professor at MIT). Wiener had coined “cybernetics” to refer to his own vision of intelligent systems—a vision that was closely tied to operations research, statistics, pattern recognition, information theory, and control theory. McCarthy, on the other hand, emphasized the ties to logic. In an interesting reversal, it is Wiener’s intellectual agenda that has come to dominate in the current era, under the banner of McCarthy’s terminology. (This state of affairs is surely, however, only temporary; the pendulum swings more in AI than in most fields.)

Beyond the historical perspectives of McCarthy and Wiener, we need to realize that the current public dialog on AI—which focuses on narrow subsets of both industry and of academia—risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA, and II.

This scope is less about the realization of science-fiction dreams or superhuman nightmares, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives. Moreover, in this understanding and shaping, there is a need for a diverse set of voices from all walks of life, not merely a dialog among the technologically attuned. Focusing narrowly on human-imitative AI prevents an appropriately wide range of voices from being heard.

While industry will drive many developments, academia will also play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed—notably the social sciences, the cognitive sciences, and the humanities.

On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope; society is aiming to build new kinds of artifacts. These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options, and commercial opportunities only to find out after the fact that these systems don’t really work, that they make errors that take their toll in terms of human lives and happiness. In this regard, as I have emphasized, there is an engineering discipline yet to emerge for the data- and learning-focused fields. As exciting as these latter fields appear to be, they cannot yet be viewed as constituting an engineering discipline.

We should embrace the fact that we are witnessing the creation of a new branch of engineering. The term engineering has connotations—in academia and beyond—of cold, affectless machinery, and of loss of control for humans, but an engineering discipline can be what we want it to be. In the current era, we have a real opportunity to conceive of something historically new: a human-centric engineering discipline. I will resist giving this emerging discipline a name, but if the acronym AI continues to serve as placeholder nomenclature going forward, let’s be aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype, and recognize the serious challenges ahead.
