
U.S. National Science Foundation, in collaboration with Amazon, announces latest Fairness in AI grant projects

Thirteen new projects focus on ensuring fairness in AI algorithms and the systems that incorporate them.

  1. In 2019, the U.S. National Science Foundation (NSF) and Amazon announced a collaboration — the Fairness in AI program — to strengthen and support fairness in artificial intelligence and machine learning.

    To date, in two rounds of proposal submissions, NSF has awarded 21 research grants in areas such as ensuring fairness in AI algorithms and the systems that incorporate them, using AI to promote equity in society, and developing principles for human interaction with AI-based systems.

    In June of 2021, Amazon and the NSF opened the third round of submissions with a focus on theoretical and algorithmic foundations; principles for human interaction with AI systems; technologies such as natural language understanding and computer vision; and applications including hiring decisions, education, criminal justice, and human services.

    Now Amazon and NSF are announcing the recipients of 13 selected projects from that latest call for submissions.

    The awardees, who collectively will receive up to $9.5 million in financial support, have proposed projects that address unfairness and bias in artificial intelligence and machine learning technologies, develop principles for human interaction with artificial intelligence systems, build theoretical frameworks for algorithms, and improve the accessibility of speech recognition technology.

    “We are thrilled to share NSF’s selection of thirteen Fairness in AI proposals from talented researchers across the United States,” said Prem Natarajan, Alexa AI vice president of Natural Understanding. “The increasing prevalence of AI in our everyday lives calls for continued multi-sector investments into advancing their trustworthiness and robustness against bias. Amazon is proud to have partnered with the NSF for the past three years to support this critically important research area.”

    Amazon, which provides partial funding for the program, does not participate in the grant-selection process.

    “These awards are part of NSF's commitment to pursue scientific discoveries that enable us to achieve the full spectrum of artificial intelligence potential at the same time we address critical questions about their uses and impacts," said Wendy Nilsen, deputy division director for NSF's Information and Intelligent Systems Division.

    More information about the Fairness in AI program is available on the NSF website and via the program update. Below is the list of the 2022 awardees, along with overviews of their projects.

  2. An interpretable AI framework for care of critically ill patients involving matching and decision trees

    “This project introduces a framework for interpretable, patient-centered causal inference and policy design for in-hospital patient care. This framework arose from a challenging problem, which is how to treat critically ill patients who are at risk for seizures (subclinical seizures) that can severely damage a patient's brain. In this high-stakes application of artificial intelligence, the data are complex, including noisy time-series, medical history, and demographic information. The goal is to produce interpretable causal estimates and policy decisions, allowing doctors to understand exactly how data were combined, permitting better troubleshooting, uncertainty quantification, and ultimately, trust. The core of the project's framework consists of novel and sophisticated matching techniques, which match each treated patient in the dataset with other (similar) patients who were not treated. Matching emulates a randomized controlled trial, allowing the effect of the treatment to be estimated for each patient, based on the outcomes from their matched group. A second important element of the framework involves interpretable policy design, where sparse decision trees will be used to identify interpretable subgroups of individuals who should receive similar treatments.”
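As a rough illustration of the matching idea described above (not the project's actual techniques), the sketch below matches each treated patient to its k nearest untreated patients in covariate space and estimates a per-patient treatment effect from the matched outcomes. The data, distance metric, and choice of k are all invented for the example:

```python
import numpy as np

def matched_effects(X, treated, y, k=3):
    """For each treated unit, estimate an individual treatment effect by
    comparing its outcome to the mean outcome of its k nearest untreated
    neighbours in covariate space (a crude stand-in for the project's
    matching techniques)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    t_idx = np.where(np.asarray(treated, bool))[0]
    c_idx = np.where(~np.asarray(treated, bool))[0]
    effects = {}
    for i in t_idx:
        d = np.linalg.norm(X[c_idx] - X[i], axis=1)   # distance to each control
        nearest = c_idx[np.argsort(d)[:k]]            # k closest controls
        effects[i] = y[i] - y[nearest].mean()         # outcome difference
    return effects

# Toy data in which treatment adds roughly 2.0 to the outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
treated = np.arange(40) < 20
y = X.sum(axis=1) + 2.0 * treated + rng.normal(scale=0.1, size=40)
est = matched_effects(X, treated, y)
```

Averaging the matched estimates recovers an effect close to the true 2.0, which is the sense in which matching "emulates a randomized controlled trial."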

    • Principal investigator: Cynthia Rudin
    • Co-principal investigators: Alexander Volfovsky, Sudeepa Roy
    • Organization: Duke University
    • Award amount: $625,000

    Project description

  3. Fair representation learning: fundamental trade-offs and algorithms

    “Artificial intelligence-based computer systems are increasingly reliant on effective information representation in order to support decision making in domains ranging from image recognition systems to identity control through face recognition. However, systems that rely on traditional statistics and prediction from historical or human-curated data also naturally inherit any past biased or discriminative tendencies. The overarching goal of the award is to mitigate this problem by using information representations that maintain their utility while eliminating information that could lead to discrimination against subgroups in a population. Specifically, this project will study the trade-offs between utility and fairness of different data representations, and then identify solutions to reduce the gap to the best trade-off. Then, new representations and corresponding algorithms will be developed guided by such trade-off analysis. The investigators will provide performance limits based on the developed theory, and also evidence of efficacy in order to obtain fair machine learning systems and to gain societal trust. The application domain used in this research is face recognition systems. The undergraduate and graduate students who participate in the project will be trained to conduct cutting-edge research to integrate fairness into artificial intelligence-based systems.”
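One crude point on the utility-fairness trade-off described above can be sketched by linearly removing from the features the single direction most correlated with the group attribute. This is an illustrative assumption, not the investigators' method; it exactly zeroes the linear correlation between the new representation and the group, at some cost in utility:

```python
import numpy as np

def decorrelate(X, group):
    """Remove the one linear direction in X most predictive of the group
    attribute: a single, simple sample of the utility/fairness trade-off,
    not the project's algorithm."""
    X = np.asarray(X, float)
    g = np.asarray(group, float)
    g = g - g.mean()
    w = X.T @ g                       # covariance of each feature with group
    w /= np.linalg.norm(w)
    return X - np.outer(X @ w, w)     # project out that direction

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=500)
X = rng.normal(size=(500, 3))
X[:, 0] += 2.0 * group                # feature 0 leaks group membership
Z = decorrelate(X, group)
# Linear correlation between every feature of Z and the group attribute
corr = np.abs(np.corrcoef(Z.T, group)[-1, :-1]).max()
```

After the projection, no linear classifier on Z can exploit the group signal that feature 0 carried, while the remaining directions keep their predictive utility.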

    • Principal investigator: Vishnu Boddeti
    • Organization: Michigan State University
    • Award amount: $331,698

    Project description

  4. A new paradigm for the evaluation and training of inclusive automatic speech recognition

    “Automatic speech recognition can improve your productivity in small ways: rather than searching for a song, a product, or an address using a graphical user interface, it is often faster to accomplish these tasks using automatic speech recognition. For many groups of people, however, speech recognition works less well, possibly because of regional accents, or because of second-language accent, or because of a disability. This Fairness in AI project defines a new way of thinking about speech technology. In this new way of thinking, an automatic speech recognizer is not considered to work well unless it works well for all users, including users with regional accents, second-language accents, and severe disabilities. There are three sub-projects. The first sub-project will create black-box testing standards that speech technology researchers can use to test their speech recognizers, in order to test how useful their speech recognizer will be for different groups of people. For example, if a researcher discovers that their product works well for some people, but not others, then the researcher will have the opportunity to gather more training data, and to perform more development, in order to make sure that the under-served community is better-served. The second sub-project will create glass-box testing standards that researchers can use to debug inclusivity problems. For example, if a speech recognizer has trouble with a particular dialect, then glass-box methods will identify particular speech sounds in that dialect that are confusing the recognizer, so that researchers can more effectively solve the problem. The third sub-project will create new methods for training a speech recognizer in order to guarantee that it works equally well for all of the different groups represented in available data. Data will come from podcasts and the Internet. 
Speakers will be identified as members of a particular group if and only if they declare themselves to be members of that group. All of the developed software will be distributed open-source.”
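The black-box testing idea in the first sub-project can be illustrated by computing word error rate separately for each self-declared group; the group labels and transcripts below are invented for the example:

```python
def edit_distance(ref, hyp):
    """Word-level Levenshtein distance via dynamic programming."""
    r, h = ref.split(), hyp.split()
    prev = list(range(len(h) + 1))
    for i, rw in enumerate(r, 1):
        cur = [i]
        for j, hw in enumerate(h, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (rw != hw)))   # substitution
        prev = cur
    return prev[-1]

def wer_by_group(samples):
    """Black-box style check: word error rate per self-declared group.
    `samples` is a list of (group, reference, hypothesis) tuples."""
    errs, words = {}, {}
    for g, ref, hyp in samples:
        errs[g] = errs.get(g, 0) + edit_distance(ref, hyp)
        words[g] = words.get(g, 0) + len(ref.split())
    return {g: errs[g] / words[g] for g in errs}

samples = [
    ("group_a", "turn the lights on", "turn the lights on"),
    ("group_a", "play some jazz", "play some jazz"),
    ("group_b", "turn the lights on", "turn the light on"),
    ("group_b", "play some jazz", "play sam jab"),
]
rates = wer_by_group(samples)
```

A recognizer that scores well on aggregate WER can still show a large per-group gap, which is exactly the failure mode the proposed testing standards are meant to surface.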

    • Principal investigator: Mark Hasegawa-Johnson
    • Co-principal investigators: Zsuzsanna Fagyal, Najim Dehak, Piotr Zelasko, Laureano Moro-Velazquez
    • Organization: University of Illinois at Urbana-Champaign
    • Award amount: $500,000

    Project description

  5. A normative economic approach to fairness in AI

    “A vast body of work in algorithmic fairness is devoted to preventing artificial intelligence (AI) from exacerbating societal biases. The predominant viewpoint in this literature equates fairness with lack of bias or seeks to achieve some form of statistical parity between demographic groups. By contrast, this project pursues alternative approaches rooted in normative economics, the field that evaluates policies and programs by asking "what should be". The work is driven by two observations. First, fairness to individuals and groups can be realized according to people’s preferences represented in the form of utility functions. Second, traditional notions of algorithmic fairness may be at odds with welfare (the overall utility of groups), including the welfare of those groups the fairness criteria intend to protect. The goal of this project is to establish normative economic approaches as a central tool in the study of fairness in AI. Towards this end, the team pursues two research questions. First, can the perspective of normative economics be reconciled with existing approaches to fairness in AI? Second, how can normative economics be drawn upon to rethink what fairness in AI should be? The project will integrate theoretical and algorithmic advances into real systems used to inform refugee resettlement decisions. The system will be examined from a fairness viewpoint, with the goal of ultimately ensuring fairness guarantees and welfare.”
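The tension between different welfare notions can be made concrete with two standard welfare functions from normative economics: utilitarian (sum of utilities) and egalitarian (minimum utility, favoring the worst-off group). The per-group utilities below are hypothetical:

```python
def utilitarian_welfare(utils):
    """Sum of utilities: favours total surplus."""
    return sum(utils)

def egalitarian_welfare(utils):
    """Minimum utility: favours the worst-off group (Rawlsian)."""
    return min(utils)

# Hypothetical per-group utilities under two candidate policies.
policy_a = [10, 2]   # larger total, but one group fares poorly
policy_b = [6, 5]    # smaller total, more even

ranking_by_total = utilitarian_welfare(policy_a) > utilitarian_welfare(policy_b)
ranking_by_worst = egalitarian_welfare(policy_a) > egalitarian_welfare(policy_b)
```

The two welfare functions rank the same pair of policies in opposite orders, which is the kind of conflict between fairness criteria and welfare the project sets out to study.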

    • Principal investigator: Yiling Chen
    • Co-principal investigator: Ariel Procaccia
    • Organization: Harvard University
    • Award amount: $560,345

    Project description

  6. Advancing optimization for threshold-agnostic fair AI systems

    “Artificial intelligence (AI) and machine learning technologies are being used in high-stakes decision-making systems like lending decisions, employment screening, and criminal justice sentencing. A new challenge arising with these AI systems is avoiding the unfairness they might introduce and that can lead to discriminatory decisions for protected classes. Most AI systems use some kind of threshold to make decisions. This project aims to improve fairness-aware AI technologies by formulating threshold-agnostic metrics for decision making. In particular, the research team will improve the training procedures of fairness-constrained AI models to make the model adaptive to different contexts, applicable to different applications, and subject to emerging fairness constraints. The success of this project will yield a transferable approach to improve fairness in various aspects of society by eliminating the disparate impacts and enhancing the fairness of AI systems in the hands of the decision makers. Together with AI practitioners, the researchers will integrate the techniques in this project into real-world systems such as education analytics. This project will also contribute to training future professionals in AI and machine learning and broaden this activity by training high school students and under-represented undergraduates.”
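A minimal example of a threshold-agnostic fairness metric, in the spirit of (though not necessarily identical to) the project's formulation, is the gap between per-group AUCs: it depends only on how the model ranks examples, not on whatever decision threshold is chosen later. The scores below are toy data:

```python
import numpy as np

def auc(scores, labels):
    """Probability that a random positive outranks a random negative
    (threshold-agnostic ranking quality)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

def auc_gap(scores, labels, group):
    """Threshold-agnostic fairness gap: absolute difference between the
    per-group AUCs."""
    group = np.asarray(group, bool)
    return abs(auc(scores[group], labels[group])
               - auc(scores[~group], labels[~group]))

scores = np.array([0.9, 0.8, 0.3, 0.2, 0.6, 0.4, 0.7, 0.1])
labels = np.array([1, 1, 0, 0, 1, 1, 0, 0], bool)
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0], bool)
gap = auc_gap(scores, labels, group)
```

Here the model ranks one group's cases perfectly (AUC 1.0) and the other's at chance (AUC 0.5), a disparity that threshold-based rates could mask or exaggerate depending on where the cutoff lands.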

    • Principal investigator: Tianbao Yang
    • Co-principal investigators: Qihang Lin, Mingxuan Sun
    • Organization: University of Iowa
    • Award amount: $500,000

    Project description

  7. Toward fair decision making and resource allocation with application to AI-assisted graduate admission and degree completion

    “Machine learning systems have become prominent in many applications in everyday life, such as healthcare, finance, hiring, and education. These systems are intended to improve upon human decision-making by finding patterns in massive amounts of data, beyond what can be intuited by humans. However, it has been demonstrated that these systems learn and propagate similar biases present in human decision-making. This project aims to develop general theory and techniques on fairness in AI, with applications to improving retention and graduation rates of under-represented groups in STEM graduate programs. Recent research has shown that simply focusing on admission rates is not sufficient to improve graduation rates. This project is envisioned to go beyond designing "fair classifiers" such as fair graduate admission that satisfy a static fairness notion in a single moment in time, and to design AI systems that make decisions over a period of time with the goal of ensuring overall long-term fair outcomes at the completion of a process. The use of data-driven AI solutions can allow the detection of patterns missed by humans, to empower targeted intervention and fair resource allocation over the course of an extended period of time. The research from this project will contribute to reducing bias in the admissions process and improving completion rates in graduate programs as well as fair decision-making in general applications of machine learning.”

    • Principal investigator: Furong Huang
    • Co-principal investigators: Min Wu, Dana Dachman-Soled
    • Organization: University of Maryland, College Park
    • Award amount: $625,000

    Project description

  8. BRIMI — bias reduction in medical information

    “This award, Bias Reduction In Medical Information (BRIMI), focuses on using artificial intelligence (AI) to detect and mitigate biased, harmful, and/or false health information that disproportionately hurts minority groups in society. BRIMI offers outsized promise for increased equity in health information, improving fairness in AI, medicine, and in the information ecosystem online (e.g., health websites and social media content). BRIMI's novel study of biases stands to greatly advance the understanding of the challenges that minority groups and individuals face when seeking health information. By including specific interventions for both patients and doctors and advancing the state-of-the-art in public health and fact checking organizations, BRIMI aims to inform public policy, increase the public's critical literacy, and improve the well-being of historically under-served patients. The award includes significant outreach efforts, which will engage minority communities directly in our scientific process; broad stakeholder engagement will ensure that the research approach to the groups studied is respectful, ethical, and patient-centered. The BRIMI team is composed of academics, non-profits, and industry partners, thus improving collaboration and partnerships across different sectors and multiple disciplines. The BRIMI project will lead to fundamental research advances in computer science, while integrating deep expertise in medical training, public health interventions, and fact checking. BRIMI is the first large scale computational study of biased health information of any kind. This award specifically focuses on bias reduction in the health domain; its foundational computer science advances and contributions may generalize to other domains, and it will likely pave the way for studying bias in other areas such as politics and finances.”

    • Principal investigator: Shiri Dori-Hacohen
    • Co-principal investigators: Sherry Pagoto, Scott Hale
    • Organization: University of Connecticut
    • Award amount: $392,994

    Project description

  9. A novel paradigm for fairness-aware deep learning models on data streams

    “Massive amounts of information are transferred constantly between different domains in the form of data streams. Social networks, blogs, online businesses, and sensors all generate immense data streams. Such data streams are received in patterns that change over time. While this data can be assigned to specific categories, objects and events, their distribution is not constant. These categories are subject to distribution shifts. These distribution shifts are often due to the changes in the underlying environmental, geographical, economic, and cultural contexts. For example, the risk levels in loan applications have been subject to distribution shifts during the COVID-19 pandemic. This is because loan risks are based on factors associated with the applicants, such as employment status and income. Such factors are usually relatively stable, but have changed rapidly due to the economic impact of the pandemic. As a result, existing loan recommendation systems need to be adapted to limited examples. This project will develop open software to help users evaluate fairness in online algorithms, mitigate potential biases, and examine utility-fairness trade-offs. It will implement two real-world applications: online crime event recognition from video data and online purchase behavior prediction from click-stream data. To amplify the impact of this project in research and education, this project will leverage STEM programs for students with diverse backgrounds, gender and race/ethnicity. This project includes activities such as seminars, workshops, short courses, and research projects for students.”

    • Principal investigator: Feng Chen
    • Co-principal investigators: Latifur Khan, Xintao Wu, Christan Grant
    • Organization: University of Texas at Dallas
    • Award amount: $392,993

    Project description

  10. A human-centered approach to developing accessible and reliable machine translation

    “This Fairness in AI project aims to develop technology to reliably enhance cross-lingual communication in high-stakes contexts, such as when a person needs to communicate with someone who does not speak their language to get health care advice or apply for a job. While machine translation technology is frequently used in these conditions, existing systems often make errors that can have severe consequences for a patient or a job applicant. Further, it is challenging for people to know when automatic translations might be wrong when they do not understand the source or target language for translation. This project addresses this issue by developing accessible and reliable machine translation for lay users. It will provide mechanisms to guide users to recognize and recover from translation errors, and help them make better decisions given imperfect translations. As a result, more people will be able to use machine translation reliably to communicate across language barriers, which can have far-reaching positive consequences on their lives."

    • Principal investigator: Marine Carpuat
    • Co-principal investigators: Niloufar Salehi, Ge Gao
    • Organization: University of Maryland, College Park
    • Award amount: $392,993

    Project description

  11. AI algorithms for fair auctions, pricing, and marketing

    “This project develops algorithms for making fair decisions in AI-mediated auctions, pricing, and marketing, thus advancing national prosperity and economic welfare. The deployment of AI systems in business settings has thrived due to direct access to consumer data, the capability to implement personalization, and the ability to run algorithms in real-time. For example, advertisements users see are personalized since advertisers are willing to bid more in ad display auctions to reach users with particular demographic features. Pricing decisions on ride-sharing platforms or interest rates on loans are customized to the consumer's characteristics in order to maximize profit. Marketing campaigns on social media platforms target users based on the ability to predict who they will be able to influence in their social network. Unfortunately, these applications exhibit discrimination. Discriminatory targeting in housing and job ad auctions, discriminatory pricing for loans and ride-hailing services, and disparate treatment of social network users by marketing campaigns to exclude certain protected groups have been exposed. This project will develop theoretical frameworks and AI algorithms that ensure consumers from protected groups are not harmfully discriminated against in these settings. The new algorithms will facilitate fair conduct of business in these applications. The project also supports conferences that bring together practitioners, policymakers, and academics to discuss the integration of fair AI algorithms into law and practice.”

    • Principal investigator: Adam Elmachtoub
    • Co-principal investigators: Shipra Agrawal, Rachel Cummings, Christian Kroer, Eric Balkanski
    • Organization: Columbia University
    • Award amount: $392,993

    Project description

  12. Using explainable AI to increase equity and transparency in the juvenile justice system’s use of risk scores

    “Throughout the United States, juvenile justice systems use juvenile risk and need-assessment (JRNA) scores to identify the likelihood a youth will commit another offense in the future. This risk assessment score is then used by juvenile justice practitioners to inform how to intervene with a youth to prevent reoffending (e.g., referring youth to a community-based program vs. placing a youth in a juvenile correctional center). Unfortunately, most risk assessment systems lack transparency and often the reasons why a youth received a particular score are unclear. Moreover, how these scores are used in the decision making process is sometimes not well understood by families and youth affected by such decisions. This possibility is problematic because it can hinder individuals’ buy-in to the intervention recommended by the risk assessment as well as mask potential bias in those scores (e.g., if youth of a particular race or gender have risk scores driven by a particular item on the assessment). To address this issue, project researchers will develop automated, computer-generated explanations for these risk scores aimed at explaining how these scores were produced. Investigators will then test whether these better-explained risk scores help youth and juvenile justice decision makers understand the risk score a youth is given. In addition, the team of researchers will investigate whether these risk scores are working equally well for different groups of youth (for example, equally well for boys and for girls) and identify any potential biases in how they are being used in an effort to understand how equitable the decision making process is for demographic groups based on race and gender. The project is embedded within the juvenile justice system and aims to evaluate how real stakeholders understand how the risk scores are generated and used within that system based on actual juvenile justice system data.”
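The kind of automated explanation described above can be sketched for a purely hypothetical linear scorecard; the item names and weights below are invented for illustration and are not the actual JRNA instrument. Each item's contribution to the total score is reported, largest first, so a practitioner can see what drove a particular youth's score:

```python
def explain_score(weights, responses):
    """Additive scorecard explanation (hypothetical items and weights):
    return the total risk score and each item's contribution, ranked by
    magnitude, so the score's drivers are visible."""
    contrib = {item: weights[item] * responses.get(item, 0)
               for item in weights}
    total = sum(contrib.values())
    ranked = sorted(contrib.items(), key=lambda kv: -abs(kv[1]))
    return total, ranked

# Hypothetical assessment items and a youth's responses.
weights = {"prior_offenses": 3, "school_attendance": -2, "peer_risk": 2}
responses = {"prior_offenses": 1, "school_attendance": 1, "peer_risk": 2}
total, ranked = explain_score(weights, responses)
```

Reporting per-item contributions rather than a bare number also makes audit questions tractable, such as whether one item dominates the scores of a particular demographic group.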

    • Principal investigator: Trent Buskirk
    • Co-principal investigator: Kelly Murphy
    • Organization: Bowling Green State University
    • Award amount: $392,993

    Project description

  13. Breaking the tradeoff barrier in algorithmic fairness

    “In order to be robust and trustworthy, algorithmic systems need to usefully serve diverse populations of users. Standard machine learning methods can easily fail in this regard, e.g. by optimizing for majority populations represented within their training data at the expense of worse performance on minority populations. A large literature on "algorithmic fairness" has arisen to address this widespread problem. However, at a technical level, this literature has viewed various technical notions of "fairness" as constraints, and has therefore viewed "fair learning" through the lens of constrained optimization. Although this has been a productive viewpoint from the perspective of algorithm design, it has led to tradeoffs becoming the central object of study in "fair machine learning". In the standard framing, adding new protected populations, or quantitatively strengthening fairness constraints, necessarily leads to decreased accuracy overall and within each group. This has the effect of pitting the interests of different stakeholders against one another, and making it difficult to build consensus around "fair machine learning" techniques. The over-arching goal of this project is to break through this "fairness/accuracy tradeoff" paradigm.”
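The constrained-optimization framing the project aims to move beyond can be seen in a tiny brute-force example (toy data, illustrative only): maximizing accuracy over per-group decision thresholds subject to a demographic-parity constraint, where tightening the constraint lowers the best attainable accuracy.

```python
import itertools
import numpy as np

def best_accuracy(scores, labels, group, eps):
    """Brute-force the constrained-optimization view of fair learning:
    maximize accuracy over per-group thresholds subject to a demographic
    parity constraint |positive-rate gap| <= eps."""
    thresholds = np.append(np.unique(scores), scores.max() + 1)
    best = 0.0
    for ta, tb in itertools.product(thresholds, repeat=2):
        pred = np.where(group, scores >= ta, scores >= tb)
        gap = abs(pred[group].mean() - pred[~group].mean())
        if gap <= eps + 1e-12:
            best = max(best, float((pred == labels).mean()))
    return best

# Toy scores in which the two groups have different ideal thresholds.
scores = np.array([0.9, 0.8, 0.7, 0.2, 0.8, 0.3, 0.2, 0.1])
labels = np.array([1, 1, 1, 0, 1, 0, 0, 0], bool)
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0], bool)

unconstrained = best_accuracy(scores, labels, group, eps=1.0)  # no constraint binds
exact_parity  = best_accuracy(scores, labels, group, eps=0.0)  # rates must match
```

With the constraint slack, perfect accuracy is attainable; forcing exact parity drops the best attainable accuracy, which is the tradeoff the standard framing centers and this project seeks to break.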

    • Principal investigator: Aaron Roth
    • Co-principal investigator: Michael Kearns
    • Organization: University of Pennsylvania
    • Award amount: $392,992

    Project description

  14. Advancing deep learning towards spatial fairness

    “The goal of spatial fairness is to reduce biases that have significant linkage to the locations or geographical areas of data samples. Such biases, if left unattended, can cause or exacerbate unfair distribution of resources, social division, spatial disparity, and weaknesses in resilience or sustainability. Spatial fairness is urgently needed for the use of artificial intelligence in a large variety of real-world problems such as agricultural monitoring and disaster management. Agricultural products, including crop maps and acreage estimates, are used to inform important decisions such as the distribution of subsidies and providing farm insurance. Inaccuracies and inequities produced by spatial biases adversely affect these decisions. Similarly, effective and fair mapping of natural disasters such as floods or fires is critical to inform life-saving actions and quantify damages and risks to public infrastructures, which is related to insurance estimation. Machine learning, in particular deep learning, has been widely adopted for spatial datasets with promising results. However, straightforward applications of machine learning have found limited success in preserving spatial fairness due to the variation of data distribution, data quantity, and data quality. The goal of this project is to develop a new generation of learning frameworks to explicitly preserve spatial fairness. The results and code will be made freely available and integrated into existing geospatial software. The methods will also be tested for incorporation in existing real systems (crop and water monitoring).”

    • Principal investigator: Xiaowei Jia
    • Co-principal investigators: Sergii Skakun, Yiqun Xie
    • Organization: University of Pittsburgh
    • Award amount: $755,098

    Project description

Research areas

Related content

US, WA, Seattle
At Amazon Selection and Catalog Systems (ASCS), our mission is to power the online buying experience for customers worldwide so they can find, discover, and buy any product they want. We innovate on behalf of our customers to ensure uniqueness and consistency of product identity and to infer relationships between products in Amazon Catalog to drive the selection gateway for the search and browse experiences on the website. We're solving a fundamental AI challenge: establishing product identity and relationships at unprecedented scale. Using Generative AI, Visual Language Models (VLMs), and multimodal reasoning, we determine what makes each product unique and how products relate to one another across Amazon's catalog. The scale is staggering: billions of products, petabytes of multimodal data, millions of sellers, dozens of languages, and infinite product diversity—from electronics to groceries to digital content. The research challenges are immense. GenAI and VLMs hold transformative promise for catalog understanding, but we operate where traditional methods fail: ambiguous problem spaces, incomplete and noisy data, inherent uncertainty, reasoning across both images and textual data, and explaining decisions at scale. Establishing product identities and groupings requires sophisticated models that reason across text, images, and structured data—while maintaining accuracy and trust for high-stakes business decisions affecting millions of customers daily. Amazon's Item and Relationship Platform group is looking for an innovative and customer-focused applied scientist to help us make the world's best product catalog even better. In this role, you will partner with technology and business leaders to build new state-of-the-art algorithms, models, and services to infer product-to-product relationships that matter to our customers. 
You will pioneer advanced GenAI solutions that power next-generation agentic shopping experiences, working in a collaborative environment where you can experiment with massive data from the world's largest product catalog, tackle problems at the frontier of AI research, rapidly implement and deploy your algorithmic ideas at scale, across millions of customers. Key job responsibilities Key job responsibilities include: * Formulate novel research problems at the intersection of GenAI, multimodal learning, and large-scale information retrieval—translating ambiguous business challenges into tractable scientific frameworks * Design and implement leading models leveraging VLMs, foundation models, and agentic architectures to solve product identity, relationship inference, and catalog understanding at billion-product scale * Pioneer explainable AI methodologies that balance model performance with scalability requirements for production systems impacting millions of daily customer decisions * Own end-to-end ML pipelines from research ideation to production deployment—processing petabytes of multimodal data with rigorous evaluation frameworks * Define research roadmaps aligned with business priorities, balancing foundational research with incremental product improvements * Mentor peer scientists and engineers on advanced ML techniques, experimental design, and scientific rigor—building organizational capability in GenAI and multimodal AI * Represent the team in the broader science community—publishing findings, delivering tech talks, and staying at the forefront of GenAI, VLM, and agentic system research
US, WA, Seattle
Amazon is seeking exceptional science talent to develop AI and machine learning systems that will enable the next generation of advanced manufacturing capabilities at unprecedented scale. We're building revolutionary software infrastructure that combines cutting-edge AI, large-scale optimization, and advanced manufacturing processes to create adaptive production control systems. As a Senior Research Scientist, you will develop and improve machine learning systems that enable real-time manufacturing flow decisions. You will leverage state-of-the-art optimization and ML techniques, evaluate them against representative manufacturing scenarios, and adapt them to meet the robustness, reliability, and performance needs of production environments. You will invent new algorithms where gaps exist. You'll collaborate closely with software engineering, manufacturing engineering, robotics simulation, and operations teams, and your outputs will directly power the systems that determine what to build next, where to allocate resources, and how to maximize throughput. The ideal candidate brings deep expertise in optimization and machine learning, with a proven track record of delivering scientifically complex solutions into production. You are hands-on, writing significant portions of critical-path scientific code while driving your team's scientific agenda. If you're passionate about inventing the intelligent manufacturing systems of tomorrow rather than optimizing those of today, this role offers the chance to make a lasting impact on the future of automation. 
Key job responsibilities
- Identify and devise new scientific approaches for constraint identification, dispatch optimization, WIP release control, and predictive flow intelligence when the problem is ill-defined and new methodologies need to be invented
- Lead the design, implementation, and successful delivery of scientifically complex solutions for real-time manufacturing flow optimization in production
- Design and build ML models and optimization algorithms including constraint prediction, starvation risk forecasting, and dispatch optimization
- Write a significant portion of critical-path scientific code with solutions that are inventive, maintainable, scalable, and extensible
- Execute rapid, rigorous experimentation with reproducible results, closing the gap between simulation and real manufacturing environments
- Build evaluation benchmarks that measure model performance against manufacturing outcomes including constraint utilization and throughput rather than traditional ML metrics alone
- Influence your team's science and business strategy through insightful contributions to roadmaps, goals, and priorities
- Partner with manufacturing engineering, robotics simulation, and applied intelligence teams to ensure scientific approaches are grounded in operational reality
- Drive your team's scientific agenda and role model publishing of research results at peer-reviewed venues when appropriate and not precluded by business considerations
- Actively participate in hiring and mentor other scientists, improving their skills and ability to deliver
- Write clear narratives and documentation describing scientific solutions and design choices
US, CA, Palo Alto
Global Optimization is a strategic initiative aimed at improving the Amazon advertiser experience at global scale. We are looking for a passionate Applied Scientist to help pioneer the next generation of agentic AI applications for Amazon advertisers. In this role, you will design agentic architectures, develop tools and datasets, and contribute to building systems that can reason, plan, and act autonomously across complex advertiser workflows at global scale. You will work at the forefront of applied AI, developing methods for fine-tuning, reinforcement learning, and preference optimization, while helping create evaluation frameworks that ensure safety, reliability, and trust at scale. You will work backwards from the needs of advertisers—delivering customer-facing products that directly help them create, optimize, and grow their campaigns. Beyond building models, you will advance the agent ecosystem by experimenting with and applying core primitives such as tool orchestration, multi-step reasoning, and adaptive preference-driven behavior. This role requires working independently on ambiguous technical problems, collaborating closely with scientists, engineers, and product managers to bring innovative solutions into production.

Key job responsibilities
- Design and build agents that improve advertisers' experiences globally
- Design and implement advanced model and agent optimization techniques, including supervised fine-tuning, instruction tuning, and preference optimization (e.g., DPO/IPO)
- Design and implement optimization models that work at global scale, taking into account the nuances of multiple countries
- Innovate new science models to help advertisers scale their campaigns globally
- Curate datasets and tools for MCP
- Build evaluation pipelines for agent workflows, including automated benchmarks, multi-step reasoning tests, and safety guardrails
- Develop agentic architectures (e.g., CoT, ToT, ReAct) that integrate planning, tool use, and long-horizon reasoning
- Prototype and iterate on multi-agent orchestration frameworks and workflows
- Collaborate with peers across engineering and product to bring scientific innovations into production
- Stay current with the latest research in LLMs, RL, agent-based AI, and optimization, and translate findings into practical applications

About the team
The Sponsored Products and Brands team at Amazon Ads is re-imagining the advertising landscape through the latest generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. The Global Optimization team within Sponsored Products and Brands is focused on guiding and supporting 1.6MM advertisers to meet their advertising needs of creating and managing ad campaigns at global scale. At this scale, the complexity of diverse advertiser goals, campaign types, and market dynamics creates both a massive technical challenge and a transformative opportunity: even small improvements in guidance systems can have outsized impact on advertiser success and Amazon’s retail ecosystem.
Our work is grounded in state-of-the-art agent architectures, tool integration, reasoning frameworks, and model customization approaches (including tuning, MCP, and preference optimization), ensuring our systems are both scalable and adaptive.
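For readers unfamiliar with the agent patterns this posting names (CoT, ToT, ReAct), they share a common shape: the model alternates reasoning with tool calls, observing each result, until it can answer. As an illustration only, here is a minimal self-contained ReAct-style loop; the tool, campaign ID, and scripted "model" are hypothetical stand-ins, not Amazon systems.

```python
# Minimal, hypothetical ReAct-style agent loop: the agent alternates
# between acting (calling a tool) and observing the result until it
# decides to answer. The "model" here is a stub that scripts the steps;
# a real system would call an LLM at each turn.

def lookup_budget(campaign_id):
    # Hypothetical tool: return a campaign's daily budget in dollars.
    return {"c-1": 50.0}.get(campaign_id, 0.0)

TOOLS = {"lookup_budget": lookup_budget}

def scripted_model(history):
    # Stand-in for an LLM: decide the next step from the transcript.
    if not any(step[0] == "observation" for step in history):
        return ("act", "lookup_budget", "c-1")
    budget = history[-1][1]
    return ("answer", f"Campaign c-1 daily budget is ${budget:.2f}")

def react_loop(max_steps=5):
    # Start with an initial "thought", then loop: act, observe, repeat.
    history = [("thought", "I need the campaign budget before advising.")]
    for _ in range(max_steps):
        step = scripted_model(history)
        if step[0] == "answer":
            return step[1]
        _, tool_name, arg = step
        history.append(("observation", TOOLS[tool_name](arg)))
    return "step budget exhausted"

print(react_loop())  # Campaign c-1 daily budget is $50.00
```

The point of the sketch is the control flow, not the stubbed logic: production agent frameworks swap in a real model, a registry of many tools, and guardrails around each step.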
IN, KA, Bengaluru
Are you passionate about solving complex business problems at scale through Generative AI? Do you want to help build intelligent systems that reason, act, and learn from minimal supervision? If so, we have an exciting opportunity for you on Amazon's Trustworthy Shopping Experience (TSE) team. At TSE, our vision is to guarantee customers a worry-free shopping experience by earning their trust that the products they buy are safe, authentic, and compliant with regulations and policy. We do this in close partnership with our selling partners, empowering them with best-in-class tools and expertise to offer a high-quality, compliant selection that customers trust. As a Research Scientist I, you will bring subject matter expertise with fundamental improvements in at least one relevant discipline (e.g., NLP, computer vision, representation learning, agentic architecture) to contribute to next-generation agentic AI solutions that automate complex manual investigation processes at Amazon scale. You will invent, refine, and experiment with solutions spanning agentic reasoning, self-supervised representation learning, few-shot adaptation, multimodal understanding, and model compression. With guidance from senior scientists, you will stay current on research trends and benchmark your results against the state of the art. You will help design and execute experiments to identify optimal solutions, initiating the development and implementation of small components with team guidance. You will write secure, stable, testable, and well-documented production code at the level of an SDE I, rigorously evaluating models and quantifying performance. You will handle data in accordance with Amazon policies, troubleshoot issues to root cause, and ensure your work does not put the company at risk. Your scope of influence will typically be at the self-level, with the possibility of mentoring interns. 
You will participate in team design and prioritization discussions, learn the business context behind TSE's products, and escalate problems with proposed solutions. You will publish internal technical reports and may contribute to peer-reviewed publications and external review activities when aligned with business needs. This role offers a unique opportunity to contribute to end-to-end AI development—from research through production—with your contributions serving hundreds of millions of customers within months, not years.

Key job responsibilities
• Contribute to the design and development of agentic AI systems with multi-step reasoning, autonomous task execution, and multimodal intelligence, including feedback and memory mechanisms, leveraging reinforcement learning techniques for agent decision-making and policy optimization, with input and guidance from senior scientists
• Develop novel models built on top of SFT (Supervised Fine-tuning) and RFT (Reinforced Fine-tuning) approaches, as well as few-shot approaches based on multimodal datasets spanning text, images, and structured data, applying mathematical optimization techniques to improve efficiency, resource allocation, and decision-making in complex workflows, working alongside senior scientists to identify optimal solutions
• Contribute to building production-ready deep learning and conventional ML solutions, including multimodal fusion and cross-modal alignment techniques that seamlessly connect visual, textual, and relational understanding, to support automation requirements within your team's scope
• Help identify customer and business problems; use reasonable assumptions, data, and customer requirements to solve well-defined scientific problems involving multimodal inputs such as unstructured text, documents, product images, and relational data, developing representations that capture complementary signals across modalities and mapping business goals to scientific metrics
• May co-author research papers for peer-reviewed internal and/or external venues, including contributions in areas such as multimodal representation learning and vision-language modeling, and contribute to the wider scientific community by reviewing research submissions, when aligned with business needs
• Prototype rapidly, iterate based on feedback, and deliver small components at SDE I level—including multimodal data pipelines and inference modules—that integrate into production-scale systems
• Write secure, stable, testable, maintainable, and well-documented code, balancing model capability, deployment cost, and resource usage across multimodal architectures while understanding state-of-the-art data structures, algorithms, and performance tradeoffs
• Rigorously test code and evaluate models across individual and combined modalities, quantifying their performance; troubleshoot issues, research root causes, and thoroughly resolve defects, leaving systems more maintainable
• Participate in team design, scoping, and prioritization discussions through clear verbal and written communication; seek to learn the business context, science, and engineering behind your team's products, including how multimodal signals contribute to trust and safety decisions
• Participate in engineering best practices with peer reviews; clearly document approaches and communicate design decisions; publish internal technical reports to institutionalize scientific learning
• Help train and mentor scientist interns; identify and escalate problems with proposed solutions, taking ownership or ensuring clear hand-off to the right owner
US, NY, New York
We are seeking a scientist to further the development and application of analytics methods to examine the complex data flows of Amazon Ads and to translate deep-dives into actionable insights for our product teams. In this role you will develop new tools to analyze our advertising data to help improve the performance of our bidding algorithms and our targeting and relevance systems, help advance our supply strategy, and evaluate the adoption and impact of feature releases.

Key job responsibilities
- Analyze data trends regarding supply, optimization, ad load, and advertising mix effects that affect advertiser performance and contribute to achieving advertiser goals
- Present papers to senior leaders on issues such as the impact of feature development on identity recognition rates and changes to ad selection systems to improve fill rate, highlighting insights that will inform our business development and engineering roadmaps
- Formalize our analytics approach to Ads auctions by analyzing bid spreads and auction depth and simulating impacts of potential auction structure changes
- Identify, standardize, and operationalize KPIs to effectively measure the performance of all systems involved in ad serving, and use trend insights to inform business priorities
- Partner with engineering teams to define data logging requirements and get these prioritized in engineering roadmaps
- Validate financial models through analysis
- Develop and own ad revenue and supply intelligence analytics decks that provide ongoing deep-dives

A day in the life
The Ads Scientist will work closely with business leaders and engineers on developing a common data architecture that will optimize our data logging at different grains and allow data interoperability from bid flow to optimization to campaign delivery. The scientist will then analyze the data and present papers and ongoing reports on actionable insights.

About the team
At Amazon, we embrace our differences.
We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups in over 190 chapters globally. We have innovative benefit offerings, and we host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon’s culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust. Our team also puts a high value on work-life balance. Striking a healthy balance between your personal and professional life is crucial to your happiness and success here, which is why we aren’t focused on how many hours you spend at work or online. Instead, we’re happy to offer a flexible schedule so you can have a more productive and well-balanced life—both in and outside of work. Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.
US, VA, Arlington
The People eXperience and Technology Central Science (PXTCS) team uses economics, behavioral science, statistics, and machine learning to proactively identify mechanisms and process improvements that simultaneously improve Amazon and the lives, well-being, and value of work of Amazonians. The Benefits Science team is looking for an economist to transform complex business challenges into actionable scientific insights. In this role, you will partner directly with business leaders to design and evaluate pilots, build models using large-scale data, and scale successful prototypes into company-wide policies and programs. We're looking for someone who can combine rigorous scientific thinking with practical business acumen and is passionate about using economics to improve employee experiences at scale. The ideal candidate will thrive in interdisciplinary environments, working alongside engineers, data scientists, and business leaders from diverse backgrounds.

Key job responsibilities
- Design and conduct rigorous evaluations of benefits programs
- Support the development and application of structural models
- Develop experiments to evaluate the impact of benefits initiatives
- Communicate complex findings to business stakeholders in clear, actionable terms
- Work with engineering teams to develop scalable tools that automate and streamline evaluation processes

A day in the life
Work with teammates to apply economic methods to business problems. This might include identifying the appropriate research questions, writing code to implement a difference-in-differences (DID) analysis or estimate a structural model, or writing and presenting a document with findings to business leaders. Our economists also collaborate with partner teams throughout the process, from understanding their challenges, to developing a research agenda that will address those challenges, to helping them implement solutions.
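As an illustration of the DID analysis this posting mentions, here is a minimal two-period, two-group difference-in-differences sketch in Python. The estimator itself is standard; the group means are hypothetical example numbers, not data from any Amazon program.

```python
# Illustrative sketch only: a minimal two-period, two-group
# difference-in-differences (DID) estimate. The treated group's
# before/after change, minus the control group's change over the
# same window, nets out time trends shared by both groups.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Return the DID effect estimate from four group means."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical outcome means (e.g., a well-being survey score)
# before and after a benefits pilot:
effect = did_estimate(treat_pre=70.0, treat_post=78.0,
                      control_pre=69.0, control_post=72.0)
print(effect)  # 5.0
```

In practice this would be run as a regression with an interaction term (so one can add covariates and standard errors), but the point estimate in the simple two-by-two case reduces to exactly this difference of differences.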
US, WA, Seattle
Amazon Advertising is one of Amazon's fastest growing and most profitable businesses. Our products are used daily to surface new selection and provide customers a wider set of product choices along their shopping journeys. The business is focused on generating value for shoppers as well as advertisers. Our team uses a combination of econometrics, machine learning, and data science to build disruptive products for all our Advertising products. We also generate insights to guide Amazon Advertising strategy, providing direct support to senior leadership. We are looking for an experienced Economist who has a deep passion for building state-of-the-art causal models and ads measurement and optimization solutions, the ability to communicate data insights and scientific vision, and the drive to execute strategic projects.

As an Economist on this team, you will:
- Lead the design and analysis of large-scale experiments to measure advertising effectiveness across Amazon's advertising products
- Develop novel causal inference and econometric methodologies to solve attribution and incrementality measurement challenges at scale
- Invent new optimization frameworks that translate measurement insights into actionable bidding, targeting, and budget allocation strategies for advertisers
- Define the long-term science roadmap for ads measurement and optimization, identifying high-impact research directions and driving alignment across engineering, product, and science teams
- Build and refine structural and reduced-form models that quantify the causal impact of advertising on consumer behavior, sales, and brand outcomes
- Partner with engineering teams to operationalize econometric models into production systems serving millions of advertisers
- Mentor and develop a team of economists and applied scientists, raising the bar on methodological rigor and scientific impact
- Influence senior leadership through clear communication of complex economic concepts, shaping investment decisions and product strategy
- Collaborate cross-functionally with product managers, engineers, and business leaders to translate business problems into well-defined economic questions with scalable solutions

Why you will love this opportunity: Amazon is investing heavily in building a world-class advertising business. This team defines and delivers a collection of advertising products that drive discovery and sales. Our solutions generate billions in revenue and drive long-term growth for Amazon’s Retail and Marketplace businesses. We deliver billions of ad impressions and millions of clicks daily, and break fresh ground to create world-class products. We are a highly motivated, collaborative, and fun-loving team with an entrepreneurial spirit - with a broad mandate to experiment and innovate.

Impact and Career Growth: You will invent new experiences and influence customer-facing shopping experiences that help suppliers grow their retail business, along with the auction dynamics that leverage native advertising; this is your opportunity to work within one of the fastest-growing businesses across all of Amazon! You will define a long-term science vision for our advertising business, driven by our customers' needs, and translate that direction into specific plans for research and applied scientists, as well as engineering and product teams. This role combines science leadership, organizational ability, technical strength, product focus, and business understanding.
US, WA, Seattle
Interested in influencing what customers around the world see when they turn on Prime Video? The Prime Video Personalization and Discovery team matches customers with the right content at the right time, at all touch points throughout the content discovery journey. We are looking for a customer-focused, solutions-oriented Principal Data Scientist to develop next-gen measurement and experimentation systems within Prime Video Personalization and Discovery. You'll be part of an embedded science team driving projects across product and engineering teams that ultimately influence what millions of customers around the world see when they log into Prime Video. The ideal candidate brings experience building experiment-based measurement systems at scale, excellent stakeholder communication skills, and the ability to balance technical rigor with delivery speed and customer impact. You will build cross-functional support within Prime Video for high-quality, rigorous measurement, assess business problems, and support iterative scientific solutions that balance short-term delivery with long-term science roadmaps.

Key job responsibilities
- Define and drive the multi-year vision for experiment-based measurement systems within Prime Video
- Partner with product stakeholders and science peers to identify strategic data-driven opportunities to improve the customer experience
- Communicate findings, conclusions, and recommendations to technical and non-technical business leaders across Prime Video
- Educate senior leaders about and advocate for high-quality measurement as an input to data-driven decisions
- Mentor junior scientists and review technical artifacts to ensure quality
- Stay up-to-date on the latest data science tools, techniques, and best practices and help evangelize them across the organization
US, WA, Seattle
Do you want to help shape the future of Amazon's physical retail presence? The Worldwide Grocery Stores (WWGS) Location Strategy and Analytics team is looking for a Research Scientist to join us in developing advanced forecasting models, optimization models, and analytical tools to support critical real estate and store planning decisions for Amazon's Worldwide Grocery business, including Whole Foods Market. Our team is responsible for developing predictive models and tools to support Real Estate and Topology analysts in making important decisions regarding our stores—including new store openings, relocations, closures, remodels, design, new formats, and more. We leverage statistical modeling, machine learning, and GenAI to build solutions for store sales forecasting, sales transfer effects, macrospace optimization, store network optimization, store network diffusion planning, and causal effects. As a Research Scientist on our team, you will apply your technical and analytical skills to tackle complex business problems and develop innovative solutions to improve our forecasting and decision-making capabilities. You will collaborate with a diverse team of scientists, economists, and business partners to identify opportunities, develop hypotheses, build internal products, and translate analytical insights into actionable recommendations for Executive Leadership.

Key job responsibilities
- Design and implement forecasting models and machine learning solutions to predict store performance and optimize our retail network.
- Analyze large datasets to uncover insights and patterns related to store performance, customer behavior, and market dynamics.
- Develop end-to-end solutions, tools, and frameworks to scale our ML model development and data analysis.
- Leverage GenAI models to enhance user interaction with our solutions, improve overall user experience, and build new features.
- Present research findings and recommendations to scientists, business leaders, and executives.
- Collaborate with cross-functional teams to drive adoption of models and insights.
- Stay current on the latest developments in relevant fields and propose innovative approaches.

About the team
We are a team of scientists passionate about leveraging data and advanced analytics to drive strategic decisions for Amazon's grocery business. Our work directly impacts Amazon's worldwide grocery store growth and development strategy. We foster a collaborative environment where team members are encouraged to think creatively, challenge assumptions, and pursue novel approaches to solving complex problems. Our team is at the forefront of applying a multitude of techniques - including GenAI - to improve our scientific solutions and products.
US, WA, Bellevue
Have you ever ordered a product on Amazon and, when that box with the smile arrived, wondered how it got to you so fast? Wondered where it came from and how much it cost Amazon? If so, the Amazon Global Supply Chain Optimization Technology (SCOT) organization is for you. Watch this video to learn more about our organization, SCOT: http://bit.ly/amazon-scot

We are the Optimal Sourcing Systems (OSS) team within SCOT and are looking for a Data Scientist II to join us! OSS designs and builds systems that measure and manage Amazon’s supplier capabilities, identify and react to supply disruptions, and prioritize inbound freight for our global network. OSS software is used in every country Amazon serves and is a critical link to ensuring Amazon offers the products our customers want, at the lowest possible cost. This team under OSS orchestrates and tracks inventory movement into Amazon's network, maintains performance feedback loops, and ensures vendor compliance.

The Data Scientist II, in partnership with the Product Management, Operations, and Tech teams, will lead efforts in four areas:
1) Building models to set optimal parameters such as lead times to ensure the accuracy of our Inbound network
2) Building analytical frameworks to identify and drive improvements in purchase order lifecycle management and defect coaching/chargebacks
3) Developing Gen AI solutions related to dispute evaluation and vendor coaching
4) Building models and solutions to enable collaborative inventory planning with vendors

The ideal candidate thrives in ambiguous problem spaces, relishes working with large volumes of data, and enjoys the challenge of highly complex supply chain contexts. They can translate complex business logic into scalable models and communicate insights effectively to both technical and non-technical stakeholders. Keys to success in this role include exceptional analytics, statistics, judgment, and communication skills.
Experience with supply chain optimization, operations research, or vendor management systems is a plus.

Key job responsibilities
- Collaborate with product managers, science, and engineering teams to design and implement model solutions for Sourcing Execution & Performance systems
- Use large datasets or experiments to make causal inferences or predictions
- Work with engineers to automate science analysis processes and build scalable measurement solutions
- Interpret data, write reports, and make actionable recommendations
- Drive technical standards and best practices for the team's Science solutions
- Mentor and provide technical guidance to other team members on complex projects

A day in the life
Amazon offers a full range of benefits that support you and eligible family members, including domestic partners and their children. Benefits can vary by location, the number of regularly scheduled hours you work, length of employment, and job status such as seasonal or temporary employment. The benefits that generally apply to regular, full-time employees include:
- Medical, Dental, and Vision Coverage
- Maternity and Parental Leave Options
- Paid Time Off (PTO)
- 401(k) Plan

If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you’re passionate about this role and want to make an impact on a global scale, please apply!