SocialBot Grand Challenge FAQs

Frequently asked questions about the challenge.
General
What is a SocialBot?
In the context of Alexa Prize, a SocialBot is an Alexa skill that can engage customers in a conversation about popular topics such as entertainment, sports, politics, technology, and fashion.
Can I choose to build any kind of conversational bot?
No, this competition focuses solely on SocialBots. Your SocialBot will need to converse about a wide range of popular societal topics and current events, and not only topics of your choosing. You will develop your own technologies relevant to your approach, such as natural language understanding, dialog management, knowledge acquisition, commonsense reasoning, conversational planning, machine learning, natural language generation, etc. Far-field automatic speech recognition (ASR) and text-to-speech (TTS) will be provided by Amazon, along with additional data, models, and tools.
What will my SocialBot do?
Your SocialBot will be an Alexa skill that can converse coherently and engagingly with humans on popular topics and current events. If your SocialBot advances to the pertinent stage of the competition, Alexa users in the U.S. may ask to converse with your SocialBot about topics of interest to them, such as baseball playoffs, celebrity gossip, or scientific breakthroughs. For example:
  • User: Let’s chat about the Mars Mission.
  • SocialBot: There are multiple Mars missions, some public and some private.
  • User: Who do you think will succeed?
  • SocialBot: I think more than one will succeed. The first one to get to Mars will open the doors for others.
  • User: I’d love to go to Mars.
  • SocialBot: Me too. Luckily I can hitch a ride on any computer.
  • User: That’s very funny.
  • SocialBot: The biggest challenge isn’t technology, it’s funding. It will cost $500 billion to send humans to Mars.
Your SocialBot will continue the turn-by-turn interaction, starting with a topic the user asked for, until the user chooses to stop. Like an everyday human conversation, the interaction may shift naturally to related topics, as in the example above, but it should remain coherent, relevant, and engaging. Your SocialBot may suggest topics to keep the conversation flowing. The goal is to keep the conversation from deteriorating to the point where the user loses interest.
How will I build my SocialBot?
You will use the Alexa Skills Kit (ASK) to build an Alexa skill, hosted on AWS Lambda, that will create the end-to-end conversational experience for a user. Using the provided APIs, your skill will receive as input the text of the user’s utterance and produce as output a text sentence that will be spoken to the user. You do not need to tackle ASR (automatic speech recognition) or TTS (text-to-speech). You will also be provided with the CoBot Toolkit (a conversational bot toolkit), a software development kit that works with ASK and was built specifically for Alexa Prize teams to reduce the engineering involved in setting up a SocialBot, allowing teams to focus on the science.
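The text-in, text-out contract described above can be sketched as a minimal handler. This is only an illustration: the event fields (`"utterance"`, `"response"`) and the `choose_response` helper are placeholders, and the real request/response format is defined by ASK and the CoBot Toolkit.

```python
# Minimal sketch of the text-in, text-out contract described above.
# NOTE: the event fields ("utterance", "response") are illustrative
# placeholders; the real request/response format is defined by ASK
# and the CoBot Toolkit.

def choose_response(utterance):
    # Placeholder dialog policy: acknowledge the user's topic.
    if not utterance:
        return "Hi! What would you like to chat about?"
    return f"Interesting, tell me more about {utterance}."

def handler(event, context=None):
    utterance = event.get("utterance", "")   # user's speech, already transcribed by ASR
    text = choose_response(utterance)        # your dialog logic goes here
    return {"response": {"text": text}}      # spoken back to the user via TTS
```

The point is the shape of the loop: ASR-produced text in, a single text sentence out, with everything in between (understanding, state, generation) left to your team.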


Your skill will need to determine an appropriate response at each turn of the conversation. It will also need to keep up with current news and events using the provided data sources. You may use additional data sources or libraries if you wish, subject to the terms described in the Official Rules.
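One way to picture the per-turn decision is a small dialog state that tracks the active topic and proposes a new one when the conversation has nowhere to go. The keyword matching below is a toy stand-in, not a real NLU component, and the class and method names are invented for illustration.

```python
# Illustrative turn-level dialog manager: track the current topic and
# suggest a new one when the conversation stalls. Topic detection here
# is a toy keyword match, not a real NLU component.
class DialogState:
    def __init__(self, suggested_topics):
        self.topic = None
        self.suggested = list(suggested_topics)

    def next_response(self, utterance):
        text = utterance.lower()
        if "chat about" in text:
            # User named a topic explicitly, e.g. "let's chat about Mars".
            self.topic = text.split("chat about", 1)[1].strip(" .")
            return f"Sure, let's talk about {self.topic}."
        if self.topic:
            # Stay on the active topic.
            return f"Speaking of {self.topic}, what interests you most about it?"
        if self.suggested:
            # No topic yet: propose one to keep the conversation flowing.
            return f"How about we chat about {self.suggested.pop(0)}?"
        return "What would you like to talk about?"
```

A real SocialBot would replace each branch with learned components (intent classification, topic tracking, response generation), but the turn-by-turn structure is the same.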
What is the Alexa Skills Kit (ASK)?
The Alexa Skills Kit (ASK) is a collection of free, self-service APIs, tools, documentation, and code samples that make it fast and easy for you to add skills to Alexa. Your team will use ASK to build, deploy, and test a SocialBot that is capable of conversing with millions of Alexa users.
Competition details
What is the goal of the challenge?
The goal of the SocialBot Grand Challenge is to advance several areas of conversational AI, including natural language understanding (NLU), context modeling, dialog management, commonsense reasoning, natural language generation (NLG), and knowledge acquisition. The grand challenge objective is to create a SocialBot that converses coherently and engagingly with humans on popular topics for 20 minutes while achieving a user rating of at least 4.0/5.0.
How will winners be selected?
Through various phases of the competition, SocialBots will be evaluated based on feedback from Alexa users and assessment by Amazon.


Following the initial feedback period, SocialBots that have been certified and published will be evaluated on criteria such as the average interaction rating, uptime requirements, and ability to filter offensive content in order to advance to the Semifinals Interaction Period.



During the Semifinals Interaction Period, Alexa customers will evaluate the semifinalist SocialBots. The two SocialBots with the highest Semifinals Interaction Rating Average, and up to three more SocialBots selected by Amazon, will advance to the finals.



Teams that advance to and complete the Semifinals Interaction Period, regardless of whether they advanced to the Finals Event, will be eligible to compete for Scientific Invention and Innovation Prizes based on the level of scientific invention and innovation demonstrated by each Entrant Team throughout the Competition.



The Teams whose SocialBots attain the three highest Composite Scores during the finals event will be the winners of the Overall Performance Prizes.
Will this competition be judged like a Turing Test?
No. The goal of the Alexa Prize is to create SocialBots that engage in interesting, human-like conversations, not to make them indistinguishable from a human when compared side-by-side. While the SocialBots built for the Alexa Prize will be human-like in some respects, they will be very different in others, and could easily reveal themselves in a Turing Test. For example, SocialBots may have ready access to much more information than a human. Asking the SocialBots to act like a human could diminish the customer experience and hinder the efforts of the participants to build the best SocialBot to further conversational AI.
When and where is the finals event?
The finals event will be held in July 2023 at a location to be determined, with a science invention and innovation presentation review to follow. The competition results will be announced in August 2023.
Can we use other funding to help us participate in this challenge?
Yes, you may use other funding to support your team, subject to the terms described in the Official Rules. External funding must be disclosed to Amazon.
Will Alexa customers be able to engage with our SocialBot?
Your team will be required to submit its SocialBot for certification and publication by the Amazon Alexa team. After certification, you will enter the Internal Amazon Beta Period, where Amazon employees will test your SocialBot and provide feedback. After the Internal Amazon Beta Period, we will allow Alexa users to try your SocialBot and provide feedback to you. Amazon may impose requirements that the SocialBots must meet before they will be made available to Alexa users. Such requirements may include, among other things, a minimum average customer rating, uptime requirements, or the ability to consistently filter offensive content.
Which Alexa users will be able to interact with the SocialBots, and what languages must they support?
SocialBots will be made available to Alexa users who are located in the United States or who select the United States as their preferred marketplace. Your team must build its SocialBot using U.S. English.
Will we publish our research from the Alexa Prize?
Yes. Publishing research papers as an outcome of your work on the Alexa Prize is required for all participating teams, although teams may not publish Amazon confidential information, as described in the Official Rules. The Alexa Prize requires all teams to submit a technical paper for the Alexa Prize proceedings; your SocialBot will not be selected for the finals if your team does not submit one. Papers will be published online at the end of the competition and made publicly available.

Teams may also publish research papers in third-party publications and conferences, as long as all papers are provided to Amazon for review at least two weeks before the submission deadlines and no research papers are published before the Alexa Prize proceedings are published, unless Amazon approves otherwise in writing.
Who will own the intellectual property rights in my submission?
You will retain ownership over your SocialBot. Amazon will have a non-exclusive license to any technology or software you develop in connection with the competition. See the Official Rules for details.
Eligibility
Who can apply to participate?
The Alexa Prize is open to full-time students enrolled in an accredited university, with the exception of universities in Cuba, Iran, Syria, North Korea, Sudan, the region of Crimea, and where prohibited by law (see Official Rules). Proof of enrollment will be required to participate.
Can I participate if I don’t attend a university?
No. The Alexa Prize is open only to full-time enrolled university students.
Do I need to be enrolled in a university program throughout my participation in the competition?
All participating team members must remain full-time students in good standing at their university while participating in the competition.
Do I need to be a certain age?
Participants must be at or above the age of majority in the country, state, province, or jurisdiction of residence at the time of entry.
Can I enroll if a family member is an Amazon employee?
Immediate family members and household members of Amazon employees, directors, and contractors are not eligible to participate. See Official Rules for additional restrictions.
Teams
How many teams will be selected to participate?
All applications will be reviewed and evaluated by Amazon. Up to ten teams will be selected and sponsored by Amazon. Each sponsored team will receive a $250,000 grant (intended to support two full-time students and a month of faculty time), free Alexa devices, and free AWS hosting, including access to CPU- and GPU-based machines, SQL and NoSQL databases, and object storage. See Official Rules for details.
How many team members can our team have?
There is no minimum or maximum number of team members. All team members must be enrolled in their university throughout their participation. All teams will receive a $250,000 grant regardless of how many members are on the team. We recommend a team with four to six students with diverse fields of study or areas of expertise.
Can students from different universities be on the same team?
No. Teams must be comprised of students attending the same university.
Can one university have more than one team?
Yes, universities may have more than one team. Multiple teams cannot have the same faculty advisor.
Can I participate on two separate teams?
No. You can only be a part of one team for the duration of the competition.
Can undergraduate and graduate students work together?
Yes, teams may be comprised of undergraduate and graduate students.
Do I need a faculty advisor?
All teams must nominate a faculty advisor and include the faculty advisor’s consent in the applications.
What is the role of the faculty advisor?
Faculty advisors will advise students on technical directions and be a sounding board for new ideas, similar to a graduate school advisor. They will also act as the official representative from the university for this competition.
Can we add or remove team members during the competition?
During the competition, there will be a period of time during which faculty advisors may request to remove or add members to the team, subject to approval by Amazon. See Official Rules for details.
Can we discuss our SocialBot with faculty or students who aren’t on our team?
Only team members may work on their SocialBots. However, the faculty advisor and other students and faculty members at your university may provide support and advice to your team and may co-author technical publications and research papers.
Application process
How do we apply?
Begin the application via YouNoodle.
What do we need to apply?
Once you have selected your team members, team leader, and faculty advisor, you are ready to begin the application process.
Do all team members have to apply?
Each team must have a team lead, who should submit only one application on behalf of the whole team. Your application must include all of your team members’ information.
Is there an application fee?
There is no application fee.
How will teams be selected to participate?
All applications will be reviewed. Teams will be selected by Amazon based on the following criteria: (1) the potential scientific contribution to the field; (2) the technical merit of the approach; (3) the novelty of the idea; and (4) an assessment of the team’s ability to execute against their plan. Please be sure to provide enough detail in your application to enable evaluation of your proposal.
Prizes
What are the prizes for winning the competition?
Overall Performance Prize: For the three teams that build the SocialBot with the highest overall performance, the first-place team will win $250,000, the second-place team will win $50,000, and the third-place team will win $25,000. These prizes will be paid directly to the students on each winning team.


Scientific Invention and Innovation Prize: For the three teams that demonstrate the most scientific invention and innovation throughout the competition, the first-place team will win $250,000, the second-place team will win $50,000, and the third-place team will win $25,000. These prizes will be paid directly to the students on each winning team.



Grand Prize: If, and only if, the SocialBot of the team that wins the first-place Overall Performance Prize also achieves the grand challenge, conversing coherently and engagingly with humans for 20 minutes in at least two-thirds of its conversations at the finals event while achieving a composite score of 4.0 or higher, that team’s university will be awarded a $1 million research grant.



See Official Rules for details.
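The grand-challenge thresholds stated above can be encoded as a simple check. This is a sketch of the FAQ's stated thresholds only; the exact scoring and eligibility rules are defined in the Official Rules, and the function name is invented for illustration.

```python
# Sketch of the grand-challenge thresholds stated above: at least
# two-thirds of finals conversations must reach 20 minutes, and the
# composite score must be 4.0 or higher. Exact scoring is defined in
# the Official Rules; this only encodes the FAQ's stated thresholds.
def meets_grand_challenge(durations_minutes, composite_score):
    if not durations_minutes:
        return False
    reached_20 = sum(1 for d in durations_minutes if d >= 20.0)
    return (reached_20 / len(durations_minutes) >= 2 / 3
            and composite_score >= 4.0)
```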
Do we get a stipend and devices to participate in the Alexa Prize?
Up to ten teams will be sponsored to participate in the competition. Each sponsored team’s university will receive a $250,000 research grant to help fund the team’s participation.


The sponsorship includes Alexa-enabled devices, free AWS services to support the development of the team’s SocialBot, and support from the Alexa Prize team.
How can the grant be spent?
The grant is intended to support two full-time students for the duration of the competition and one month of the faculty advisor’s salary. No more than 35% of the research grant may be allocated to administrative fees. If your team would like to use the funds in another manner, your faculty advisor must receive approval from Amazon before doing so.
How will the prizes be distributed among a team?
Each Overall Performance Prize and the Scientific Invention and Innovation Prize will be distributed equally among the members of each winning team.
Timeline
What are the key milestones of the competition?
Teams must submit their applications by October 5, 2022. Teams selected to participate in the competition will be notified in October or November 2022. The competition will run from approximately November 2022 through August 2023. See Official Rules for details.

Latest news

The latest updates, stories, and more about Alexa Prize.
  • Behnam Hedayatnia
    March 5, 2019
    The 2018 Alexa Prize featured eight student teams from four countries, each of which adopted distinctive approaches to some of the central technical questions in conversational AI. We survey those approaches in a paper we released late last year, and the teams themselves go into even greater detail in the papers they submitted to the latest Alexa Prize Proceedings. Here, we touch on just a few of the teams’ innovations.
  • Anushree Venkatesh
    February 27, 2019
    To ensure that Alexa Prize contestants can concentrate on dialogue systems — the core technology of socialbots — Amazon scientists and engineers built a set of machine learning modules that handle fundamental conversational tasks and a development environment that lets contestants easily mix and match existing modules with those of their own design.
US, WA, Bellevue
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
GB, London
As a STRUC Economist Intern, you'll specialize in structural econometric analysis to estimate fundamental preferences and strategic effects in complex business environments. Your responsibilities include:
* Analyzing large-scale datasets using structural econometric techniques to solve complex business challenges
* Applying discrete choice models and methods, including logit-family models (such as BLP and nested logit) and models with alternative distributional assumptions
* Utilizing advanced structural methods, including dynamic models of customer or firm decisions over time, applied game theory (firm entry and exit), auction models, and labor market models
* Building and refining comprehensive datasets and performing data analysis at scale for in-depth structural economic analysis
* Collaborating with economists, scientists, and business leaders to develop data-driven insights and strategic recommendations
* Tackling diverse challenges including pricing analysis, competition modeling, strategic behavior estimation, contract design, and marketing strategy optimization
* Helping business partners formalize and estimate business objectives to drive optimal decision-making and customer value
* Presenting complex analytical findings to business leaders and stakeholders
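The discrete choice modeling mentioned above rests on logit choice probabilities. A minimal numpy sketch of the multinomial logit building block (the simplest member of the family that BLP and nested logit extend) is below; the data and parameter values are invented for illustration.

```python
import numpy as np

def choice_probabilities(X: np.ndarray, beta: np.ndarray) -> np.ndarray:
    """Multinomial logit choice probabilities for one decision maker.

    X    : (J, K) matrix of observed attributes for J alternatives
    beta : (K,) taste parameters
    Returns P_j = exp(X_j @ beta) / sum_k exp(X_k @ beta).
    """
    utilities = X @ beta
    utilities -= utilities.max()      # subtract max for numerical stability
    expu = np.exp(utilities)
    return expu / expu.sum()

def log_likelihood(X_list, choices, beta) -> float:
    """Sum of log choice probabilities over decision makers,
    the objective maximized when estimating beta."""
    return sum(np.log(choice_probabilities(X, beta)[j])
               for X, j in zip(X_list, choices))

# Two alternatives with one attribute (say, price) for two individuals.
X1 = np.array([[1.0], [2.0]])
X2 = np.array([[0.5], [1.5]])
beta = np.array([-1.0])   # negative taste for price
probs = choice_probabilities(X1, beta)
```

With a negative price coefficient, the cheaper alternative gets the higher choice probability; nested logit and BLP relax this model's independence-of-irrelevant-alternatives restriction.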
US, WA, Seattle
At Amazon Selection and Catalog Systems (ASCS), our mission is to power the online buying experience for customers worldwide so they can find, discover, and buy any product they want. We innovate on behalf of our customers to ensure uniqueness and consistency of product identity and to infer relationships between products in the Amazon Catalog, driving the selection gateway for the search and browse experiences on the website.
We're solving a fundamental AI challenge: establishing product identity and relationships at unprecedented scale. Using Generative AI, Visual Language Models (VLMs), and multimodal reasoning, we determine what makes each product unique and how products relate to one another across Amazon's catalog. The scale is staggering: billions of products, petabytes of multimodal data, millions of sellers, dozens of languages, and enormous product diversity, from electronics to groceries to digital content.
The research challenges are immense. GenAI and VLMs hold transformative promise for catalog understanding, but we operate where traditional methods fail: ambiguous problem spaces, incomplete and noisy data, inherent uncertainty, reasoning across both images and textual data, and explaining decisions at scale. Establishing product identities and groupings requires sophisticated models that reason across text, images, and structured data while maintaining accuracy and trust for high-stakes business decisions affecting millions of customers daily.
Amazon's Item and Relationship Platform group is looking for an innovative and customer-focused applied scientist to help us make the world's best product catalog even better. In this role, you will partner with technology and business leaders to build new state-of-the-art algorithms, models, and services to infer product-to-product relationships that matter to our customers.
You will pioneer advanced GenAI solutions that power next-generation agentic shopping experiences, working in a collaborative environment where you can experiment with massive data from the world's largest product catalog, tackle problems at the frontier of AI research, and rapidly implement and deploy your algorithmic ideas at scale across millions of customers.
Key job responsibilities
* Formulate open research problems at the intersection of GenAI, multimodal reasoning, and large-scale information retrieval, defining the scientific questions that transform ambiguous, real-world catalog challenges into publishable, high-impact research
* Push the boundaries of VLMs, foundation models, and agentic architectures by designing novel approaches to product identity, relationship inference, and catalog understanding, where the problem complexity (billions of products, multimodal signals, inherent ambiguity) demands methods that don't yet exist
* Advance the science of efficient model deployment, developing distillation, compression, and LLM/VLM serving optimization strategies that preserve frontier-level multimodal reasoning in compact, production-grade architectures while dramatically reducing latency, cost, and infrastructure footprint at billion-product scale
* Make frontier models reliable, advancing uncertainty calibration, confidence estimation, and interpretability methods so that frontier-scale GenAI systems can be trusted for autonomous catalog decisions impacting millions of customers daily
* Own the full research lifecycle from problem formulation through production deployment: design rigorous experiments over petabytes of multimodal data, iterate on ideas rapidly, and see your research directly improve the shopping experience for hundreds of millions of customers
* Shape the team's research vision by defining technical roadmaps that balance foundational scientific inquiry with measurable product impact
* Mentor scientists and engineers on advanced ML techniques, experimental design, and scientific rigor, building deep organizational capability in GenAI and multimodal AI
* Represent the team in the broader science community by publishing findings, delivering tech talks, and staying at the forefront of GenAI, VLM, and agentic system research
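The product-identity problem described above, i.e. deciding whether two catalog entries refer to the same product, can be illustrated with a deliberately crude text-only sketch. All names here are hypothetical, and character n-gram Jaccard similarity stands in for the learned multimodal embeddings the posting actually refers to.

```python
def ngrams(text: str, n: int = 3) -> set[str]:
    """Character n-grams of a normalized product title."""
    t = text.lower().strip()
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def title_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of character n-gram sets; a crude
    stand-in for similarity between learned embeddings."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

def same_product(a: str, b: str, threshold: float = 0.6) -> bool:
    """Toy identity decision: titles above a similarity threshold
    are treated as the same product."""
    return title_similarity(a, b) >= threshold

print(same_product("Acme Wireless Mouse M100",
                   "ACME wireless mouse m100"))  # True
```

A production system would replace this with multimodal models over titles, images, and structured attributes, plus the calibrated confidence estimates the responsibilities list calls out, since a fixed string-similarity threshold fails exactly in the ambiguous, noisy cases the posting describes.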