A quick guide to Amazon's 65-plus papers at this year's ACL

Familiar topics such as question answering and natural-language understanding remain well represented, but a new concentration on language modeling and multimodal models reflects the spread of generative AI.

Between the main conference and the ACL Findings, Amazon researchers have more than 65 papers at this year's meeting of the Association for Computational Linguistics (ACL).

Automatic speech recognition

Masked audio text encoders are effective multi-modal rescorers*
Jason Cai, Monica Sunkara, Xilai Li, Anshu Bhatia, Xiao Pan, Sravan Bodapati

Code generation

A static evaluation of code completion by large language models
Hantian Ding, Varun Kumar, Yuchen Tian, Zijian Wang, Rob Kwiatkowski, Xiaopeng Li, Murali Krishna Ramanathan, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang

Multitask pretraining with structured knowledge for text-to-SQL generation
Robert Giaquinto, Dejiao Zhang, Benjamin Kleiner, Yang Li, Ming Tan, Parminder Bhatia, Ramesh Nallapati, Xiaofei Ma

Code switching

Code-switched text synthesis in unseen language pairs*
I-Hung Hsu, Avik Ray, Shubham Garg, Nanyun Peng, Jing Huang

CoMix: Guide transformers to code-mix using POS structure and phonetics*
Gaurav Arora, Srujana Merugu, Vivek Sembium

Continual learning

Characterizing and measuring linguistic dataset drift
Tyler A. Chang, Kishaloy Halder, Neha Anna John, Yogarshi Vyas, Yassine Benajiba, Miguel Ballesteros, Dan Roth

Data-/table-to-text applications

An inner table retriever for robust table question answering
Weizhe Lin, Rexhina Blloshmi, Bill Byrne, Adrià de Gispert, Gonzalo Iglesias

Few-shot data-to-text generation via unified representation and multi-source learning
Alexander Hanbo Li, Mingyue Shang, Evangelia Spiliopoulou, Jie Ma, Patrick Ng, Zhiguo Wang, Bonan Min, William Wang, Kathleen McKeown, Vittorio Castelli, Dan Roth, Bing Xiang

Improving cross-task generalization of unified table-to-text models with compositional task configurations*
Jifan Chen, Yuhao Zhang, Lan Liu, Rui Dong, Xinchi Chen, Patrick Ng, William Wang, Zhiheng Huang

LI-RAGE: Late interaction retrieval augmented generation with explicit signals for open-domain table question answering
Weizhe Lin, Rexhina Blloshmi, Bill Byrne, Adrià de Gispert, Gonzalo Iglesias

Dialogue

Diable: Efficient dialogue state tracking as operations on tables*
Pietro Lesci, Yoshinari Fujinuma, Momchil Hardalov, Chao Shang, Lluis Marquez

NatCS: Eliciting natural customer support dialogues
James Gung, Emily Moeng, Wesley Rose, Arshit Gupta, Yi Zhang, Saab Mansour

Schema-guided user satisfaction modeling for task-oriented dialogues
Yue Feng, Yunlong Jiao, Animesh Prasad, Nikolaos Aletras, Emine Yilmaz, Gabriella Kazai

Toward more accurate and generalizable evaluation metrics for task-oriented dialogs
Abi Komma, Nagesh Panyam, Timothy Leffel, Anuj Goyal, Angeliki Metallinou, Spyros Matsoukas, Aram Galstyan

Explainable AI

Efficient Shapley values estimation by amortization for text classification
Alan Yang, Fan Yin, He He, Kai-Wei Chang, Xiaofei Ma, Bing Xiang

Few shot rationale generation using self-training with dual teachers*
Aditya Srikanth Veerubhotla, Lahari Poddar, Jun Yin, Gyuri Szarvas, Sharanya Eswaran

Information extraction

An AMR-based link prediction approach for document-level event argument extraction
Yuqing Yang, Qipeng Guo, Xiangkun Hu, Yue Zhang, Zheng Zhang

AVEN-GR: Attribute value extraction and normalization using product graphs
Donato Crisostomi, Thomas Ricatte

Large scale generative multimodal attribute extraction for e-commerce attributes
Anant Khandelwal, Happy Mittal, Shreyas Sunil Kulkarni, Deepak Gupta

ParaAMR: A large-scale syntactically diverse paraphrase dataset by AMR back-translation
Kuan-Hao Huang, Varun Iyer, I-Hung Hsu, Anoop Kumar, Kai-Wei Chang, Aram Galstyan

Weakly supervised hierarchical multi-task classification of customer questions
Jitenkumar Rana, Promod Yenigalla, Chetan Aggarwal, Sandeep Mukku, Manan Soni, Rashmi Patange

WebIE: Faithful and robust information extraction on the web
Chenxi Whitehouse, Clara Vania, Alham Fikri Aji, Christos Christodoulopoulos, Andrea Pierleoni

Information retrieval

CUPID: Curriculum learning based real-time prediction using distillation
Arindam Bhattacharya, Ankith M S, Ankit Gandhi, Vijay Huddar, Atul Saroop, Rahul Bhagat

Direct fact retrieval from knowledge graphs without entity linking
Jinheon Baek, Alham Fikri Aji, Jens Lehmann, Sung Ju Hwang

Language modeling

Adaptation approaches for nearest neighbor language models*
Rishabh Bhardwaj, George Polovets, Monica Sunkara

CONTRACLM: Contrastive learning for causal language model
Nihal Jain, Dejiao Zhang, Wasi Ahmad, Zijian Wang, Feng Nan, Xiaopeng Li, Ming Tan, Baishakhi Ray, Parminder Bhatia, Xiaofei Ma, Ramesh Nallapati, Bing Xiang

Controlled text generation with hidden representation transformations*
Vaibhav Kumar, Hana Koorehdavoudi, Masud Moshtaghi, Amita Misra, Ankit Chadha, Emilio Ferrara

KILM: Knowledge injection into encoder-decoder language models
Yan Xu, Mahdi Namazifar, Devamanyu Hazarika, Aishwarya Padmakumar, Yang Liu, Dilek Hakkani-Tür

ReAugKD: Retrieval-augmented knowledge distillation for pre-trained language models
Jianyi Zhang, Aashiq Muhamed, Aditya Anantharaman, Guoyin Wang, Changyou Chen, Kai Zhong, Qingjun Cui, Yi Xu, Belinda Zeng, Trishul Chilimbi, Yiran Chen

Recipes for sequential pre-training of multilingual encoder and seq2seq models*
Saleh Soltan, Andy Rosenbaum, Tobias Falke, Qin Lu, Anna Rumshisky, Wael Hamza

Rethinking the role of scale for in-context learning: An interpretability-based case study at 66 billion scale
Hritik Bansal, Karthik Gopalakrishnan, Saket Dingliwal, Sravan Bodapati, Katrin Kirchhoff, Dan Roth

Machine learning

Mitigating the burden of redundant datasets via batch-wise unique samples and frequency-aware losses
Donato Crisostomi, Andrea Caciolai, Alessandro Pedrani, Alessandro Manzotti, Enrico Palumbo, Kay Rottmann, Davide Bernardi

Machine translation

RAMP: Retrieval and attribute-marking enhanced prompting for attribute-controlled translation
Gabriele Sarti, Phu Mon Htut, Xing Niu, Benjamin Hsu, Anna Currey, Georgiana Dinu, Maria Nădejde

Multimodal models

Benchmarking diverse-modal entity linking with generative models*
Sijia Wang, Alexander Li, Henry Zhu, Sheng Zhang, Pramuditha Perera, Chung-Wei Hang, Jie Ma, William Wang, Zhiguo Wang, Vittorio Castelli, Bing Xiang, Patrick Ng

Generate then select: Open-ended visual question answering guided by world knowledge*
Xingyu Fu, Sheng Zhang, Gukyeong Kwon, Pramuditha Perera, Henry Zhu, Yuhao Zhang, Alexander Hanbo Li, William Wang, Zhiguo Wang, Vittorio Castelli, Patrick Ng, Dan Roth, Bing Xiang

KG-FLIP: Knowledge-guided fashion-domain language-image pre-training for e-commerce
Qinjin Jia, Yang Liu, Shaoyuan Xu, Huidong Liu, Daoping Wu, Jinmiao Fu, Roland Vollgraf, Bryan Wang

Resolving ambiguities in text-to-image generative models
Ninareh Mehrabi, Palash Goyal, Apurv Verma, Jwala Dhamala, Varun Kumar, Qian Hu, Kai-Wei Chang, Richard Zemel, Aram Galstyan, Rahul Gupta

Translation-enhanced multilingual text-to-image generation
Yaoyiran Li, Ching-Yun (Frannie) Chang, Stephen Rawls, Ivan Vulić, Anna Korhonen

Unsupervised melody-to-lyric generation
Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Chenyang Tao, Gunnar Sigurdsson, Wenbo Zhao, Tagyoung Chung, Jing Huang, Violet Peng

Natural-language processing

Multi-VALUE: A framework for cross-dialectal English NLP
Caleb Ziems, William Held, Jingfeng Yang, Jwala Dhamala, Rahul Gupta, Diyi Yang

vONTSS: vMF based semi-supervised neural topic modeling with optimal transport*
Weijie Xu, Xiaoyu Jiang, Srinivasan Sengamedu (SHS), Francis Iannacci, Jinjin Zhao

Natural-language understanding

ECG-QALM: Entity-controlled synthetic text generation using contextual Q&A for NER*
Karan Aggarwal, Henry Jin, Aitzaz Ahmad

Entity contrastive learning in a large-scale virtual assistant system
Jonathan Rubin, Jason Crowley, George Leung, Morteza Ziyadi, Maria Minakova

EPIC: Multi-perspective annotation of a corpus of irony
Simona Frenda, Alessandro Pedrani, Valerio Basile, Soda Marem Lo, Alessandra Teresa Cignarella, Raffaella Panizzon, Cristina Marco, Bianca Scarlini, Viviana Patti, Cristina Bosco, Davide Bernardi

Measuring and mitigating local instability in deep neural networks*
Arghya Datta, Subhrangshu Nandi, Jingcheng Xu, Greg Ver Steeg, He Xie, Anoop Kumar, Aram Galstyan

Reducing cohort bias in natural language understanding systems with targeted self-training scheme
Thu Le, Gabriela Cortes Hernandez, Bei Chen, Melanie Bradford

Privacy

Controlling the extraction of memorized data from large language models via prompt-tuning
Mustafa Ozdayi, Charith Peris, Jack G. M. FitzGerald, Christophe Dupuy, Jimit Majmudar, Haidar Khan, Rahil Parikh, Rahul Gupta

Query rewriting

Context-aware query rewriting for improving users’ search experience on e-commerce websites
Simiao Zuo, Qingyu Yin, Haoming Jiang, Shaohui Xi, Bing Yin, Chao Zhang, Tuo Zhao

Unified contextual query rewriting
Yingxue Zhou, Jie Hao, Mukund Rungta, Yang Liu, Eunah Cho, Xing Fan, Yanbin Lu, Vishal Vasudevan, Kellen Gillespie, Zeynab Raeesy, Sawyer Shen, Edward Guo, Gokhan Tur

Question answering

Accurate training of web-based question answering systems with feedback from ranked users
Liang Wang, Ivano Lauriola, Alessandro Moschitti

Context-aware transformer pre-training for answer sentence selection
Luca Di Liello, Siddhant Garg, Alessandro Moschitti

Cross-lingual knowledge distillation for answer sentence selection in low-resource languages*
Shivanshu Gupta, Yoshitomo Matsubara, Ankit Chadha, Alessandro Moschitti

Exploiting abstract meaning representation for open-domain question answering*
Cunxiang Wang, Zhikun Xu, Qipeng Guo, Xiangkun Hu, Xuefeng Bai, Zheng Zhang, Yue Zhang

Hybrid hierarchical retrieval for open-domain question answering*
Manoj Ghuhan Arivazhagan, Lan Liu, Peng Qi, Xinchi Chen, William Wang, Zhiheng Huang

Learning answer generation using supervision from automatic question answering evaluators
Matteo Gabburo, Siddhant Garg, Rik Koncel-Kedziorski, Alessandro Moschitti

RobustQA: Benchmarking the robustness of domain adaptation for open-domain question answering*
Rujun Han, Peng Qi, Yuhao Zhang, Lan Liu, Juliette Burger, William Wang, Zhiheng Huang, Bing Xiang, Dan Roth

Reasoning

FolkScope: Intention knowledge graph construction for e-commerce commonsense discovery*
Changlong Yu, Weiqi Wang, Xin Liu, Jiaxin Bai, Yangqiu Song, Zheng Li, Yifan Gao, Tianyu Cao, Bing Yin

SCOTT: Self-consistent chain-of-thought distillation
Peifeng Wang, Zhengyang Wang, Zheng Li, Yifan Gao, Bing Yin, Xiang Ren

Self-learning

Constrained policy optimization for controlled self-learning in conversational AI systems
Mohammad Kachuee, Sungjin Lee

Scalable and safe remediation of defective actions in self-learning conversational systems
Sarthak Ahuja, Mohammad Kachuee, Fateme Sheikholeslami, Weiqing Liu, Jae Do

Semantic parsing

An empirical analysis of leveraging knowledge for low-resource task-oriented semantic parsing*
Mayank Kulkarni, Aoxiao Zhong, Nicolas Guenon Des Mesnards, Sahar Movaghati, Mukund Harakere, He Xie, Jianhua Lu

XSemPLR: Cross-lingual semantic parsing in multiple natural languages and meaning representations
Yusen Zhang, Jun Wang, Zhiguo Wang, Rui Zhang

Spoken-language understanding

Regression-free model updates for spoken language understanding
Andrea Caciolai, Verena Weber, Tobias Falke, Alessandro Pedrani, Davide Bernardi

Sharing encoder representations across languages, domains and tasks in large-scale spoken language understanding
Jonathan Hueser, Judith Gaspers, Thomas Gueudre, Chandana Satya Prakash, Jin Cao, Daniil Sorokin, Quynh Do, Nicolas Anastassacos, Tobias Falke, Turan Gojayev, Mariusz Momotko, Denis Romasanta Rodriguez, Austin Doolittle, Kartik Balasubramaniam, Wael Hamza, Fabian Triefenbach, Patrick Lehnen

Toxic-language classification

QCon at SemEval-2023 Task 10: Data augmentation and model ensembling for detection of online sexism
Wes Feely, Prabhakar Gupta, Manas Mohanty, Tim Chon, Tuhin Kundu, Vijit Singh, Sandeep Atluri, Tanya Roosta, Viviane Ghaderi, Peter Schulam, Heba Elfardy

Towards building a robust toxicity predictor
Dmitriy Bespalov, Sourav Bhabesh, Yi Xiang, Yanjun (Jane) Qi

*Accepted to ACL Findings

Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.