Improving LLM pretraining with better data organization

“Best-fit packing” adapts bin-packing to avoid unnecessary truncation of training documents, improving LLM performance across a wide range of tasks and reducing hallucination.

The documents used to train a large language model (LLM) are typically concatenated to form a single “superdocument”, which is then divided into sequences that match the model's context length. This improves training efficiency but often results in unnecessary truncations, where individual documents are broken up across successive sequences.
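As a rough sketch in Python (with illustrative names of our own; documents here are simply lists of token IDs), the standard approach looks like this:

```python
def concat_and_chunk(documents, context_size):
    """Baseline: concatenate all documents into one long token stream (the "superdocument"),
    then cut it into fixed-length training sequences. Document boundaries are ignored, so
    any document straddling a cut is truncated across two successive sequences."""
    stream = [token for doc in documents for token in doc]
    return [stream[i:i + context_size] for i in range(0, len(stream), context_size)]
```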


In a paper we’re presenting at this year’s International Conference on Machine Learning (ICML 2024), titled “Fewer truncations improve language modeling”, we report an in-depth study of this common concatenation-chunking document-processing method. We found that it severely impairs the model's ability to understand contextual coherence and factual consistency. This not only affects the model's performance on downstream tasks but also increases the risk of hallucination.

To address this issue, we propose best-fit packing, an innovative document-processing strategy that optimizes document combinations to eliminate unnecessary text truncations. In experiments, we compared a model trained using best-fit packing to one trained in the ordinary way on six downstream tasks: reading comprehension, natural-language inference, context following, summarization, commonsense and closed-book question answering, and program synthesis. We found that best-fit packing monotonically improves performance on an array of 22 sub-tasks, by as much as 15% (program synthesis) and 17% (context following). Importantly, best-fit packing also effectively reduces closed-domain hallucination, by up to 58.3%.

Best-fit packing.png
A comparison of best-fit packing (left), which seeks to minimize document truncation, with the standard approach to large-language-model training, which concatenates training documents and then divides them into fixed-length sequences.

Consequences of truncation

In the analysis reported in our paper, we identified several problems caused by document truncation, including undefined names, ungrounded content, and missing knowledge.


Undefined names: In programming languages like Python, truncation may separate definitions of variables from their invocations, introducing syntax errors and causing some variables to be undefined. As a consequence, the model may learn misleading patterns and possibly hallucinate on downstream tasks.

Ungrounded content: Truncation damages data integrity. In the example below, for instance, a reference (“the earthquake on Monday morning”) is separated from its antecedent, resulting in unfaithful generation.

Missing knowledge: Truncation hinders knowledge acquisition. In the example below, the model cannot learn the location of the ICML conference because the conference name and location occur in different training sequences.

Truncation errors.png
Examples of three common truncation errors: (a) undefined names, (b) ungrounded content, and (c) missing knowledge.

Best-fit packing

To address this issue, we propose optimizing the assignment of documents to training sequences so as to eliminate unnecessary truncations, while minimally increasing the number of sequences relative to concatenation. This is a variation of the well-known bin-packing problem, which is NP-hard in general, but we use a heuristic called the best-fit-decreasing (BFD) algorithm that tends to work well in practice. We thus call our method best-fit packing.

The standard implementation of BFD has quasi-linear time complexity, which is not efficient enough for LLM pretraining, where datasets typically contain millions of documents. By taking advantage of the unique nature of pretraining data, however, we were able to optimize BFD so that it scales linearly with data size, ensuring its applicability to large-scale pretraining datasets. Further, we show that in practical applications, best-fit packing generates approximately the same number of training sequences as the traditional method, while significantly reducing data loss caused by truncation.

Truncations per document.png
Truncations per document as a function of document length, for both best-fit packing (pack) and concatenation (concat), for natural-language data (top) and programming-language data (bottom). The natural-language data is evaluated with context lengths of both 2,000 and 8,000.

Curious how we achieve this? Let’s dive deep!

Best-fit packing — an example

Following the standard bin-packing nomenclature, we call each training sequence a “bin”, and each bin has a capacity equal to the LLM’s context size. The goal is to assign a combination of whole documents to each bin so as to minimize the wasted bin capacity.

First, we divide any document that’s larger than the LLM context into context-length chunks, plus a remainder. Then we sort the documents (and document fragments) from largest to smallest. Finally, we work our way down the sorted list, assigning each document to the bin whose available space is as close to the document size as possible.
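As a rough illustration, here is a minimal, unoptimized Python sketch of that procedure (the function name best_fit_pack and the naive linear scan over bins are our own illustrative choices, not the optimized implementation described next):

```python
def best_fit_pack(doc_lengths, context_size):
    """Naive best-fit-decreasing packing of documents into training sequences ("bins")."""
    # Step 1: split any document longer than the context into full-length chunks plus a remainder.
    pieces = []
    for length in doc_lengths:
        while length > context_size:
            pieces.append(context_size)
            length -= context_size
        if length > 0:
            pieces.append(length)

    # Step 2: sort pieces from largest to smallest (the "decreasing" in best-fit decreasing).
    pieces.sort(reverse=True)

    # Step 3: place each piece in the bin whose remaining space fits it most tightly.
    bins, remaining = [], []              # parallel lists: bin contents and remaining capacities
    for piece in pieces:
        best = None
        for i, space in enumerate(remaining):
            if space >= piece and (best is None or space < remaining[best]):
                best = i
        if best is None:                  # nothing fits, so open a new bin
            bins.append([piece])
            remaining.append(context_size - piece)
        else:
            bins[best].append(piece)
            remaining[best] -= piece
    return bins
```

The naive scan over all bins in step 3 is what makes this version too slow at scale; the data structures described next replace it with a handful of tree operations per document.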


To maximize efficiency, we use three data structures to manage the assignment of documents to bins: a binary tree and two tables. This design works because (1) the maximum bin size is the model’s context size, so the tree won’t be too deep, and (2) the tree doesn’t need to distinguish between bins with the same remaining capacity, which keeps it simple. The tables, rather than the tree, map remaining capacities to specific bins.

Consider a simple example, in which the context size (the bin size) is eight. The binary tree has eight leaves, corresponding to the eight possibilities for available space in any given bin. (In a real LLM, the context size is on the order of thousands of tokens, so the tree would have thousands of leaves.)

Each parent node of the tree has an associated number, indicating the size of the largest available bin slot among its descendants. The number associated with the parent’s right child is always greater than or equal to the number associated with the left child.

Initially, the value of the rightmost node in each layer of the tree is eight, and all the other nodes have values of zero. This means that all the available bin slots have a capacity of eight.

Best-fit initialization.png
The initial states of the three data structures we use to implement best-fit packing. The rightmost node of each layer of the tree has a value of eight, and all other nodes have values of zero, indicating that all the bins are empty (i.e., are at maximum capacity).
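In code, this setup might look like the following sketch (illustrative Python with our own names, such as set_leaf and space_to_bins; the small, fixed pool of empty bins is just for the toy example, and the update sketch further below opens new bins on demand when nothing fits):

```python
from collections import defaultdict

CONTEXT_SIZE = 8                                   # bin capacity in the toy example
NUM_BINS = 5                                       # illustrative pool of empty bins

# Binary tree stored in a flat array (heap layout): node i has children 2*i and 2*i + 1,
# the root is at index 1, and index 0 is unused. The leaf for remaining capacity c sits at
# index CONTEXT_SIZE + c - 1 and holds c if at least one bin with that much free space
# exists (0 otherwise); every internal node holds the maximum over its descendant leaves.
tree = [0] * (2 * CONTEXT_SIZE)
space_to_bins = defaultdict(list)                  # remaining capacity -> ids of bins with that capacity
bin_contents = {b: [] for b in range(NUM_BINS)}    # bin id -> sizes of the documents packed into it

def set_leaf(capacity, value):
    """Write a leaf and propagate the new maximum up to the root."""
    i = CONTEXT_SIZE + capacity - 1
    tree[i] = value
    i //= 2
    while i >= 1:
        tree[i] = max(tree[2 * i], tree[2 * i + 1])
        i //= 2

# Initially every bin is empty, so only the capacity-8 leaf (and its ancestors) is nonzero.
space_to_bins[CONTEXT_SIZE] = list(range(NUM_BINS))
set_leaf(CONTEXT_SIZE, CONTEXT_SIZE)
```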

Now consider a later state, when four documents of size eight, six, six, and four have been packed. The two bins containing documents of size six have available slots of size two (8 – 6), and the bin containing a document of size four has an available slot of size four (8 – 4). These sizes are represented by the numbers two and four at leaves two and four of the tree. Multiple bins remain empty, so leaf eight has a value of eight, too.

Note that the value two at leaf two indicates only that at least one bin slot of size two is available; it doesn’t indicate how many such slots there are or where they can be found. That information is contained in the tables.

Tree after packing.png
The state of the data structures after four documents of sizes six, six, four, and eight have been packed.

Now consider a document of size three, which we wish to assign to a bin. To find the best available bin slot, simply go left at each node of the tree, unless going left leads to a node whose value is less than the document size, in which case, go right.

Document packing.png
Tree traversal identifies the available bin slot that best fits the new document.

The best fit for a document of size three is a slot of size four, and in the “space-to-bins” table, we see that there is one bin — bin three — with a slot of that size. So there we place the document.
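Continuing the sketch above, the traversal and table lookup might be written as follows (find_best_fit is our own illustrative name):

```python
def find_best_fit(doc_size):
    """Descend the tree to find the smallest available remaining capacity >= doc_size:
    go left unless the left subtree's best capacity is too small, in which case go right."""
    if tree[1] < doc_size:                         # the root holds the largest capacity anywhere
        return None                                # nothing fits
    i = 1
    while i < CONTEXT_SIZE:                        # indices below CONTEXT_SIZE are internal nodes
        i = 2 * i if tree[2 * i] >= doc_size else 2 * i + 1
    return i - CONTEXT_SIZE + 1                    # convert the leaf index back to a capacity

# For a document of size three, the traversal ends at the leaf for capacity four, and
# space_to_bins[4] tells us which bin has a slot of that size.
```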

Finally, we update all three data structures to reflect the new placement:

Data structure update.png
Data structure updates after the document (item four) of size three has been packed. The tree leaf corresponding to slot sizes of four is reset to zero, and the tree leaf corresponding to slot sizes of one is set to one. The tables are updated accordingly.
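A sketch of that update, again continuing the illustrative code above: the chosen bin leaves the size-four bucket and joins the size-one bucket, and a leaf is reset to zero only when no bin with that remaining capacity is left.

```python
def pack_document(doc_size):
    """Place one document and update the tree and both tables."""
    capacity = find_best_fit(doc_size)
    if capacity is None:                           # nothing fits: open a fresh, empty bin
        bin_id = len(bin_contents)
        bin_contents[bin_id] = []
        capacity = CONTEXT_SIZE
    else:
        bin_id = space_to_bins[capacity].pop()     # any bin with that much free space will do
        if not space_to_bins[capacity]:            # that was the last such bin,
            set_leaf(capacity, 0)                  # so clear the corresponding leaf
    bin_contents[bin_id].append(doc_size)
    new_capacity = capacity - doc_size
    if new_capacity > 0:                           # re-file the bin under its new remaining capacity
        space_to_bins[new_capacity].append(bin_id)
        set_leaf(new_capacity, new_capacity)
    return bin_id
```

Processing the sorted documents one by one with pack_document yields the full best-fit-decreasing packing. Because the tree’s depth depends only on the context length, not on the number of documents, each placement takes a bounded number of steps, consistent with the linear scaling described earlier.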

Results

To evaluate the effect of best-fit packing on downstream tasks, we pretrained models of 7 billion and 13 billion parameters with context lengths of 2,000 and 8,000 on text and code using both best-fit packing and concatenation. We then tested both sets of models on our six downstream tasks. On average, across multiple datasets, context lengths, and metrics, best-fit packing offered better performance on all six tasks. The biggest gains came in reading comprehension (+4.7%), natural-language inference (+9.3%), context following (+16.8%), and program synthesis (+15.0%).


We also found that best-fit packing helped prevent closed-domain hallucination, particularly in program synthesis tasks, where it reduced "undefined name" errors by up to 58.3%, indicating a more complete understanding of program structure and logic.

Additionally, models trained with best-fit packing were better at following instructions, such as adhering to length constraints. And best-fit packing helped the models acquire “tail knowledge” that is especially sensitive to truncation because of its scarcity in the training data. Indeed, this result suggests one possible reason why LLMs struggle to learn long-tail knowledge.

While the experiments conducted in our paper primarily focused on LLM pretraining, best-fit packing is broadly applicable to fine-tuning as well. Determining the benefits it can offer during fine-tuning is an intriguing topic for future study.
