The 10 most viewed publications of 2024

From cloud databases and anomaly detection on graphs to recession prediction and Amazon's new Nova foundation models, these are the most viewed publications authored by Amazon scientists and collaborators in 2024.

  1. Applications of large-scale knowledge graphs in e-commerce platforms can improve customers' shopping experiences. While existing e-commerce knowledge graphs (KGs) integrate a large volume of concepts or product attributes, they fail to discover user intentions, leaving out important information about how people think, behave, and interact with the surrounding world.

    In this work, we present COSMO, a scalable system to mine user-centric commonsense knowledge from behavior data and construct industry-scale knowledge graphs to empower diverse online services. In particular, we describe a pipeline for collecting high-quality seed knowledge assertions that are distilled from large language models (LLMs) and further refined by critic classifiers trained over human-in-the-loop annotated data.

    Read the rest of the abstract and download the paper
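
    For a flavor of what such a pipeline looks like, here is a minimal sketch, assuming an LLM proposes candidate intention assertions from behavior data and a critic classifier trained on annotated examples filters them. The `call_llm` stub, the toy behavior records, and the labels are hypothetical stand-ins, not COSMO's actual components.

    ```python
    # Illustrative sketch only -- not COSMO's actual pipeline or interfaces.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline


    def call_llm(prompt: str) -> list[str]:
        """Placeholder for an LLM call that distills candidate intention assertions."""
        return [
            "customers who buy camping tents intend to go camping",
            "customers who buy camping tents intend to buy a tent",  # vacuous
        ]


    # 1) Distill seed assertions from behavior data with an LLM.
    behavior = [("camping tent", "sleeping bag"), ("running shoes", "energy gel")]
    candidates = []
    for query, co_purchase in behavior:
        prompt = f"Why might a customer who searched '{query}' also buy '{co_purchase}'?"
        candidates.extend(call_llm(prompt))

    # 2) Train a critic classifier on human-annotated assertions (toy labels here)
    #    and keep only the candidates it judges plausible and non-trivial.
    annotated = [
        ("customers who buy diapers intend to care for a baby", 1),
        ("customers who buy diapers intend to buy diapers", 0),
    ]
    texts, labels = zip(*annotated)
    critic = make_pipeline(TfidfVectorizer(), LogisticRegression())
    critic.fit(texts, labels)

    kept = [c for c in candidates if critic.predict([c])[0] == 1]
    print(kept)
    ```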

  2. Amazon MemoryDB for Redis is a database service designed for 11 9s of durability with in-memory performance. In this paper, we describe the architecture of MemoryDB and how we leverage open-source Redis, a popular data structure store, to build an enterprise-grade cloud database. MemoryDB offloads durability concerns to a separate low-latency, durable transaction log service, allowing us to scale performance, availability, and durability independently from the in-memory execution engine. We describe how, using this architecture, we are able to remain fully compatible with Redis, while providing single-digit millisecond write and microsecond-scale read latencies, strong consistency, and high availability. MemoryDB launched in 2021.

    Read and download the paper
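
    A minimal sketch of the durability-offloading idea follows, assuming a write is acknowledged only after a separate transaction log accepts it, and reads are served straight from memory. The classes and method names are hypothetical stand-ins, not MemoryDB's actual interfaces.

    ```python
    # Illustrative sketch of offloading durability to a separate transaction log.
    # The components below are hypothetical stand-ins, not MemoryDB's actual APIs.
    import time


    class DurableLogClient:
        """Stand-in for a low-latency, durable transaction log service."""

        def __init__(self):
            self._entries = []

        def append(self, entry: dict) -> int:
            # In the real system the call returns only once the entry is durable;
            # here we simply record it locally.
            self._entries.append(entry)
            return len(self._entries) - 1  # log sequence number


    class InMemoryStore:
        """Stand-in for the in-memory execution engine (Redis-compatible in MemoryDB)."""

        def __init__(self, log: DurableLogClient):
            self._data = {}
            self._log = log

        def set(self, key: str, value: str) -> int:
            # Durability first: acknowledge the write only after the log accepts it.
            lsn = self._log.append({"op": "SET", "key": key, "value": value, "ts": time.time()})
            self._data[key] = value  # then apply to memory
            return lsn

        def get(self, key: str) -> str | None:
            # Reads are served directly from memory.
            return self._data.get(key)


    store = InMemoryStore(DurableLogClient())
    store.set("session:42", "active")
    print(store.get("session:42"))
    ```

    Because the log append precedes the in-memory apply, state can be reconstructed by replaying the log, which is what lets durability and availability be scaled independently of the execution engine, as the abstract notes.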

  3. We introduce a text-to-speech (TTS) model called BASE TTS, which stands for Big Adaptive Streamable TTS with Emergent abilities. BASE TTS is the largest TTS model to date, trained on 100K hours of public-domain speech data, achieving a new state of the art in speech naturalness. It deploys a one-billion-parameter autoregressive transformer that converts raw texts into discrete codes ("speechcodes"), followed by a convolution-based decoder that converts these speechcodes into waveforms in an incremental, streamable manner. Further, our speechcodes are built using a novel speech tokenization technique that features speaker ID disentanglement and compression with byte-pair encoding. Echoing the widely reported "emergent abilities" of large language models when trained on increasing volumes of data, we show that BASE TTS variants built with 10K+ hours and 500M+ parameters begin to demonstrate natural prosody on textually complex sentences.

    Read the rest of the abstract and download the paper
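
    The sketch below illustrates the two-stage, streamable design the abstract describes: an autoregressive model emits discrete speechcodes, and a decoder converts them into waveform chunks incrementally, so playback can begin before the full utterance is generated. Both models are random stubs, not the actual BASE TTS networks, and the constants are assumptions.

    ```python
    # Illustrative sketch of a two-stage, streamable TTS design. Both stages are
    # random stubs standing in for the real networks; constants are assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    CODEBOOK_SIZE = 1024       # size of the discrete speechcode vocabulary (assumed)
    SAMPLES_PER_CODE = 320     # waveform samples produced per speechcode (assumed)


    def autoregressive_speechcodes(text: str, max_codes: int = 64):
        """Stub for the autoregressive transformer: yields one speechcode at a time."""
        for _ in range(min(max_codes, 4 * len(text))):
            yield int(rng.integers(CODEBOOK_SIZE))


    def decode_chunk(codes: list[int]) -> np.ndarray:
        """Stub for the convolution-based decoder: maps a block of codes to audio."""
        return rng.standard_normal(len(codes) * SAMPLES_PER_CODE).astype(np.float32)


    def stream_tts(text: str, chunk_codes: int = 8):
        """Emit audio chunks as soon as enough speechcodes are available."""
        buffer: list[int] = []
        for code in autoregressive_speechcodes(text):
            buffer.append(code)
            if len(buffer) == chunk_codes:
                yield decode_chunk(buffer)   # playback can start here,
                buffer = []                  # before the full sentence is generated
        if buffer:
            yield decode_chunk(buffer)


    for i, audio in enumerate(stream_tts("Hello from a streamable TTS sketch.")):
        print(f"chunk {i}: {audio.shape[0]} samples")
    ```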

  4. Amazon Aurora Serverless is an on-demand, autoscaling configuration for Amazon Aurora with full MySQL and PostgreSQL compatibility. It automatically offers capacity scale-up/-down (i.e., vertical scaling) based on a customer database application’s needs. In this manner, it relieves customers of the need to explicitly manage database capacity; customers need only specify minimum and maximum bounds using a simple-to-understand multi-resource capacity abstraction called the Aurora Capacity Unit (ACU). For customers with time-varying workloads, it offers cost savings compared to provisioned Aurora or other alternatives due to its agile and granular scaling and its usage-based charging model.

    This paper describes the key ideas underlying Aurora Serverless’s resource management. To help meet its goals, Aurora Serverless adapts and fine-tunes well-established ideas related to resource oversubscription; reactive control informed by recent measurements; distributed and hierarchical decision-making; and innovations in the DB engine, OS, and hypervisor for efficiency. Perhaps the most challenging goal is to offer a consistent resource elasticity experience while operating hosts at high degrees of utilization. Aurora Serverless implements several novel ideas for striking a balance between these opposing needs.

    Read the rest of the abstract and download the paper
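
    As a rough illustration of reactive vertical scaling between customer-specified bounds, here is a sketch of a control loop that adjusts capacity based on recent utilization measurements. The watermarks, step size, and policy are assumptions for illustration, not Aurora Serverless's actual resource manager.

    ```python
    # Illustrative sketch of reactive vertical scaling between ACU bounds.
    # The policy and constants are assumptions, not Aurora Serverless's internals.
    from dataclasses import dataclass


    @dataclass
    class ScalingConfig:
        min_acu: float = 0.5
        max_acu: float = 16.0
        step: float = 0.5              # smallest capacity adjustment
        high_watermark: float = 0.70   # scale up above this utilization
        low_watermark: float = 0.30    # scale down below this utilization


    def next_capacity(current_acu: float, recent_utilization: list[float], cfg: ScalingConfig) -> float:
        """Pick the next capacity from a window of recent utilization samples (0..1)."""
        if not recent_utilization:
            return current_acu
        load = sum(recent_utilization) / len(recent_utilization)
        if load > cfg.high_watermark:
            target = current_acu + cfg.step
        elif load < cfg.low_watermark:
            target = current_acu - cfg.step
        else:
            target = current_acu
        return min(cfg.max_acu, max(cfg.min_acu, target))


    cfg = ScalingConfig(min_acu=1.0, max_acu=8.0)
    capacity = 2.0
    for window in ([0.9, 0.85, 0.8], [0.5, 0.4, 0.45], [0.1, 0.05, 0.2]):
        capacity = next_capacity(capacity, window, cfg)
        print(f"utilization {window} -> {capacity} ACUs")
    ```

    The paper describes decision-making as distributed and hierarchical and combined with host-level oversubscription; the single loop above only illustrates the reactive, measurement-driven piece.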

  5. Debugging a performance issue in databases is notoriously hard. Wouldn’t it be convenient if there were an oracle or a copilot for every database system, which users could query in natural language — "What’s wrong?", or even better, "How do we fix it?" Large language models (LLMs) would seem to be a natural surrogate for such an oracle given their ability to answer a wide range of questions by efficiently encoding a vast amount of knowledge from, e.g., a major chunk of the internet. However, prompting LLMs with database performance queries often results in "technically correct" but highly "vague" or "generic" recommendations that experienced database engineers (DBEs) typically find useless or untrustworthy.

    In this work we propose Panda, a framework to provide context grounding to pretrained LLMs in order to generate more "useful" and "in-context" troubleshooting recommendations. Panda draws inspiration from the way experienced DBEs perform debugging and puts a system in place with the components necessary to robustly deploy pretrained LLMs in production for debugging.

    Read the rest of the abstract and download the paper
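
    Here is a minimal sketch of the context-grounding idea: rather than sending the raw question to the LLM, attach recent telemetry and retrieved troubleshooting notes for the affected database. The helper functions, metric names, and `call_llm` client are hypothetical stand-ins, not Panda's actual components.

    ```python
    # Illustrative sketch of context grounding for database troubleshooting.
    # All helpers and names are hypothetical stand-ins, not Panda's components.
    def fetch_recent_metrics(db_id: str) -> dict:
        """Stand-in for pulling telemetry for the affected database."""
        return {"cpu_pct": 92, "buffer_cache_hit_pct": 61, "top_wait_event": "IO:DataFileRead"}


    def retrieve_docs(question: str) -> list[str]:
        """Stand-in for retrieving relevant troubleshooting knowledge."""
        return ["A low buffer cache hit ratio with IO waits often indicates an undersized working set."]


    def call_llm(prompt: str) -> str:
        """Placeholder for any LLM client."""
        return "(model response)"


    def grounded_recommendation(db_id: str, question: str) -> str:
        metrics = fetch_recent_metrics(db_id)
        docs = retrieve_docs(question)
        prompt = (
            "You are assisting a database engineer.\n"
            f"Question: {question}\n"
            f"Recent telemetry for {db_id}: {metrics}\n"
            "Relevant notes:\n- " + "\n- ".join(docs) + "\n"
            "Ground your recommendation in the telemetry above and avoid generic advice."
        )
        return call_llm(prompt)


    print(grounded_recommendation("prod-db-1", "Why are my queries suddenly slow?"))
    ```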

  6. We present Amazon Nova, a new generation of state-of-the-art foundation models that deliver frontier intelligence and industry-leading price performance. Amazon Nova Pro is a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks. Amazon Nova Lite is a low-cost multimodal model that is lightning fast for processing images, video, documents and text. Amazon Nova Micro is a text-only model that delivers our lowest-latency responses at very low cost. Amazon Nova Canvas is an image generation model that creates professional-grade images with rich customization controls. Amazon Nova Reel is a video generation model offering high-quality outputs, customization, and motion control. Our models were built responsibly and with a commitment to customer trust, security, and reliability. We report benchmarking results for core capabilities, agentic performance, long context, functional adaptation, runtime performance, and human evaluation.

    Read and download the paper

  7. Database research and development is heavily influenced by benchmarks, such as the industry-standard TPC-H and TPC-DS for analytical systems. However, these 20-year-old benchmarks capture neither how databases are deployed nor what workloads modern cloud data warehouse systems face. In this paper, we summarize well-known, confirm suspected, and unearth novel discrepancies between TPC-H/DS and actual workloads using empirical data. We base our analysis on telemetry from Amazon Redshift, one of the largest cloud data warehouse deployments. Among other insights, we show how write-heavy data pipelines are prominent, workloads vary over time (in both load and type), queries are repetitive, and most properties of queries or workloads exhibit very long-tailed distributions. We conclude that data warehouse benchmarks, just like database systems, need to become more holistic and stop focusing solely on query engine performance. Finally, we publish a dataset containing query statistics for 200 randomly selected Redshift serverless and provisioned instances (each) over a three-month period, as a basis for building more-realistic benchmarks.

    Read and download the paper
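
    As an example of the kind of analysis the released dataset enables, the sketch below computes query repetitiveness, write share, and runtime tail heaviness with pandas. The column names are assumptions for illustration; consult the dataset's documentation for the actual schema.

    ```python
    # Illustrative workload analysis on toy data. Column names are assumed,
    # not the actual schema of the released Redshift dataset.
    import pandas as pd

    # Toy stand-in for one instance's three months of query statistics.
    df = pd.DataFrame(
        {
            "query_fingerprint": ["q1", "q1", "q2", "q1", "q3", "q2", "q4"],
            "query_type": ["select", "select", "insert", "select", "select", "insert", "copy"],
            "execution_duration_ms": [120, 110, 15_000, 95, 450, 14_000, 600_000],
        }
    )

    # How repetitive is the workload? (share of runs whose fingerprint repeats)
    counts = df["query_fingerprint"].value_counts()
    repetitive_share = df["query_fingerprint"].map(counts).gt(1).mean()

    # How heavy is the write side of the pipeline?
    write_share = df["query_type"].isin(["insert", "copy"]).mean()

    # How long is the runtime tail? Compare a high quantile to the median.
    p50, p99 = df["execution_duration_ms"].quantile([0.5, 0.99])

    print(f"repeated-query share: {repetitive_share:.0%}")
    print(f"write-statement share: {write_share:.0%}")
    print(f"p99/p50 runtime ratio: {p99 / p50:.0f}x")
    ```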

  8. We propose a simple yet robust framework to nowcast recession risk at a monthly frequency in both the United States and the Euro Area. Our nowcast leverages both macroeconomic and financial conditions and is available the first business day after the reference month closes. In particular, we argue that financial conditions are not only useful for predicting future downturns — as is emphasized in the existing literature — but also for distinguishing between expansions and downturns as they unfold. We then connect our recession risk nowcast with growth at risk by drawing on the literature on distributional regressions and quantile regressions. Finally, we benchmark our nowcast against the Survey of Professional Forecasters (SPF) and show that, while both have a similar ability to identify downturns, our nowcast is more accurate in correctly identifying periods of expansion.

    Read and download the paper
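
    The sketch below illustrates the two ingredients the abstract describes, on synthetic data: a logistic model that nowcasts recession risk from macroeconomic and financial conditions, and a quantile regression that links the same conditions to the lower tail of growth (growth at risk). The specification is an assumption for illustration, not the authors' actual model.

    ```python
    # Illustrative sketch on synthetic data; not the authors' specification.
    import numpy as np
    from sklearn.linear_model import LogisticRegression, QuantileRegressor

    rng = np.random.default_rng(0)
    n_months = 300

    # Synthetic monthly indicators: a macro activity index and a financial
    # conditions index (higher = tighter conditions).
    macro = rng.standard_normal(n_months)
    fci = rng.standard_normal(n_months)
    X = np.column_stack([macro, fci])

    # Synthetic outcomes: downturns are more likely when activity is weak and
    # financial conditions are tight; growth moves the opposite way.
    recession = (rng.random(n_months) < 1 / (1 + np.exp(2 * macro - 1.5 * fci))).astype(int)
    growth = 2.0 + 1.5 * macro - 1.0 * fci + rng.standard_normal(n_months)

    # 1) Recession-risk nowcast for the latest month.
    risk_model = LogisticRegression().fit(X, recession)
    latest = X[-1].reshape(1, -1)
    print("recession probability:", round(risk_model.predict_proba(latest)[0, 1], 2))

    # 2) Growth at risk: the 5th percentile of growth conditional on conditions.
    gar_model = QuantileRegressor(quantile=0.05, alpha=0.0).fit(X, growth)
    print("5th-percentile growth:", round(gar_model.predict(latest)[0], 2))
    ```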

  9. Anomaly detection on graphs focuses on identifying irregular patterns or nodes within graph-structured data that deviate significantly from the norm. The technique is important due to its wide applicability in fields such as spam detection, anti-money-laundering, and network security. Two challenges to the application of anomaly detection on graphs are label imbalance and data insufficiency. The recent proliferation of generative models, especially diffusion models, suggests a solution. In this paper, we introduce a graph diffusion model in latent space, designed to alleviate the label imbalance problem. The proposed model is capable of multitask generation of graph structures and node features and demonstrates conditional generative capabilities, mitigating label imbalance by producing only positive examples. We apply the diffusion model to both homogeneous graphs and heterogeneous graphs. Through extensive experiments, we demonstrate that our proposed method offers notable improvements over conventional techniques.

    Read and download the paper
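
    To illustrate the rebalancing workflow, the sketch below fits a simple generative model to the scarce positive class in a latent space, samples synthetic positives, and trains a detector on the augmented data. A multivariate Gaussian stands in for the paper's conditional latent graph diffusion model, and the node embeddings are synthetic.

    ```python
    # Illustrative sketch of label rebalancing via generated positives. A Gaussian
    # stands in for the paper's latent graph diffusion model; data are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Imbalanced latent node embeddings: 500 normal nodes, 10 anomalous nodes.
    normal = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
    anomalous = rng.normal(loc=2.0, scale=1.0, size=(10, 8))

    # Stand-in generator: fit a Gaussian to the scarce positive class and sample
    # synthetic positives (the paper uses a conditional latent diffusion model).
    mu = anomalous.mean(axis=0)
    cov = np.cov(anomalous, rowvar=False) + 1e-3 * np.eye(8)
    synthetic_positives = rng.multivariate_normal(mu, cov, size=200)

    # Train the detector on the augmented, far less imbalanced dataset.
    X = np.vstack([normal, anomalous, synthetic_positives])
    y = np.concatenate([np.zeros(len(normal)), np.ones(len(anomalous) + len(synthetic_positives))])
    detector = LogisticRegression(max_iter=1000).fit(X, y)

    held_out = np.vstack([rng.normal(0.0, 1.0, (5, 8)), rng.normal(2.0, 1.0, (5, 8))])
    print("anomaly scores:", detector.predict_proba(held_out)[:, 1].round(2))
    ```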

  10. Recent breakthroughs in large language models (LLMs) have facilitated rigorous exploration of their application in diverse tasks related to tabular-data modeling, such as prediction, tabular-data synthesis, question answering, and table understanding. Each task presents unique challenges and opportunities. However, there has been no comprehensive review that summarizes and compares the key techniques, metrics, datasets, models, and optimization approaches in this research domain. This survey aims to close this gap by consolidating recent progress in these areas, offering a thorough survey and taxonomy of the datasets, metrics, and methodologies utilized.

    Read the rest of the abstract and download the paper
