Physics-constrained machine learning for scientific computing

Amazon researchers draw inspiration from finite-volume methods and adapt neural operators to enforce conservation laws and boundary conditions in deep-learning models of physical systems.

Commercial applications of deep learning have been making headlines for years — never more so than this spring. More surprisingly, deep-learning methods have also shown promise for scientific computing, where they can be used to predict solutions to partial differential equations (PDEs). These equations are often prohibitively expensive to solve numerically; data-driven methods thus have the potential to transform both scientific and engineering applications of scientific computing, including aerodynamics, ocean and climate science, and reservoir modeling.

A fundamental challenge is that deep-learning models trained on physical data typically ignore basic physical principles. Such models might, for instance, violate system conservation laws: the solution to a heat transfer problem may fail to conserve energy, or the solution to a fluid flow problem may fail to conserve mass. Similarly, a model’s solution may violate boundary conditions — say, allowing heat flow through an insulator at the boundary of a physical system. This can happen even when the model’s training data includes no such violations: at inference time, the model may simply extrapolate from patterns in the training data in a way that violates the physics.

In a pair of recent papers accepted at the International Conference on Machine Learning (ICML) and the International Conference on Learning Representations (ICLR), we investigate the problem of adding known physics constraints to the predictive outputs of machine learning (ML) models when computing the solutions to PDEs.


The ICML paper, “Learning physical models that can respect conservation laws”, which we will present in July, focuses on satisfying conservation laws with black-box models. We show that, for certain types of challenging PDE problems with propagating discontinuities, known as shocks, our approach to constraining model outputs works better than its predecessors: it more sharply and accurately captures the physical solution and its uncertainty and yields better performance on downstream tasks.

In this paper, we collaborated with Derek Hansen, a PhD student in the Department of Statistics at the University of Michigan, who was an intern at AWS AI Labs at the time, and Michael Mahoney, an Amazon Scholar in Amazon’s Supply Chain Optimization Technologies organization and a professor of statistics at the University of California, Berkeley.

In a complementary paper we presented at this year’s ICLR, “Guiding continuous operator learning through physics-based boundary constraints”, we, together with Nadim Saad, an AWS AI Labs intern at the time and a PhD student at the Institute for Computational and Mathematical Engineering (ICME) at Stanford University, focus on enforcing physics through boundary conditions. The modeling approach we describe in this paper is a so-called constrained neural operator, and it exhibits up to a 20-fold performance improvement over previous operator models.

So that scientists working with models of physical systems can benefit from our work, we’ve released the code for the models described in both papers (conservation laws | boundary constraints) on GitHub. We also presented both works in March 2023 at AAAI's symposium on Computational Approaches to Scientific Discovery.

Danielle Maddix Robinson on physics-constrained machine learning for scientific computing
A talk presented in April 2023 at the Machine Learning and Dynamical Systems Seminar at the Alan Turing Institute.

Conservation laws

Recent work in scientific machine learning (SciML) has focused on incorporating physical constraints into the learning process as part of the loss function. In other words, the physical information is treated as a soft constraint or regularization.


The main issue with these approaches is that they do not guarantee that the physical property of conservation is actually satisfied. To address this, in “Learning physical models that can respect conservation laws”, we propose ProbConserv, a framework for incorporating constraints into a generic SciML architecture. Instead of expressing conservation laws in the differential forms of PDEs, which are commonly used in SciML as extra terms in the loss function, ProbConserv converts them into their integral form. This allows us to use ideas from finite-volume methods to enforce conservation.

In finite-volume methods, a spatial domain — say, the region through which heat is propagating — is discretized into a finite set of smaller volumes called control volumes. The method maintains the balance of mass, energy, and momentum throughout this domain by applying the integral form of the conservation law locally across each control volume. Local conservation requires that the out-flux from one volume equals the in-flux to an adjacent volume. By enforcing the conservation law across each control volume, the finite-volume method guarantees global conservation across the whole domain, where the rate of change of the system’s total mass is given by the change in fluxes along the domain boundaries.

The integral form of a conservation law states that the rate of change of the total mass of the system over a domain (Ω) is equal to the difference between the in-flux and out-flux along the domain boundaries (∂Ω).
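To make the finite-volume bookkeeping concrete, here is a minimal 1-D sketch (the function names and the upwind flux choice are ours, not from the paper): each cell’s update depends only on the fluxes at its two faces, so interior fluxes cancel in pairs, and the total mass can change only through the fluxes at the domain boundaries.

```python
import numpy as np

def fv_step(u, flux, dx, dt):
    """Advance cell averages u one step using numerical fluxes at cell faces."""
    F = flux(u)                       # fluxes at the len(u) + 1 cell faces
    return u - (dt / dx) * (F[1:] - F[:-1])

def upwind_faces(u, a=1.0):
    """Simple upwind face flux for advection u_t + a u_x = 0 (a > 0)."""
    F = np.empty(len(u) + 1)
    F[1:-1] = a * u[:-1]              # interior faces take the left cell value
    F[0], F[-1] = 0.0, 0.0            # zero flux through the domain boundaries
    return F

dx, dt = 0.01, 0.004                  # CFL number a*dt/dx = 0.4 < 1: stable
u = np.exp(-((np.arange(100) * dx - 0.5) ** 2) / 0.01)  # smooth bump
mass0 = u.sum() * dx
for _ in range(50):
    u = fv_step(u, upwind_faces, dx, dt)
print(abs(u.sum() * dx - mass0))      # near machine precision: mass conserved
```

With the boundary fluxes set to zero, total mass is conserved to floating-point precision regardless of how crude the interior scheme is — this is the structural property that ProbConserv carries over to learned models.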

More specifically, the first step in the ProbConserv method is to use a probabilistic machine learning model — such as a Gaussian process, attentive neural process (ANP), or ensembles of neural-network models — to estimate the mean and variance of the outputs of the physical model. We then use the integral form of the conservation law to perform a Bayesian update to the mean and covariance of the distribution of the solution profile such that it satisfies the conservation constraint exactly in the limit.
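In simplified notation (a sketch of the idea, not the paper's exact algorithm), if the black-box model outputs a Gaussian with mean `mu` and covariance `Sigma` over the discretized solution, and the conservation law is written as a linear constraint G u = b — for instance, G a quadrature rule computing total mass and b its known value — the update is standard Gaussian conditioning, which enforces the constraint exactly as the noise term goes to zero:

```python
import numpy as np

def conservation_update(mu, Sigma, G, b, noise=1e-9):
    """Condition the Gaussian (mu, Sigma) on the linear constraint G u = b.

    As noise -> 0, the updated mean satisfies the constraint exactly."""
    m = G.shape[0]
    S = G @ Sigma @ G.T + noise * np.eye(m)      # constraint-space covariance
    K = Sigma @ G.T @ np.linalg.solve(S, np.eye(m))  # "gain" matrix
    mu_new = mu + K @ (b - G @ mu)               # correct the mean
    Sigma_new = Sigma - K @ G @ Sigma            # shrink the covariance
    return mu_new, Sigma_new

n = 50
dx = 1.0 / n
mu = np.random.default_rng(0).normal(1.0, 0.1, n)  # unconstrained prediction
Sigma = 0.01 * np.eye(n)
G = dx * np.ones((1, n))                           # quadrature: total mass
b = np.array([1.0])                                # known conserved mass
mu_c, Sigma_c = conservation_update(mu, Sigma, G, b)
print(abs(G @ mu_c - b))   # tiny: the corrected mean conserves mass
```

Because the gain scales with `Sigma`, the correction is applied mostly where the model is least certain, which is what preserves meaningful uncertainty quantification.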


In the paper, we provide a detailed analysis of ProbConserv’s application to the generalized porous-medium equation (GPME), a widely used parameterized family of PDEs. The GPME has been used in applications ranging from underground flow transport to nonlinear heat transfer to water desalination and beyond. By varying the PDE parameters, we can describe PDE problems with different levels of complexity, ranging from “easy” problems, such as parabolic PDEs that model smooth diffusion processes, to “hard” nonlinear hyperbolic-like PDEs with shocks, such as the Stefan problem, which has been used to model two-phase flow between water and ice, crystal growth, and more complex porous media such as foams.

For easy GPME variants, ProbConserv compares well to state-of-the-art competitors, and for harder GPME variants, it outperforms other ML-based approaches that do not guarantee volume conservation. ProbConserv seamlessly enforces physical conservation constraints, maintains probabilistic uncertainty quantification (UQ), and deals well with the problem of estimating shock propagation, which is difficult given ML models’ bias toward smooth and continuous behavior. It also effectively handles heteroskedasticity, or fluctuation in variables’ standard deviations. In all cases, it achieves superior predictive performance on downstream tasks, such as predicting shock location, which is a challenging problem even for advanced numerical solvers.

Examples

Conservation of mass can be violated by a black-box deep-learning model (here, the ANP), even when the PDE is applied as a soft constraint (here, SoftC-ANP) on the loss function, à la physics-informed neural networks (PINNs). This figure shows the variation of total mass over time for the smooth constant coefficient diffusion equation (an “easy” GPME example). The true mass remains zero, since there is zero net flux from the domain boundaries, and thus mass cannot be created or destroyed in the domain interior.
Density solution profiles with uncertainty quantification. In the “hard” version of the GPME problem, also known as the Stefan problem, the solution profile may contain a moving sharp interface in space, known as a shock. The shock here separates the region with fluid from the degenerate one with zero fluid density. The uncertainty is largest in the shock region and becomes smaller in the areas away from it. The main idea behind ProbConserv’s UQ method is to use the uncertainty in the unconstrained black box to modify the mean and covariance at the locations where the variance is largest, to satisfy the conservation constraint. The constant-variance assumption in the HardC-ANP baseline does not result in improvement on this hard task, while ProbConserv results in a better estimate of the solution at the shock and a threefold improvement in the mean squared error (MSE).
Downstream task. Histogram of the posterior of the shock position computed by ProbConserv and the other baselines. While the baseline models skew the distribution of the shock position, ProbConserv computes a distribution that is well-centered around the true shock position. This illustrates that enforcing physical constraints such as conservation is necessary in order to provide reliable and accurate estimations of the shock position.

Boundary conditions

Boundary conditions (BCs) are physics-enforced constraints that solutions of PDEs must satisfy at specific spatial locations. These constraints carry important physical meaning and guarantee the existence and the uniqueness of PDE solutions. Current deep-learning-based approaches that aim to solve PDEs rely heavily on training data to help models learn BCs implicitly. There is no guarantee, though, that these models will satisfy the BCs during evaluation. In our ICLR 2023 paper, “Guiding continuous operator learning through physics-based boundary constraints”, we propose an efficient, hard-constrained, neural-operator-based approach to enforcing BCs.


Where most SciML methods (for example, PINNs) parameterize the solution to PDEs with a neural network, neural operators aim to learn the mapping from PDE coefficients or initial conditions to solutions. At the core of every neural operator is a kernel function, formulated as an integral operator, that describes the evolution of a physical system over time. For our study, we chose the Fourier neural operator (FNO) as an example of a kernel-based neural operator.
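As a rough illustration of the mechanism (a numpy sketch with our own names, not the FNO implementation itself), a single Fourier layer transforms the input function to frequency space, applies learned complex weights to the lowest modes, and transforms back; because the weights live in frequency space rather than on a fixed grid, the same layer can be evaluated at any resolution.

```python
import numpy as np

def fourier_layer(v, weights, modes):
    """One FNO-style spectral layer: v is a real signal on a uniform grid,
    weights is a (modes,) complex array of learned spectral multipliers."""
    v_hat = np.fft.rfft(v)                       # to frequency space
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = weights * v_hat[:modes]    # keep only the low modes
    return np.fft.irfft(out_hat, n=len(v))       # back to the grid

rng = np.random.default_rng(0)
n, modes = 128, 16
weights = rng.normal(size=modes) + 1j * rng.normal(size=modes)
x = np.linspace(0.0, 1.0, n, endpoint=False)
v = np.sin(2 * np.pi * x)                        # toy input function
u = fourier_layer(v, weights, modes)
print(u.shape)   # same grid as the input, but the layer works for any n
```

In the real architecture, such spectral convolutions are composed with pointwise linear maps and nonlinearities, and the weights are trained from data.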

We propose a model we call the boundary-enforcing operator network (BOON). Given a neural operator representing a PDE solution, a training dataset, and prescribed BCs, BOON applies structural corrections to the neural operator to ensure that the predicted solution satisfies the system BCs.
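The flavor of such a hard correction can be seen in a toy version for Dirichlet conditions (this is our simplification for illustration; BOON's actual corrections act on the operator's kernel): add a linear blend to the raw prediction so that the boundary values are met exactly, by construction rather than by training.

```python
import numpy as np

def dirichlet_correct(u, x, g0, g1):
    """Hard-enforce u(0) = g0 and u(1) = g1 on a prediction u over [0, 1]."""
    return u + (1 - x) * (g0 - u[0]) + x * (g1 - u[-1])

x = np.linspace(0.0, 1.0, 101)
u_raw = np.sin(np.pi * x) + 0.05      # pretend operator output, off at the ends
u_fix = dirichlet_correct(u_raw, x, g0=0.0, g1=0.0)
print(u_fix[0], u_fix[-1])            # both 0.0: boundary values met exactly
```

The correction vanishes wherever the raw prediction already satisfies the boundary conditions, so the interior solution is perturbed only as much as the boundary error requires.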

BOON architectures. Kernel correction architectures for the commonly used Dirichlet, Neumann, and periodic boundary conditions, each of which carries a different physical meaning.

We describe our refinement procedure and demonstrate that BOON’s solutions satisfy physics-based BCs, such as Dirichlet, Neumann, and periodic conditions. We also report extensive numerical experiments on a wide range of problems, including the heat equation, the wave equation, and Burgers' equation, along with the challenging 2-D incompressible Navier-Stokes equations, which are used in climate and ocean modeling. We show that enforcing these physical constraints results in zero boundary error and improves the accuracy of solutions on the interior of the domain. BOON’s correction method exhibits a 2-fold to 20-fold improvement over a given neural-operator model in relative L2 error.
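For reference, the relative L2 error used here is simply the norm of the prediction error scaled by the norm of the true solution (our sketch):

```python
import numpy as np

def relative_l2_error(pred, true):
    """Relative L2 error: ||pred - true|| / ||true||."""
    return np.linalg.norm(pred - true) / np.linalg.norm(true)

true = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
pred = true + 0.01                     # small uniform error in the prediction
print(relative_l2_error(pred, true))   # small relative error, around 1%
```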

Examples

Nonzero flux at an insulator on the boundary. The solution to the unconstrained Fourier-neural-operator (FNO) model for the heat equation has a nonzero flux at the left insulating boundary, which means that it allows heat to flow through an insulator. This is in direct contradiction to the physics-enforced boundary constraint. BOON, which satisfies this so-called Neumann boundary condition, ensures that the gradient is zero at the insulator. Similarly, at the right boundary, we see that the FNO solution has a negative gradient at a positive heat source and that the BOON solution corrects this nonphysical result. Guaranteeing no violation of the underlying physics is critical to the practical adoption of these deep-learning models by practitioners in the field.
Stokes’s second problem. This figure shows the velocity profile and corresponding absolute errors over time obtained by BOON (top). BOON improves the accuracy at the boundary, which, importantly, also improves accuracy on the interior of the domain compared to the unconstrained Fourier-neural-operator (FNO) model (bottom), where the errors at the boundary propagate inward over time.
2-D Navier-Stokes lid-driven cavity flow initial condition. The initial vorticity field (perpendicular to the screen), which is defined as the curl of the velocity field. At the initial time step, t = 0, the only nonzero component of the horizontal velocity is given by the top constant Dirichlet boundary condition, which drives the viscous incompressible flow at the later time steps. The other boundaries have the common no-slip Dirichlet boundary condition, which fixes the velocity to be zero at those locations.

Navier-Stokes lid-driven flow
2-D Navier-Stokes lid-driven cavity flow vorticity field. The vorticity field (perpendicular to the screen) within a square cavity filled with an incompressible fluid, which is induced by a fixed nonzero horizontal velocity prescribed by the Dirichlet boundary condition at the top boundary line for a 25-step (T=25) prediction until final time t = 2.
2-D Navier-Stokes lid-driven cavity flow relative error.
The L2 relative-error plots show significantly higher relative error over time for the data-driven Fourier neural operator (FNO) than for our constrained BOON model on the Navier-Stokes lid-driven cavity flow problem, both for a random test sample and on average over the test samples.

Acknowledgements: This work would not have been possible without the help of our coauthor Michael W. Mahoney, an Amazon Scholar; coauthors and PhD student interns Derek Hansen and Nadim Saad; and mentors Yuyang Wang and Margot Gerritsen.

Research areas

Related content

US, VA, Arlington
We are seeking an exceptional Data Scientist to join our team in PXT Central Science. The ideal candidate will thrive in a dynamic, multifaceted role where you'll translate complex business challenges into rigorous quantitative frameworks, extract actionable insights from structured and unstructured datasets, and architect science-backed, scalable solutions that elevate the experience of our 1 million+ employees worldwide. If you're energized by the opportunity to apply data science to our mission of making Amazon Earth's Best Employer, we want to hear from you. Key job responsibilities • Own the design, development, and maintenance of scalable models and prototypes leveraging statistical, machine learning, or GenAI methodologies to enhance employee experience. • Partner with scientists, engineers, and product leaders to solve for employee experience defects using scientific approaches, building new services and tools that deliverable measurable impact. • Author and maintain detailed technical documentation related to the projects you drive. • Communicate results to diverse audiences of varying technical background with effective writing, visualizations, and presentations • Stay current with emerging methods and technologies, and implement them strategically to amplify the team’s impact. About the team The Central Science Team within Amazon’s People Experience and Technology org (PXTCS) uses economics, behavioral science, statistics, machine learning, and Generative AI to proactively identify mechanisms and process improvements which simultaneously improve Amazon and the lives, well-being, and the value of work to Amazonians. We are an interdisciplinary team, which combines the talents of science, engineering, and UX to develop and deliver solutions that measurably achieve this goal.
US, WA, Bellevue
The Amazon Fulfillment Technologies (AFT) Science team is looking for an exceptional Applied Scientist, with strong optimization and analytical skills, to develop production solutions for one of the most complex systems in the world: Amazon’s Fulfillment Network. At AFT Science, we design, build and deploy optimization, simulation, and machine learning solutions to power the production systems running at world wide Amazon Fulfillment Centers. We solve a wide range of problems that are encountered in the network, including labor planning and staffing, demand prioritization, pick assignment and scheduling, and flow process optimization. We are tasked to develop innovative, scalable, and reliable science-driven solutions that are beyond the published state of art in order to run frequently (ranging from every few minutes to every few hours per use case) and continuously in our large scale network. Key job responsibilities As an Applied Scientist, you will work with other scientists, software engineers, product managers, and operations leaders to develop scientific solutions and analytics using a variety of tools and observe direct impact to process efficiency and associate experience in the fulfillment network. 
Key responsibilities include: * Develop an understanding and domain knowledge of operational processes, system architecture and functions, and business requirements * Deep dive into data and code to identify opportunities for continuous improvement and/or disruptive new approach * Develop scalable mathematical models for production systems to derive optimal or near-optimal solutions for existing and new challenges * Create prototypes and simulations for agile experimentation of devised solutions * Advocate technical solutions to business stakeholders, engineering teams, and senior leadership * Partner with engineers to integrate prototypes into production systems * Design experiment to test new or incremental solutions launched in production and build metrics to track performance About the team Amazon Fulfillment Technology (AFT) designs, develops and operates the end-to-end fulfillment technology solutions for all Amazon Fulfillment Centers (FC). We harmonize the physical and virtual world so Amazon customers can get what they want, when they want it. The AFT Science team has expertise in operations research, optimization, scheduling, planning, simulation, and machine learning. We also have domain expertise in the operational processes within the FCs and their defects. We prioritize advancements that support AFT tech teams and focus areas rather than specific fields of research or individual business partners. We influence each stage of innovation from inception to deployment which includes both developing novel solutions or improving existing approaches. Resulting production systems rely on a diverse set of technologies, our teams therefore invest in multiple specialties as the needs of each focus area evolves.
US, WA, Seattle
We are seeking an exceptional Data Scientist to join our team in PXT Central Science. The ideal candidate will thrive in a dynamic, multifaceted role where you'll translate complex business challenges into rigorous quantitative frameworks, extract actionable insights from structured and unstructured datasets, and architect science-backed, scalable solutions that elevate the experience of our 1 million+ employees worldwide. If you're energized by the opportunity to apply data science to our mission of making Amazon Earth's Best Employer, we want to hear from you. Key job responsibilities • Own the design, development, and maintenance of scalable models and prototypes leveraging statistical, machine learning, or GenAI methodologies to enhance employee experience. • Partner with scientists, engineers, and product leaders to solve for employee experience defects using scientific approaches, building new services and tools that deliverable measurable impact. • Author and maintain detailed technical documentation related to the projects you drive. • Communicate results to diverse audiences of varying technical background with effective writing, visualizations, and presentations • Stay current with emerging methods and technologies, and implement them strategically to amplify the team’s impact. About the team The Central Science Team within Amazon’s People Experience and Technology org (PXTCS) uses economics, behavioral science, statistics, machine learning, and Generative AI to proactively identify mechanisms and process improvements which simultaneously improve Amazon and the lives, well-being, and the value of work to Amazonians. We are an interdisciplinary team, which combines the talents of science, engineering, and UX to develop and deliver solutions that measurably achieve this goal.
US, WA, Bellevue
Alexa International is looking for a passionate, talented, and inventive Applied Scientist to help build industry-leading technology with Large Language Models (LLMs) and multimodal systems, requiring strong deep learning and generative models knowledge. You will contribute to developing novel solutions and deliver high-quality results that impact Alexa's international products and services. Key job responsibilities As an Applied Scientist with the Alexa International team, you will work with talented peers to develop novel algorithms and modeling techniques to advance the state of the art with LLMs. Your work will directly impact our international customers in the form of products and services that make use of digital assistant technology. You will leverage Amazon's heterogeneous data sources, unique and diverse international customer nuances and large-scale computing resources to accelerate advances in text, voice, and vision domains in a multimodal setup. The ideal candidate possesses a solid understanding of machine learning, natural language understanding, modern LLM architectures, LLM evaluation & tooling, and a passion for pushing boundaries in this vast and quickly evolving field. They thrive in fast-paced environments to tackle complex challenges, excel at swiftly delivering impactful solutions while iterating based on user feedback, and collaborate effectively with cross-functional teams. A day in the life * Analyze, understand, and model customer behavior and the customer experience based on large-scale data. * Build novel online & offline evaluation metrics and methodologies for multimodal personal digital assistants. * Fine-tune/post-train LLMs using techniques like SFT, DPO, RLHF, and RLAIF. * Set up experimentation frameworks for agile model analysis and A/B testing. * Collaborate with partner teams on LLM evaluation frameworks and post-training methodologies. 
* Contribute to end-to-end delivery of solutions from research to production, including reusable science components. * Communicate solutions clearly to partners and stakeholders. * Contribute to the scientific community through publications and community engagement.
US, WA, Bellevue
Amazon’s Last Mile Team is looking for a passionate individual with strong optimization and analytical skills to join its Last Mile Science team in the endeavor of designing and improving the most complex planning of delivery network in the world. Last Mile builds global solutions that enable Amazon to attract an elastic supply of drivers, companies, and assets needed to deliver Amazon's and other shippers' volumes at the lowest cost and with the best customer delivery experience. Last Mile Science team owns the core decision models in the space of jurisdiction planning, delivery channel and modes network design, capacity planning for on the road and at delivery stations, routing inputs estimation and optimization. Our research has direct impact on customer experience, driver and station associate experience, Delivery Service Partner (DSP)’s success and the sustainable growth of Amazon. Optimizing the last mile delivery requires deep understanding of transportation, supply chain management, pricing strategies and forecasting. Only through innovative and strategic thinking, we will make the right capital investments in technology, assets and infrastructures that allows for long-term success. Our team members have an opportunity to be on the forefront of supply chain thought leadership by working on some of the most difficult problems in the industry with some of the best product managers, scientists, and software engineers in the industry. Key job responsibilities Candidates will be responsible for developing solutions to better manage and optimize delivery capacity in the last mile network. The successful candidate should have solid research experience in one or more technical areas of Operations Research or Machine Learning. These positions will focus on identifying and analyzing opportunities to improve existing algorithms and also on optimizing the system policies across the management of external delivery service providers and internal planning strategies. 
They require superior logical thinkers who are able to quickly approach large ambiguous problems, turn high-level business requirements into mathematical models, identify the right solution approach, and contribute to the software development for production systems. To support their proposals, candidates should be able to independently mine and analyze data, and be able to use any necessary programming and statistical analysis software to do so. Successful candidates must thrive in fast-paced environments, which encourage collaborative and creative problem solving, be able to measure and estimate risks, constructively critique peer research, and align research focuses with the Amazon's strategic needs.
US, WA, Bellevue
Alexa International is looking for a passionate, talented, and inventive Applied Scientist to help build industry-leading technology with Large Language Models (LLMs) and multimodal systems, requiring strong deep learning and generative models knowledge. You will contribute to developing novel solutions and deliver high-quality results that impact Alexa's international products and services. Key job responsibilities As an Applied Scientist with the Alexa International team, you will work with talented peers to develop novel algorithms and modeling techniques to advance the state of the art with LLMs. Your work will directly impact our international customers in the form of products and services that make use of digital assistant technology. You will leverage Amazon's heterogeneous data sources, unique and diverse international customer nuances and large-scale computing resources to accelerate advances in text, voice, and vision domains in a multimodal setup. The ideal candidate possesses a solid understanding of machine learning, natural language understanding, modern LLM architectures, LLM evaluation & tooling, and a passion for pushing boundaries in this vast and quickly evolving field. They thrive in fast-paced environments to tackle complex challenges, excel at swiftly delivering impactful solutions while iterating based on user feedback, and collaborate effectively with cross-functional teams. A day in the life * Analyze, understand, and model customer behavior and the customer experience based on large-scale data. * Build novel online & offline evaluation metrics and methodologies for multimodal personal digital assistants. * Fine-tune/post-train LLMs using techniques like SFT, DPO, RLHF, and RLAIF. * Set up experimentation frameworks for agile model analysis and A/B testing. * Collaborate with partner teams on LLM evaluation frameworks and post-training methodologies. 
* Contribute to end-to-end delivery of solutions from research to production, including reusable science components. * Communicate solutions clearly to partners and stakeholders. * Contribute to the scientific community through publications and community engagement.
US, CA, Pasadena