Physics-constrained machine learning for scientific computing

Amazon researchers draw inspiration from finite-volume methods and adapt neural operators to enforce conservation laws and boundary conditions in deep-learning models of physical systems.

Commercial applications of deep learning have been making headlines for years — never more so than this spring. More surprisingly, deep-learning methods have also shown promise for scientific computing, where they can be used to predict solutions to partial differential equations (PDEs). These equations are often prohibitively expensive to solve numerically; using data-driven methods has the potential to transform both scientific and engineering applications of scientific computing, including aerodynamics, ocean and climate, and reservoir modeling.

A fundamental challenge is that the predictions of deep-learning models trained on physical data typically ignore fundamental physical principles. Such models might, for instance, violate system conservation laws: the solution to a heat transfer problem may fail to conserve energy, or the solution to a fluid flow problem may fail to conserve mass. Similarly, a model’s solution may violate boundary conditions — say, allowing heat flow through an insulator at the boundary of a physical system. This can happen even when the model’s training data includes no such violations: at inference time, the model may simply extrapolate from patterns in the training data in an illicit way.

In a pair of recent papers accepted at the International Conference on Machine Learning (ICML) and the International Conference on Learning Representations (ICLR), we investigate the problem of adding known physics constraints to the predictive outputs of machine learning (ML) models when they are used to compute solutions to PDEs.

The ICML paper, “Learning physical models that can respect conservation laws”, which we will present in July, focuses on satisfying conservation laws with black-box models. We show that, for certain types of challenging PDE problems with propagating discontinuities, known as shocks, our approach to constraining model outputs works better than its predecessors: it more sharply and accurately captures the physical solution and its uncertainty and yields better performance on downstream tasks.

In this paper, we collaborated with Derek Hansen, a PhD student in the Department of Statistics at the University of Michigan, who was an intern at AWS AI Labs at the time, and Michael Mahoney, an Amazon Scholar in Amazon’s Supply Chain Optimization Technologies organization and a professor of statistics at the University of California, Berkeley.

In a complementary paper we presented at this year’s ICLR, “Guiding continuous operator learning through physics-based boundary constraints”, we, together with Nadim Saad, an AWS AI Labs intern at the time and a PhD student at the Institute for Computational and Mathematical Engineering (ICME) at Stanford University, focus on enforcing physics through boundary conditions. The modeling approach we describe in this paper is a so-called constrained neural operator, and it exhibits up to a 20-fold performance improvement over previous operator models.

So that scientists working with models of physical systems can benefit from our work, we’ve released the code for the models described in both papers (conservation laws | boundary constraints) on GitHub. We also presented both works in March 2023 at AAAI's symposium on Computational Approaches to Scientific Discovery.

Danielle Maddix Robinson on physics-constrained machine learning for scientific computing
A talk presented in April 2023 at the Machine Learning and Dynamical Systems Seminar at the Alan Turing Institute.

Conservation laws

Recent work in scientific machine learning (SciML) has focused on incorporating physical constraints into the learning process as part of the loss function. In other words, the physical information is treated as a soft constraint or regularization.
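As a concrete illustration of the soft-constraint approach, the sketch below adds a PDE-residual penalty to an ordinary data-fitting loss, in the style of physics-informed neural networks. It assumes a 1-D heat equation and a generic PyTorch model taking (x, t) inputs; the names model, lambda_phys, and alpha are illustrative and not taken from either paper.

```python
import torch

def soft_constrained_loss(model, x_data, t_data, u_data,
                          x_col, t_col, lambda_phys=1.0, alpha=0.1):
    """Data loss plus a PDE-residual penalty (soft constraint), PINN-style.

    The residual is for a 1-D heat equation u_t = alpha * u_xx. The physics
    term only *encourages* the model to satisfy the PDE; nothing guarantees
    that conservation actually holds at inference time.
    """
    # Supervised data-fitting term
    u_pred = model(x_data, t_data)
    data_loss = torch.mean((u_pred - u_data) ** 2)

    # Physics term, evaluated at collocation points via automatic differentiation
    x_col = x_col.clone().requires_grad_(True)
    t_col = t_col.clone().requires_grad_(True)
    u = model(x_col, t_col)
    u_t = torch.autograd.grad(u.sum(), t_col, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x_col, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x_col, create_graph=True)[0]
    phys_loss = torch.mean((u_t - alpha * u_xx) ** 2)

    return data_loss + lambda_phys * phys_loss
```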

Related content
Hybrid model that combines machine learning with differential equations outperforms models that use either strategy by itself.

The main issue with these approaches is that they do not guarantee that conservation is actually satisfied. To address this issue, in “Learning physical models that can respect conservation laws”, we propose ProbConserv, a framework for incorporating constraints into a generic SciML architecture. Instead of expressing conservation laws in the differential forms of PDEs, which are commonly used in SciML as extra terms in the loss function, ProbConserv converts them into their integral form. This allows us to use ideas from finite-volume methods to enforce conservation.

In finite-volume methods, a spatial domain — say, the region through which heat is propagating — is discretized into a finite set of smaller volumes called control volumes. The method maintains the balance of mass, energy, and momentum throughout this domain by applying the integral form of the conservation law locally across each control volume. Local conservation requires that the out-flux from one volume equals the in-flux to an adjacent volume. By enforcing the conservation law across each control volume, the finite-volume method guarantees global conservation across the whole domain, where the rate of change of the system’s total mass is given by the change in fluxes along the domain boundaries.

The integral form of a conservation law states that the rate of change of the total mass of the system over a domain (Ω) is equal to the difference between the in-flux and out-flux along the domain boundaries (∂Ω).
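In symbols, for a conserved quantity with density u and flux F (standard notation, not taken from the paper), the integral form reads:

```latex
\frac{d}{dt}\int_{\Omega} u(x,t)\,dx \;=\; -\oint_{\partial\Omega} F(u)\cdot n \,\mathrm{d}S,
```

where n is the outward normal on the boundary. When the net boundary flux vanishes, the total mass over the domain stays constant in time, which is exactly the global conservation property that the finite-volume construction preserves.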

More specifically, the first step in the ProbConserv method is to use a probabilistic machine learning model — such as a Gaussian process, attentive neural process (ANP), or ensembles of neural-network models — to estimate the mean and variance of the outputs of the physical model. We then use the integral form of the conservation law to perform a Bayesian update to the mean and covariance of the distribution of the solution profile such that it satisfies the conservation constraint exactly in the limit.
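The sketch below shows what such a constrained update on a Gaussian belief looks like when the conservation law is discretized as a linear constraint G u = b on the solution profile. This is a simplified illustration under that assumption; see the paper for the exact form of the update and its limiting behavior.

```python
import numpy as np

def constrained_update(mu, Sigma, G, b, eps=1e-8):
    """Condition a Gaussian belief N(mu, Sigma) on a linear constraint G u = b.

    mu    : (n,) unconstrained mean of the discretized solution profile
    Sigma : (n, n) unconstrained covariance
    G     : (m, n) linear operator encoding the integral conservation constraint
    b     : (m,) constraint targets (e.g., total mass implied by boundary fluxes)
    eps   : slack on the constraint; as eps -> 0 the constraint is
            satisfied exactly in the limit
    """
    S = G @ Sigma @ G.T + eps * np.eye(G.shape[0])   # covariance in constraint space
    K = Sigma @ G.T @ np.linalg.inv(S)               # gain mapping constraint error back to the grid
    mu_new = mu + K @ (b - G @ mu)                   # corrected mean: G @ mu_new ~= b
    Sigma_new = Sigma - K @ G @ Sigma                # variance shrinks along constrained directions
    return mu_new, Sigma_new
```

Because the gain is proportional to the unconstrained covariance, the correction is applied mostly where the black-box model is least certain, which is the behavior described for the shock region below.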

In the paper, we provide a detailed analysis of ProbConserv’s application to the generalized porous-medium equation (GPME), a widely used parameterized family of PDEs. The GPME has been used in applications ranging from underground flow transport to nonlinear heat transfer to water desalination and beyond. By varying the PDE parameters, we can describe PDE problems with different levels of complexity, ranging from “easy” problems, such as parabolic PDEs that model smooth diffusion processes, to “hard” nonlinear hyperbolic-like PDEs with shocks, such as the Stefan problem, which has been used to model two-phase flow between water and ice, crystal growth, and more complex porous media such as foams.
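In its standard form (written here with a generic nonlinear coefficient k, which plays the role of a conductivity or diffusivity), the GPME reads:

```latex
\frac{\partial u}{\partial t} = \nabla \cdot \big(k(u)\,\nabla u\big),
```

where a constant k gives the easy parabolic, heat-equation-like case, a power-law k recovers the classical porous-medium equation, and a degenerate, discontinuous k gives the hard Stefan-type problems with shocks.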

For easy GPME variants, ProbConserv compares well to state-of-the-art competitors, and for harder GPME variants, it outperforms other ML-based approaches that do not guarantee volume conservation. ProbConserv seamlessly enforces physical conservation constraints, maintains probabilistic uncertainty quantification (UQ), and deals well with the problem of estimating shock propagation, which is difficult given ML models’ bias toward smooth and continuous behavior. It also effectively handles heteroskedasticity, in which the variance of the solution differs across the domain. In all cases, it achieves superior predictive performance on downstream tasks, such as predicting shock location, which is a challenging problem even for advanced numerical solvers.

Examples

Conservation of mass can be violated by a black-box deep-learning model (here, the ANP), even when the PDE is applied as a soft constraint (here, SoftC-ANP) on the loss function, à la physics-informed neural networks (PINNs). This figure shows the variation of total mass over time for the smooth constant coefficient diffusion equation (an “easy” GPME example). The true mass remains zero, since there is zero net flux from the domain boundaries, and thus mass cannot be created or destroyed in the domain interior.
Density solution profiles with uncertainty quantification. In the “hard” version of the GPME problem, also known as the Stefan problem, the solution profile may contain a moving sharp interface in space, known as a shock. The shock here separates the region with fluid from the degenerate one with zero fluid density. The uncertainty is largest in the shock region and becomes smaller in the areas away from it. The main idea behind ProbConserv’s UQ method is to use the uncertainty in the unconstrained black box to modify the mean and covariance at the locations where the variance is largest, to satisfy the conservation constraint. The constant-variance assumption in the HardC-ANP baseline does not result in improvement on this hard task, while ProbConserv results in a better estimate of the solution at the shock and a threefold improvement in the mean squared error (MSE).
Downstream task. Histogram of the posterior of the shock position computed by ProbConserv and the other baselines. While the baseline models skew the distribution of the shock position, ProbConserv computes a distribution that is well-centered around the true shock position. This illustrates that enforcing physical constraints such as conservation is necessary in order to provide reliable and accurate estimations of the shock position.

Boundary conditions

Boundary conditions (BCs) are physics-enforced constraints that solutions of PDEs must satisfy at specific spatial locations. These constraints carry important physical meaning and guarantee the existence and the uniqueness of PDE solutions. Current deep-learning-based approaches that aim to solve PDEs rely heavily on training data to help models learn BCs implicitly. There is no guarantee, though, that these models will satisfy the BCs during evaluation. In our ICLR 2023 paper, “Guiding continuous operator learning through physics-based boundary constraints”, we propose an efficient, hard-constrained, neural-operator-based approach to enforcing BCs.
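For reference, the three BC types considered in the paper can be written as follows (standard textbook definitions; g and h denote prescribed boundary data, and n the outward normal):

```latex
\text{Dirichlet:}\;\; u(x,t) = g(x,t) \;\; \text{on } \partial\Omega, \qquad
\text{Neumann:}\;\; \frac{\partial u}{\partial n}(x,t) = h(x,t) \;\; \text{on } \partial\Omega, \qquad
\text{Periodic:}\;\; u(x_{\min},t) = u(x_{\max},t).
```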

Where most SciML methods (for example, PINNs) parameterize the solution to PDEs with a neural network, neural operators aim to learn the mapping from PDE coefficients or initial conditions to solutions. At the core of every neural operator is a kernel function, formulated as an integral operator, that describes the evolution of a physical system over time. For our study, we chose the Fourier neural operator (FNO) as an example of a kernel-based neural operator.
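As a rough illustration of the kernel integral operator in the Fourier setting, the sketch below implements a single 1-D spectral-convolution layer of the kind used in FNO-style models: transform the input to Fourier space, multiply the lowest retained modes by learnable complex weights, and transform back. Initialization and hyperparameters are simplified relative to the reference FNO implementation.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """A minimal 1-D Fourier-neural-operator layer (illustrative sketch)."""

    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes to keep
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):                       # x: (batch, in_channels, n_grid)
        x_ft = torch.fft.rfft(x)                # (batch, in_channels, n_grid//2 + 1)
        out_ft = torch.zeros(
            x.shape[0], self.weights.shape[1], x_ft.shape[-1],
            dtype=torch.cfloat, device=x.device
        )
        # Pointwise multiplication of the retained Fourier modes by the learned kernel
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.shape[-1])   # back to physical space
```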

We propose a model we call the boundary-enforcing operator network (BOON). Given a neural operator representing a PDE solution, a training dataset, and prescribed BCs, BOON applies structural corrections to the neural operator to ensure that the predicted solution satisfies the system BCs.
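To convey what "hard" enforcement means in practice, the sketch below corrects a model's 1-D output so that prescribed Dirichlet values hold exactly rather than being penalized in a loss. Note that this is only an output-level analogue: BOON itself applies the correction to the neural operator's kernel, which is what also improves accuracy on the interior of the domain.

```python
import torch

def dirichlet_correction(u_pred, x, g_left, g_right):
    """Hard-enforce Dirichlet values on a 1-D prediction (illustrative only).

    u_pred : (batch, n_grid) operator output on grid x
    x      : (n_grid,) grid coordinates
    g_left, g_right : prescribed boundary values
    """
    w = (x - x[0]) / (x[-1] - x[0])          # 0 at the left boundary, 1 at the right
    err_left = u_pred[:, :1] - g_left        # boundary mismatch at x_min
    err_right = u_pred[:, -1:] - g_right     # boundary mismatch at x_max
    # Blend the boundary errors out linearly so the interior structure is
    # preserved while u(x_min) = g_left and u(x_max) = g_right hold exactly.
    return u_pred - (1.0 - w) * err_left - w * err_right
```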

BOON architectures. Kernel correction architectures for commonly used Dirichlet, Neumann, and periodic boundary conditions that carry different physical meanings.

We describe our refinement procedure and demonstrate that BOON’s solutions satisfy physics-based BCs, such as Dirichlet, Neumann, and periodic conditions. We also report extensive numerical experiments on a wide range of problems, including the heat and wave equations and Burgers’ equation, along with the challenging 2-D incompressible Navier-Stokes equations, which are used in climate and ocean modeling. We show that enforcing these physical constraints results in zero boundary error and improves the accuracy of solutions on the interior of the domain. BOON’s correction method exhibits a 2-fold to 20-fold improvement over a given neural-operator model in relative L2 error.
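For reference, the relative L2 error reported here is the standard normalized metric used throughout the operator-learning literature:

```latex
\mathrm{relative}\ L_2\ \mathrm{error} \;=\; \frac{\lVert \hat{u} - u \rVert_2}{\lVert u \rVert_2},
```

where u is the reference solution and û is the model’s prediction.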

Examples

Nonzero flux at an insulator on the boundary. The solution to the unconstrained Fourier-neural-operator (FNO) model for the heat equation has a nonzero flux at the left insulating boundary, which means that it allows heat to flow through an insulator. This is in direct contradiction to the physics-enforced boundary constraint. BOON, which satisfies this so-called Neumann boundary condition, ensures that the gradient is zero at the insulator. Similarly, at the right boundary, we see that the FNO solution has a negative gradient at a positive heat source and that the BOON solution corrects this nonphysical result. Guaranteeing no violation of the underlying physics is critical to the practical adoption of these deep-learning models by practitioners in the field.
Stokes’s second problem. This figure shows the velocity profile and corresponding absolute errors over time obtained by BOON (top). BOON improves the accuracy at the boundary, which, importantly, also improves accuracy on the interior of the domain compared to the unconstrained Fourier-neural-operator (FNO) model (bottom), where the errors at the boundary propagate inward over time.
2-D Navier-Stokes lid-driven cavity flow initial condition. The initial vorticity field (perpendicular to the screen), defined as the curl of the velocity field. At the initial time step, t = 0, the only nonzero velocity is the horizontal velocity along the top boundary, prescribed by a constant Dirichlet boundary condition, which drives the viscous incompressible flow at later time steps. The other boundaries have the common no-slip Dirichlet boundary condition, which fixes the velocity to zero at those locations.

Navier-Stokes lid-driven flow
2-D Navier-Stokes lid-driven cavity flow vorticity field. The vorticity field (perpendicular to the screen) within a square cavity filled with an incompressible fluid, induced by the fixed nonzero horizontal velocity prescribed by the Dirichlet boundary condition along the top boundary, for a 25-step (T = 25) prediction up to the final time t = 2.
2-D Navier-Stokes lid-driven cavity flow relative error.
The L2 relative-error plots show significantly higher relative error over time for the data-driven Fourier neural operator (FNO) than for our constrained BOON model on the Navier-Stokes lid-driven cavity flow problem, for both a random test sample and the average over the test samples.

Acknowledgements: This work would not have been possible without the help of our coauthor Michael W. Mahoney, an Amazon Scholar; coauthors and PhD student interns Derek Hansen and Nadim Saad; and mentors Yuyang Wang and Margot Gerritsen.
