Optimizing neural networks for special-purpose hardware

Curating the neural-architecture search space and taking advantage of human intuition reduces latency on real-world applications by up to 55%.

As neural networks grow in size, deploying them on-device increasingly requires special-purpose hardware that parallelizes common operations. But for maximum efficiency, it’s not enough to optimize the hardware for the networks; the networks should be optimized for the hardware, too.

The first step in training a neural network to solve a problem is usually the selection of an architecture: a specification of the number of computational nodes in the network and the connections between them. Architectural decisions are generally based on historical precedent, intuition, and plenty of trial and error.

The standard way to optimize a neural network is through neural-architecture search (NAS), where the goal is to minimize both the size of the network and the number of floating-point operations (FLOPS) it performs. But this approach doesn’t work with neural chips, which can often execute easily parallelized but higher-FLOPS tasks more rapidly than they can harder-to-parallelize but lower-FLOPS tasks.

Minimizing latency is a more complicated optimization objective than minimizing FLOPS, so in the Amazon Devices Hardware group, we’ve developed a number of strategies for adapting NAS to the problem of optimizing network architectures for Amazon’s new Neural Engine family of accelerators. Those strategies involve curating the architecture search space to, for instance, reduce the chances of getting stuck in local minima. We’ve also found that combining a little human intuition with the results of NAS for particular tasks can help us generalize to new tasks more reliably and efficiently.

In experiments involving several different machine learning tasks, we’ve found that our NAS strategies can reduce latencies by as much as 55%.

Varieties of neural-architecture search

NAS needs three things: a definition of the search space, which specifies the building blocks available to construct a network; a cost model, which is a function of the network's accuracy, latency, and memory; and an optimization algorithm. We use a performance estimator to measure latency and memory footprint, but to measure accuracy, we must train the network. This is a major bottleneck, as training a single network can take days. Sampling thousands of architectures would take thousands of GPU days, which is clearly neither practical nor environmentally sustainable.
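
To make these ingredients concrete, here is a minimal sketch of a cost model that combines an accuracy estimate with latency and memory estimates. The weighting scheme, the budget-overrun penalties, and all numbers are illustrative assumptions, not our production cost function.

```python
def nas_cost(acc, lat_ms, mem_kb, latency_budget_ms, memory_budget_kb,
             w_acc=1.0, w_lat=0.2, w_mem=0.1):
    """Scalar cost of one candidate architecture (lower is better).

    acc    -- accuracy, or a proxy such as a trainability score, in [0, 1]
    lat_ms -- latency estimate from the performance estimator
    mem_kb -- parameter plus peak-activation memory estimate
    """
    cost = -w_acc * acc                                         # reward accuracy
    cost += w_lat * max(0.0, lat_ms / latency_budget_ms - 1.0)  # penalize latency overrun
    cost += w_mem * max(0.0, mem_kb / memory_budget_kb - 1.0)   # penalize memory overrun
    return cost

# Example: 80% accuracy, 12 ms against a 10 ms budget, 900 KB against a 1 MB budget.
print(nas_cost(0.80, 12.0, 900.0, latency_budget_ms=10.0, memory_budget_kb=1024.0))
```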

There are three categories of NAS algorithm, which require networks to be trained different numbers of times: multishot, single-shot, and zero-shot.

Multishot methods sample a cohort of architectures in each iteration. Each network is trained and evaluated for accuracy and performance, and the next set of architectures is sampled based on their cost. Evolutionary or reinforcement-learning-based algorithms are generally used for multishot methods.

Single-shot methods start with a large network called the supernet, which has multiple possible subgraphs. During training, the subgraphs start converging to a single, small network. Single-shot methods are designed to be trained only once, but their training takes much longer than that of a single network in multishot methods.

Zero-shot methods work like multishot methods, with the key difference that the network is never trained. As a proxy for accuracy, we use the network’s trainability score, which is computed using the network's topology, nonlinearity, and operations. Zero-shot methods are the fastest to converge, because calculating the score is computationally very cheap. The downside is that the trainability may not correlate well with model accuracy.
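
To make the multishot variant concrete, the following is a minimal evolutionary search loop. It is a sketch of the general pattern only; the callables `sample_architecture`, `mutate`, and `evaluate_cost` are hypothetical stand-ins for a real search space and cost model, and the population sizes are arbitrary.

```python
import random

def evolutionary_nas(sample_architecture, mutate, evaluate_cost,
                     population_size=32, generations=20, num_parents=8):
    """Minimal multishot evolutionary NAS loop (illustrative sketch).

    sample_architecture() -> arch : draw a random point from the search space
    mutate(arch) -> arch          : perturb one architectural choice
    evaluate_cost(arch) -> float  : train and evaluate a candidate; lower is better
    """
    population = [sample_architecture() for _ in range(population_size)]
    best_cost, best_arch = float("inf"), None
    for _ in range(generations):
        scored = sorted(((evaluate_cost(a), a) for a in population), key=lambda s: s[0])
        if scored[0][0] < best_cost:
            best_cost, best_arch = scored[0]
        parents = [a for _, a in scored[:num_parents]]        # keep the cheapest candidates
        children = [mutate(random.choice(parents))
                    for _ in range(population_size - num_parents)]
        population = parents + children                       # next cohort to train and score
    return best_arch
```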

Search space curation

The NAS cost function can be visualized as a landscape, with each point representing a potential architecture. A cost function based on FLOPS changes monotonically with factors such as layer sizes or channel counts: that is, if you find a direction across the terrain in which the cost is going down, you can be sure that continuing in that direction will not cause the cost to go up.

However, the inclusion of accelerator-aware constraints disrupts this monotonicity, introducing points at which the cost abruptly switches from going down to going up. The result is a more complex and rocky landscape.

To address this issue, we reduced the number of options in the search space. We were exploring convolutional architectures, meaning that the inputs are decomposed into several different components, each of which has its own channel through the network. The data in each channel, in turn, is filtered in several different ways; each filter involves a different data convolution.

Previously, we would have explored the number of channels (known as the channel size) in increments of one; instead, we considered only a handful of channel sizes, limiting the options to values favorable for the parallelism factor of the Neural Engine. The parallelism factor is a count of operations, such as dot products, that can be performed in parallel. In some cases, we even added to the search space a "depth multiplier" ratio that could be used to scale the number of channels across the entire model.
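
As an illustration of what this curation might look like, the snippet below restricts channel sizes to multiples of an assumed parallelism factor and adds a few global depth-multiplier ratios; the factor of 16 and the specific ranges are hypothetical, not Neural Engine specifications.

```python
# Hypothetical curated search space: channel sizes aligned with the
# accelerator's parallelism factor instead of every integer value.
PARALLELISM_FACTOR = 16  # assumed value for illustration

channel_options = [c for c in range(16, 257) if c % PARALLELISM_FACTOR == 0]
# -> [16, 32, 48, ..., 256]: 16 options instead of 241 integer choices

depth_multipliers = [0.5, 0.75, 1.0, 1.25]  # scale channels across the whole model

search_space = {
    "channels_per_block": channel_options,
    "depth_multiplier": depth_multipliers,
}
```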

These improvements can be visualized as taking fewer, larger steps across a smoother terrain, rather than trying to navigate the rocky landscape that resulted from the inclusion of accelerator-aware performance in the cost function. During the optimization process, they resulted in a faster convergence rate because of the reduced number of options and in improved stability and reliability thanks to the monotonic nature of the curated search space.

Illustration of how the cost landscape (green) changes from smooth (left) to rocky (center and right) when a cost function based on Neural Engine performance replaces one based on FLOPS. Curation (right) reduces the discrete search space (black dots) and ensures that points are far apart. The trajectory of a search algorithm (blue arrows) shows how curation (right) ensures that with each step in a search, the cost is monotonically decreasing.

One key detail in our implementation is the performance estimator. Instead of deploying an architecture on real hardware or an emulator to obtain performance metrics, we estimated them using a machine learning regression model trained on measurements of different operators or subgraphs.

At inference time, the estimator would decompose the queried architecture into subgraphs and use the regression model to estimate the performance of each. Then it would accumulate these estimates to give the model-level performance. This regressor-based design simplified our NAS framework, as it no longer required compilation, inference, or hardware. This technique enables us to test accelerators in the design phase, before we’ve developed custom compilers and hardware emulators for them.
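
The sketch below shows the general shape of such a regressor-based estimator. The gradient-boosted regressor, the feature set, and the `architecture.subgraphs()` interface are assumptions chosen for illustration; any regression model trained on measured operator latencies could play the same role.

```python
from sklearn.ensemble import GradientBoostingRegressor

class LatencyEstimator:
    """Estimate model-level latency by summing per-subgraph predictions."""

    def __init__(self):
        self.regressor = GradientBoostingRegressor()

    def fit(self, subgraph_features, measured_latencies_ms):
        # subgraph_features: one feature row per measured operator/subgraph
        self.regressor.fit(subgraph_features, measured_latencies_ms)
        return self

    @staticmethod
    def featurize(subgraph):
        # Hypothetical featurization: operator type, channels, kernel, stride.
        return [subgraph["op_id"], subgraph["in_ch"], subgraph["out_ch"],
                subgraph["kernel"], subgraph["stride"]]

    def estimate(self, architecture):
        # Decompose the queried architecture into subgraphs, predict each,
        # and accumulate the estimates into a model-level latency.
        features = [self.featurize(sg) for sg in architecture.subgraphs()]
        return float(self.regressor.predict(features).sum())
```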

Productizing NAS with expert-in-the-loop

Curating the search space improves convergence rate, stability, and reliability, but transferability to new use cases is not straightforward. NAS results for a detector model, for instance, may not be easy to transfer to a classification model. On the other hand, running NAS from scratch for each new dataset may not be feasible, due to time constraints. In these situations, we found that combining NAS results and human expertise was the fastest approach.

The initial channel reduction step (1x1 conv.) in the inverted-bottleneck (IBN) block at left is fused with the channel expansion step (KxK depth. conv.) in the fused IBN at right. This proved to be a common subgraph modification across datasets.

When we performed NAS on different datasets, we saw common patterns, such as fusing convolution layers with the convolution layers preceding them, reducing the number of channels, and aligning channel counts with the hardware parallelism factor.

In particular, fusing convolution layers in inverted bottleneck (IBN) blocks contributed most to boosting efficiency. With just these modifications, we observed latency reductions of up to 50%, whereas a fully converged NAS model would yield a slightly better 53% reduction.
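
For readers unfamiliar with the two block types, here is a generic PyTorch sketch in the spirit of the IBN and fused-IBN patterns; the expansion ratio, kernel size, and activation are illustrative choices, not the exact blocks in our production models.

```python
import torch.nn as nn

def ibn_block(in_ch, out_ch, kernel=3, expansion=4):
    """Inverted bottleneck: 1x1 conv, KxK depthwise conv, 1x1 projection."""
    mid = in_ch * expansion
    return nn.Sequential(
        nn.Conv2d(in_ch, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.ReLU6(),
        nn.Conv2d(mid, mid, kernel, padding=kernel // 2, groups=mid, bias=False),
        nn.BatchNorm2d(mid), nn.ReLU6(),
        nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
    )

def fused_ibn_block(in_ch, out_ch, kernel=3, expansion=4):
    """Fused IBN: the first two convolutions are replaced by one regular KxK
    convolution, which costs more FLOPS but parallelizes better on the accelerator."""
    mid = in_ch * expansion
    return nn.Sequential(
        nn.Conv2d(in_ch, mid, kernel, padding=kernel // 2, bias=False),
        nn.BatchNorm2d(mid), nn.ReLU6(),
        nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
    )
```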

In situations where running NAS from scratch is not feasible, a human expert can rely on mathematical intuition and observations of the results of NAS on similar datasets to build the required model architecture.

Results and product impact

We applied this technique to multiple products in the Amazon Devices portfolio, ranging from Echo Show and Blink home security products to the latest Astro, the in-home consumer robot.

1. Reduced detection latency by half on Echo Show

Echo Show runs a model to detect human presence and locate the detected person in a room. The original model used IBN blocks. We used accelerator-aware NAS to reduce the latency of this model by 53%.

Schematic representation of human-presence detection.

We performed a search for depth multipliers (that is, factors that scale the number of channels) and for opportunities to replace IBN blocks with fused-IBN blocks. The requirement was to maintain the mean average precision (mAP) of the original model while improving the latency. Our V3 model improved the latency by more than 53% (i.e., it is 2.2 times as fast) while keeping the mAP score the same as the baseline's.

Latency results for the original model and three models found through NAS.

Model    | Fused-IBN search | Depth multiplier search | Latency reduction (%)
Baseline | No               | No                      | Baseline
V1       | No               | Yes                     | 14%
V2       | Yes              | No                      | 35%
V3       | Yes              | Yes                     | 53%

After performing NAS, we found that not every IBN fusion improves latency and accuracy. The later layers are larger, and replacing them with fused layers hurt performance. For the layers where fusion was selected, the FLOPS, as expected, increased, but the latency did not.

2. Model fitting within the tight memory budget of the Blink Floodlight Camera

Blink cameras use a classification model for security assistance. Our goal was to fit the model parameters and peak activation memory within a tight memory budget. In this case, we combined NAS techniques with expert-in-the-loop fine-tuning. The NAS results on the classification dataset provided intuition about which operator and subgraph changes could extract benefits from the accelerator design.

Schematic representation of the classification model output.

The expert recommendations were to replace the depthwise convolutions with standard convolutions and to reduce the number of channels, making channel counts even across the model and, preferably, multiples of the parallelism factor. With these changes, model developers were able to reduce both the model size and the intermediate memory usage by 47% and fit the model within the required budget.
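
The channel adjustment can be sketched as a simple rounding rule; the parallelism factor of 16 is an assumed value used only for illustration.

```python
def round_to_parallelism(channels, parallelism_factor=16):
    """Round a channel count up to the nearest multiple of the (assumed)
    hardware parallelism factor."""
    return -(-channels // parallelism_factor) * parallelism_factor

print(round_to_parallelism(52))  # -> 64: avoids idle lanes on the accelerator
```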

3. Fast semantic segmentation for robotics

In the context of robotics, semantic segmentation is used to understand the objects and scenes the robot is interacting with. For example, it can enable the robot to identify chairs, tables, or other objects in the environment, allowing it to navigate and interact with its surroundings more effectively. Our goal for this model was to reduce latency by half. Our starting point was a semantic-segmentation model that was optimized to run on a CPU.

Left: original image of a room at night; center: semantic-segmentation image; right: semantic segmentation overlaid on original image.

For this model, we searched over channel sizes, fusion opportunities, and input and output dimensions. We used the multishot method with an evolutionary search algorithm. NAS gave us multiple candidates with different performance characteristics; the best candidate reduced the latency by half.

Latency improvement for different architectures found through NAS.

Model    | Latency reduction (%)
Original | Baseline
Model A  | 27%
Model B  | 37%
Model C  | 38%
Model D  | 41%
Model E  | 51%

4. User privacy with on-device inference

Amazon's Neural Engine supports large-model inference on-device, so we can process microphone and video feeds without sending data to the cloud. For example, the Amazon Neural Engine has enabled Alexa to perform automatic speech recognition on-device. On-device processing also provides a better user experience because the inference pipeline is not affected by intermittent connection issues. In our NAS work, we discovered that even larger, more accurate models can now fit on-device with no hit on latency.

Making edge AI sustainable

We mentioned earlier that multishot NAS with full training can take up to 2,000 GPU-days. However, with some of the techniques described in this blog, we were able to create efficient architectures in a substantially shorter amount of time, making NAS much more scalable and sustainable. But our sustainability efforts don't end there.

Because of its parallelism and mixed-precision features, the Neural Engine is more power efficient than a generic CPU. For a million average users, the difference is on the order of millions of kilowatt-hours per year, equivalent to the annual energy consumption of 200 gasoline-powered passenger vehicles or of a hundred average US households.

When we optimize models through NAS, we increase the device's capability to run more neural-network models simultaneously. This allows us to use smaller application processors and, in some cases, fewer of them. By reducing the hardware footprint in this way, we are further reducing the carbon footprint of our devices.

Future work

We have found that curation requires an expert who understands the hardware design well, which may not scale to future generations of more complex hardware. We have also found that in situations where time is tight, having an expert in the loop is still faster than running NAS from scratch. Because of this, we are continuing to investigate how NAS algorithms with accelerator awareness can handle large search spaces. We are also working on improving the search algorithm’s efficiency and effectiveness by exploring how the three categories of algorithms can be combined, and we plan to explore model optimization by introducing sparsity through pruning and clustering. Stay tuned!

Acknowledgements: Manasa Manohara, Lingchuan Meng, Rahul Bakshi, Varada Gopalakrishnan, Lindo St. Angel
