Amazon's acquisition of Annapurna Labs in 2015 has led to, among other advancements, the development of five generations of the AWS Nitro System, three generations of Arm-based Graviton processors, and the AWS Trainium and AWS Inferentia chips, which are optimized for machine learning training and inference. These chips and systems were discussed at the AWS Silicon Innovation Day event on August 3, which included a talk by Nafea Bshara, AWS vice president and distinguished engineer, on silicon innovation emerging from Annapurna Labs.

How silicon innovation became the ‘secret sauce’ behind AWS’s success

Nafea Bshara, AWS vice president and distinguished engineer, discusses Annapurna Labs' path to silicon success; the Annapurna co-founder was a featured speaker at the AWS Silicon Innovation Day virtual event.

Nafea Bshara, Amazon Web Services vice president and distinguished engineer, and the co-founder of Annapurna Labs, an Israel-based chipmaker that Amazon acquired in 2015, maintains a low profile, as does his friend and Annapurna co-founder, Hrvoye (Billy) Bilic.

Nafea Bshara, AWS vice president and distinguished engineer.

Each executive's LinkedIn profile is sparse; in fact, Bilic's is out of date.

“We hardly do any interviews; our philosophy is to let our products do the talking,” explains Bshara.

Those products, and silicon innovations, have done a lot of talking since 2015: the acquisition has led to, among other advancements, the development of five generations of the AWS Nitro System, three generations of custom-designed, Arm-based Graviton processors that support data-intensive workloads, and the AWS Trainium and AWS Inferentia chips optimized for machine learning training and inference.

Some observers have described the silicon that emerges from Annapurna Labs in the U.S. and Israel as AWS's "secret sauce."

Nafea's silicon journey began at Technion University in Israel, where he earned bachelor's and master's degrees in computer engineering, and where he first met Hrvoye. The two went on to work for Israel-based Galileo, a company that made chips for networking switches and controllers for networking routers. Galileo was acquired by U.S. semiconductor manufacturer Marvell in 2000, and Bshara and Bilic worked there for a decade before deciding to venture out on their own.

“We had developed at least 50 different chips together,” Bshara explained, “so we had a track record and a first-hand understanding of customer needs, and the market dynamics. We could see that some market segments were being underserved, and with the support from our spouses, Lana and Liat, and our funding friends Avigdor [Willenz] and Manuel [Alba], we started Annapurna Labs.”

That was mid-2011; three and a half years later, Amazon acquired the company. The two friends have continued their journey at Amazon, where their team's work has spoken for itself.

Last year, industry analyst David Vellante praised AWS’s “revolution in system architecture.”

“Much in the same way that AWS defined the cloud operating model last decade, we believe it is once again leading in future systems. The secret sauce underpinning these innovations is specialized designs… We believe these moves position AWS to accommodate a diversity of workloads that span cloud, data center as well as the near and far edge.”

Annapurna's work was highlighted during the AWS Silicon Innovation Day virtual event on August 3, where Nafea was a featured speaker. The broadcast included a keynote from David Brown, vice president, Amazon EC2; a talk on the history of AWS silicon innovation from James Hamilton, Amazon senior vice president and distinguished engineer, who holds more than 200 patents in 22 countries spanning server and datacenter infrastructure, databases, and cloud computing; and a fireside chat on the Nitro System with Anthony Liguori, AWS vice president and distinguished engineer, and Jeff Barr, AWS vice president and chief evangelist.

In advance of the silicon-innovation event, Amazon Science connected with Bshara to discuss the history of Annapurna, how the company and the industry have evolved in the past decade, and what the future portends.

  1. Q. 

    You co-founded Annapurna Labs just over 11 years ago. Why Annapurna?

    A. 

    I co-founded the company with my longtime partner, Billy, and with an amazing set of engineers and leaders who believed in the mission. We started Annapurna Labs because we looked at the way the chip industry was investing in infrastructure and data centers; that investment was minuscule at the time because everybody was going after the gold rush of mobile phones, smartphones, and tablets.

    We believed the industry was over-indexing on investment for mobile and underinvesting in the data center, which left the data-center market underserved. That was compounded by increasing disappointment with the ineffective, unproductive way chips were being developed, especially when compared with software development. The productivity of software developers had improved significantly over the previous 25 years, while the productivity of chip developers hadn't improved much since the '90s. In assessing the opportunity, we saw a data-center market that was being underserved, and an opportunity to redefine chip development with greater productivity and a better business model. Those factors contributed to us starting Annapurna Labs.

  2. Q. 

    How has the chip industry evolved in the past 11 years?

    A. 

    The chip industry realized, a bit late, that productivity and time to market needed to be addressed. While Annapurna has been a pioneer in advancing productivity and time to market, many others are following in our footsteps and transitioning to a building-blocks-centric development mindset, similar to how the software industry moved toward object-oriented and service-oriented software design.

    Chip companies have now transitioned to what we refer to as an intellectual property-oriented, or IP-oriented, correct-by-design approach. Second, the chip industry has adopted the cloud. Cloud adoption has led to an explosion of compute power for building chips. Using the cloud, we are able to use compute in a 'bursty' way and in parallel. We and our chip-industry colleagues couldn't deliver the silicon we do today without the cloud. This has led to the creation of a healthy market in which chip companies have realized they don't need to build everything in house, in much the same way software companies have realized they can buy libraries from open source or other library providers. The industry has matured to the point where there is now a healthy business model around buying building blocks, or IPs, from providers like Arm, Synopsys, Alphawave, or Cadence.

  3. Q. 

    Annapurna Labs was named after one of the tallest peaks in the Himalayas that’s regarded as one of the most dangerous mountains to climb. What's been the tallest peak you've had to climb?

    A. 

    I’m up in the cloud, I don’t need to climb anything [laughing]. Yes, Billy and I picked the name Annapurna Labs for a couple of reasons. First, Billy and I originally planned to climb Annapurna before we started the company. But then we got excited about the idea, acquired funding, and suddenly time was of the essence, so we put our climbing plans on hold and started the company. We called it Annapurna because at that time – and it’s true even today – there is a high barrier to entry in starting a chip company. The challenge is steep, and the risk is high, so it’s just like climbing Annapurna. We also believed that we wanted to reach a point above the clouds where you could see things very clearly, and without clutter. That’s always been a mantra for us as a company: Avoid the clutter, and look far into the future to understand what the customer really needs versus getting distracted by the day-to-day noise.

  4. Q. 

    What are the unique challenges you face in designing chips for ML training and inference versus more general CPU designs?

    A. 

    First, I want to emphasize the challenge we didn't have to worry about: with the strong foundation, methodologies, and engineering muscle we built delivering multiple generations of Nitro, we had confidence in our ability to execute on building the chips and manufacturing them at high volume and high quality. So that was a major thing we didn't need to worry about. Designing for machine learning is one of the most challenging, but also one of the most rewarding, tasks I've had the pleasure to participate in. There is an insatiable demand for machine learning right now, so anyone with a good product won't have any issues finding customer demand. The demand is there, but there are a couple of challenges.


    The first is that customers want ‘just works’ solutions because they have enough challenges to work on the science side. So they are looking for a frictionless migration from the incumbent, let's say GPU-based machine learning, to AWS Trainium or AWS Inferentia. Our biggest challenge is to hide all the complexity so it’s what we refer to internally as boring to migrate. We don’t want our customers, the scientists and researchers, to have to think about moving from one piece of hardware to another. This is a challenge because the incumbent GPUs, specifically NVIDIA, have done a very good job developing broadly adopted technologies. The customer shouldn’t see or experience any of the hard work we’ve done in developing our chips; what the customer should experience is that it’s transparent and frictionless to transition to Inferentia and Trainium. That’s a hefty task and one of our internal challenges as a team.
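    As a concrete sketch of what a 'boring' migration can look like, the example below compiles an ordinary PyTorch model for Inferentia or Trainium with the AWS Neuron SDK. It is illustrative rather than definitive: the module and function names (torch_neuronx and torch_neuronx.trace) are taken from the SDK's public documentation, not from the interview, and the model is a stand-in.

        import torch
        import torch_neuronx  # AWS Neuron SDK PyTorch integration; assumed installed on an Inf/Trn instance

        # An ordinary PyTorch model; nothing about its definition changes for Neuron.
        model = torch.nn.Sequential(
            torch.nn.Linear(128, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, 10),
        ).eval()

        example = torch.rand(1, 128)

        # The single migration step: ahead-of-time compilation for NeuronCores.
        neuron_model = torch_neuronx.trace(model, example)

        # From here on, it is unchanged PyTorch.
        output = neuron_model(example)

    The point of the design is the last line: inference code written against the original model runs unmodified against the compiled one.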

    "The customer shouldn’t see or experience any of the hard work we’ve done in developing our chips; what the customer should experience is that it’s transparent and frictionless to transition to Inferentia and Trainium," says Bshara.

    The second challenge is more external; it's the fact that science and machine learning are moving very fast. As an organization that is building hardware, our job is to predict what customers will need three, four, five years down the road, because the development cycle for a chip can be two years, and then it gets deployed for three years. The lifecycle is around five years, and trying to predict how the needs of scientists and the machine-learning community will evolve over that time span is difficult. Unlike CPU workloads, which aren't evolving very quickly, machine learning workloads are, and it's a bit of an art to keep pace. I would give ourselves a high score, not a perfect score, in being efficient in terms of execution and cost while still being future proof. It's the art of predicting what customers will need three years from now while still executing on time and budget. These things only come with experience, and I'm fortunate to be part of a great team that has the experience to strike the right balance between cost, schedule, and future-proofing the product.

  5. Q. 

    At the recent re:MARS conference Rohit Prasad, Amazon senior vice president and Alexa head scientist, said the voice assistant is interacting with customers billions of times each week. Alexa is powered by EC2 Inf1 instances, which use AWS Inferentia chips. Why is it more effective for Alexa workloads to take advantage of this kind of specialized processing versus more general-purpose GPUs?

    A. 

    Alexa is one of those Amazon technologies that we want to bring to as many people as possible. It's also a great example of the Amazon flywheel: the more people use it, the more value it delivers. One of our goals is to provide this service at the lowest latency and the lowest cost possible, and over time to improve the machine-learning algorithms behind Alexa. When people say improving Alexa, it really means handling much more complex machine learning and much more sophisticated models while maintaining performance and low latency. Using Inferentia, the chip, and Inf1, the EC2 instances that host these chips, Alexa is able to run much more advanced machine learning algorithms at lower cost and with lower latency than on a standard general-purpose chip. It's not that the general-purpose chip couldn't do the job; it's that it would do so at higher cost and higher latency. With Inferentia we deliver lower latency and support much more sophisticated algorithms. The result is that customers have a better experience with Alexa, and benefit from a smarter Alexa.

  6. Q. 

    AI has been called the new electricity. But as ML models become increasingly large and complex, as you just discussed, there are also concerns that the energy consumed by AI model training and inference is damaging to the environment. At the chip level, what can be done to reduce the environmental impact of ML model training and inference?

    A. 

    What we can do at the chip level, and at the EC2 level, is work on three vectors, which is what we're doing right now. The first is driving power down by moving quickly to more advanced silicon processes: every time we build a chip on a newer semiconductor process, we're using smaller transistors that require less power for the same work. Because of our focus on efficient execution, we can deliver to EC2 customers a new chip based on a more modern, power-efficient silicon process every 18 months or so.

    The second vector is building more technologies, in hardware and in algorithms, to get training and inference done faster. The faster we can handle training and inference, the less power is consumed. For example, one of the technologies we introduced in the latest Trainium chip is something called stochastic rounding, which, depending on which measure you're looking at and the neural workload, can accelerate neural network training by up to 30%. And 30% less time translates into 30% less power.
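    To make the idea concrete, here is a minimal NumPy sketch of stochastic rounding. It illustrates the principle only, not Trainium's hardware implementation; the grid spacing and update size are made-up values chosen to expose the effect.

        import numpy as np

        STEP = 2.0 ** -8  # made-up grid spacing standing in for a low-precision format

        def stochastic_round(x, step=STEP):
            # Round to the grid probabilistically: round up with probability equal
            # to the fractional remainder, so the result is unbiased in expectation.
            scaled = np.asarray(x, dtype=np.float64) / step
            lower = np.floor(scaled)
            frac = scaled - lower  # in [0, 1)
            round_up = np.random.random(np.shape(scaled)) < frac
            return (lower + round_up) * step

        def round_to_nearest(x, step=STEP):
            return np.round(np.asarray(x, dtype=np.float64) / step) * step

        # Accumulate 1,000 updates, each smaller than half a grid step.
        update = 2.0 ** -10
        acc_nearest, acc_stochastic = 0.0, 0.0
        for _ in range(1000):
            acc_nearest = round_to_nearest(acc_nearest + update)    # every update rounds away
            acc_stochastic = stochastic_round(acc_stochastic + update)

        print(float(acc_nearest))     # 0.0: round-to-nearest lost all 1,000 updates
        print(float(acc_stochastic))  # ~0.98 on average (the true sum is 1000 * 2**-10)

    The unbiased accumulation is what makes aggressive low-precision arithmetic usable for training, which is where the time and power savings come from.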

    Another thing we're doing at the algorithmic level is offering different data types. Historically, machine learning used 32-bit floating point. Now we're offering multiple versions of 16-bit and a few versions of 8-bit. These data types not only accelerate machine learning training, they significantly reduce power for the same amount of work. For example, doing matrix multiplication in 16-bit floating point consumes less than one-third the total power of doing it in 32-bit floating point. The ability to add things like stochastic rounding or new data types at the algorithmic level provides a step-function improvement in power consumption for the same amount of work.
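    The user-facing side of those data types can be sketched in a few lines. The power figure above is a property of the silicon, not something this snippet measures, and the matrix sizes are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        a = rng.standard_normal((512, 512)).astype(np.float32)
        b = rng.standard_normal((512, 512)).astype(np.float32)

        ref = a @ b  # the 32-bit floating-point matrix multiply

        # The same multiply with operands narrowed to 16 bits: half the memory
        # traffic per operand and, on hardware with native 16-bit units, far
        # less energy per multiply-accumulate.
        low = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

        rel_err = float(np.max(np.abs(ref - low)) / np.max(np.abs(ref)))
        print(f"relative error of the 16-bit multiply: {rel_err:.4f}")

    Whether that error is acceptable is model-dependent, which is one reason to expose several 16-bit and 8-bit variants rather than a single format.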

    The third vector, and here the credit goes to EC2 and the Nitro System, is offering customers more choice. There are different chips optimized for different workloads, and the best way for customers to save energy is to follow the classic Amazon mantra of the everything store. We offer all different types of chips, including multiple generations of Nvidia GPUs, Intel Habana, and Trainium, and we share with the customer the power profile and performance of each of the instances hosting these chips, so the customer can choose the right chip for the right workload and optimize for the lowest possible power consumption at the lowest cost.

  7. Q. 

    I’ve focused primarily on machine learning. But let’s turn our attention to more general-purpose workloads running in the cloud, and your work on Graviton processors for Amazon EC2. 

    A. 

    Yes, in a way Graviton is the opposite of our work on machine learning, in the sense that the focus is on building server processors for general-purpose workloads running in EC2. The market for general-purpose chips has been there for thirty or forty years, and the workloads themselves haven’t evolved as rapidly as machine learning, so when we started designing, the target was clear to us.

    AWS is three generations into its Graviton chip journey, and Bshara says the company has plans for "many more generations" to come.

    Because this segment of the industry wasn't moving that fast, we felt our challenge was to move the industry faster, specifically by offering a step-function improvement in performance while reducing cost and power consumption. There are many times when you build plans, especially for chips, where the original plans are rosy, but as development progresses you have to make tradeoffs, and the actual product falls short of the original promise. With the first-generation Graviton, we experienced the opposite; we were pleasantly surprised that both performance and power efficiency turned out better than our original plan. That's very rare in our industry.


    The same has been true with Graviton2. Because of this, there has been a massive movement inside Amazon to move general workloads to Graviton2, mainly to save on power, but also on cost. For the same workloads, Graviton2 will on average consume 60% less power than same-generation competitive offerings, and we're passing those cost savings on to customers. Outside Amazon, at least 48 of AWS's top 50 customers have not just tested Graviton2 but have production workloads running on it.

    In May, Graviton3 processors became available, so it's still Day 1; we're only three generations into this journey. We have plans for many more generations, but it's always satisfying and rewarding to hear how boring it is for customers to migrate to Graviton, and to come to work every day and hear success stories from the tens of thousands of customers using Graviton.

  8. Q. 

    You have more than 100 openings on your jobs page. What kind of talent are you seeking? And what are the characteristics of employees who succeed at Annapurna Labs? 

    A. 

    We are seeking individuals who like to work on cutting-edge technology and who approach challenges from first principles, because most of the challenges we confront haven't been dealt with before. While actual experience is important, we place greater value on proper thinking and a principles-first mindset: reasoning from first principles.

    We also value individuals who enjoy working in a dynamic environment where the solution isn't always the same hammer chasing the same nail. Given our principles-first approach, many of our challenges get solved at the chip level, the terminal level, and the system level, so we seek individuals who have systems understanding and are skilled at working across disciplines. It's difficult for an individual with a single discipline, or single-domain knowledge, who isn't willing to challenge her or himself by learning across other domains, to succeed at Annapurna. Last but not least, we look for individuals who focus on delivering, within a team environment. We recognize ideas are "cheap"; what makes the difference is delivering on the idea all the way to production. Ideas are a commodity. Executing on those ideas is not.

  9. Q. 

    I've read that Billy and you share the belief that if you can dream it, you can do it. So what's your dream about future silicon development?

    A. 

    That’s true, and it’s the main reason Billy and I wanted to join AWS, because we had a common vision that there’s so much value we can bring to customers, and AWS leadership and Amazon in general were willing to invest in that vision for the long term. We agreed to be acquired by Amazon not only because of the funding and our common long-term vision, but also because building components for our own data centers would allow us to quickly deliver customer value. We’ve been super happy with the relationship for many reasons, but primarily because of our ability to have customer impact at global scale.

    At Amazon, we operate at such a scale and with such a diversity of customers that we are capable of doing application-specific, or domain-specific, acceleration. Machine learning is one example of that. What we've done with AQUA (Advanced Query Accelerator) for Amazon Redshift is another example, where we've delivered hardware-based acceleration for analytics. Our biggest challenge these days is deciding which projects to prioritize; there's no shortage of opportunities to deliver value. The only way we're able to take this approach is because of AWS. Developing silicon requires significant investment, and the only way to gain a good return on that investment is with high volume and cost-effective development, and we've been able to develop a large and successful customer base with AWS.

    I should also add that before joining Amazon, we thought we really took a long-term perspective. But once you sit in Amazon meetings, you realize what long-term strategic thinking really means; I continue to learn every day about how to master it. Suffice it to say, we have a product roadmap, and a technology and investment strategy, that extends to 2032. As much uncertainty as there is about the future, there are a few things we have high conviction in, and we're investing in them even though they may be ten years out. I obviously can't disclose future product plans, but we continue to dream big on behalf of our customers.

    The AWS Annapurna Labs team has more than 100 job openings for software developers, physical design engineers, design specification engineers, and many other technical roles. The team has development centers in the U.S. and Israel.
