Amazon's acquisition of Annapurna Labs in 2015 has led to, among other advancements, the development of five generations of the AWS Nitro System, three generations of Arm-based Graviton processors, and the AWS Trainium and AWS Inferentia chips, which are optimized for machine learning training and inference. These chips and systems were discussed at the AWS Silicon Innovation Day event on August 3. The event included a talk by Nafea Bshara, AWS vice president and distinguished engineer, on silicon innovation emerging from Annapurna Labs.

How silicon innovation became the ‘secret sauce’ behind AWS’s success

Nafea Bshara, AWS vice president and distinguished engineer, discusses Annapurna Labs’ path to silicon success; the Annapurna co-founder was a featured speaker at the AWS Silicon Innovation Day virtual event.

Nafea Bshara, Amazon Web Services vice president and distinguished engineer, and the co-founder of Annapurna Labs, an Israel-based chipmaker that Amazon acquired in 2015, maintains a low profile, as does his friend and Annapurna co-founder, Hrvoye (Billy) Bilic.

Nafea Bshara, AWS vice president and distinguished engineer.

Each executive’s LinkedIn profile is sparse; in fact, Bilic’s is out of date.

“We hardly do any interviews; our philosophy is to let our products do the talking,” explains Bshara.

Those products, and silicon innovations, have done a lot of talking since 2015, as the acquisition has led to, among other advancements, the development of five generations of the AWS Nitro System, three generations of custom-designed, Arm-based Graviton processors that support data-intensive workloads, and the AWS Trainium and AWS Inferentia chips optimized for machine learning training and inference.

Some observers have described the silicon that emerges from Annapurna Labs in the U.S. and Israel as AWS’s “secret sauce”.

Nafea’s silicon journey began at the Technion in Israel, where he earned bachelor’s and master’s degrees in computer engineering, and where he first met Hrvoye. The two then went on to work for Israel-based Galileo, a company that made chips for networking switches and controllers for networking routers. Galileo was acquired by U.S. semiconductor manufacturer Marvell in 2000, and Bshara and Bilic would work at Marvell for a decade before deciding to venture out on their own.

“We had developed at least 50 different chips together,” Bshara explained, “so we had a track record and a first-hand understanding of customer needs, and the market dynamics. We could see that some market segments were being underserved, and with the support from our spouses, Lana and Liat, and our funding friends Avigdor [Willenz] and Manuel [Alba], we started Annapurna Labs.”

That was mid-2011, and three and a half years later Amazon acquired the company. The two friends have continued their journey at Amazon, where their team’s work has spoken for itself.

Last year, industry analyst David Vellante praised AWS’s “revolution in system architecture.”

“Much in the same way that AWS defined the cloud operating model last decade, we believe it is once again leading in future systems. The secret sauce underpinning these innovations is specialized designs… We believe these moves position AWS to accommodate a diversity of workloads that span cloud, data center as well as the near and far edge.”

Annapurna’s work was highlighted during the AWS Silicon Innovation Day virtual event on August 3, where Nafea was a featured speaker. The broadcast also included a keynote from David Brown, vice president, Amazon EC2; a talk on the history of AWS silicon innovation from James Hamilton, Amazon senior vice president and distinguished engineer, who holds more than 200 patents in 22 countries spanning server and datacenter infrastructure, databases, and cloud computing; and a fireside chat on the Nitro System with Anthony Liguori, AWS vice president and distinguished engineer, and Jeff Barr, AWS vice president and chief evangelist.

In advance of the silicon-innovation event, Amazon Science connected with Bshara to discuss the history of Annapurna, how the company and the industry have evolved in the past decade, and what the future portends.

  1. Q. 

    You co-founded Annapurna Labs just over 11 years ago. Why Annapurna?

    A. 

     I co-founded the company with my longtime partner, Billy, and with an amazing set of engineers and leaders who believed in the mission. We started Annapurna Labs because we looked at the way the chip industry was investing in infrastructure and data centers; it was minuscule at that time because everybody was going after the gold rush of mobile phones, smartphones, and tablets.

    We believed the industry was over-indexing on investment in mobile and underinvesting in the data center, leaving the data-center market underserved. On top of that, there was growing disappointment with how ineffective and unproductive chip development had become, especially compared with software development. The productivity of software developers had improved significantly over the previous 25 years, while the productivity of chip developers hadn’t improved much since the ’90s. So in assessing the opportunity, we saw a data-center market that was being underserved, and an opportunity to redefine chip development with greater productivity and a better business model. Those factors led us to start Annapurna Labs.

  2. Q. 

    How has the chip industry evolved in the past 11 years?

    A. 

    The chip industry realized, a bit late, but nevertheless realized that productivity and time to market needed to be addressed. While Annapurna has been a pioneer in advancing productivity and time to market, many others are following in our footsteps and transitioning to a building-blocks-centric development mindset, similar to how the software industry moved toward object-oriented, and service-oriented software design.

    Chip companies have now transitioned to what we refer to as an intellectual property-oriented, or IP-oriented, correct-by-design approach. Secondly, the chip industry has adopted the cloud. Cloud adoption has led to an explosion of compute power for building chips. Using the cloud, we are able to use compute in a ‘bursty’ way and in parallel. We and our chip-industry colleagues couldn’t deliver the silicon we do today without the cloud. This has led to the creation of a healthy market where chip companies have realized they don’t need to build everything in house, in much the same way software companies have realized they can buy libraries from open source or other library providers. The industry has matured to the point where now there is a healthy business model around buying building blocks, or IPs, from providers like Arm, Synopsys, Alphawave, or Cadence.
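
    To make that ‘bursty’, parallel use of cloud compute concrete, here is a minimal, hypothetical sketch of fanning chip-verification regressions out as parallel cloud jobs using boto3 and AWS Batch; the job queue, job definition, and test names are illustrative placeholders, not details Bshara describes.

    ```python
    import boto3

    # Hypothetical fan-out of RTL regression simulations as parallel cloud jobs.
    # The queue, job definition, and test names are illustrative placeholders.
    batch = boto3.client("batch", region_name="us-east-1")

    regression_tests = ["pcie_link_up", "ddr_training", "dma_stress"]

    for test in regression_tests:
        batch.submit_job(
            jobName=f"rtl-regression-{test}",
            jobQueue="chip-verification-queue",
            jobDefinition="rtl-sim-job:1",
            containerOverrides={"command": ["run_sim.sh", test]},
        )
    ```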

  3. Q. 

    Annapurna Labs was named after one of the tallest peaks in the Himalayas that’s regarded as one of the most dangerous mountains to climb. What's been the tallest peak you've had to climb?

    A. 

    I’m up in the cloud; I don’t need to climb anything [laughing]. Yes, Billy and I picked the name Annapurna Labs for a couple of reasons. First, Billy and I originally planned to climb Annapurna before we started the company. But then we got excited about the idea, secured funding, and suddenly time was of the essence, so we put our climbing plans on hold and started the company. We called it Annapurna because at that time – and it’s true even today – there is a high barrier to entry in starting a chip company. The challenge is steep, and the risk is high, so it’s just like climbing Annapurna. We also wanted to reach a point above the clouds, where you can see things very clearly and without clutter. That’s always been a mantra for us as a company: avoid the clutter, and look far into the future to understand what the customer really needs versus getting distracted by the day-to-day noise.

  4. Q. 

    What are the unique challenges you face in designing chips for ML training and inference versus more general CPU designs?

    A. 

    First, I want to emphasize the challenge we didn’t have to worry about: with the strong foundation, methodologies, and engineering muscle we built delivering multiple generations of Nitro, we had confidence in our ability to build the chips and manufacture them at high volume and high quality. So that was a major thing we didn’t need to worry about. Designing for machine learning is one of the most challenging, but also most rewarding, tasks I’ve had the pleasure to participate in. There is an insatiable demand for machine learning right now, so anyone with a good product won’t have any issues finding customer demand. The demand is there, but there are a couple of challenges.


    The first is that customers want ‘just works’ solutions, because they have enough challenges to work on the science side. So they are looking for a frictionless migration from the incumbent, let’s say GPU-based machine learning, to AWS Trainium or AWS Inferentia. Our biggest challenge is to hide all the complexity so that migrating is, as we refer to it internally, boring. We don’t want our customers, the scientists and researchers, to have to think about moving from one piece of hardware to another. This is a challenge because the incumbent GPU vendors, specifically NVIDIA, have done a very good job developing broadly adopted technologies. The customer shouldn’t see or experience any of the hard work we’ve done in developing our chips; what the customer should experience is that it’s transparent and frictionless to transition to Inferentia and Trainium. That’s a hefty task and one of our internal challenges as a team.

    "The customer shouldn’t see or experience any of the hard work we’ve done in developing our chips; what the customer should experience is that it’s transparent and frictionless to transition to Inferentia and Trainium," says Bshara.

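    As a rough sketch of what that ‘boring to migrate’ experience looks like in practice, the example below compiles a stock PyTorch model for Trainium and Inferentia2 using the publicly documented torch-neuronx trace API; the model choice and input shape are assumptions made for illustration, not details from the interview.

    ```python
    import torch
    import torch_neuronx                      # AWS Neuron SDK integration for PyTorch
    from torchvision.models import resnet50

    # The model code itself is unchanged from a GPU workflow; only the
    # ahead-of-time compilation step below targets the NeuronCore accelerator.
    model = resnet50().eval()
    example = torch.rand(1, 3, 224, 224)

    neuron_model = torch_neuronx.trace(model, example)

    # After compilation, inference looks like ordinary PyTorch.
    with torch.no_grad():
        output = neuron_model(example)
    print(output.shape)
    ```
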
    The second challenge is more external; it’s the fact that science and machine learning are moving very fast. As an organization that is building hardware, our job is to predict what customers will need three, four, five years down the road, because the development cycle for a chip can be two years, and then it gets deployed for three years. The lifecycle is around five years, and trying to predict how the needs of scientists and the machine-learning community will evolve over that time span is difficult. Unlike CPU workloads, which aren’t evolving very quickly, machine learning workloads are, and it’s a bit of an art to keep pace. I would give ourselves a high score, not a perfect score, in being efficient in terms of execution and cost while still being future-proof. It’s the art of predicting what customers will need three years from now, while still executing on time and budget. These things only come with experience, and I’m fortunate to be part of a great team that has the experience to strike the right balance between cost, schedule, and future-proofing the product.

  5. Q. 

    At the recent re:MARS conference Rohit Prasad, Amazon senior vice president and Alexa head scientist, said the voice assistant is interacting with customers billions of times each week. Alexa is powered by EC2 Inf1 instances, which use AWS Inferentia chips. Why is it more effective for Alexa workloads to take advantage of this kind of specialized processing versus more general-purpose GPUs?

    A. 

    Alexa is one of those Amazon technologies that we want to bring to as many people as possible. It’s also a great example of the Amazon flywheel: the more people use it, the more value it delivers. One of our goals is to provide this service with the lowest latency possible, and at the lowest cost possible, and over time improve the machine-learning algorithms behind Alexa. When people say improving Alexa, it really means handling much more complex machine learning and much more sophisticated models while maintaining the performance and low latency. Using Inferentia, the chip, and Inf1, the EC2 instances that host these chips, Alexa is able to run much more advanced machine learning algorithms at lower cost and with lower latency than on a standard general-purpose chip. It’s not that the general-purpose chip couldn’t do the job; it’s that it would do so at higher cost and higher latency. With Inferentia we deliver lower latency and support much more sophisticated algorithms. The result is that customers have a better experience with Alexa, and benefit from a smarter Alexa.

  6. Q. 

    AI has been called the new electricity. But as ML models become increasingly large and complex, as you just discussed, there also are concerns that the energy consumption of AI model training and inference is damaging to the environment. At the chip level, what can be done to reduce the environmental impact of ML model training and inference?

    A. 

    What we can do at the chip level, and at the EC2 level, is work along three vectors, which we’re doing right now. The first is to drive power down quickly by using more advanced silicon processes. Every time we build a chip on a more advanced silicon process, we’re utilizing smaller transistors that require less power for the same work. Because of our focus on efficient execution, we can deliver to EC2 customers a new chip based on a more modern, power-efficient silicon process every 18 months or so.

    The second vector is building more technologies, in hardware and in algorithms, to get training and inference done faster. The faster we can handle training and inference, the less power is consumed. For example, one of the technologies we introduced in the latest Trainium chip is stochastic rounding, which, depending on which measure you’re looking at, could accelerate neural network training on some workloads by up to 30%. And 30% less time translates into 30% less power.
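
    The interview doesn’t spell out how stochastic rounding works, but the idea can be sketched in a few lines: instead of always rounding to the nearest representable value, each element is rounded up or down with probability proportional to its distance from the two neighbors, so the rounding error is zero in expectation. The grid-based version below is a simplified illustration, not Trainium’s hardware implementation.

    ```python
    import numpy as np

    def stochastic_round(x, step):
        """Round x onto a grid of spacing `step`, picking the lower or upper
        neighbor with probability proportional to proximity, so the expected
        value of the result equals x and long accumulations stay unbiased."""
        scaled = np.asarray(x, dtype=np.float64) / step
        lower = np.floor(scaled)
        round_up = np.random.random(scaled.shape) < (scaled - lower)
        return (lower + round_up) * step

    # Round-to-nearest would always return 0.12 here; averaging many stochastic
    # roundings recovers the true value of 0.123.
    print(np.mean([stochastic_round(0.123, 0.01) for _ in range(100_000)]))
    ```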

    Another thing we’re doing at the algorithmic level is offering different data types. Historically, machine learning used 32-bit floating point. Now we’re offering multiple versions of 16-bit and a few versions of 8-bit. When these different data types are used, they not only accelerate machine learning training, they significantly reduce the power for the same amount of work. For example, doing matrix multiplication in 16-bit floating point consumes less than one-third the power of doing it in 32-bit floating point. The ability to add things like stochastic rounding or new data types at the algorithmic level provides a step-function improvement in power consumption for the same workload.
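
    A minimal way to see the data-type trade-off described above, using plain NumPy rather than Trainium hardware: the same matrix multiplication is run in 32-bit and 16-bit floating point. The snippet only illustrates the numerics and the halved memory footprint; the power savings Bshara refers to come from narrower multiply-accumulate units and reduced data movement on the accelerator itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((512, 512)).astype(np.float32)
    b = rng.standard_normal((512, 512)).astype(np.float32)

    c_fp32 = a @ b                                             # 32-bit baseline
    c_fp16 = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

    print("bytes per matrix:", a.nbytes, "(FP32) vs", a.astype(np.float16).nbytes, "(FP16)")
    print("normalized error from FP16:",
          np.max(np.abs(c_fp32 - c_fp16)) / np.max(np.abs(c_fp32)))
    ```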

    The third vector, and credit here goes to EC2 and the Nitro System, is offering more choice for customers. There are different chips optimized for different workloads, and the best way for customers to save energy is to follow the classic Amazon mantra of the everything store. We offer all different types of chips, including multiple generations of Nvidia GPUs, Intel Habana accelerators, and Trainium, and we share with customers the power profile and performance of each of the instances hosting these chips, so the customer can choose the right chip for the right workload and optimize for the lowest possible power consumption at the lowest cost.

  7. Q. 

    I’ve focused primarily on machine learning. But let’s turn our attention to more general-purpose workloads running in the cloud, and your work on Graviton processors for Amazon EC2. 

    A. 

    Yes, in a way Graviton is the opposite of our work on machine learning, in the sense that the focus is on building server processors for general-purpose workloads running in EC2. The market for general-purpose chips has been there for thirty or forty years, and the workloads themselves haven’t evolved as rapidly as machine learning, so when we started designing, the target was clear to us.

    AWS is three generations into its Graviton chip journey, and Bshara says the company has plans for "many more generations" to come.

    Because this segment of the industry wasn’t moving that fast, we felt our challenge was to move the industry faster, specifically by offering a step-function improvement in performance while reducing cost and power consumption. Many times when you build plans, especially for chips, the original plans are rosy, but as development progresses you have to make tradeoffs, and the actual product falls short of the original promise. With the first-generation Graviton we experienced the opposite; we were pleasantly surprised that both performance and power efficiency turned out better than our original plan. That’s very rare in our industry.


    The same has been true with Graviton2. Because of this there has been a massive movement inside Amazon for general workloads to move to Graviton2, mainly to save on power, but also on costs. For the same workloads, Graviton2 will on average consume 60% less power than same-generation competitive offerings, and we’re passing on those cost-savings to customers. Outside Amazon, at least 48 of AWS’s top 50 customers have not just tested, but have production workloads running on Graviton2.

    In May, Graviton3 processors became available, so it’s still Day 1: we’re only three generations into this journey, and we have plans for many more generations. It’s always rewarding to hear how boring it is for customers to migrate to Graviton, and incredibly satisfying to come to work every day and hear success stories from the tens of thousands of customers using Graviton.

  8. Q. 

    You have more than 100 openings on your jobs page. What kind of talent are you seeking? And what are the characteristics of employees who succeed at Annapurna Labs? 

    A. 

    We are seeking individuals who like to work on cutting-edge technology and who tackle challenges from first principles, because most of the challenges we confront haven’t been dealt with before. While actual experience is important, we place greater value on proper thinking and a first-principles mindset.

    We also value individuals who enjoy working in a dynamic environment where the solution isn’t always the same hammer hitting the same nail. Given our first-principles approach, many of our challenges get solved at the chip level, the terminal level, and the system level, so we seek individuals who have systems understanding and are skilled at working across disciplines. It’s difficult to succeed at Annapurna with knowledge of only a single discipline or domain and no willingness to stretch into other domains. Last but not least, we look for individuals who focus on delivering, within a team environment. We recognize that ideas are “cheap,” and what makes the difference is delivering on the idea all the way to production. Ideas are a commodity. Executing on those ideas is not.

  9. Q. 

    I've read that Billy and you share the belief that if you can dream it, you can do it. So what's your dream about future silicon development?

    A. 

    That’s true, and it’s the main reason Billy and I wanted to join AWS, because we had a common vision that there’s so much value we can bring to customers, and AWS leadership and Amazon in general were willing to invest in that vision for the long term. We agreed to be acquired by Amazon not only because of the funding and our common long-term vision, but also because building components for our own data centers would allow us to quickly deliver customer value. We’ve been super happy with the relationship for many reasons, but primarily because of our ability to have customer impact at global scale.

    At Amazon, we operate at such a scale and with such a diversity of customers that we are capable of doing application-specific, or domain-specific, acceleration. Machine learning is one example of that. What we’ve done with AQUA (Advanced Query Accelerator) for Amazon Redshift is another example, where we’ve delivered hardware-based acceleration for analytics. Our biggest challenge these days is deciding which projects to prioritize; there’s no shortage of opportunities to deliver value. The only way we’re able to take this approach is because of AWS. Developing silicon requires significant investment, and the only way to get a good return on that investment is through high volume and cost-effective development, and with AWS we’ve been able to build a large and successful customer base.

    I should also add that before joining Amazon we thought we really took a long-term perspective. But once you sit in Amazon meetings, you realize what long-term strategic thinking really means; I continue to learn every day about how to master that. Suffice it to say, we have a product roadmap, and a technology and investment strategy, that extends to 2032. As much uncertainty as there is in the future, there are a few things we have high conviction in, and we’re investing in them even though they may be ten years out. I obviously can’t disclose future product plans, but we continue to dream big on behalf of our customers.

    The AWS Annapurna Labs team has more than 100 job openings for software developers, physical design engineers, design specification engineers, and many other technical roles. The team has development centers in the U.S. and Israel.
