Amazon announces Ocelot quantum chip

Prototype is the first realization of a scalable, hardware-efficient quantum computing architecture based on bosonic quantum error correction.

Today we are happy to announce Ocelot, our first-generation quantum chip. Ocelot represents Amazon Web Services’ pioneering effort to develop, from the ground up, a hardware implementation of quantum error correction that is both resource efficient and scalable. Based on superconducting quantum circuits, Ocelot achieves the following major technical advances: 

  • The first realization of a scalable architecture for bosonic error correction, surpassing traditional qubit approaches in reducing error correction overhead;
  • The first implementation of a noise-biased gate — a key to unlocking the type of hardware-efficient error correction necessary for building scalable, commercially viable quantum computers;
  • State-of-the-art performance for superconducting qubits, with bit-flip times approaching one second in tandem with phase-flip times of 20 microseconds.
The pair of silicon microchips that compose the Ocelot logical-qubit memory chip.

We believe that scaling Ocelot to a full-fledged quantum computer capable of transformative societal impact would require as little as one-tenth as many resources as common approaches, helping bring closer the age of practical quantum computing.

The quantum performance gap

Quantum computers promise to perform some computations much faster — even exponentially faster — than classical computers. This means quantum computers can solve some problems that are forever beyond the reach of classical computing.

Practical applications of quantum computing will require sophisticated quantum algorithms with billions of quantum gates — the basic operations of a quantum computer. But current quantum computers’ extreme sensitivity to environmental noise means that the best quantum hardware today can run only about a thousand gates without error. How do we bridge this gap?

Quantum error correction: the key to reliable quantum computing

Quantum error correction, first proposed theoretically in the 1990s, offers a solution. By sharing the information in each logical qubit across multiple physical qubits, one can protect the information within a quantum computer from external noise. Not only this, but errors can be detected and corrected in a manner analogous to the classical error correction methods used in digital storage and communication.

Recent experiments have demonstrated promising progress, but today’s best logical qubits, based on superconducting or atomic qubits, still exhibit error rates a billion times larger than the error rates needed for known quantum algorithms of practical utility and quantum advantage.

The challenge of qubit overhead

While quantum error correction provides a path to bridging the enormous chasm between today’s error rates and those required for practical quantum computation, it comes with a severe penalty in terms of resource overhead. Reducing logical-qubit error rates requires scaling up the redundancy in the number of physical qubits per logical qubit.

Traditional quantum error correction methods, such as those using the surface error-correcting code, currently require thousands (and if we work really, really hard, maybe in the future, hundreds) of physical qubits per logical qubit to reach the desired error rates. That means that a commercially relevant quantum computer would require millions of physical qubits — many orders of magnitude beyond the qubit count of current hardware.

One fundamental reason for this high overhead is that quantum systems experience two types of errors: bit-flip errors (also present in classical bits) and phase-flip errors (unique to qubits). Whereas classical bits require only correction of bit flips, qubits require an additional layer of redundancy to handle both types of errors.

Although subtle, this added complexity leads to quantum systems’ large resource overhead requirement. For comparison, a good classical error-correcting code could realize the error rate we desire for quantum computing with less than 30% overhead, roughly one-ten-thousandth the overhead of the conventional surface code approach (assuming bit error rates of 0.5%, similar to qubit error rates in current hardware).
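As a back-of-the-envelope check on that classical figure, the Shannon limit for a binary symmetric channel gives the minimum possible redundancy at a given bit error rate. The sketch below is illustrative only (the 30% figure above refers to a practical code; the Shannon limit is the theoretical floor beneath it):

```python
import math

# Back-of-the-envelope check: the Shannon capacity of a binary symmetric
# channel bounds how little redundancy a classical code can get away with.
def binary_entropy(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.005                          # 0.5% bit error rate, as in the text
capacity = 1 - binary_entropy(p)   # best achievable code rate
overhead = 1 / capacity - 1        # extra bits per data bit

print(f"capacity ≈ {capacity:.3f}")   # ≈ 0.955
print(f"overhead ≈ {overhead:.1%}")   # ≈ 4.8%, comfortably under 30%
```

So at these error rates a near-optimal classical code pays only a few percent in redundancy, which is why the thousand-fold overhead of conventional quantum codes stands out so starkly.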

Cat qubits: an approach to more efficient error correction

Quantum systems in nature can be more complex than qubits, which consist of just two quantum states (usually labeled 0 and 1 in analogy to classical digital bits). Take for example the simple harmonic oscillator, which oscillates with a well-defined frequency. Harmonic oscillators come in all sorts of shapes and sizes, from the mechanical metronome used to keep time while playing music to the microwave electromagnetic oscillators used in radar and communication systems.

Classically, the state of an oscillator can be represented by the amplitude and phase of its oscillations. Quantum mechanically, the situation is similar, although the amplitude and phase are never simultaneously perfectly defined, and there is an underlying graininess to the amplitude associated with each quantum of energy one adds to the system.

These quanta of energy are what are called bosonic particles, the best known of which is the photon, associated with the electromagnetic field. The more energy we pump into the system, the more bosons (photons) we create, and the more oscillator states (amplitudes) we can access. Bosonic quantum error correction, which relies on bosons instead of simple two-state qubit systems, uses these extra oscillator states to more effectively protect quantum information from environmental noise and to do more efficient error correction.

One type of bosonic quantum error correction uses cat qubits, named after the simultaneously dead and alive cat of Erwin Schrödinger's famous thought experiment. Cat qubits use the quantum superposition of classical-like states of well-defined amplitude and phase to encode a qubit’s worth of information. Just a few years after Peter Shor’s seminal 1995 paper on quantum error correction, researchers began quietly developing an alternative approach to error correction based on cat qubits.

A major advantage of cat qubits is their inherent protection against bit-flip errors. Increasing the number of photons in the oscillator can make the rate of the bit-flip errors exponentially small. This means that instead of increasing qubit count, we can simply increase the energy of an oscillator, making error correction far more efficient.
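The scaling trade-off can be sketched numerically. In a common toy model (the constants below are illustrative assumptions, not measured Ocelot values), the bit-flip time grows exponentially with the mean photon number while the phase-flip rate grows only linearly:

```python
import math

T0 = 1e-6   # assumed base bit-flip timescale in seconds; illustrative only
C = 2.0     # assumed suppression exponent per photon; illustrative only

def bit_flip_time(n_bar):
    # Exponential protection: each added photon multiplies the
    # bit-flip time by a constant factor exp(C).
    return T0 * math.exp(C * n_bar)

def phase_flip_rate(n_bar, gamma=1.0):
    # The trade-off: the phase-flip rate worsens only linearly
    # as the cat state grows.
    return gamma * n_bar

for n_bar in (2, 4, 6):
    print(n_bar, f"{bit_flip_time(n_bar):.2e} s", phase_flip_rate(n_bar))
```

The exponential-versus-linear asymmetry is the whole point: a modest increase in photon number buys orders of magnitude in bit-flip protection at only a linear cost in phase flips.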

The past decade has seen pioneering experiments demonstrating the potential of cat qubits. However, these experiments have mostly focused on single-cat-qubit demonstrations, leaving open the question of whether cat qubits could be integrated into a scalable architecture.

Ocelot: demonstrating the scalability of bosonic quantum error correction

Today in Nature, we published the results of our measurements of Ocelot's quantum error correction performance. Ocelot represents an important step on the road to practical quantum computers, leveraging chip-scale integration of cat qubits to form a scalable, hardware-efficient architecture for quantum error correction. In this approach,

  • bit-flip errors are exponentially suppressed at the physical-qubit level;
  • phase-flip errors are corrected using a repetition code, the simplest classical error-correcting code; and
  • highly noise-biased controlled-NOT (C-NOT) gates, between each cat qubit and ancillary transmon qubits (the conventional qubit used in superconducting quantum circuits), enable phase-flip-error detection while preserving the cat’s bit-flip protection.
Pictorial representation of the logical qubit as implemented in the Ocelot chip. The logical qubit is formed from a linear array of cat data qubits, transmon ancilla qubits, and buffer modes. Buffer modes, one connected to each of the cat data qubits, are used to suppress bit-flip errors, while a repetition code across the linear array of cat data qubits is used to detect and correct phase-flip errors. The repetition code uses noise-biased controlled-NOT gate operations between each pair of neighboring cat data qubits and a shared transmon ancilla qubit to flag and locate phase-flip errors within the cat data qubit array. In this figure, a phase-flip (or Z) error has been detected on the middle cat data qubit.

The Ocelot logical-qubit memory chip, shown schematically above, consists of five cat data qubits, each housing an oscillator that is used to store the quantum data. The storage oscillator of each cat qubit is connected to two ancillary transmon qubits for phase-flip-error detection and paired with a special nonlinear buffer circuit used to stabilize the cat qubit states and exponentially suppress bit-flip errors.
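At the decoding level, the phase-flip detection in this linear array is classical. The sketch below (hypothetical helper functions, not the actual Ocelot decoder) shows the idea: each ancilla compares the parity of two neighboring cat qubits, and the pattern of flipped checks locates a single phase-flip error.

```python
def syndrome(errors):
    # errors[i] = 1 if cat data qubit i has a phase flip.
    # Ancilla i checks the parity of neighbors i and i + 1.
    return [errors[i] ^ errors[i + 1] for i in range(len(errors) - 1)]

def locate_single_flip(s):
    # Map a syndrome back to the index of a single flipped qubit.
    fired = [i for i, bit in enumerate(s) if bit]
    if not fired:
        return None                      # no error detected
    if fired == [0]:
        return 0                         # leftmost data qubit
    if fired == [len(s) - 1]:
        return len(s)                    # rightmost data qubit
    if len(fired) == 2 and fired[1] == fired[0] + 1:
        return fired[1]                  # interior qubit between the two checks
    raise ValueError("not a single-error syndrome")

# A phase flip on the middle qubit of five fires the two middle ancillas,
# as in the figure above.
errs = [0, 0, 1, 0, 0]
print(syndrome(errs))                    # [0, 1, 1, 0]
print(locate_single_flip(syndrome(errs)))  # 2
```

Because bit flips are already exponentially suppressed at the hardware level, this one-dimensional classical decoding is all the code structure the architecture needs.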

Tuning up the Ocelot device involves calibrating the bit- and phase-flip error rates of the cat qubits against the cat amplitude (average photon number) and optimizing the noise-bias of the C-NOT gate used for phase-flip-error detection. Our experimental results show that we can achieve bit-flip times approaching one second, more than a thousand times longer than the lifetime of conventional superconducting qubits.

Critically, this can be accomplished with a cat amplitude as small as four photons, enabling us to retain phase-flip times of tens of microseconds, sufficient for quantum error correction. From there, we run a sequence of error correction cycles to test the performance of the circuit as a logical-qubit memory. In order to characterize the performance of the repetition code and the scalability of the architecture, we studied subsets of the Ocelot cat qubits, representing different repetition code lengths.

The logical phase-flip error rate was seen to drop significantly when the code distance was increased from distance-3 to distance-5 (i.e., from a code with three cat qubits to one with five) across a wide range of cat photon numbers, indicating the effectiveness of the repetition code.
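That improvement from distance-3 to distance-5 is what a below-threshold code should show. In a standard toy model (the prefactor and threshold below are assumptions for illustration, not fitted Ocelot numbers), the logical error rate falls by a constant factor for each unit increase in (d + 1) / 2:

```python
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    # Below threshold (p < p_th), a distance-d code suppresses errors
    # as (p / p_th) ** ((d + 1) // 2).  All parameters here are
    # illustrative assumptions.
    return A * (p / p_th) ** ((d + 1) // 2)

p = 0.003  # assumed physical phase-flip error rate per cycle
for d in (3, 5, 7):
    print(d, logical_error_rate(p, d))
```

In this model each step up in distance multiplies the logical error rate by p / p_th, so operating well below threshold is what makes adding cat qubits pay off.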

When bit-flip errors were included, the total logical error rate was measured to be 1.72% per cycle for the distance-3 code and 1.65% per cycle for the distance-5 code. The comparability of the total error rate of the distance-5 code to that of the shorter distance-3 code, with fewer cat qubits and opportunities for bit-flip errors, can be attributed to the large noise bias of the C-NOT gate and its effectiveness in suppressing bit-flip errors. This noise bias is what allows Ocelot to achieve a distance-5 code with less than a fifth as many qubits — five data qubits and four ancilla qubits, versus 49 qubits for a surface code device.
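The qubit-count arithmetic behind that comparison is simple: a distance-d repetition code is a linear chain of d data qubits with d − 1 ancillas, while a standard (rotated) distance-d surface code uses 2d² − 1 qubits:

```python
def repetition_code_qubits(d):
    # d cat data qubits in a line, plus d - 1 shared transmon ancillas
    return d + (d - 1)

def rotated_surface_code_qubits(d):
    # d*d data qubits plus d*d - 1 measurement ancillas
    return 2 * d * d - 1

for d in (3, 5):
    rep, surf = repetition_code_qubits(d), rotated_surface_code_qubits(d)
    print(f"d={d}: repetition {rep} qubits vs surface {surf} qubits")
```

At d = 5 that is 9 qubits against 49, which is the less-than-a-fifth ratio quoted above, and the gap widens quadratically with distance.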

What we scale matters

From the billions of transistors in a modern GPU to the massive-scale GPU clusters powering AI models, the ability to scale efficiently is a key driver of technological progress. Similarly, scaling the number of qubits to accommodate the overhead required of quantum error correction will be key to realizing commercially valuable quantum computers.

But the history of computing shows that scaling the right component can have massive consequences for cost, performance, and even feasibility. The computer revolution truly took off when the transistor replaced the vacuum tube as the fundamental building block to scale.

Ocelot represents our first chip with the cat qubit architecture, and an initial test of its suitability as a fundamental building block for implementing quantum error correction. Future versions of Ocelot are being developed that will exponentially drive down logical error rates, enabled by both an improvement in component performance and an increase in code distance.

Codes tailored to biased noise, such as the repetition code used in Ocelot, can significantly reduce the number of physical qubits required. In our forthcoming paper “Hybrid cat-transmon architecture for scalable, hardware-efficient quantum error correction”, we find that scaling Ocelot could reduce quantum error correction overhead by up to 90% compared to conventional surface code approaches with similar physical-qubit error rates.

We believe that Ocelot's architecture, with its hardware-efficient approach to error correction, positions us well to tackle the next phase of quantum computing: learning how to scale. Using a hardware-efficient approach will allow us to more quickly and cost effectively achieve an error-corrected quantum computer that benefits society.

Over the last few years, quantum computing has entered an exciting new era in which quantum error correction has moved from the blackboard to the test bench. With Ocelot, we are just beginning down a path to fault-tolerant quantum computation. For those interested in joining us on this journey, we are hiring for positions across our quantum computing stack. Visit Amazon Jobs and enter the keyword “quantum”.

Position Requirements: PhD or foreign equivalent degree in Statistics, Psychometrics, Educational Measurement, Quantitative Psychology, Data Science, Industrial-Organizational (I/O) Psychology, or a related field and one year of research or work experience in the job offered, or as a Research Scientist, Research Assistant, Software Engineer, or a related occupation. Must have 1 year of experience in the following skill(s): 1. large-scale education, licensure, or certification assessment programs. 2. operational psychometric tasks on large-scale education, licensure, or certification assessment programs including item analysis, equating and scaling, item response theory, classical test theory, form and pool assembly, item bank health analysis, standard setting, and job task analysis. 3. at least one of the complex test designs such as linear-on-the-fly testing (LOFT), computerized adaptive testing (CAT). 4. at least one of the following areas including machine learning (ML) or natural language processing (NLP). 5. Programming skills in at least one script-based programming language (R, Python). Amazon.com is an Equal Opportunity-Affirmative Action Employer – Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation. 40 hours / week, 8:00am-5:00pm, Salary Range $136,000/year to $184,000/ year. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, visit: https://www.aboutamazon.com/workplace/employee-benefits.#0000
US, NY, New York
The Sponsored Products and Brands team at Amazon Ads is re-imagining the advertising landscape through novel generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace ecosystem. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. Key job responsibilities As a Senior Applied Scientist on our team, you will * Develop AI solutions for Sponsored Brands advertiser and shopper experiences. Build recommendation systems that leverage generative models to develop and improve campaigns. * You invent and design new solutions for scientifically-complex problem areas and/or opportunities in new business initiatives. * You drive or heavily influence the design of scientifically-complex software solutions or systems, for which you personally write significant parts of the critical scientific novelty. You take ownership of these components, providing a system-wide view and design guidance. These systems or solutions can be brand new or evolve from existing ones. * Define a long-term science vision and roadmap for our Sponsored Brands advertising business, driven from our customers' needs, translating that direction into specific plans for applied scientists and engineering teams. This role combines science leadership, organizational ability, technical strength, product focus, and business understanding. 
* Work closely with engineers and product managers to design, implement and launch AI solutions end-to-end; * Design and conduct A/B experiments to evaluate proposed solutions based on in-depth data analyses; * Think big about the arc of development of Gen AI over a multi-year horizon, and identify new opportunities to apply these technologies to solve real-world problems * Effectively communicate technical and non-technical ideas with teammates and stakeholders; * Translate complex scientific challenges into clear and impactful solutions for business stakeholders. * Mentor and guide junior scientists, fostering a collaborative and high-performing team culture. * Stay up-to-date with advancements and the latest modeling techniques in the field About the team We are on a mission to make Amazon the best in class destination for shoppers to discover, engage, and purchase relevant products, from brands that are relevant to them. In this role, you will design and implement Gen AI solutions that help millions of advertisers create more effective ad campaigns with intelligent recommendations, while improving the overall experience at Amazon's global scale.
US, WA, Seattle
The Sponsored Products and Brands team at Amazon Ads is re-imagining the advertising landscape through industry leading generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. Key job responsibilities We are looking for an Applied Science Manager to lead the Insights & Prompt Generation vertical within the Conversational Discovery Experiences (CAX) team in Sponsored Products and Brands (SPB). This team owns prompt generation, quality, personalization, and coverage for Sponsored Prompts, a new conversational ad format powered by large language models (LLMs) that helps shoppers discover products across Amazon.com. As an Applied Science Manager, you will lead a team of applied scientists and engineers to build and scale the prompt generation pipeline, develop new prompt themes and quality frameworks, and drive coverage expansion across all surfaces. You will own the science roadmap for prompt generation and personalization. You will define the metrics that measure prompt effectiveness and drive experimentation to improve CTR, helpfulness, and advertiser outcomes. This role requires strong technical depth in NLP, LLMs, and information retrieval, combined with the ability to manage and grow a science team, set research direction, and influence product strategy. 
You will work across organizational boundaries with engineering, product, and business teams to translate science investments into measurable business impact.