Quantum key distribution and authentication: Separating facts from myths

Key exchange protocols and authentication mechanisms solve distinct problems and must be integrated in a secure communication system.

Quantum key distribution (QKD) is a technology that leverages the laws of quantum physics to securely share secret information between distant communicating parties. With QKD, quantum-mechanical properties ensure that if anyone tries to tamper with the secret-sharing process, the communicating parties will know. Keys established through QKD can then be used in traditional symmetric encryption or with other cryptographic technologies to secure communications.
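As a rough illustration of that last step, the sketch below (Python, using the third-party cryptography package) encrypts a payload with AES-GCM under a 256-bit symmetric key. In a real deployment the key would come from the QKD system; os.urandom stands in as a placeholder here.

```python
# Minimal sketch: protecting data with a symmetric key delivered by a QKD system.
# Assumes the "cryptography" package is installed; qkd_key is a placeholder.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

qkd_key = os.urandom(32)          # placeholder for a 256-bit key delivered by QKD
nonce = os.urandom(12)            # AES-GCM nonce; must never repeat for a given key
aead = AESGCM(qkd_key)

ciphertext = aead.encrypt(nonce, b"sensitive payload", b"header")
plaintext = aead.decrypt(nonce, ciphertext, b"header")
assert plaintext == b"sensitive payload"
```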

“Record now, decrypt later” (RNDL) is a cybersecurity risk arising from advances in quantum computing. The term refers to the situation in which attackers record encrypted data today, even though they cannot decrypt it immediately. They store this data with the expectation that future quantum computers will be powerful enough to break the cryptographic algorithms currently securing it. Sensitive information such as financial records, healthcare data, or state secrets could be at risk, even years after it was transmitted.

Mitigating RNDL requires adopting quantum-resistant cryptographic methods, such as post-quantum cryptography (PQC) and/or quantum key distribution (QKD), to ensure confidentiality against future quantum advancements. AWS has invested in the migration to post-quantum cryptography to protect the confidentiality, integrity, and authenticity of customer data.

Quantum communication is important enough that in 2022, three of its pioneers won the Nobel Prize for physics. However, misconceptions about QKD’s role still persist. One of them is that QKD lacks practical value because it “doesn’t solve the authentication problem”. This view can obscure the broad benefits that QKD brings to secure communications when integrated properly into existing systems.

QKD should be viewed as a complement to — rather than a replacement for — existing cybersecurity frameworks. Functionally, QKD solves the same problem solved by other key establishment protocols, including the well-known Diffie-Hellman (DH) method and the module-lattice-based key encapsulation mechanism (ML-KEM), recently standardized by NIST as FIPS 203 — but it does so in a fundamentally different way. Like those methods, QKD depends on strong authentication to defend against threats such as man-in-the-middle attacks, where an attacker poses as one of the communicating parties.


In short, key exchange protocols and authentication mechanisms are distinct security primitives that solve different problems, and they must be integrated in a secure communication system.

The challenge, then, is not to give QKD an authentication mechanism but to understand how it can be integrated with other established mechanisms to strengthen the overall security infrastructure. As quantum technologies continue to evolve, it’s important to shift the conversation from skepticism about authentication to consideration of how QKD can be thoughtfully and practically implemented to address today’s and tomorrow’s cybersecurity needs — such as the need to mitigate the “record now, decrypt later” (RNDL) attack (see sidebar).

Understanding the role of authentication in QKD

When discussing authentication in the context of QKD, we focus on the classical digital channel that the parties use to exchange information about their activities on the quantum channel. This isn’t about user authentication methods, such as logging in with passwords or biometrics, but rather about authenticating the communicating entities and the data exchanged. Entity authentication ensures that the parties are who they claim to be; data authentication guarantees that the information received is the same as what was sent by the claimed source. QKD protocols include a classical-communication component that uses both authentication methods to assure the overall security of the interaction.

Entity authentication

Entity authentication is the process by which one party (the "prover") asserts its identity, and another party (the "verifier") validates that assertion. This typically involves a registration step, in which the verifier obtains reliable identification information about the prover, as a prelude to any further authentication activity. The purpose of this step is to establish a “root of trust” or “trust anchor”, ensuring that the verifier has a trusted baseline for future authentications.


Several entity authentication methods are in common use, each based on a different type of trust anchor:

  • Public-key-infrastructure (PKI) authentication: In this method, a prover’s certificate is issued by a trusted certificate authority (CA). The verifier relies on this CA, or the root CA in a certificate chain, to establish trust. The certificate acts as the trust anchor that links the prover’s identity to its public key.
  • PGP-/GPG-based (web of trust) authentication: Here, trust is decentralized. A prover’s public key is trusted if it has been vouched for by one or more trusted third parties, such as a mutual acquaintance or a public-key directory. These third parties serve as the trust anchors.
  • Pre-shared-key-based (PSK) authentication: In this case, both the prover and the verifier share a secret key that was exchanged via an offline or other secure out-of-band method. The trust anchor is the method of securely sharing this key a priori, such as a secure courier or another trusted channel.

These trust anchors form the technical backbone of all authentication systems. However, all entity authentication methods are based on a fundamental assumption: the prover is either the only party that holds the critical secret data (e.g., the prover’s private key in PKI or PGP) or the only other party that shares the secret with the verifier (PSK). If this assumption is broken — e.g., the prover's private key is stolen or compromised, or the PSK is leaked — the entire authentication process can fail.
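To make the PSK case concrete, here is a minimal challenge-response sketch in Python. The names and message framing are illustrative rather than drawn from any particular protocol; the point is only that the prover demonstrates knowledge of the shared secret without revealing it.

```python
# Illustrative sketch of pre-shared-key entity authentication via challenge-response.
# The PSK is the trust anchor; framing and names are hypothetical.
import hmac, hashlib, os

psk = os.urandom(32)                      # shared out of band ahead of time

# Verifier issues a fresh random challenge.
challenge = os.urandom(16)

# Prover demonstrates knowledge of the PSK without revealing it.
response = hmac.new(psk, challenge, hashlib.sha256).digest()

# Verifier recomputes the expected response and compares in constant time.
expected = hmac.new(psk, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
```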

Data authentication

Data authentication, also known as message authentication, ensures both the integrity and authenticity of the transmitted data. This means the data received by the verifier is exactly what the sender sent, and it came from a trusted source. As with entity authentication, the foundation of data authentication is the secure management of secret information shared by the communicating parties.


The most common approach to data authentication is symmetric cryptography, where both parties share a secret key. A keyed message authentication code (MAC), such as HMAC or GMAC, is used to compute a unique tag for the transmitted data. This tag allows the receiver to verify that the data hasn’t been altered during transit. The security of this method depends on the collision resistance of the chosen MAC algorithm — that is, the computational infeasibility of finding two or more plaintexts that could yield the same tag — and the confidentiality of the shared key. The authentication tag ensures data integrity, while the secret key guarantees the authenticity of the data origin.
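A minimal Python sketch of this flow, with an illustrative key and message, looks like the following; the receiver recomputes the tag over the received data and compares it in constant time.

```python
# Minimal sketch of symmetric data authentication with HMAC-SHA-256.
# The key and message are illustrative; in a QKD setting the shared key
# would come from pre-shared or previously established key material.
import hmac, hashlib, os

shared_key = os.urandom(32)
message = b"classical post-processing message"

tag = hmac.new(shared_key, message, hashlib.sha256).digest()   # computed by the sender

# Receiver recomputes the tag over the received data and compares.
ok = hmac.compare_digest(tag, hmac.new(shared_key, message, hashlib.sha256).digest())
assert ok
```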

An alternative method uses asymmetric cryptography with digital signatures. In this approach, the sender generates a signature using a private key and the data itself. The receiver, or anyone else, can verify the signature’s authenticity using the sender’s public key. This method provides data integrity through the signature algorithm, and it assures data origin authenticity as long as only the sender holds the private key. In this case, the public key serves as a verifiable link to the sender, ensuring that the signature is valid.
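For comparison, here is a minimal signature-based sketch using Ed25519 from the third-party cryptography package (the choice of Ed25519 is illustrative). Note that verification is only meaningful once the public key has been bound to the sender's identity, for example through a certificate.

```python
# Minimal sketch of data authentication with a digital signature (Ed25519).
# Key handling is simplified; in practice the public key must be bound to the
# sender's identity before verification means anything.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

data = b"message to authenticate"
signature = private_key.sign(data)          # sender, holding the private key

try:
    public_key.verify(signature, data)      # anyone holding the public key can verify
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```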

In both the symmetric and the asymmetric approaches, successful data authentication depends on effective entity authentication. Without knowing and trusting the identity of the sender, the verification of the data’s authenticity is compromised. Therefore, the strength of data authentication is closely tied to the integrity of the underlying entity authentication process.

Authentication in QKD

The first quantum cryptography protocol, known as BB84, was developed by Bennett and Brassard in 1984. It remains foundational to many modern QKD technologies, although notable advancements have been made since then.


QKD protocols are unique because they rely on the fundamental principles of quantum physics, which allow for “information-theoretic security.” This is distinct from the security provided by computational complexity. In the quantum model, any attempt to eavesdrop on the key exchange is detectable, providing a layer of security that classical cryptography cannot offer.
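The toy simulation below shows only BB84's sifting step, with no eavesdropper and no channel noise, to make the origin of the shared raw key concrete. Real systems add error estimation, error correction, and privacy amplification on top; this is an illustrative sketch, not a faithful model.

```python
# Toy simulation of BB84 sifting (no eavesdropper, no noise).
import secrets

n = 32
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# When Bob measures in Alice's basis he recovers her bit; otherwise the outcome
# is random, and that position is discarded during sifting.
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

sifted_alice = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
sifted_bob   = [b for b, ab, bb in zip(bob_bits,   alice_bases, bob_bases) if ab == bb]
assert sifted_alice == sifted_bob   # holds in this noiseless, attacker-free toy model
```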

QKD relies on an authenticated classical communication channel to ensure the integrity of the data exchanged between parties, but it does not depend on the confidentiality of that classical channel. (This is why RNDL is not an effective attack against QKD.) Authentication guarantees only that the entities establishing keys are legitimate, protecting against man-in-the-middle attacks.

Currently, several commercial QKD products are available, many of which implement the original BB84 protocol and its variants. These solutions offer secure key distribution in real-world applications, and they all pair with strong authentication processes to ensure the communication remains secure from start to finish. By integrating both technologies, organizations can build communication infrastructures capable of withstanding both classical and quantum threats.

Authentication in QKD bootstrap: A manageable issue

During the initial bootstrap phase of a QKD system, the authentic classical channel is established using traditional authentication methods based on PKI or PSK. As discussed earlier, all of these methods ultimately rely on the establishment of a trust anchor.


While confidentiality may need to be maintained for an extended period (sometimes decades), authentication is a real-time process. It verifies identity claims and checks data integrity in the moment. Compromising an authentication mechanism at some future point will not affect past verifications. Once an authentication process is successfully completed, the opportunity for an adversary to tamper with it has passed. That is, even if, in the future, a specific authentication mechanism used in QKD is broken by a new technology, QKD keys generated prior to that point are still safe to use, because no adversary can go back in time to compromise past QKD key generation.

This means that the reliance on traditional, non-QKD authentication methods presents an attack opportunity only during the bootstrap phase, which typically lasts just a few minutes. Given that this phase is so short compared with the overall life cycle of a QKD deployment, the potential risks posed by using traditional authentication mechanisms are relatively minor.

Authentication after QKD bootstrap: Not a new issue

Once the bootstrap phase is complete, the QKD devices will have securely established shared keys. These keys can then be used for PSK-based authentication in future communications. In essence, QKD systems can maintain the authenticated classical communication channel by utilizing a small portion of the very keys they generate, ensuring continued secure communication beyond the initial setup phase.
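One way to picture this is a simple key budget: each freshly generated key block reserves a small slice to refresh the authentication key for the next round, and the remainder is released to key consumers. The sketch below is illustrative; the block and slice sizes are arbitrary choices, not drawn from any particular product.

```python
# Sketch of the post-bootstrap "key budget": each block of QKD-generated key is
# split so a small slice refreshes the authentication key for the next round
# while the rest is handed to key consumers. Sizes are illustrative.
import os

AUTH_KEY_BYTES = 32   # reserved per block for PSK-based authentication

def split_generated_key(qkd_key_block: bytes):
    """Return (next_authentication_key, key_material_for_consumers)."""
    if len(qkd_key_block) <= AUTH_KEY_BYTES:
        raise ValueError("key block too small to reserve authentication material")
    return qkd_key_block[:AUTH_KEY_BYTES], qkd_key_block[AUTH_KEY_BYTES:]

# os.urandom stands in for a freshly generated QKD key block.
auth_key, consumer_keys = split_generated_key(os.urandom(256))
```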

It is important to note that if one of the QKD devices is compromised locally for whatever reason, the entire system’s security could be at risk. However, this is not a unique vulnerability introduced by QKD. Any cryptographic system faces similar challenges when the integrity of an endpoint is compromised, and QKD is no more susceptible to endpoint compromise than any other cryptographic system.

Overcoming key challenges to QKD’s role in cybersecurity

So far, we have focused on dispelling myths about QKD’s authentication requirements. Next, we discuss several other challenges to using QKD in practice.

Bridging the gap between QKD theory and implementation

While QKD protocols are theoretically secure, there remains a significant gap between theory and real-world implementations. Unlike traditional cryptographic methods, which rely on well-understood algorithms that can be thoroughly reviewed and certified, QKD systems depend on specialized hardware. This introduces complexity, as the process of reviewing and certifying QKD hardware is not yet mature.


In conventional cryptography, risks like side-channel attacks — which use runtime clues such as memory access patterns or data retrieval times to deduce secrets — are well understood and mitigated through certification processes. QKD systems are following a similar path. The European Telecommunications Standards Institute (ETSI) has made a significant move by introducing the Common Criteria Protection Profile for QKD, the first international effort to create a standardized certification framework for these systems. ISO/IEC has also published standards on security requirements and test and evaluation methods for QKD. These represent crucial steps in building the same level of trust that traditional cryptography enjoys.

Once the certification process is fully established, confidence in QKD’s hardware implementations will continue to grow, enabling the cybersecurity community to embrace QKD as a reliable, cutting-edge solution for secure communication. Until then, the focus remains on advancing the review and certification processes to ensure that these systems meet the highest security standards.

QKD deployment considerations

One of the key challenges in the practical deployment of QKD is securely transporting the keys generated by QKD devices to their intended users. While it is accepted that QKD is a robust mechanism for establishing keys between the QKD devices themselves, it does not cover the secure delivery of keys from a QKD device to the end user (or key consumer).

A schematic representation of two endpoints — site A and site B — that want to communicate safely. The top line represents the user traffic being protected, and the bottom lines are the channels required to establish secure communication. An important practical consideration is how to transmit a key between a QKD device and an end user within an endpoint.

This issue arises whether the QKD system is deployed within a large intranet or a small local-area network. In both cases, the keys must be transported over a non-QKD system. The standard deployment requirement is that the key delivery from the QKD system to the key consumer occurs “within the same secure site”, and the definition of a “secure site” is up to the system operator.


The best practice is to make the boundary of the secure site as small as is practical. One extreme option is to remove the need for transporting keys over classical networks entirely, by putting the QKD device and the key user’s computing hardware in the same physical unit. This eliminates the need for traditional network protocols for key transport and realizes the full security benefits of QKD without external dependency. In cases where the extreme option is infeasible or impractical, the secure site should cover only the local QKD system and the intended key consumers.
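As a purely hypothetical illustration of key delivery inside the secure site, the sketch below fetches key material from a local key manager over mutually authenticated TLS. The host name, path, and response format are made up (loosely inspired by REST-style key-delivery interfaces such as ETSI GS QKD 014) and are not taken from any specific product.

```python
# Hypothetical sketch: a key consumer fetching a key from a local QKD key
# manager over mutually authenticated TLS within the secure site. Endpoint,
# certificates, and response shape are illustrative assumptions.
import requests

resp = requests.get(
    "https://qkd-km.secure-site.local/api/v1/keys/consumer-b/enc_keys",
    cert=("consumer_cert.pem", "consumer_key.pem"),   # client (consumer) authentication
    verify="secure_site_ca.pem",                      # server (key manager) authentication
    timeout=5,
)
resp.raise_for_status()
key_material = resp.json()   # e.g., {"keys": [{"key_ID": "...", "key": "..."}]}
```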

Conclusion

QKD-generated keys will remain secure even when quantum computers emerge, and communications using these keys are not vulnerable to RNDL attacks. For QKD to reach its full potential, however, the community must collaborate closely with the broader cybersecurity ecosystem, particularly in areas like cryptography and governance, risk, and compliance (GRC). By integrating the insights and frameworks established in these fields, QKD can overcome its current challenges in trust and implementation.

This collective effort is essential to ensure that QKD becomes a reliable and integral part of secure communication systems. As these collaborations deepen, QKD will be well-positioned to enhance existing security frameworks, paving the way for its adoption across industries and applications.

The candidate in this role will own delivery of science products and solutions to help Amazon Devices Sales and Marketing org. make better decisions: product recommendations to customers, segmentation, financial incrementality of marketing initiatives, A/B testing etc. Key job responsibilities The Amazon Devices organization designs, produces and markets Echo Speakers, Kindle e-readers, Fire Tablets, Fire TV Streaming Media Players, Ring and Blink Smart Home & Security products. We are constantly looking to innovate on behalf of customers with new devices in existing or new categories or improving customer experience on existing platforms. The Devices Data Services (DDS) team provides Data Science, Analytics and Engineering support to the broader organization to enable Sales and Marketing activities across all these product lines. We are looking for an innovative, hands-on and customer-obsessed Data Scientist who can be a strategic partner to the product managers and engineers on the team. Our projects span multiple organizations and require coordination of experimentation, economic and causal analysis, and building predictive machine learning models. A successful candidate will be a problem solver who enjoys diving into data, is excited by difficult modeling challenges, is motivated to build something that will eventually become a production software system, and possesses strong communication skills to effectively interface between technical and business teams. In this role, you will be a technical expert with massive impact. You will take the lead on developing advanced ML systems that are key to reaching our customers with the right recommendations at the right time. Your work will directly impact the success of Amazon's growing Devices business. You will work across diverse science/engineering/business teams. You will work on critical data science problems, building high quality, reliable, accurate, and consistent code sets that are aligned with our business needs. Key Performance Areas - Implement statistical or machine learning methods to solve specific business problems. - Improve upon existing methodologies by developing new data sources, testing model enhancements, and fine-tuning model parameters. - Directly contribute to development of modern automated recommendation systems - Build customer-facing reporting tools to provide insights and metrics to track model performance and explain variance - Collaborate with researchers, software developers, and business leaders to define product requirements, provide analytical support, and communicate feedback A day in the life You will work with other scientists, engineers, product managers, and marketers to develop new products that benefit our customers and help us reach our business goals. You will own solutions from end to end: conceptualization, prioritization, development, delivery, and productionalization. About the team We are a full stack science team that empowers product, marketing, and other business leaders to better understand customers who use Amazon devices, make decisions on product development or optimization, and measure the effectiveness of their efforts against our customer’s expectation. Our focus area is to build analytical frameworks that help the organization either access data, better understand the decisions customers are making and why, or assess customer satisfaction.