Formal verification makes RSA faster — and faster to deploy

Optimizations for Amazon's Graviton2 chip boost efficiency, and formal verification shortens development time.

Most secure transactions online are protected by public-key encryption schemes like RSA, whose security depends on the difficulty of factoring large numbers. Public-key encryption improves security because it lets two parties establish the secret keys they will use for the rest of a session without ever sending them in the clear. But because it depends on operations like modular exponentiation of large integers, it introduces significant computational overhead.

Researchers and engineers have introduced all kinds of optimizations to make public-key encryption more efficient, but the resulting complexity makes it difficult to verify that the encryption algorithms are behaving properly. And a bug in an encryption algorithm can be disastrous.

This post explains how Amazon’s Automated Reasoning group improved the throughput of RSA signatures on Amazon’s Graviton2 chip by 33% to 94%, depending on the key size, while also proving the functional correctness of our optimizations using formal verification.

[Image: An AWS Graviton chip]

Graviton2 is a server-class CPU developed by Amazon Annapurna Labs, based on Arm Neoverse N1 cores. To improve the throughput of RSA signatures on Graviton2, we combined various techniques for fast modular arithmetic with assembly-level optimizations specific to Graviton2. To show that the optimized code is functionally correct, we formally verified it using the HOL Light interactive theorem prover, which was developed by a member of our team (John Harrison).

Our code is written in a constant-time style (for example, no secret-dependent branches or memory access patterns) to thwart side-channel attacks, which extract secret information from operational statistics like function execution time. The optimized functions and their proofs are included in Amazon Web Services’ s2n-bignum library of formally verified big-number operations. The functions have also been adopted by AWS-LC, the cryptographic library maintained by AWS, and by its bindings Amazon Corretto Crypto Provider (ACCP) and AWS Libcrypto for Rust (AWS-LC-RS).

Key size (bits)    Baseline throughput (ops/sec)    Improved throughput (ops/sec)    Speedup (%)
2,048              299                              541                              81.0
3,072              95                               127                              33.5
4,096              42                               81                               94.2

Improvements in the throughput of RSA signatures in AWS-LC on Graviton2.

Step 1. Making RSA fast on Graviton2

Optimizing the execution of RSA algorithms on Graviton2 requires the careful placement and use of multiplication instructions. On 64-bit Arm CPUs, the multiplication of two 64-bit numbers, with a product of up to 128 bits (conventionally designated 64×64→128), is accomplished by two instructions: MUL, which produces the lower 64 bits, and UMULH, which produces the upper 64 bits. On Graviton2, MUL has a latency of four cycles and stalls the multiplier pipeline for two cycles after issue, while UMULH has a latency of five cycles and stalls the multiplier pipeline for three cycles after issue. Since Neoverse N1 has a single multiplier pipeline but three addition pipelines, multiplication throughput is around one-tenth the throughput of 64-bit addition.
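To make the 64×64→128 pattern concrete, here is how a single such product looks in portable C. The unsigned __int128 type is a GCC/Clang extension, and on AArch64 a compiler typically lowers the two halves to the MUL and UMULH instructions discussed above; the snippet is purely illustrative and is not taken from s2n-bignum.

#include <stdint.h>

/* 64x64 -> 128-bit multiply, split into low and high 64-bit words.
   On AArch64, the low word corresponds to MUL and the high word to UMULH. */
static void mul_64x64_128(uint64_t a, uint64_t b, uint64_t *lo, uint64_t *hi)
{
    unsigned __int128 p = (unsigned __int128)a * b;
    *lo = (uint64_t)p;          /* lower 64 bits (MUL)   */
    *hi = (uint64_t)(p >> 64);  /* upper 64 bits (UMULH) */
}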

To improve throughput, we (1) applied a different multiplication algorithm, trading multiplication for addition instructions, and (2) used single-instruction/multiple-data (SIMD) instructions to offload a portion of multiplication work to the vector units of the CPU.

Algorithmic optimization

For fast and secure modular arithmetic, Montgomery modular multiplication is a widely used technique. It represents numbers in a special form called Montgomery form: the Montgomery form of x modulo N is xR mod N, where R is a fixed power of two larger than N. When a long sequence of modular operations must be executed, as is the case in the RSA algorithm, keeping intermediate results in Montgomery form makes computation more efficient, because each modular reduction can be done with divisions by R (cheap shifts and word operations) rather than divisions by N.

We implement Montgomery multiplication as a big-integer multiplication followed by a separate Montgomery reduction, which is one of the two standard implementation strategies (the other interleaves the multiplication and reduction steps).
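As a rough sketch of what the separate reduction step does, the word-level Montgomery reduction below computes T · R^(-1) mod N for R = 2^(64k) and an odd k-word modulus N, given the precomputed constant w = -N^(-1) mod 2^64. It is written in portable C (using the GCC/Clang unsigned __int128 extension) purely for illustration; the verified s2n-bignum routines are hand-written assembly and, unlike this sketch, perform the final correction in constant time.

#include <stdint.h>

typedef unsigned __int128 u128;

/* Illustrative word-level Montgomery reduction.
   Inputs (all little-endian 64-bit words):
     n[0..k-1]  : odd modulus N
     t[0..2k-1] : value T < N * 2^(64k), overwritten in place
     w          : precomputed -N^(-1) mod 2^64
   On return, t[k..2k-1] holds T * 2^(-64k) mod N. */
static void montgomery_reduce(uint64_t *t, const uint64_t *n, uint64_t w, int k)
{
    uint64_t extra = 0;                      /* overflow above t[2k-1] */
    for (int i = 0; i < k; i++) {
        uint64_t m = t[i] * w;               /* chosen so t[i] becomes 0 */
        uint64_t carry = 0;
        for (int j = 0; j < k; j++) {        /* t += m * N * 2^(64*i) */
            u128 acc = (u128)m * n[j] + t[i + j] + carry;
            t[i + j] = (uint64_t)acc;
            carry = (uint64_t)(acc >> 64);
        }
        for (int j = i + k; carry != 0 && j < 2 * k; j++) {
            u128 acc = (u128)t[j] + carry;
            t[j] = (uint64_t)acc;
            carry = (uint64_t)(acc >> 64);
        }
        extra += carry;
    }
    /* The result in t[k..2k-1] (plus the 'extra' bit) is below 2N, so at
       most one subtraction of N is needed.  This sketch uses a
       data-dependent comparison; real constant-time code would not. */
    int ge;
    if (extra != 0) {
        ge = 1;
    } else {
        ge = 1;                              /* equality also needs the subtraction */
        for (int j = k - 1; j >= 0; j--) {
            if (t[k + j] != n[j]) { ge = (t[k + j] > n[j]); break; }
        }
    }
    if (ge) {
        uint64_t borrow = 0;
        for (int j = 0; j < k; j++) {
            u128 diff = (u128)t[k + j] - n[j] - borrow;
            t[k + j] = (uint64_t)diff;
            borrow = (uint64_t)((diff >> 64) != 0);
        }
        /* a final borrow, if any, cancels against 'extra' */
    }
}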

On Graviton2, the benefit of this approach is that we can use the well-known Karatsuba algorithm to trade costly multiplications for addition operations. The Karatsuba algorithm decomposes a multiplication into three smaller multiplications plus some extra additions, subtractions, and shifts. It can be applied recursively, and for large numbers it is more efficient than the quadratic schoolbook algorithm.

We use Karatsuba’s algorithm for power-of-two bit sizes, such as 2,048 and 4,096 bits; for other sizes (e.g., 3,072 bits), we fall back to quadratic schoolbook multiplication. Karatsuba multiplication can be further optimized when the two operands are equal, so we also wrote functions specialized for squaring.
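For intuition, here is one level of the Karatsuba decomposition at toy scale, computing a 64×64-bit product from three small multiplications instead of the usual four. s2n-bignum applies the same idea to the multi-word halves of 2,048- and 4,096-bit numbers, in assembly rather than in C; this sketch again relies on the GCC/Clang unsigned __int128 extension.

#include <stdint.h>

typedef unsigned __int128 u128;

/* One level of Karatsuba at toy scale: a 64x64 -> 128-bit product built
   from three small multiplications plus a few additions and shifts. */
static u128 karatsuba_64(uint64_t a, uint64_t b)
{
    uint64_t a0 = (uint32_t)a, a1 = a >> 32;   /* low/high 32-bit halves */
    uint64_t b0 = (uint32_t)b, b1 = b >> 32;

    u128 z0 = (u128)a0 * b0;                               /* low halves  */
    u128 z2 = (u128)a1 * b1;                               /* high halves */
    u128 z1 = (u128)(a0 + a1) * (b0 + b1) - z0 - z2;       /* cross terms */

    return (z2 << 64) + (z1 << 32) + z0;
}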

With these optimizations we achieved a 31–49% speedup in 2,048- and 4,096-bit RSA signatures compared with our original code.

Microarchitectural optimization

Many Arm CPUs implement the Neon SIMD architecture extension. It adds a file of 128-bit registers, which can be viewed as vectors of 8-, 16-, 32-, or 64-bit elements, along with SIMD instructions that operate on some or all of those vector elements in parallel. Furthermore, SIMD instructions use different pipelines than scalar instructions do, so the two kinds of instructions can execute in parallel.

Vectorization strategy. Vectorization is a process that replaces sequential executions of the same operation with a single operation over multiple values; it usually increases efficiency. Using SIMD instructions, we vectorized scalar 64-bit multiplications.

For big-integer multiplication, vectorized 64-bit multiply-low code nicely overlapped with scalar 64-bit multiply-high instructions (UMULH). For squaring, vectorizing two 64×64→128-bit squaring operations worked well. For the multiplications occurring in Montgomery reduction, vectorizing 64×64→128-bit multiplications and 64×64→64 multiply-lows worked. To choose which scalar multiplications to vectorize, we wrote a script that enumerated differently vectorized versions of the code and timed their execution. For short code fragments, exhaustive enumeration was possible, but for larger ones, we had to rely on experience. The overall solution was chosen only after extensive experiments with other alternatives, such as those described by Seo et al. at ICISC’14.
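The arithmetic identity that lets 32-bit SIMD multiplies stand in for part of a scalar 64-bit multiply-low is sketched below in plain C. The production code expresses the same partial products with Neon widening multiplies spread across vector lanes and carefully scheduled, which we do not reproduce here.

#include <stdint.h>

/* Low 64 bits of a 64x64-bit product, rebuilt from 32-bit partial
   products (all arithmetic wraps modulo 2^64, which is exactly what a
   multiply-low needs). */
static uint64_t mul_lo_via_32bit(uint64_t a, uint64_t b)
{
    uint64_t a0 = (uint32_t)a, a1 = a >> 32;
    uint64_t b0 = (uint32_t)b, b1 = b >> 32;

    uint64_t lo  = a0 * b0;              /* 32x32 -> 64 */
    uint64_t mid = a0 * b1 + a1 * b0;    /* cross terms */

    return lo + (mid << 32);             /* == (a * b) mod 2^64 */
}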

Although the scalar and SIMD units are able to operate in parallel, it is sometimes necessary to move inputs and intermediate results between integer and SIMD registers, and this brings significant complications. The FMOV instruction copies data from a 64-bit scalar register to a SIMD register, but it uses the same pipeline as the scalar multiplier, so its use would reduce scalar-multiplier throughput.

The alternative, loading into a vector register first and then using MOV to copy it to a scalar register, has lower latency, but it occupies the SIMD pipeline and hence lowers the throughput of SIMD arithmetic operations. Somewhat counterintuitively, the best solution was to issue two separate memory loads, one into the integer register and one into the SIMD register, taking care with their relative placement. We did still use MOV instructions to copy certain results that were already in SIMD registers into integer registers, because that was faster than a round trip through store and load instructions.

Fast constant-time table lookup code. Another independent improvement was the reimplementation of a vectorized constant-time lookup table for a fast modular-exponentiation algorithm. Combined with the earlier optimizations, this raises the speedup over our initial code to 80–94% for 2,048- and 4,096-bit RSA signatures and to 33% for 3,072-bit signatures.
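The idea behind a constant-time table lookup is to read every entry and keep only the requested one by masking, so that the sequence of memory accesses does not depend on the secret index. A scalar C sketch of the pattern follows; the actual routine in s2n-bignum is vectorized Neon assembly, and the function name and interface here are invented for illustration.

#include <stdint.h>
#include <stddef.h>

/* Constant-time table lookup sketch: scan the whole table and mask out
   every entry except the one at secret_index, so the memory access
   pattern is independent of the secret. */
static void ct_table_lookup(uint64_t *out, const uint64_t *table,
                            size_t num_entries, size_t words_per_entry,
                            size_t secret_index)
{
    for (size_t j = 0; j < words_per_entry; j++)
        out[j] = 0;

    for (size_t i = 0; i < num_entries; i++) {
        uint64_t diff = (uint64_t)i ^ (uint64_t)secret_index;
        /* all-ones when i == secret_index, all-zeros otherwise, no branch */
        uint64_t mask = ((diff | (0 - diff)) >> 63) - 1;
        for (size_t j = 0; j < words_per_entry; j++)
            out[j] |= table[i * words_per_entry + j] & mask;
    }
}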

Instruction scheduling. Even though Graviton2 is an out-of-order CPU, carefully scheduling instructions is important for performance, due to the finite capacity of components like reorder buffers and issue queues. The implementations discussed here were obtained by manual instruction scheduling, which led to good results but was time consuming.

We also investigated automating the process using the SLOTHY superoptimizer, which is based on constraint solving and a (simplified) microarchitecture model. With additional tweaks to Montgomery reduction to precalculate some numbers used in Karatsuba, SLOTHY optimization enabled a 95–120% improvement on 2,048-/4,096-bit throughputs and 46% on 3,072-bit! However, this method is not yet incorporated into AWS-LC since verifying the automated scheduling proved to be challenging. Studying the potential for automatically proving correctness of scheduling optimizations is a work in progress.

Step 2. Formally verifying the code

To deploy the optimized code in production, we need to ensure that it works correctly. Random testing is a cheap way to quickly check simple and known cases, but to deliver a higher level of assurance, we rely on formal verification. In this section we explain how we apply formal verification to prove the functional correctness of cryptographic primitives.

Introduction to s2n-bignum

AWS’s s2n-bignum is both (1) a framework for formally verifying assembly code in x86-64 and Arm and (2) a collection of fast assembly functions for cryptography, verified using the framework itself.

Specification in s2n-bignum. Every assembly function in s2n-bignum — including the new assembly functions used in RSA — has a specification stating its functional correctness. A specification states that for any program state satisfying some precondition, the output state of the program must satisfy some postcondition. For example, bignum_mul_4_8(uint64_t *z, uint64_t *x, uint64_t *y) is intended to multiply two 256-bit (four-word) numbers producing a 512-bit (eight-word) result. Its (abbreviated) precondition over an input state s is

  aligned_bytes_loaded s (word pc) bignum_mul_4_8_mc
∧ read PC s = word pc
∧ C_ARGUMENTS [z, x, y] s
∧ bignum_from_memory (x,4) s = a
∧ bignum_from_memory (y,4) s = b

This means that the machine code of bignum_mul_4_8 is loaded at the address pc, which is also the current value of the program counter PC (aligned_bytes_loaded and read PC), symbolic values are assigned to the function arguments according to C’s application binary interface (C_ARGUMENTS ...), and the big integers logically represented by the symbols a and b are stored in the four-word memory regions pointed to by x and y (bignum_from_memory ...).

The (abbreviated) postcondition over an output state s is

bignum_from_memory (z,8) s = a * b

This means that the multiplied result a * b is stored in the eight-word buffer starting at location z.
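For intuition, the behavior pinned down by this pre/postcondition pair is ordinary 4-word by 4-word multiplication. A plain C restatement of that mathematical behavior might look as follows; this is a reference model of the specification, not the verified assembly (which is far more heavily optimized), and it uses the GCC/Clang unsigned __int128 extension.

#include <stdint.h>

typedef unsigned __int128 u128;

/* Schoolbook reference for the specification above: multiply the 4-word
   little-endian numbers at x and y and store the 8-word product at z. */
static void bignum_mul_4_8_reference(uint64_t z[8], const uint64_t x[4],
                                     const uint64_t y[4])
{
    for (int i = 0; i < 8; i++)
        z[i] = 0;

    for (int i = 0; i < 4; i++) {
        uint64_t carry = 0;
        for (int j = 0; j < 4; j++) {
            u128 acc = (u128)x[i] * y[j] + z[i + j] + carry;
            z[i + j] = (uint64_t)acc;
            carry = (uint64_t)(acc >> 64);
        }
        z[i + 4] = carry;
    }
}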

One more component is a relation between the input and output states that must be satisfied:

(MAYCHANGE_REGS_AND_FLAGS_PERMITTED_BY_ABI;
MAYCHANGE [memory :> bytes(z,8 * 8)]) (s_in,s_out)

This means that executing the code may change registers/flags permitted by the application binary interface (ABI) and the eight-word buffer starting at z, but all other state components must remain unchanged.

Verifying assembly using HOL Light. To prove that the implementation is correct with respect to the specification, we use the HOL Light interactive theorem prover. In contrast to “black-box” automated theorem provers, tools like HOL Light emphasize a balance between automating routine proof steps and allowing explicit, and programmable, user guidance. When a proof exists on paper or inside someone’s head, a proficient user can effectively rewrite the proof in an interactive theorem prover. S2n-bignum uses a combination of two strategies to verify a program:

Symbolic execution. Given a representation of the input program state using symbolic variables in place of specific values, symbolic execution infers a symbolic output state at the end of some code snippet, in effect doing a more rigorous and generalized form of program execution. While this still leaves the postcondition to be proved, it strips away artifacts of program execution and leaves a purely mathematical problem.

Intermediate annotations in the style of Floyd-Hoare logic. Each intermediate assertion serves as a postcondition for the preceding code and a precondition for the subsequent code. The assertion need contain only the details that are necessary to prove its corresponding postcondition. This abstraction helps make symbolic simulation more tractable, in terms of both automated-reasoning capacity and the ease with which humans can understand the result.
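As a toy illustration of the style, written in C with comments standing in for assertions (the real proofs state such assertions over machine states inside HOL Light), an intermediate assertion records just enough about the first half of a computation to prove the postcondition of the second half:

#include <stdint.h>

static uint64_t sum_via_doubling(uint64_t a, uint64_t b)
{
    /* Precondition: a, b < 2^62, so no intermediate result overflows. */
    uint64_t t = (a + b) * 2;
    /* Intermediate assertion: t == 2 * (a + b).  How t was computed no
       longer matters; only this fact is carried forward. */
    uint64_t r = t / 2;
    /* Postcondition: r == a + b, provable from the assertion alone. */
    return r;
}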

We assume that the Arm hardware behaves in conformance with s2n-bignum’s machine model, but the model was developed with care, and it was validated by extensively cross-checking its interpretations against real hardware.

Future formal-verification improvements. The formal verification for s2n-bignum does not yet cover nonfunctional properties of the implementation, including whether it may leak information through side channels such as the running time of the code. Rather, we handle this through a disciplined general style of implementation: never using instructions having variable timing, such as division, and no conditional branching/memory access patterns that depend on secret data. Also, we sanity-check some of these properties using simple static checks, and we execute the code on inputs with widely differing bit densities to analyze the corresponding run times and investigate any unexpected correlations.
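A small example of the branch-free part of this discipline is selecting between two values based on a secret bit with masking instead of an if statement. This is only an illustration of the coding style, not code taken from s2n-bignum.

#include <stdint.h>

/* Returns a when secret_bit is 1 and b when it is 0, with no control
   flow that depends on the secret. */
static uint64_t ct_select(uint64_t secret_bit, uint64_t a, uint64_t b)
{
    uint64_t mask = 0 - secret_bit;   /* all-ones if 1, all-zeros if 0 */
    return (a & mask) | (b & ~mask);
}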

These disciplines and sanity checks are standard practice with us, and we apply them to all the new implementations described here. In ongoing work, we are exploring the possibility of formally verifying the absence of information leakage.
