Formal verification makes RSA faster — and faster to deploy

Optimizations for Amazon's Graviton2 chip boost efficiency, and formal verification shortens development time.

Most secure transactions online are protected by public-key encryption schemes like RSA, whose security depends on the difficulty of factoring large numbers. Public-key encryption improves security because it lets two parties establish shared secret keys without ever transmitting them in the clear. But because it depends on operations like modular exponentiation of large integers, it introduces significant computational overhead.
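To see where that overhead comes from, here is a toy sketch (not the constant-time, multi-word code a real RSA implementation uses): an RSA signature is essentially a modular exponentiation, which square-and-multiply reduces to a long chain of modular multiplications.

#include <stdint.h>

/* Toy square-and-multiply modular exponentiation on single 64-bit words.
   Real RSA works on 2,048-bit or larger integers, so each "modmul" below
   becomes a large multi-word operation, which is why signing cost is
   dominated by fast modular multiplication. */
static uint64_t modmul(uint64_t a, uint64_t b, uint64_t m) {
    return (uint64_t)(((unsigned __int128)a * b) % m);
}

uint64_t modexp(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1 % m;
    base %= m;
    while (exp != 0) {
        if (exp & 1)                      /* multiply step for each set exponent bit */
            result = modmul(result, base, m);
        base = modmul(base, base, m);     /* square step for every exponent bit */
        exp >>= 1;
    }
    return result;
}

A 2,048-bit exponent means thousands of such multiplications per signature, each on numbers far wider than a machine word.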

Researchers and engineers have introduced all kinds of optimizations to make public-key encryption more efficient, but the resulting complexity makes it difficult to verify that the encryption algorithms are behaving properly. And a bug in an encryption algorithm can be disastrous.

This post explains how Amazon’s Automated Reasoning group improved the throughput of RSA signatures on Amazon’s Graviton2 chip by 33% to 94%, depending on the key size, while also proving the functional correctness of our optimizations using formal verification.

An AWS Graviton chip.

Graviton2 is a server-class CPU developed by Amazon Annapurna Labs, based on Arm Neoverse N1 cores. To improve the throughput of RSA signatures on Graviton2, we combined various techniques for fast modular arithmetic with assembly-level optimizations specific to Graviton2. To show that the optimized code is functionally correct, we formally verified it using the HOL Light interactive theorem prover, which was developed by a member of our team (John Harrison).

Our code is written in a constant-time style (for example, no secret-dependent branches or memory access patterns) to avoid side-channel attacks, which can learn secret information from operational statistics like function execution time. The optimized functions and their proofs are included in Amazon Web Services’ s2n-bignum library of formally verified big-number operations. The functions are also adopted by AWS-LC, the cryptographic library maintained by AWS, and by its bindings Amazon Corretto Crypto Provider (ACCP) and AWS Libcrypto for Rust (AWS-LC-RS).

Key size (bits) | Baseline throughput (ops/sec) | Improved throughput (ops/sec) | Speedup (%)
2048 | 299 | 541 | 81.0
3072 | 95 | 127 | 33.5
4096 | 42 | 81 | 94.2

Improvements in the throughput of RSA signatures in AWS-LC on Graviton2.

Step 1. Making RSA fast on Graviton2

Optimizing the execution of RSA algorithms on Graviton2 requires the careful placement and use of multiplication instructions. On 64-bit Arm CPUs, the multiplication of two 64-bit numbers, with a product of up to 128 bits (conventionally designated 64×64→128), is accomplished by two instructions: MUL, producing the lower 64 bits, and UMULH, producing the upper 64 bits. On Graviton2, MUL has a latency of four cycles and stalls the multiplier pipeline for two cycles after issue, while UMULH has a latency of five cycles and stalls the multiplier pipeline for three cycles after issue. Since Neoverse N1 has a single multiplier pipeline but three addition pipelines, multiplication throughput is around one-tenth the throughput of 64-bit addition.
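In C terms, the MUL/UMULH pair computes the two halves of a full 128-bit product. The sketch below (an illustration, not code from s2n-bignum) shows the semantics; a compiler will typically lower it to exactly one MUL and one UMULH on 64-bit Arm.

#include <stdint.h>

/* The semantics of the MUL/UMULH pair: a full 64x64->128-bit product,
   delivered as separate low and high 64-bit halves. */
void mul_64x64_128(uint64_t a, uint64_t b, uint64_t *lo, uint64_t *hi) {
    unsigned __int128 p = (unsigned __int128)a * b;
    *lo = (uint64_t)p;           /* what MUL produces   */
    *hi = (uint64_t)(p >> 64);   /* what UMULH produces */
}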

To improve throughput, we (1) applied a different multiplication algorithm, trading multiplication for addition instructions, and (2) used single-instruction/multiple-data (SIMD) instructions to offload a portion of multiplication work to the vector units of the CPU.

Algorithmic optimization

For fast and secure modular arithmetic, Montgomery modular multiplication is a widely used technique. Montgomery multiplication represents numbers in a special form called Montgomery form, and when a sequence of modular operations needs to be executed, as is the case in the RSA algorithm, keeping intermediate results in Montgomery form makes the computation more efficient.

We implement Montgomery multiplication as a big-integer multiplication followed by a separate Montgomery reduction, which is one of the two standard implementation strategies.
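To give a flavor of the reduction step, here is a minimal one-limb sketch, assuming a 64-bit odd modulus and R = 2^64. The assembly in s2n-bignum performs the analogous computation on multi-thousand-bit numbers, but the key precomputed constant, the negated inverse of the modulus modulo 2^64, plays the same role in both.

#include <stdint.h>

/* One-limb Montgomery reduction (REDC) sketch, assuming an odd 64-bit
   modulus n and R = 2^64.  Input: t = t_hi*2^64 + t_lo with t_hi < n.
   Output: t * R^(-1) mod n.  n0 is the precomputed value -n^(-1) mod 2^64. */
uint64_t montgomery_redc(uint64_t t_hi, uint64_t t_lo, uint64_t n, uint64_t n0) {
    uint64_t m = t_lo * n0;                           /* m = t_lo * n0 mod 2^64 */
    unsigned __int128 mn = (unsigned __int128)m * n;
    /* By construction the low 64 bits of t + m*n are zero, so only the carry
       out of the low half and the two high halves matter. */
    uint64_t carry = (uint64_t)(((unsigned __int128)t_lo + (uint64_t)mn) >> 64);
    unsigned __int128 u = (unsigned __int128)t_hi + (uint64_t)(mn >> 64) + carry;
    /* u < 2n; production code replaces this branch with a constant-time select. */
    return (u >= n) ? (uint64_t)(u - n) : (uint64_t)u;
}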

On Graviton2, the benefit of this approach is that we can use the well-known Karatsuba algorithm to trade costly multiplications for addition operations. The Karatsuba algorithm decomposes a multiplication into three smaller multiplications plus a handful of extra additions and subtractions. It can be applied recursively, and for large numbers it is more efficient than standard quadratic (schoolbook) multiplication.

We use Karatsuba’s algorithm for power-of-two bit sizes, such as 2,048 bits and 4,096 bits. For other sizes (e.g., 3,072 bits), we still use quadratic multiplication. Karatsuba multiplication can be further optimized when the two operands are equal, so we also wrote functions specialized for squaring.
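The identity behind Karatsuba is easiest to see at toy scale. The sketch below computes a 64×64→128-bit product from three 32×32→64-bit multiplications instead of four; s2n-bignum applies the same split at much larger sizes (for example, breaking a 2,048-bit multiplication into 1,024-bit halves), where trading a multiplication for extra additions pays off on Graviton2's single multiplier pipeline.

#include <stdint.h>

/* One level of Karatsuba at toy scale: a 64x64->128-bit product from
   three 32x32->64-bit multiplications instead of four. */
unsigned __int128 karatsuba_64x64(uint64_t a, uint64_t b) {
    uint64_t a0 = (uint32_t)a, a1 = a >> 32;
    uint64_t b0 = (uint32_t)b, b1 = b >> 32;

    uint64_t z0 = a0 * b0;                        /* low  x low  */
    uint64_t z2 = a1 * b1;                        /* high x high */
    unsigned __int128 z1 =                        /* middle term from one extra multiply */
        (unsigned __int128)(a0 + a1) * (b0 + b1) - z0 - z2;

    return ((unsigned __int128)z2 << 64) + (z1 << 32) + z0;
}

At RSA sizes, the trade is one big multiplication for a handful of word-level additions and subtractions, which Neoverse N1's three addition pipelines can absorb.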

With these optimizations we achieved a 31–49% speedup in 2,048- and 4,096-bit RSA signatures compared with our original code.

Microarchitectural optimization

Many Arm CPUs implement the Neon single-instruction/multiple-data (SIMD) architecture extension. It adds a file of 128-bit registers, each of which can be viewed as a vector of 8-, 16-, 32-, or 64-bit lanes, along with SIMD instructions that operate on those lanes in parallel. Furthermore, SIMD instructions use different pipelines than scalar instructions, so the two kinds of instructions can execute in parallel.

Vectorization strategy. Vectorization is a process that replaces sequential executions of the same operation with a single operation over multiple values; it usually increases efficiency. Using SIMD instructions, we vectorized scalar 64-bit multiplications.

For big-integer multiplication, vectorized 64-bit multiply-low code nicely overlapped with scalar 64-bit multiply-high instructions (UMULH). For squaring, vectorizing two 64×64→128-bit squaring operations worked well. For the multiplications occurring in Montgomery reduction, vectorizing 64×64→128-bit multiplications and 64×64→64 multiply-lows worked. To choose which scalar multiplications to vectorize, we wrote a script that enumerated different vectorizations of the code and timed their execution. For short code fragments, exhaustive enumeration was possible, but for larger fragments we had to rely on experience. The overall solution was chosen only after extensive experiments with alternatives, such as those described by Seo et al. at ICISC ’14.
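As a concrete, simplified picture of one of these choices, here is an intrinsics sketch of a vectorized 64×64→64 multiply-low. Because Neon has no 64×64 vector multiply, each lane is split into 32-bit halves. The real s2n-bignum code is hand-scheduled assembly that interleaves such vector instructions with scalar MUL/UMULH work; this version only illustrates the arithmetic.

#include <arm_neon.h>
#include <stdint.h>

/* Two 64x64->64 multiply-lows at once, built from 32-bit widening multiplies:
   low64(a*b) = a_lo*b_lo + ((a_lo*b_hi + a_hi*b_lo) << 32)  (mod 2^64). */
uint64x2_t mul_lo_64x2(uint64x2_t a, uint64x2_t b) {
    uint32x2_t a_lo = vmovn_u64(a);        /* low 32 bits of each lane  */
    uint32x2_t a_hi = vshrn_n_u64(a, 32);  /* high 32 bits of each lane */
    uint32x2_t b_lo = vmovn_u64(b);
    uint32x2_t b_hi = vshrn_n_u64(b, 32);

    uint64x2_t mid = vmlal_u32(vmull_u32(a_lo, b_hi), a_hi, b_lo);
    return vmlal_u32(vshlq_n_u64(mid, 32), a_lo, b_lo);
}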

Although the scalar and SIMD units are able to operate in parallel, it is sometimes necessary to move inputs and intermediate results between integer and SIMD registers, and this brings significant complications. The FMOV instruction copies data from a 64-bit scalar register to a SIMD register, but it uses the same pipeline as the scalar multiplier, so its use would reduce scalar-multiplier throughput.

The alternative of loading into a vector register first and then using MOV to copy it to a scalar register has lower latency, but it occupies the SIMD pipeline and hence lowers the throughput of SIMD arithmetic operations. Somewhat counterintuitively, the best solution was to make two separate memory loads into the integer and SIMD registers, with care for their relative placement. We did still use MOV instructions to copy certain SIMD results into integer registers when those results were already held in SIMD registers, because this was faster than a round trip through store and load instructions.

Fast constant-time table lookup code. Another independent improvement was the reimplementation of the vectorized constant-time table lookup used by a fast modular-exponentiation algorithm. Combined with the earlier optimizations, this raises the speedup to 80–94% over the throughput of 2,048-/4,096-bit RSA signatures in our initial code, and yields a 33% speedup for 3,072-bit signatures.
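The idea of a constant-time table lookup is to touch every table entry and keep only the wanted one with a mask, so the memory access pattern never depends on the secret index. Here is a portable C sketch of that pattern; the function name and layout are illustrative, and the actual AWS-LC routine performs the equivalent selection with Neon vector loads and bitwise selects.

#include <stddef.h>
#include <stdint.h>

/* Constant-time selection of entry `secret_index` from a table of
   multi-word values: every entry is read, and a mask keeps only the
   wanted one, so the access pattern is independent of the secret. */
void ct_table_select(uint64_t *out, const uint64_t *table,
                     size_t num_entries, size_t words_per_entry,
                     size_t secret_index) {
    for (size_t j = 0; j < words_per_entry; j++)
        out[j] = 0;
    for (size_t i = 0; i < num_entries; i++) {
        uint64_t diff = (uint64_t)i ^ (uint64_t)secret_index;
        /* mask is all ones when i == secret_index and zero otherwise,
           computed without any secret-dependent branch */
        uint64_t mask = ((diff | (0 - diff)) >> 63) - 1;
        for (size_t j = 0; j < words_per_entry; j++)
            out[j] |= table[i * words_per_entry + j] & mask;
    }
}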

Instruction scheduling. Even though Graviton2 is an out-of-order CPU, carefully scheduling instructions is important for performance, due to the finite capacity of components like reorder buffers and issue queues. The implementations discussed here were obtained by manual instruction scheduling, which led to good results but was time consuming.

We also investigated automating the process using the SLOTHY superoptimizer, which is based on constraint solving and a (simplified) microarchitecture model. With additional tweaks to Montgomery reduction to precalculate some numbers used in Karatsuba, SLOTHY optimization enabled a 95–120% improvement on 2,048-/4,096-bit throughputs and 46% on 3,072-bit! However, this method is not yet incorporated into AWS-LC since verifying the automated scheduling proved to be challenging. Studying the potential for automatically proving correctness of scheduling optimizations is a work in progress.

Step 2. Formally verifying the code

To deploy the optimized code in production we need to ensure that it works correctly. Random testing is a cheap approach for quickly checking simple and known cases, but to deliver a higher level of assurance, we rely on formal verification. In this section we explain how we apply formal verification to prove functional correctness of cryptographic primitives.

Introduction to s2n-bignum

AWS’s s2n-bignum is both (1) a framework for formally verifying assembly code in x86-64 and Arm and (2) a collection of fast assembly functions for cryptography, verified using the framework itself.

Specification in s2n-bignum. Every assembly function in s2n-bignum — including the new assembly functions used in RSA — has a specification stating its functional correctness. A specification states that for any program state satisfying some precondition, the output state of the program must satisfy some postcondition. For example, bignum_mul_4_8(uint64_t *z, uint64_t *x, uint64_t *y) is intended to multiply two 256-bit (four-word) numbers producing a 512-bit (eight-word) result. Its (abbreviated) precondition over an input state s is

  aligned_bytes_loaded s (word pc) bignum_mul_4_8_mc
∧ read PC s = word pc
∧ C_ARGUMENTS [z, x, y] s
∧ bignum_from_memory (x,4) s = a
∧ bignum_from_memory (y,4) s = b

This means that the machine code of bignum_mul_4_8 is loaded at the address given by the program counter PC (aligned_bytes_loaded), which is where execution starts (read PC ...); symbolic values are assigned to the function arguments according to C’s application binary interface (C_ARGUMENTS ...); and the four-word big integers logically represented by the symbols a and b are stored in the memory locations pointed to by x and y (bignum_from_memory ...).

The (abbreviated) postcondition over an output state s is

bignum_from_memory (z,8) s = a * b

This means that the multiplied result a * b is stored in the eight-word buffer starting at location z.

One more component is a relation between the input and output states that must be satisfied:

(MAYCHANGE_REGS_AND_FLAGS_PERMITTED_BY_ABI;
MAYCHANGE [memory :> bytes(z,8 * 8)]) (s_in,s_out)

This means that executing the code may change registers/flags permitted by the application binary interface (ABI) and the eight-word buffer starting at z, but all other state components must remain unchanged.
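Read back in C terms, the specification promises exactly what a caller relies on. Here is a small illustrative usage, assuming the void-returning prototype quoted above and s2n-bignum's little-endian word order.

#include <stdint.h>

/* Prototype as quoted above; the function multiplies two 4-word (256-bit)
   numbers into an 8-word (512-bit) result. */
extern void bignum_mul_4_8(uint64_t *z, uint64_t *x, uint64_t *y);

void demo(void) {
    uint64_t x[4] = {1, 0, 0, 0};   /* the 256-bit number 1    */
    uint64_t y[4] = {0, 1, 0, 0};   /* the 256-bit number 2^64 */
    uint64_t z[8];
    bignum_mul_4_8(z, x, y);
    /* The postcondition says z now encodes x * y = 2^64, so z[1] == 1 and
       every other word of z is zero; the MAYCHANGE clause says nothing
       outside z (beyond ABI-permitted registers and flags) was modified. */
}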

Verifying assembly using HOL Light. To prove that the implementation is correct with respect to the specification, we use the HOL Light interactive theorem prover. In contrast to “black-box” automated theorem provers, tools like HOL Light emphasize a balance between automating routine proof steps and allowing explicit, and programmable, user guidance. When a proof exists on paper or inside someone’s head, a proficient user can effectively rewrite the proof in an interactive theorem prover. S2n-bignum uses a combination of two strategies to verify a program:

Symbolic execution. Given a representation of the input program state using symbolic variables in place of specific values, symbolic execution infers a symbolic output state at the end of some code snippet, in effect doing a more rigorous and generalized form of program execution. While this still leaves the postcondition to be proved, it strips away artifacts of program execution and leaves a purely mathematical problem.

Intermediate annotations in the style of Floyd-Hoare logic. Each intermediate assertion serves as a postcondition for the preceding code and a precondition for the subsequent code. The assertion need contain only the details that are necessary to prove its corresponding postcondition. This abstraction helps make symbolic simulation more tractable, in terms of both automated-reasoning capacity and the ease with which humans can understand the result.

We assume that the Arm hardware behaves in conformance with the model of s2n-bignum, but the model was developed with care, and it was validated by extensively cross-checking its interpretations against hardware.

Future formal-verification improvements. The formal verification for s2n-bignum does not yet cover nonfunctional properties of the implementation, such as whether it may leak information through side channels like the code’s running time. Rather, we handle this through a disciplined style of implementation: we never use instructions with data-dependent timing, such as division, and we avoid conditional branches and memory access patterns that depend on secret data. We also sanity-check some of these properties with simple static checks, and we execute the code on inputs with widely differing bit densities to analyze the corresponding run times and investigate any unexpected correlations.

These disciplines and sanity checks are standard practice with us, and we apply them to all the new implementations described here. In ongoing work, we are exploring the possibility of formally verifying the absence of information leakage.
