Formal verification makes RSA faster — and faster to deploy

Optimizations for Amazon's Graviton2 chip boost efficiency, and formal verification shortens development time.

Most secure transactions online are protected by public-key encryption schemes like RSA, whose security depends on the difficulty of factoring large numbers. Public-key encryption improves security because it enables the encrypted exchange of secret keys without requiring the parties to share a secret in advance. But because it depends on operations like modular exponentiation of large integers, it introduces significant computational overhead.

Researchers and engineers have introduced all kinds of optimizations to make public-key encryption more efficient, but the resulting complexity makes it difficult to verify that the encryption algorithms are behaving properly. And a bug in an encryption algorithm can be disastrous.

This post explains how Amazon’s Automated Reasoning group improved the throughput of RSA signatures on Amazon’s Graviton2 chip by 33% to 94%, depending on the key size, while also proving the functional correctness of our optimizations using formal verification.

[Image: An AWS Graviton chip.]

Graviton2 is a server-class CPU developed by Amazon Annapurna Labs, based on Arm Neoverse N1 cores. To improve the throughput of RSA signatures on Graviton2, we combined various techniques for fast modular arithmetic with assembly-level optimizations specific to Graviton2. To show that the optimized code is functionally correct, we formally verified it using the HOL Light interactive theorem prover, which was developed by a member of our team (John Harrison).

Our code is written in a constant-time style (for example, no secret-dependent branches or memory access patterns) to avoid side-channel attacks, which can learn secret information from operational statistics like function execution time. The optimized functions and their proofs are included in Amazon Web Services’ s2n-bignum library of formally verified big-number operations. The functions have also been adopted by AWS-LC, the cryptographic library maintained by AWS, and by its bindings, the Amazon Corretto Crypto Provider (ACCP) and AWS Libcrypto for Rust (AWS-LC-RS).

Key size (bits) | Baseline throughput (ops/sec) | Improved throughput (ops/sec) | Speedup (%)
2,048           | 299                           | 541                           | 81.0
3,072           |  95                           | 127                           | 33.5
4,096           |  42                           |  81                           | 94.2

Improvements in the throughput of RSA signatures in AWS-LC on Graviton2.

Step 1. Making RSA fast on Graviton2

Optimizing the execution of RSA algorithms on Graviton2 requires the careful placement and use of multiplication instructions. On 64-bit Arm CPUs, the multiplication of two 64-bit numbers, with a product of up to 128 bits (conventionally designated 64×64→128), is accomplished by two instructions: MUL, which produces the lower 64 bits, and UMULH, which produces the upper 64 bits. On Graviton2, MUL has a latency of four cycles and stalls the multiplier pipeline for two cycles after issue, while UMULH has a latency of five cycles and stalls the multiplier pipeline for three cycles after issue. Since Neoverse N1 has a single multiplier pipeline but three addition pipelines, multiplication throughput is around one-tenth the throughput of 64-bit addition.
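To make the pairing concrete, here is a minimal Python model of the 64×64→128 split (an illustration, not Graviton2 code): the two instruction results recombine into the full product.

    MASK64 = (1 << 64) - 1

    def mul_lo(a, b):
        # models MUL: lower 64 bits of the 128-bit product
        return (a * b) & MASK64

    def umulh(a, b):
        # models UMULH: upper 64 bits of the 128-bit product
        return (a * b) >> 64

    a, b = 0xFFFFFFFFFFFFFFFF, 0xFEDCBA9876543210
    assert (umulh(a, b) << 64) + mul_lo(a, b) == a * b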

To improve throughput, we (1) applied a different multiplication algorithm, trading multiplication for addition instructions, and (2) used single-instruction/multiple-data (SIMD) instructions to offload a portion of multiplication work to the vector units of the CPU.

Algorithmic optimization

For fast and secure modular arithmetic, Montgomery modular multiplication is a widely used technique. Montgomery multiplication represents numbers in a special form called Montgomery form, and when a sequence of modular operations needs to be executed — as is the case with the RSA algorithm — keeping intermediate results in Montgomery form makes computation more efficient.

We implement Montgomery multiplication as a big-integer multiplication followed by a separate Montgomery reduction, which is one of the two standard implementation strategies (the other interleaves the multiplication and reduction steps).
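For intuition, here is a minimal Python sketch of textbook word-level Montgomery reduction (an illustration under simplifying assumptions, not the s2n-bignum code). Given t < n * R, where R = 2^(64k) and the modulus n is odd, it computes t * R^(-1) mod n using only multiplications, additions, and shifts, with no division by n.

    def montgomery_reduce(t, n, k):
        MASK64 = (1 << 64) - 1
        neg_n_inv = pow(-n, -1, 1 << 64)   # precomputed -n^(-1) mod 2^64; n must be odd
        for _ in range(k):
            m = ((t & MASK64) * neg_n_inv) & MASK64
            t = (t + m * n) >> 64          # low word becomes zero; shift it out
        return t - n if t >= n else t      # real constant-time code masks instead of branching

    # Example: the product of two numbers in Montgomery form reduces to
    # the Montgomery form of their modular product.
    n, k = 2**64 + 13, 2
    R = 1 << (64 * k)
    t = (5 * R % n) * (7 * R % n)
    assert montgomery_reduce(t, n, k) == 5 * 7 * R % n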

On Graviton2, the benefit of this approach is that we can use the well-known Karatsuba algorithm to trade costly multiplications for addition operations. The Karatsuba algorithm decomposes a multiplication into three half-size multiplications, together with a few extra additions, subtractions, and word shifts. It can be applied recursively, and for large numbers, it’s more efficient than the standard quadratic (“schoolbook”) multiplication algorithm.
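To make the trade concrete, here is a minimal recursive Python sketch (an illustration under simplified assumptions, not the s2n-bignum routine); each level replaces the four half-size multiplications of the schoolbook method with three.

    def karatsuba(x, y, bits):
        if bits <= 64:
            return x * y                  # base case: one machine multiply
        h = bits // 2
        mask = (1 << h) - 1
        x0, x1 = x & mask, x >> h         # split each operand into halves
        y0, y1 = y & mask, y >> h
        z0 = karatsuba(x0, y0, h)         # low product
        z2 = karatsuba(x1, y1, h)         # high product
        # middle term from one extra multiply instead of two
        z1 = karatsuba(x0 + x1, y0 + y1, h + 1) - z0 - z2
        return z0 + (z1 << h) + (z2 << (2 * h))

    x, y = (1 << 2048) - 12345, (1 << 2048) - 67890
    assert karatsuba(x, y, 2048) == x * y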

We use Karatsuba’s algorithm for power-of-two bit sizes, such as 2,048 and 4,096 bits. For other sizes (e.g., 3,072 bits), we still use quadratic multiplication. Karatsuba multiplication can be further optimized when the two operands are equal, so we also wrote functions specialized for squaring.

With these optimizations we achieved a 31–49% speedup in 2,048- and 4,096-bit RSA signatures compared with our original code.

Microarchitectural optimization

Many Arm CPUs implement the Neon single-instruction/multiple-data (SIMD) architecture extension. It adds a file of 128-bit registers, each of which can be viewed as a vector of 8-, 16-, 32-, or 64-bit lanes, along with SIMD instructions that operate on some or all of the lanes in parallel. Furthermore, SIMD instructions use different pipelines from scalar instructions, so the two kinds of instructions can execute in parallel.

Vectorization strategy. Vectorization replaces repeated executions of the same operation with a single operation over multiple values at once, usually increasing efficiency. Using SIMD instructions, we vectorized scalar 64-bit multiplications.

For big-integer multiplication, vectorized 64-bit multiply-low code nicely overlapped with scalar 64-bit multiply-high instructions (UMULH). For squaring, vectorizing two 64×64→128-bit squaring operations worked well. For the multiplications occurring in Montgomery reduction, vectorizing 64×64→128-bit multiplications and 64×64→64 multiply-lows worked well. To choose which scalar multiplications to vectorize, we wrote a script that enumerated differently vectorized versions of the code and timed their execution. For short code fragments, exhaustive enumeration was possible; for larger ones, we had to rely on experience. The overall solution was chosen only after extensive experiments with other alternatives, such as those described by Seo et al. at ICISC’14.
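One subtlety worth noting: Neon has no 64×64-bit vector multiply, so vectorized 64-bit multiply-low code is assembled from 32×32→64-bit partial products. A scalar Python sketch of that decomposition (an illustration only, not the shipped code):

    MASK32 = (1 << 32) - 1
    MASK64 = (1 << 64) - 1

    def mul_lo_32x32(x, y):
        x0, x1 = x & MASK32, x >> 32
        y0, y1 = y & MASK32, y >> 32
        # x1 * y1 only affects bits >= 64, so multiply-low never needs it
        return (x0 * y0 + ((x0 * y1 + x1 * y0) << 32)) & MASK64

    assert mul_lo_32x32(2**64 - 1, 123456789) == ((2**64 - 1) * 123456789) & MASK64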

Although the scalar and SIMD units are able to operate in parallel, it is sometimes necessary to move inputs and intermediate results between integer and SIMD registers, and this brings significant complications. The FMOV instruction copies data from a 64-bit scalar register to a SIMD register, but it uses the same pipeline as the scalar multiplier, so its use would reduce scalar-multiplier throughput.

The alternative of loading into a vector register first and then using MOV to copy it to a scalar register has lower latency, but it occupies the SIMD pipeline and hence lowers the throughput of SIMD arithmetic operations. Somewhat counterintuitively, the best solution was to make two separate memory loads into the integer and SIMD registers, with care for their relative placement. We did still use MOV instructions to copy certain results that were already in SIMD registers over to integer registers, because that was faster than a round trip through store-load instructions.

Fast constant-time table lookup code. Another independent improvement was the reimplementation of a vectorized constant-time lookup table for a fast modular-exponentiation algorithm. Combined with the earlier optimizations, it raises the overall speedup to 80–94% over our initial code for 2,048- and 4,096-bit RSA signatures and yields a 33% speedup for 3,072-bit signatures.
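The principle behind such a lookup, sketched in Python (the textbook masking technique; the actual routine does this with SIMD compares and masks, and Python itself gives no timing guarantees): every table entry is read, and the secret index never determines a branch or a memory address.

    MASK64 = (1 << 64) - 1

    def ct_lookup(table, secret_idx):
        result = 0
        for i, entry in enumerate(table):
            # all-ones mask iff i == secret_idx, built arithmetically
            mask = int(i == secret_idx) * MASK64
            result |= entry & mask
        return result

    table = [17, 23, 42, 99]
    assert ct_lookup(table, 2) == 42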

Instruction scheduling. Even though Graviton2 is an out-of-order CPU, carefully scheduling instructions is important for performance, due to the finite capacity of components like reorder buffers and issue queues. The implementations discussed here were obtained by manual instruction scheduling, which led to good results but was time consuming.

We also investigated automating the process using the SLOTHY superoptimizer, which is based on constraint solving and a (simplified) microarchitecture model. With additional tweaks to Montgomery reduction to precalculate some numbers used in Karatsuba, SLOTHY optimization enabled a 95–120% improvement on 2,048-/4,096-bit throughputs and 46% on 3,072-bit! However, this method is not yet incorporated into AWS-LC since verifying the automated scheduling proved to be challenging. Studying the potential for automatically proving correctness of scheduling optimizations is a work in progress.

Step 2. Formally verifying the code

To deploy the optimized code in production, we need to ensure that it works correctly. Random testing is a cheap way to quickly check simple and known cases, but to deliver a higher level of assurance, we rely on formal verification. In this section we explain how we apply formal verification to prove the functional correctness of cryptographic primitives.

Introduction to s2n-bignum

AWS’s s2n-bignum is both (1) a framework for formally verifying assembly code in x86-64 and Arm and (2) a collection of fast assembly functions for cryptography, verified using the framework itself.

Specification in s2n-bignum. Every assembly function in s2n-bignum — including the new assembly functions used in RSA — has a specification stating its functional correctness. A specification states that for any program state satisfying some precondition, the output state of the program must satisfy some postcondition. For example, bignum_mul_4_8(uint64_t *z, uint64_t *x, uint64_t *y) is intended to multiply two 256-bit (four-word) numbers producing a 512-bit (eight-word) result. Its (abbreviated) precondition over an input state s is

  aligned_bytes_loaded s (word pc) bignum_mul_4_8_mc
∧ read PC s = word pc
∧ C_ARGUMENTS [z, x, y] s
∧ bignum_from_memory (x,4) s = a
∧ bignum_from_memory (y,4) s = b

This means that the machine code of bignum_mul_4_8 is loaded at the address given by the program counter PC (aligned_bytes_loaded), that symbolic values are assigned to the function arguments according to C’s application binary interface (C_ARGUMENTS ...), and that the big integers logically represented by the symbols a and b are stored in the four-word memory regions pointed to by x and y (bignum_from_memory ...).

The (abbreviated) postcondition over an output state s is

bignum_from_memory (z,8) s = a * b

This means that the multiplied result a * b is stored in the eight-word buffer starting at location z.
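Concretely, bignum_from_memory interprets k consecutive 64-bit words, least significant first, as a single integer. A Python restatement of the contract (a paraphrase of the spec’s meaning, not HOL Light syntax):

    def bignum_from_words(words):
        # words[0] is the least significant 64-bit word
        return sum(w << (64 * i) for i, w in enumerate(words))

    # bignum_mul_4_8's contract, restated: for 4-word inputs x and y and
    # the 8-word output z that the function writes,
    #   bignum_from_words(z) == bignum_from_words(x) * bignum_from_words(y)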

One more component is a relation between the input and output states that must be satisfied:

(MAYCHANGE_REGS_AND_FLAGS_PERMITTED_BY_ABI;
MAYCHANGE [memory :> bytes(z,8 * 8)]) (s_in,s_out)

This means that executing the code may change registers/flags permitted by the application binary interface (ABI) and the eight-word buffer starting at z, but all other state components must remain unchanged.

Verifying assembly using HOL Light. To prove that the implementation is correct with respect to the specification, we use the HOL Light interactive theorem prover. In contrast to “black-box” automated theorem provers, tools like HOL Light emphasize a balance between automating routine proof steps and allowing explicit, and programmable, user guidance. When a proof exists on paper or inside someone’s head, a proficient user can effectively rewrite the proof in an interactive theorem prover. S2n-bignum uses a combination of two strategies to verify a program:

Symbolic execution. Given a representation of the input program state using symbolic variables in place of specific values, symbolic execution infers a symbolic output state at the end of some code snippet, in effect doing a more rigorous and generalized form of program execution. While this still leaves the postcondition to be proved, it strips away artifacts of program execution and leaves a purely mathematical problem.

Intermediate annotations in the style of Floyd-Hoare logic. Each intermediate assertion serves as a postcondition for the preceding code and a precondition for the subsequent code. The assertion need contain only the details that are necessary to prove its corresponding postcondition. This abstraction helps make symbolic simulation more tractable, in terms of both automated-reasoning capacity and the ease with which humans can understand the result.
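As a toy illustration of the style (Python asserts standing in for formal assertions; this is not one of s2n-bignum’s proofs), each intermediate assertion records just enough for the steps that follow:

    def sum_squares(n):
        assert n >= 0                                     # precondition
        total = 0
        for i in range(n + 1):
            total += i * i
            # intermediate assertion: later steps need only this fact,
            # not the full history of the computation
            assert total == i * (i + 1) * (2 * i + 1) // 6
        assert total == n * (n + 1) * (2 * n + 1) // 6    # postcondition
        return total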

We assume that the Arm hardware behaves in conformance with the model of s2n-bignum, but the model was developed with care, and it was validated by extensively cross-checking its interpretations against hardware.

Future formal-verification improvements. The formal verification for s2n-bignum does not yet cover nonfunctional properties of the implementation, including whether it may leak information through side channels such as the running time of the code. Rather, we handle this through a disciplined general style of implementation: never using instructions with data-dependent timing, such as division, and never letting conditional branches or memory access patterns depend on secret data. We also sanity-check some of these properties with simple static checks, and we execute the code on inputs with widely differing bit densities to analyze the corresponding run times and investigate any unexpected correlations.

These disciplines and sanity checks are standard practice for us, and we apply them to all the new implementations described here. In ongoing work, we are exploring the possibility of formally verifying the absence of information leakage.
