How the Lean language brings math to coding and coding to math

Uses of the functional programming language include formal mathematics, software and hardware verification, AI for math and code synthesis, and math and computer science education.

This post is an adaptation of a keynote address that Leo de Moura delivered at the International Conference on Computer Aided Verification (CAV), in July 2024.

[Image: The Lean logo.]

In 2013, I launched the Lean project with the goal of bridging the gap between automated and interactive theorem provers. Since its inception, Lean has seen unparalleled adoption in the mathematical community, surpassing previous efforts in formalized mathematics. Lean 4, the latest version, is implemented in Lean itself and is also a fully fledged, extensible programming language with robust IDE support, package management, and a thriving ecosystem.

In 2023, Sebastian Ullrich and I founded the Lean Focused Research Organization (FRO), a nonprofit dedicated to advancing Lean and supporting its community. The Lean project embraces a philosophy that promotes decentralized innovation, empowering a diverse community of researchers, developers, and enthusiasts to collaboratively push the boundaries of mathematical practice and software development. In this blog post, we will provide a brief introduction to the project and describe how it is used at AWS.

A brief introduction to Lean

Lean is an open-source, extensible, functional programming language and interactive theorem prover that makes writing correct and maintainable code easy. Lean programming primarily involves defining types and functions, allowing users to focus on the problem domain and its data rather than on coding details. Lean has four primary use cases: formal mathematics, software and hardware verification, AI for math and code synthesis, and math and computer science education.

Formal mathematics

Lean allows mathematicians to work with advanced mathematical structures using syntax that feels natural to them. The math community recognizes its usefulness: for instance, Fields medalists Peter Scholze and Terence Tao used Lean to confirm their new results; Quanta Magazine has lauded Lean as one of the biggest breakthroughs in mathematics, and it has been featured in numerous popular scientific and academic publications, including the Wired magazine article “The effort to build the mathematical library of the future”. Recently, DeepMind used Lean to build an AI engine that met the silver-medal standard at the International Math Olympiad.

As of July 2024, the Lean Mathematical Library has received contributions from over 300 mathematicians and contains 1.58 million lines of code, surpassing other formal-mathematics systems in use. This remarkable growth has come despite Lean’s concision and relative youth: the library is at least a decade younger than comparable ones.

Software and hardware verification

Lean’s combination of formal verification, user interaction, and mathematical rigor makes it invaluable for both software and hardware verification. Lean is a system for programming your proofs and proving your programs. An additional benefit is that Lean produces efficient code, and its extensibility features, originally designed for mathematicians, are also highly convenient for creating abstractions when writing clean and maintainable code. Its benefits extend to any system requiring exceptional accuracy and security, including industries such as aerospace, cryptography, web services, autonomous vehicles, biomedical systems, and medical devices. Later on, we will provide several examples of Lean's applications at AWS.

AI for math and code synthesis

Lean is popular with groups developing AI for mathematics and code synthesis. One of the key reasons is that Lean formal proofs are machine checkable and can be independently audited by external proof checkers. Additionally, Lean's extensibility allows users to peer into the system internals, including data structures for representing proofs and code. This capability is also used to automatically generate animations from Lean proofs.

AI researchers are leveraging large language models (LLMs) to create Lean formal proofs and automatically translate prose into formalized mathematics. OpenAI has released lean-gym, a reinforcement learning environment based on Lean. Harmonic used Lean in the development of its Mathematical Superintelligence Platform (MSI), an AI model designed to guarantee accuracy and avoid hallucinations. Meta AI created an AI model that has solved 10 International Mathematical Olympiad problems, and DeepMind has formalized a theoretical result related to AI safety in Lean. Additionally, LeanDojo is an open-source project using LLMs to automate proof construction in Lean.

Lean's unique combination of machine-checkable proofs, system introspection, and extensibility makes it an ideal tool for advancing AI research in mathematics and code synthesis. The synergy between LLMs and Lean formal proofs is emphasized in Terence Tao's colloquium lecture at the American Mathematical Society, “Machine Assisted Proof”; in the Scientific American article “AI will become mathematicians' co-pilot”; and in the New York Times article “A.I. Is coming for mathematics, too.”

Math and CS education

Millions of people learn mathematics as students and use it throughout their careers. Since its inception, the Lean project has supported students' mathematical-reasoning needs and enabled a more diverse population to contribute to the fields of math and computer science. Numerous educational resources are available for learning Lean, including interactive computer games such as the Natural Number Game, computer science and mathematics textbooks, university courses, and on-demand tutorials. The Lean FRO is committed to expanding Lean’s educational content and envisions a future where children use Lean as a playground for learning mathematics, progressing at their own paces and receiving instantaneous feedback, similar to how many have learned to code.

A quick tour of Lean

Lean combines programming and formal verification. Let's take a quick tour through a small example to see how we write code in Lean and prove properties about that code.

Writing code in Lean

First, let's define a simple function that appends two lists:

def append (xs ys : List a) : List a :=
  match xs with
  | [] => ys                      -- base case: appending to the empty list
  | x :: xs => x :: append xs ys  -- recursive case: keep the head, recurse on the tail

This function is defined using pattern matching on xs. In the base case, appending ys to the empty list [] yields ys. The notation x :: xs represents a list with head x and tail xs. In the recursive case, appending ys to x :: xs yields x :: append xs ys. Additionally, the append function is polymorphic, meaning it works with lists of any element type a.
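
Because Lean definitions are executable, we can try the function right away with the #eval command, which compiles and runs an expression. For instance:

#eval append [1, 2, 3] [4, 5]     -- [1, 2, 3, 4, 5]
#eval append ["hello"] ["world"]  -- ["hello", "world"]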

Extensible syntax

The notation x :: xs used above is not built into Lean but is defined using the infixr command:

infixr:67 " :: " => List.cons

The infixr command defines a new infix operator x :: xs, denoting List.cons x xs. This command is actually a macro implemented using Lean's hygienic macro system. Lean's extensible syntax allows users to define their own domain-specific languages. For example, Verso, the Lean documentation-authoring system, is implemented in Lean using this mechanism. Verso defines alternative concrete syntaxes that closely resemble Markdown and HTML.
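
User code can introduce new operators in exactly the same way. The operator below is purely illustrative (it is not a standard Lean notation); it simply gives the append function defined earlier an infix spelling:

-- An illustrative, user-defined operator (not built into Lean):
infixl:65 " +++ " => append

#eval [1, 2] +++ [3, 4]   -- [1, 2, 3, 4]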

Proving properties about code

Next, we'll prove a property about our append function: that the length of the appended lists is the sum of their lengths.

theorem append_length (xs ys : List a)
        : (append xs ys).length = xs.length + ys.length := by
  induction xs with
  | nil => simp [append]
  | cons x xs ih => simp [append, ih]; omega

Here, theorem introduces a new theorem named append_length. The statement (append xs ys).length = xs.length + ys.length is what we want to prove. The by ... block contains the proof. In this proof,

  • induction xs with initiates a proof by induction on xs;
  • the nil case proves the base case using simp, the Lean simplifier. The parameter append instructs the simplifier to expand append’s definition; and
  • the cons x xs ih case proves the inductive step where ih is the inductive hypothesis. It also uses simp and omega, which complete the proof using arithmetical reasoning.

In this proof, induction, simp, and omega are tactics. Tactics, which transform one proof state into another, are key to interactive theorem proving in Lean. Users can inspect the states of their proofs using the Lean InfoView, an interactive panel in the IDE that can be inspected and browsed. In the following picture, we see the state of our proof just before the simp tactic in the cons case. Note that the proof state contains all hypotheses and the goal (append (x :: xs) ys).length = (x :: xs).length + ys.length, which remains to be proved.

[Image: The state of the proof just before the simp tactic in the cons case, as visualized in the Lean InfoView.]
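
Once proved, append_length can be used like any other lemma. For example, a small illustrative check like the following should go through, with simp rewriting by the new theorem:

example (xs : List Nat) : (append xs [1, 2]).length = xs.length + 2 := by
  simp [append_length]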

How Lean is used at AWS

At AWS, Lean is used in several open-source projects to address complex verification and modeling challenges. These projects not only highlight the practical applications of Lean in different domains but also emphasize AWS's commitment to open-source development and collaboration. We cover four key projects: Cedar, LNSym, and SampCert, whose Lean source code is already available on GitHub, and AILean, which explores the relationship between LLMs and formal mathematics and whose code is not yet open source.

Cedar: an open-source policy language and evaluation engine 

Cedar is an open-source policy language and evaluation engine. Cedar enables developers to express fine-grained permissions as easy-to-understand policies enforced in their applications and to decouple access control from application logic. Cedar supports common authorization models such as role-based access control and attribute-based access control. It is the first policy language built from the ground up to be verified formally using automated reasoning and tested rigorously using differential random testing.

The Cedar project uses Lean to create an executable formal model of each core component of the Cedar runtime (such as the authorization engine) and static-analysis tools (such as the type checker). This model serves as a highly readable specification, allowing the team to prove key correctness properties using Lean.

Lean was chosen for modeling Cedar due to its fast runtime, extensive libraries, IDE support, and small trusted computing base (TCB). The fast runtime enables efficient differential testing of Cedar models. The libraries provide reusable verified data structures and tactics built by the open-source community. Lean’s small TCB allows Cedar to leverage these contributions confidently, as Lean checks their correctness, requiring trust only in Lean’s minimal proof-checking kernel.
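
To give a flavor of this modeling style, here is a deliberately simplified toy authorizer in Lean. It is only a sketch in the spirit of the approach, not Cedar's actual model: a request is allowed only if some permit policy matches it and no forbid policy does, and the closing theorem proves a "default deny" property for the empty policy set.

-- A toy sketch in the spirit of Cedar's Lean model (not the actual model).
inductive Effect
  | permit
  | forbid
  deriving BEq

structure Policy where
  effect    : Effect
  principal : String
  action    : String

-- A policy applies to a request when its principal and action match.
def applies (p : Policy) (principal action : String) : Bool :=
  p.principal == principal && p.action == action

-- Allow only if some permit policy applies and no forbid policy does.
def isAuthorized (policies : List Policy) (principal action : String) : Bool :=
  let permitted := policies.any fun p => p.effect == .permit && applies p principal action
  let forbidden := policies.any fun p => p.effect == .forbid && applies p principal action
  permitted && !forbidden

-- "Default deny": with no policies at all, every request is denied.
theorem default_deny (principal action : String) :
    isAuthorized [] principal action = false := by
  simp [isAuthorized]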

LNSym: Symbolic simulation for cryptographic verification

LNSym is a symbolic simulator for Armv8 native-code programs. It’s currently under development, with a focus on enabling automated reasoning about cryptographic machine-code programs. Many cryptographic routines are written in assembly to optimize performance and security on the underlying processor. LNSym aims to reduce the cost of verifying cryptographic routines, particularly block ciphers and secure hashes, ultimately empowering cryptography developers to formally reason about their native-code programs.

LNSym uses Lean as a specification language to model the Arm instruction semantics and cryptographic protocols and as a theorem prover for reasoning about these artifacts. Since Lean programs are executable, the specifications achieve a high degree of trust through thorough conformance testing. Lean orchestrates proofs such that the heavy and often tedious lifting is done automatically, using decision procedures like SAT solvers or custom domain-specific tactics. When proof automation fails, users can employ Lean as an interactive theorem prover. This combination of interactive and automated theorem proving ensures that progress on verification tasks is not hindered by the limitations of proof automation.
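
The flavor of this approach can be conveyed with a deliberately tiny model of our own (not LNSym's actual Arm semantics): machine state is a structure, each instruction is a state-transforming function, and properties of straight-line code follow by unfolding those semantics.

-- A toy sketch, not LNSym's Arm model: two 64-bit registers and one instruction.
structure ToyState where
  x : UInt64
  y : UInt64

-- Semantics of a toy "ADD x, x, y" instruction: x := x + y.
def addXY (s : ToyState) : ToyState :=
  { s with x := s.x + s.y }

-- Symbolically simulating the instruction twice adds y to x twice.
theorem addXY_twice (s : ToyState) :
    (addXY (addXY s)).x = s.x + s.y + s.y := by
  simp [addXY]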

SampCert: formally verified differential-privacy primitives

SampCert is an open-source library of formally verified differential-privacy primitives used by the AWS Clean Rooms Differential Privacy service for its fast and sound sampling algorithms. Using Lean, SampCert provides the only verified implementation of the discrete Gaussian sampler and the primitives of zero-concentrated differential privacy.
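
For intuition, the discrete Gaussian over the integers assigns each integer x a probability proportional to exp(-x² / (2σ²)). The snippet below is a minimal, non-verified sketch of that weight function in floating point, just to show the shape of the object; SampCert's actual definitions are exact and build on Mathlib rather than Float.

-- A minimal, non-verified sketch (not SampCert's definitions): the unnormalized
-- discrete Gaussian weight on the integers, with noise scale sigma.
def gaussWeight (sigma : Float) (x : Int) : Float :=
  let xf := Float.ofInt x
  Float.exp (-(xf * xf) / (2.0 * sigma * sigma))

#eval gaussWeight 1.0 0   -- 1.000000
#eval gaussWeight 1.0 2   -- roughly 0.135335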

Although SampCert focuses on software, its verification relies heavily on Mathlib, the Lean Mathematical Library. The verification of code addressing practical problems in data privacy depends on the formalization of mathematical concepts from Fourier analysis to number theory and topology.

AILean: AI for math and math for AI

AILean is exploring the relationship between LLMs and formal mathematics in collaboration with the Technology Innovation Institute (TII). This exploration works in both directions: AI for math and math for AI. In AILean, LLMs are used to enhance proof automation and the user experience in formal mathematics. LLMs can analyze theorem statements and existing proof steps, suggesting relevant lemmas, definitions, or tactics to guide users in completing proofs. They can also identify common mistakes or inconsistencies and propose corrections or alternative approaches that avoid dead ends, thereby improving the proof-development process.

Takeaways

Lean is a complex system, but its correctness relies only on a small trusted kernel. Moreover, all proofs and definitions can be exported and independently audited and checked. This is a crucial feature for both the mathematical and software verification communities because it eliminates the trust bottleneck. It doesn't matter who you are; if Lean checked your proof, the whole world can build on top of it. This enables large groups of mathematicians who have never met to collaborate and work together. Additionally, it allows users to extend Lean without fearing the introduction of soundness bugs that could compromise the logical consistency of the system.

Lean's extensibility enables customization, which was particularly important during its first ten years, when resources were limited: the community could extend the system without needing to synchronize with its developers. Self-hosting, or implementing Lean in Lean, also ensures that users can access all parts of the system without having to learn a different programming language, which makes extending Lean easy and convenient. Packages such as ProofWidgets and SciLean are excellent examples of user-defined extensions that leverage these features.

The FRO model introduced by Convergent Research has been instrumental in supporting Lean and helping it transition to a self-sufficient foundation. The Lean project has grown significantly, and driving it forward would have been difficult without Convergent Research’s efforts to secure philanthropic support. Just as foundations like the Rust and Linux Foundations are vital for the success and sustainability of open-source projects, the support of Convergent Research has been critical for Lean's ongoing progress.

To learn more about Lean, visit the website.
