Lessons learned from 10 years of DynamoDB

Prioritizing predictability over efficiency, adapting data partitioning to traffic, and continuous verification are a few of the principles that help ensure stability, availability, and efficiency.

Amazon DynamoDB is one of the most popular NoSQL database offerings on the Internet, designed for simplicity, predictability, scalability, and reliability. To celebrate DynamoDB’s 10th anniversary, the DynamoDB team wrote a paper describing lessons we’d learned in the course of expanding a fully managed cloud-based database system to hundreds of thousands of customers. The paper was presented at this year’s USENIX ATC conference.

The paper captures the following lessons that we have learned over the years:

  • Designing systems for predictability over absolute efficiency improves system stability. While components such as caches can improve performance, they should not introduce bimodality, in which the system has two radically different ways of responding to similar requests (e.g., one for cache misses and one for cache hits). Consistent behaviors ensure that the system is always provisioned to handle the unexpected. 
  • Adapting to customers’ traffic patterns to redistribute data improves customer experience. 
  • Continuously verifying idle data is a reliable way to protect against both hardware failures and software bugs in order to meet high durability goals. 
  • Maintaining high availability as a system evolves requires careful operational discipline and tooling. Mechanisms such as formal proofs of complex algorithms, game days (chaos and load tests), upgrade/downgrade tests, and deployment safety provide the freedom to adjust and experiment with the code without the fear of compromising correctness. 

Before we dig deeper into these topics, a little terminology. A DynamoDB table is a collection of items (e.g., products), and each item is a collection of attributes (e.g., name, price, category, etc.). Each item is uniquely identified by its primary key. In DynamoDB, tables are typically partitioned, or divided into smaller sub-tables, which are assigned to nodes. A node is a set of dedicated computational resources — a virtual machine — running on a single server in a datacenter.
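
To make the terminology concrete, here is a minimal Python sketch of hash partitioning: an item's primary key is hashed to pick the partition that stores it. The hash function, attribute names, and partition count are illustrative assumptions, not DynamoDB's internal scheme.

```python
import hashlib

def partition_for(primary_key: str, num_partitions: int) -> int:
    """Map a primary key to a partition by hashing it (illustrative only)."""
    digest = hashlib.md5(primary_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

# A hypothetical item: a collection of attributes, identified by its primary key.
item = {"product_id": "B00EXAMPLE", "name": "widget", "price": 9.99}
print(partition_for(item["product_id"], num_partitions=4))  # e.g., 2
```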

DynamoDB stores three copies of each partition, in different availability zones. This makes the partition highly available and durable because the availability zones’ storage resources share nothing and are substantially independent. For instance, we wouldn’t assign a partition and one of its copies to nodes that share a power supply, because a power outage would take both of them offline. The three copies of the same partition are known as a replication group, and there is a leader for the group that is responsible for replicating all the customer mutations and serving strongly consistent reads.

[Figure: The DynamoDB architecture, including a request router, the partition metadata system, and storage nodes in different availability zones (AZs).]

Those definitions in hand, let’s turn to our lessons learned.

Predictability over absolute efficiency

DynamoDB employs a lot of metadata caches in order to reduce latency. One of those caches stores the routing metadata for data requests. This cache is deployed on a fleet of thousands of request routers, DynamoDB’s front-end service.

In the original implementation, when the request router received the first request for a table, it downloaded the routing information for the entire table and cached it locally. Since the configuration information about partition replicas rarely changed, the cache hit rate was approximately 99.75%.

This was an amazing hit rate. On the flip side, however, the fallback for this cache was to hit the metadata table directly. When the cache became ineffective, the metadata table had to scale instantaneously from handling 0.25% of requests to 100%. Such a sudden surge could cause the metadata table to fail, triggering cascading failures in other parts of the system. To mitigate such failures, we redesigned our caches to behave predictably.

First, we built an in-memory datastore called MemDS, which significantly reduced request routers’ and other metadata clients’ reliance on local caches. MemDS stores all the routing metadata in a highly compressed manner and replicates it across a fleet of servers. MemDS scales horizontally to handle all incoming requests to DynamoDB.

Second, we deployed a new local cache that avoids the bimodality of the original cache. All requests, even if satisfied by the local cache, are asynchronously sent to the MemDS. This ensures that the MemDS fleet is always serving a constant volume of traffic, regardless of cache hit or miss. The regular exercise of the fallback code helps prevent surprises during fallback.
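
The pattern is simple enough to sketch. In the toy Python model below (class and method names are ours, not DynamoDB's internals), every lookup sends a request to the authoritative store whether or not the local cache can answer it, so a cache failure changes latency but not the downstream request volume.

```python
import threading

class ConstantWorkCache:
    """Sketch of the constant-work caching pattern described above:
    the authoritative store always sees traffic proportional to the
    service's traffic, so a cache failure causes no sudden surge."""

    def __init__(self, authoritative_lookup):
        self._lookup = authoritative_lookup   # e.g., a MemDS-style fleet
        self._cache = {}
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            hit = key in self._cache
            value = self._cache.get(key)
        if hit:
            # Serve the cached value, but still refresh asynchronously,
            # keeping the authoritative store's load constant.
            threading.Thread(target=self._refresh, args=(key,), daemon=True).start()
            return value
        return self._refresh(key)             # miss: fetch synchronously

    def _refresh(self, key):
        value = self._lookup(key)
        with self._lock:
            self._cache[key] = value
        return value
```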

[Figure: DynamoDB architecture with MemDS.]

Unlike conventional local caches, MemDS sees traffic that is proportional to the customer traffic seen by the service; thus, during cache failures, it does not see a sudden amplification of traffic. Doing constant work removed the need for complex logic to handle edge cases around cache misses and reduced the reliance on local caches, improving system stability.

Reshaping partitioning based on traffic

Partitions offer a way to dynamically scale both the capacity and performance of tables. In the original DynamoDB release, customers explicitly specified the throughput that a table required in terms of read capacity units (RCUs) and write capacity units (WCUs). The original system assigned partitions to nodes based on both available space and computational capacity.

As the demands on a table changed (because it grew in size or because the load increased), partitions could be split further to let the table scale elastically. The partition abstraction proved extremely valuable and continues to be central to the design of DynamoDB.

However, the early version of DynamoDB assigned both space and capacity to individual partitions on the basis of size, evenly distributing computational resources across table entries. This led to challenges of “hot partitions” and throughput dilution.

Hot partitions happened because customer workloads were not uniformly distributed and kept hitting a subset of items. Throughput dilution happened when partitions that had been split to handle increased load ended up with so few keys that they could quickly max out their meager allocated capacity.

Our initial response to these challenges was to add bursting and adaptive capacity (along with other features such as split for consumption) to DynamoDB. This line of work also led to the launch of on-demand tables.

Bursting is a way to absorb temporary spikes in workloads at the partition level. It’s based on the observation that not all partitions hosted by a storage node use their allocated throughput simultaneously.

The idea is to let applications tap into unused capacity at a partition level on a best-effort basis to absorb short-lived spikes. DynamoDB still maintains workload isolation by ensuring that a partition can burst only if there is unused throughput at the node level.
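
A toy model of that admission rule, with hypothetical names and numbers, might look like the following: a request that would exceed its partition's allocation is admitted only if the node as a whole still has unused throughput.

```python
class StorageNode:
    """Toy model of partition-level bursting (illustrative only; assumes
    the provisioned allocations on a node sum to at most its capacity)."""

    def __init__(self, node_capacity: int):
        self.node_capacity = node_capacity
        self.allocated = {}   # partition id -> provisioned capacity units
        self.used = {}        # partition id -> units consumed this interval

    def admit(self, pid: str, cost: int) -> bool:
        within_allocation = self.used.get(pid, 0) + cost <= self.allocated.get(pid, 0)
        node_headroom = sum(self.used.values()) + cost <= self.node_capacity
        # Burst past the partition's allocation only on spare node capacity.
        if within_allocation or node_headroom:
            self.used[pid] = self.used.get(pid, 0) + cost
            return True
        return False          # throttle

node = StorageNode(node_capacity=1000)
node.allocated["p1"] = 100
print(node.admit("p1", 150))  # True: bursts into the node's unused throughput
```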

DynamoDB also launched adaptive capacity to handle long-lived spikes that cannot be absorbed by the burst capacity. Adaptive capacity monitors traffic patterns and repartitions tables so that heavily accessed items reside on different nodes.

Both bursting and adaptive capacity had limitations, however. Bursting helped only with short-lived spikes in traffic, and it was dependent on nodes’ having enough unused throughput to support it. Adaptive capacity was reactive: it kicked in only after requests had already been throttled.

To address these limitations, the DynamoDB team replaced adaptive capacity with global admission control (GAC). GAC builds on the idea of token buckets, in which bandwidth is allocated to network nodes as tokens, and the nodes “cash in” tokens in order to transmit data. Each request router maintains a local token bucket and communicates with GAC to replenish tokens at regular intervals (on the order of every few seconds). For an extra layer of defense, DynamoDB also uses token buckets at the partition level.
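
As a rough sketch of the local half of that design, here is a minimal token bucket in Python; the refill rate, burst cap, and replenishment protocol are simplified stand-ins for what the request routers actually negotiate with GAC.

```python
import time

class TokenBucket:
    """Minimal token bucket (a sketch: real request routers replenish
    their buckets from GAC every few seconds rather than refilling
    purely locally, as this toy version does)."""

    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec      # tokens added per second
        self.capacity = burst         # maximum tokens the bucket can hold
        self.tokens = burst
        self.last = time.monotonic()

    def try_consume(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost       # admit the request
            return True
        return False                  # out of tokens: throttle
```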

Continuous verification 

To provide durability and crash recovery, DynamoDB uses write-ahead logs, which durably record each write before it is applied. In the event of a crash, DynamoDB can replay the write-ahead logs to reconstruct lost writes, bringing partitions up to date.

Write-ahead logs are stored in all three replicas of a partition. For higher durability, the write-ahead logs are periodically archived to S3, an object store that is designed for 99.999999999% (11 nines) durability. Each replica holds the most recent write-ahead logs, which are usually waiting to be archived. The unarchived logs are typically a few hundred megabytes in size.
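
The core write-ahead-log discipline, stripped of all of DynamoDB's specifics, fits in a few lines of Python: append the record durably before applying it, and rebuild state by replaying the log after a crash. The file format and recovery logic here are illustrative only.

```python
import json, os

class WriteAheadLog:
    """Bare-bones write-ahead log sketch (not DynamoDB's actual format)."""

    def __init__(self, path: str):
        self.path = path
        self.state = {}
        if os.path.exists(path):
            self.recover()

    def put(self, key, value):
        with open(self.path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())   # durably log first ...
        self.state[key] = value    # ... then apply the write

    def recover(self):
        # Replay every logged write to reconstruct the in-memory state.
        with open(self.path) as f:
            for line in f:
                record = json.loads(line)
                self.state[record["key"]] = record["value"]
```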

[Figure: Healing a storage replica by copying the B-tree can take several minutes, while adding a log replica, which takes only a few seconds, ensures that there is no impact on durability.]

DynamoDB continuously verifies data at rest. Our goal is to detect any silent data errors or “bit rot” — bit errors caused by degradation of the storage medium. An example of continuous verification is the scrub process.

The scrub process verifies two things: that all three copies in a replication group have the same data and that the live replicas match a reference replica built offline using the archived write-ahead-log entries.

The verification is done by computing the checksum of the live replica and matching that with a snapshot of the reference replica. A similar technique is used to verify replicas of global tables. Over the years, we have learned that continuous verification of data at rest is the most reliable method of protecting against hardware failures, silent data corruption, and even software bugs.
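
A minimal sketch of that comparison, assuming an illustrative canonical serialization rather than DynamoDB's actual checksum algorithm:

```python
import hashlib

def replica_checksum(items: dict) -> str:
    """Checksum a replica's items in a canonical order (illustrative)."""
    h = hashlib.sha256()
    for key in sorted(items):
        h.update(repr((key, items[key])).encode("utf-8"))
    return h.hexdigest()

def scrub(live_replicas: list, reference: dict) -> bool:
    """All live copies must agree with the reference replica rebuilt
    offline from archived write-ahead-log entries."""
    expected = replica_checksum(reference)
    return all(replica_checksum(r) == expected for r in live_replicas)
```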

Availability

DynamoDB regularly tests its resilience to node, rack, and availability zone (AZ) failures. For example, to test the availability and durability of the overall service, DynamoDB performs power-off tests. Using realistic simulated traffic, a job scheduler powers off random nodes. At the end of all the power-off tests, the test tools verify that the data stored in the database is logically valid and not corrupted.

The first point about availability is that it needs to be measurable. DynamoDB is designed for 99.999% availability for global tables and 99.99% availability for regional tables. To ensure that these goals are being met, DynamoDB continuously monitors availability at the service and table levels. The tracked availability data is used to estimate customer-perceived availability trends and trigger alarms if the number of errors that customers see crosses a certain threshold.

These alarms are called customer-facing alarms (CFAs). The goal of these alarms is to report any availability-related problems and proactively mitigate them either automatically or through operator intervention. The key point to note here is that availability is measured not only on the server side but on the client side.
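
At its core, such an alarm is an availability ratio compared against a target. The sketch below uses the regional four-nines goal mentioned above; the real thresholds, error classifications, and measurement windows are more involved and are not specified here.

```python
REGIONAL_TARGET = 0.9999  # 99.99% availability goal for regional tables

def availability(successful_requests: int, total_requests: int) -> float:
    """Fraction of requests that succeeded in some measurement window."""
    return 1.0 if total_requests == 0 else successful_requests / total_requests

def customer_facing_alarm(successful_requests: int, total_requests: int) -> bool:
    """Fire when customer-perceived availability drops below the target
    (an illustrative check, not DynamoDB's actual alarm logic)."""
    return availability(successful_requests, total_requests) < REGIONAL_TARGET

print(customer_facing_alarm(999_000, 1_000_000))  # True: 99.9% < 99.99%
```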

We also use two sets of clients to measure the user-perceived availability. The first set of clients is internal Amazon services using DynamoDB as the data store. These services share the availability metrics for DynamoDB API calls as observed by their software.

The second set of clients is our DynamoDB canary applications. These applications are run from every AZ in the region, and they talk to DynamoDB through every public endpoint. Real application traffic allows us to reason about DynamoDB availability and latencies as seen by our customers. The canary applications offer a good representation of what our customers might be experiencing both long and short term.

The second point is that read and write availability need to be handled differently. A partition’s write availability depends on the health of its leader and of its write quorum, meaning two out of the three replicas from different AZs. A partition remains available as long as there are enough healthy replicas for a write quorum and a leader.
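
Stated as code, the write-availability condition for a single partition is a majority quorum plus a healthy leader. A hypothetical check, with the group size taken from the three-replica design described above:

```python
def write_available(healthy_replicas: int, has_leader: bool, group_size: int = 3) -> bool:
    """A partition accepts writes while it has a leader and a write quorum,
    i.e., a majority of its replication group (two of three replicas)."""
    quorum = group_size // 2 + 1  # 2 when group_size == 3
    return has_leader and healthy_replicas >= quorum

print(write_available(healthy_replicas=2, has_leader=True))  # True
print(write_available(healthy_replicas=1, has_leader=True))  # False: quorum lost
```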

In a large service, hardware failures such as memory and disk failures are common. When a node fails, all replication groups hosted on the node are down to two copies. The process of healing a storage replica can take several minutes because the repair process involves copying the B-tree — a data structure that maps partitions to storage locations — and write-ahead logs.

Upon detecting an unhealthy storage replica, the leader of a replication group adds a log replica to ensure there is no impact on durability. Adding a log replica takes only a few seconds, because the system has to copy only the most recent write-ahead logs from a healthy replica; reconstructing the more memory-intensive B-tree can wait. Quick healing of affected replication groups using log replicas thus ensures the high durability of the most recent writes. Adding a log replica is the fastest way to ensure that the write quorum of the group is always met. This minimizes disruption to write availability due to an unhealthy write quorum. The leader replica serves consistent reads.

Introducing log replicas was a big change to the system, but the Paxos consensus protocol, which has been formally proven correct, gave us the confidence to safely tweak and experiment with the system to achieve higher availability. We have been able to run millions of Paxos groups in a region with log replicas. Eventually consistent reads can be served by any of the replicas. If a leader fails, the other replicas detect its failure and elect a new leader, minimizing disruptions to the availability of consistent reads.
