The early success of the Dynamo database encouraged Swaminathan (Swami) Sivasubramanian (top right), Werner Vogels (lower right), and colleagues to write the Dynamo research paper and share it at the 2007 ACM Symposium on Operating Systems Principles (SOSP). The Dynamo paper served as a catalyst for the category of distributed database technologies commonly known as NoSQL. Dynamo is the progenitor of Amazon DynamoDB, the company's cloud-based NoSQL database service that launched 10 years ago today.

Amazon’s DynamoDB — 10 years later

Amazon DynamoDB was introduced 10 years ago today; one of its key contributors reflects on its origins and discusses the 'never-ending journey' to make DynamoDB more secure, more available, and more performant.

Ten years ago, Amazon Web Services (AWS) launched Amazon DynamoDB, a fast, flexible NoSQL database service that offers single-digit millisecond performance at any scale.

In an online post on Jan. 18, 2012, Werner Vogels, chief technology officer at Amazon.com, wrote: “Today is a very exciting day as we release Amazon DynamoDB, a fast, highly reliable and cost-effective NoSQL database service designed for internet scale applications. DynamoDB is the result of 15 years of learning in the areas of large scale non-relational databases and cloud services.

“Several years ago we published a paper on the details of Amazon’s Dynamo technology, which was one of the first non-relational databases developed at Amazon,” Vogels continued. “The original Dynamo design was based on a core set of strong distributed systems principles resulting in an ultra-scalable and highly reliable database system. Amazon DynamoDB, which is a new service, continues to build on these principles, and also builds on our years of experience with running non-relational databases and cloud services, such as Amazon SimpleDB and Amazon S3, at scale. It is very gratifying to see all of our learning and experience become available to our customers in the form of an easy-to-use managed service.”

One of Vogels’s coauthors on the 2007 Dynamo paper, and a key contributor to the development of DynamoDB, was Swaminathan (Swami) Sivasubramanian, then an Amazon research engineer working on the design, implementation, and analysis of distributed systems technology, and now vice president of Database, Analytics, and Machine Learning at AWS.

A decade after the launch of DynamoDB, Sivasubramanian says we’re “experiencing an amazing era of renaissance when it comes to data and machine learning.”

“We now live in an era where you can actually store your data in these databases and quickly start building your data lakes within Amazon S3 and then analyze them using Amazon SageMaker in a matter of a couple of weeks, if not days. That is simply remarkable.

“We now have the opportunity to help customers gain insights from their data faster,” Sivasubramanian added. “This is a mission that truly excites me because customers really want to put their data to work to enable data-driven decision making. More and more, CIOs and organizations are realizing that it is going to be survival of the most informed, and those that put their data to work are the ones that won't just survive, they will thrive.”

To mark the 10-year anniversary of the launch of Amazon DynamoDB, Amazon Science asked Sivasubramanian three questions about the origins of DynamoDB, its progenitor Dynamo, and the future of DynamoDB.

  1. Q. You were a co-author on the 2007 Dynamo paper. At that time, the industry was transitioning from a scale-up to a scale-out architectural approach. Can you tell us about the origin story for Dynamo?

    A. To get to 2007, I have to start with 2004, 2005. Even as I was working on my PhD [Sivasubramanian earned his PhD in computer science in 2006 from Vrije Universiteit Amsterdam], I was contemplating where I would work. Ultimately, what convinced me to join Amazon as a research engineer intern [2005] was seeing how Amazon was pushing the boundaries of scale.

    I admit I was a little bit of a skeptic as an outsider. At that time, AWS didn’t even exist. But when I joined, I soon had an ‘aha moment’: yes, Amazon was an e-commerce company, but it was really a technology company that also did e-commerce. It was an interesting revelation for me, seeing how many new technologies Amazon had to invent just to support its e-commerce workload.

    As an intern, I was working as an engineer on amazon.com, and during our peak holiday traffic we experienced a serious scaling failure due to a database transaction deadlocking issue. The problem was caused by the relational database from a commercial vendor that we were using at the time. A bunch of engineers got together and wrote what we call a COE, a correction-of-errors document, in which we say what happened, what we learned, how we fixed the issue, and how we would avoid a recurrence.

    I don't know if it was me being naive or just being confident in the way only a 20-something intern can be, but I asked the question: ‘Why are we using a relational database for this? These workloads don't need the SQL level of complexity and transactional guarantees.’

    Peter Vosshall presents Dynamo at the 2007 ACM Symposium on Operating Systems Principles (SOSP).

    This led us to start rethinking how we architected our underlying data stores altogether. At the time there wasn’t a scalable non-relational database, which is what led us to build the original Dynamo and, eventually, to write the paper. Dynamo was not the only part of our architecture we were rethinking at the time. We realized we also needed a scalable storage system, which led us to build Amazon S3, and a more managed relational database with automated replication, failover, and backup and restore, which led us to build Amazon RDS.

    One rule we set for the original Dynamo paper was not to publish as soon as we had developed the original design, but to first let Dynamo run in production supporting several Amazon.com services, so that the Dynamo paper would be an end-to-end experience paper. Werner and I felt very strongly about this because we didn't want it to be just another academic paper. That’s why I was very proud when, 10 years later, the paper was awarded a test-of-time award.

  2. Q. What’s the origin story for DynamoDB, and how has the technology evolved in the past decade?

    A. The idea behind DynamoDB developed from discussions with customers like Don MacAskill, the CEO of SmugMug and Flickr. More and more companies like Don’s were web-based, and the number of users online was exploding. The traditional relational database model of storing all the data in a single box was not scaling well. It forced the complexity back on the users to shard their relational databases and then manage all the partitioning and re-partitioning and so forth.

    This wasn’t new to us; these challenges are why we built the original Dynamo, but it wasn’t yet a service. It was a software system that Amazon engineers had to operate. At some point in one of our customer advisory board meetings, Don said, ‘You all started Dynamo and showed what is possible with a scalable non-relational database system. Why can't we have that as an external service?’

    All senior AWS executives were there, and honestly it was a question we were asking ourselves at the time. Don wasn’t the only customer asking for it; more and more customers wanted that kind of scalable database where they didn't have to deal with partitioning and re-partitioning, and they also wanted extreme availability. This led to the genesis of our thinking about what it would take to build a scalable cloud database that wasn’t constrained by the SQL API.

    DynamoDB was different from the original Dynamo because it actually exposed several of the original Dynamo components via very easy-to-use cloud controls. Our customers didn’t have to provision clusters anymore. They could just create a table and seamlessly scale it up and down; they didn’t have to deal with any of the operations, or even install a single library to operate a database. This evolution of Dynamo to DynamoDB was important because we truly embraced the cloud, and its elasticity and scalability in an unprecedented manner.
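
    To make that concrete, here is a minimal sketch in Python using boto3 (the AWS SDK): creating a table is a single API call, with no clusters to provision or libraries to install beyond the SDK itself. The table name, attribute names, and capacity numbers are hypothetical, chosen only for illustration.

        # Hedged sketch: one API call creates a DynamoDB table; the service
        # handles partitioning and operations behind this single setting.
        # Names and capacity units below are hypothetical.
        import boto3

        dynamodb = boto3.client("dynamodb", region_name="us-east-1")

        dynamodb.create_table(
            TableName="Photos",  # hypothetical table
            AttributeDefinitions=[
                {"AttributeName": "UserId", "AttributeType": "S"},
            ],
            KeySchema=[
                {"AttributeName": "UserId", "KeyType": "HASH"},  # partition key
            ],
            # Provisioned throughput, as in the original 2012 model.
            ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        )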

    Werner Vogels, vice president and chief technology officer of Amazon.com, introduced DynamoDB on Jan. 18, 2012 with this post in which he said DynamoDB "brings the power of the cloud to the NoSQL database world."

    We launched it on January 18, 2012, and it was a hit right out of the gate. Don’s company and several others started using it. Right from launch, not just the elasticity but the single-digit millisecond latency resonated really well with customers. We had innovated quite a bit, all the way from the protocol layer down to the underlying SSD-based storage layer, along with other capabilities that we enabled.

    One of the first production projects was a customer with an interesting use case: they were running a Super Bowl advertisement. Because DynamoDB was extremely elastic, it could seamlessly scale up to 100,000 writes a second, and then scale back down after the Super Bowl was over so they wouldn't keep incurring costs. This was a big deal; it wasn’t considered possible at that time. It seems super obvious now, but back then databases were not that elastic and scalable.
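
    As an illustration of that elasticity, the scale-up/scale-down pattern is two calls to the same API. The table name and capacity figures below are hypothetical, with the write capacity chosen to mirror the 100,000 writes a second in the story.

        import boto3

        dynamodb = boto3.client("dynamodb")

        # Before the event: raise provisioned write capacity.
        dynamodb.update_table(
            TableName="SuperBowlAd",  # hypothetical table
            ProvisionedThroughput={"ReadCapacityUnits": 1000,
                                   "WriteCapacityUnits": 100000},
        )

        # After the event: scale back down so the capacity is no longer billed.
        dynamodb.update_table(
            TableName="SuperBowlAd",
            ProvisionedThroughput={"ReadCapacityUnits": 100,
                                   "WriteCapacityUnits": 100},
        )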

    It was a bold vision. But DynamoDB’s built-for-the-cloud architecture made all of these scale-out use cases possible, and that is one of the reasons why DynamoDB now powers multiple high-traffic Amazon sites and systems including Alexa, Amazon.com, and all Amazon fulfillment centers. Last year, over the course of our 66-hour Prime Day, these sources made trillions of API calls and DynamoDB maintained high availability with single-digit millisecond performance, peaking at 89.2 million requests per second.

    And since 2012, we have added so many innovations, not just in its underlying availability, durability, security, and scale, but in ease-of-use features as well.

    Swami Sivasubramanian, AWS | CUBE Conversation, January 2022

    We’ve gone beyond a key-value store and now support not just hash-based partitioning but also range-based partitioning, and we’ve added support for secondary indexes to enable more complex query capabilities, all without compromising on scale or availability.
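
    A sketch of what that looks like in practice, again in boto3 with hypothetical names: a table with a hash partition key and a range (sort) key, a global secondary index for an alternate access pattern, and a range query against the sort key.

        import boto3

        dynamodb = boto3.client("dynamodb")

        dynamodb.create_table(
            TableName="Posts",  # hypothetical table
            AttributeDefinitions=[
                {"AttributeName": "UserId",   "AttributeType": "S"},
                {"AttributeName": "PostedAt", "AttributeType": "N"},
                {"AttributeName": "Category", "AttributeType": "S"},
            ],
            KeySchema=[
                {"AttributeName": "UserId",   "KeyType": "HASH"},   # hash partition key
                {"AttributeName": "PostedAt", "KeyType": "RANGE"},  # range (sort) key
            ],
            GlobalSecondaryIndexes=[{
                "IndexName": "ByCategory",  # hypothetical secondary index
                "KeySchema": [
                    {"AttributeName": "Category", "KeyType": "HASH"},
                    {"AttributeName": "PostedAt", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
                "ProvisionedThroughput": {"ReadCapacityUnits": 5,
                                          "WriteCapacityUnits": 5},
            }],
            ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        )

        # Range query on the sort key: one user's posts from a given time onward.
        dynamodb.query(
            TableName="Posts",
            KeyConditionExpression="UserId = :u AND PostedAt >= :t",
            ExpressionAttributeValues={":u": {"S": "user-123"},
                                       ":t": {"N": "1640995200"}},
        )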

    We also now support scalable change data capture through Amazon Kinesis Data Streams for DynamoDB. One of the things I strongly believe about any database is that it should not be an island; it can’t be a dead end. It should generate streams of what data changed, and then use those to bridge to your analytics applications or other data stores.
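
    Assuming an existing Kinesis data stream, wiring up that change-data-capture bridge is a single call; the table name and stream ARN below are placeholders.

        import boto3

        dynamodb = boto3.client("dynamodb")

        # Route the table's item-level changes into a Kinesis data stream.
        dynamodb.enable_kinesis_streaming_destination(
            TableName="Posts",  # hypothetical table
            StreamArn="arn:aws:kinesis:us-east-1:123456789012:stream/posts-changes",
        )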

    We have continued innovating across the board on features like backup and restore. For a large-scale database system like DynamoDB with millions of partitions, doing backup and restore isn’t easy, and a lot of great innovations went into making this experience easy for customers.
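
    For illustration, the on-demand flavor of that experience looks like this in boto3, with placeholder names: one call takes a backup, another restores it into a new table.

        import boto3

        dynamodb = boto3.client("dynamodb")

        # Take an on-demand backup of a table.
        backup = dynamodb.create_backup(
            TableName="Posts",              # hypothetical table
            BackupName="posts-2022-01-18",  # hypothetical backup name
        )

        # Restore the backup into a new table.
        dynamodb.restore_table_from_backup(
            TargetTableName="Posts-restored",
            BackupArn=backup["BackupDetails"]["BackupArn"],
        )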

    We have also added global tables so customers can operate across multiple regions, and then we added the ability to do transactions with DynamoDB, all with an eye on how to keep delivering on DynamoDB’s mission of availability and scalability.
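
    A sketch of the transactions API, with hypothetical tables and attributes: a put and a conditional update that succeed or fail together.

        import boto3

        dynamodb = boto3.client("dynamodb")

        # Two writes in one atomic transaction: record an order and
        # decrement inventory only if stock remains.
        dynamodb.transact_write_items(
            TransactItems=[
                {"Put": {
                    "TableName": "Orders",  # hypothetical table
                    "Item": {"OrderId": {"S": "o-1"},
                             "Status": {"S": "PLACED"}},
                }},
                {"Update": {
                    "TableName": "Inventory",  # hypothetical table
                    "Key": {"Sku": {"S": "sku-42"}},
                    "UpdateExpression": "SET Stock = Stock - :one",
                    "ConditionExpression": "Stock >= :one",
                    "ExpressionAttributeValues": {":one": {"N": "1"}},
                }},
            ]
        )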

    Recently we also launched the Amazon DynamoDB Standard-Infrequent Access (Standard-IA) table class to reduce the cost of storage. Customers often need to store data long term, and while this older data may be accessed infrequently, it must remain highly available. For example, end users of social media apps rarely access older posts and uploaded images, but the app must ensure that these artifacts are immediately accessible when requested. Because of its growing volume and the relatively high cost of storing it, this infrequently accessed data can represent a significant storage expense, so customers have optimized costs by writing code to move older, less frequently accessed data from DynamoDB to lower-cost storage alternatives like Amazon S3. At the most recent re:Invent, we launched the Standard-IA table class as a cost-efficient way to store infrequently accessed data while maintaining the high availability and performance of DynamoDB.
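
    Switching an existing table to the new table class is a single setting; a sketch with a placeholder table name:

        import boto3

        dynamodb = boto3.client("dynamodb")

        # Move a table to the lower-cost Standard-IA table class.
        dynamodb.update_table(
            TableName="UserPosts",  # hypothetical table
            TableClass="STANDARD_INFREQUENT_ACCESS",
        )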

    We are on this journey with the original vision of DynamoDB as the guiding light, and we continue to innovate to help customers with use cases around ease of querying and complex global transaction replication, while also continuing to manage costs.

  3. Q. What might the next 10 years bring?

    A. When we started with DynamoDB ten years ago, customers were just beginning to understand the cloud itself: its benefits and what they could do with it.

    Now we live in a world where cloud is the new normal in terms of how customers are building IT applications, and scale is also the new normal because every app is being built to handle viral moments. DynamoDB itself will be on this continuous journey where we will continue to innovate on behalf of customers. One of the things we will continue moving toward is an end-to-end data strategy mission because, as I mentioned earlier, no database is an island.

    Customers no longer want to just store and query the data in their databases. They then want to analyze that data to create value, whether that’s a better personalization or recommendation engine, or a forecasting system that you can run predictive analytics against using machine learning. Connecting the dots end to end, and continuing to make DynamoDB more secure, more available, more performant, and easier to use will be our never-ending journey.
