Lessons learned from 10 years of DynamoDB

Prioritizing predictability over efficiency, adapting data partitioning to traffic, and continuously verifying data at rest are a few of the principles that help ensure stability, availability, and efficiency.

Amazon DynamoDB is one of the most popular NoSQL database offerings on the Internet, designed for simplicity, predictability, scalability, and reliability. To celebrate DynamoDB’s 10th anniversary, the DynamoDB team wrote a paper describing lessons we’d learned in the course of expanding a fully managed cloud-based database system to hundreds of thousands of customers. The paper was presented at the 2022 USENIX Annual Technical Conference (ATC).

The paper captures the following lessons that we have learned over the years:

  • Designing systems for predictability over absolute efficiency improves system stability. While components such as caches can improve performance, they should not introduce bimodality, in which the system has two radically different ways of responding to similar requests (e.g., one for cache misses and one for cache hits). Consistent behaviors ensure that the system is always provisioned to handle the unexpected. 
  • Adapting to customers’ traffic patterns to redistribute data improves customer experience. 
  • Continuously verifying data at rest is a reliable way to protect against both hardware failures and software bugs and to meet high durability goals. 
  • Maintaining high availability as a system evolves requires careful operational discipline and tooling. Mechanisms such as formal proofs of complex algorithms, game days (chaos and load tests), upgrade/downgrade tests, and deployment safety provide the freedom to adjust and experiment with the code without the fear of compromising correctness. 

Before we dig deeper into these topics, a little terminology. A DynamoDB table is a collection of items (e.g., products), and each item is a collection of attributes (e.g., name, price, category, etc.). Each item is uniquely identified by its primary key. In DynamoDB, tables are typically partitioned, or divided into smaller sub-tables, which are assigned to nodes. A node is a set of dedicated computational resources — a virtual machine — running on a single server in a datacenter.
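
To make those terms concrete, here is a small example using boto3, the AWS SDK for Python. The table name, attribute names, and values are invented for illustration, and the table is assumed to already exist with "ProductId" as its partition key.

```python
import boto3

# Hypothetical table and attributes, purely for illustration.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Products")

# An item is a collection of attributes, uniquely identified by its
# primary key (here, the partition key "ProductId").
table.put_item(
    Item={
        "ProductId": "B0EXAMPLE1",  # primary key
        "Name": "Echo Dot",
        "Price": 49,
        "Category": "Smart Home",
    }
)

# Retrieve the same item by primary key, using a strongly consistent read.
response = table.get_item(
    Key={"ProductId": "B0EXAMPLE1"},
    ConsistentRead=True,
)
print(response.get("Item"))
```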

DynamoDB stores three copies of each partition, in different availability zones. This makes the partition highly available and durable because the availability zones’ storage resources share nothing and are substantially independent. For instance, we wouldn’t assign a partition and one of its copies to nodes that share a power supply, because a power outage would take both of them offline. The three copies of the same partition are known as a replication group, and there is a leader for the group that is responsible for replicating all the customer mutations and serving strongly consistent reads.

Figure: The DynamoDB architecture, including a request router, the partition metadata system, and storage nodes in different availability zones (AZs).

Those definitions in hand, let’s turn to our lessons learned.

Predictability over absolute efficiency

DynamoDB employs a lot of metadata caches in order to reduce latency. One of those caches stores the routing metadata for data requests. This cache is deployed on a fleet of thousands of request routers, DynamoDB’s front-end service.

In the original implementation, when the request router received the first request for a table, it downloaded the routing information for the entire table and cached it locally. Since the configuration information about partition replicas rarely changed, the cache hit rate was approximately 99.75%.

This was an amazing hit rate. The flip side, however, was that the cache’s fallback was to query the metadata table directly. When the cache became ineffective, the metadata table had to scale instantaneously from serving 0.25% of requests to serving 100% of them, a 400-fold surge. Such a sudden spike in traffic could cause the metadata table to fail, triggering cascading failures in other parts of the system. To mitigate such failures, we redesigned our caches to behave predictably.

First, we built an in-memory datastore called MemDS, which significantly reduced request routers’ and other metadata clients’ reliance on local caches. MemDS stores all the routing metadata in a highly compressed manner and replicates it across a fleet of servers. MemDS scales horizontally to handle all incoming requests to DynamoDB.

Second, we deployed a new local cache that avoids the bimodality of the original cache. All requests, even those satisfied by the local cache, are asynchronously sent to MemDS. This ensures that the MemDS fleet always serves a constant volume of traffic, regardless of cache hits or misses. The regular exercise of the fallback code helps prevent surprises during fallback.
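
Here is a minimal sketch of that constant-work pattern in Python. The class names and the MemDS interface are hypothetical, and the real request routers are considerably more sophisticated; the invariant to notice is that MemDS receives exactly one request per lookup, regardless of the hit rate.

```python
import threading

class MemDSClient:
    """Stand-in for the MemDS fleet; the interface is hypothetical."""
    def lookup(self, key):
        return f"routing-metadata-for-{key}"

class ConstantWorkCache:
    """A local cache without bimodality: the backing store sees exactly
    one request per lookup, whether the cache hits or misses."""

    def __init__(self, memds_client):
        self._memds = memds_client
        self._cache = {}
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            value = self._cache.get(key)

        if value is not None:
            # Hit: return the cached value, but still refresh asynchronously,
            # so a falling hit rate cannot amplify traffic to MemDS.
            threading.Thread(target=self._refresh, args=(key,), daemon=True).start()
            return value

        # Miss: fetch synchronously -- still the same single call to MemDS.
        return self._refresh(key)

    def _refresh(self, key):
        value = self._memds.lookup(key)
        with self._lock:
            self._cache[key] = value
        return value

cache = ConstantWorkCache(MemDSClient())
print(cache.get("table-42"))  # miss: synchronous fetch from MemDS
print(cache.get("table-42"))  # hit: served locally, async refresh in flight
```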

Figure: The DynamoDB architecture with MemDS.

Unlike conventional local caches, MemDS sees traffic that is proportional to the customer traffic seen by the service; thus, during cache failures, it does not see a sudden amplification of traffic. Doing constant work removed the need for complex logic to handle edge cases around cache misses and reduced the reliance on local caches, improving system stability.

Reshaping partitioning based on traffic

Partitions offer a way to dynamically scale both the capacity and performance of tables. In the original DynamoDB release, customers explicitly specified the throughput that a table required in terms of read capacity units (RCUs) and write capacity units (WCUs). The original system assigned partitions to nodes based on both available space and computational capacity.

As the demands on a table changed (because it grew in size or because the load increased), its partitions could be split further to allow the table to scale elastically. The partition abstraction proved extremely valuable and continues to be central to the design of DynamoDB.

However, the early version of DynamoDB allocated both space and throughput to individual partitions on the basis of size alone, distributing a table’s computational resources evenly across its partitions. This led to the challenges of “hot partitions” and throughput dilution.

Hot partitions arose because customer workloads were not uniformly distributed and kept hitting a subset of items. Throughput dilution arose when a partition was split: the parent’s allocated throughput was divided evenly between the two children, so each child was left with a fraction of its parent’s capacity and could quickly max out its meager allocation.
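
A back-of-the-envelope example of throughput dilution, with invented numbers:

```python
# Illustrative numbers only; these are not DynamoDB's actual allocations.
table_wcu = 1000                        # table's provisioned write capacity
partitions = 4
parent_share = table_wcu / partitions   # 250 WCUs per partition

# In the original scheme, a split divided the parent's allocation
# evenly between its two children.
child_share = parent_share / 2          # 125 WCUs per child

# A workload that drove the parent at 200 WCUs fit comfortably before
# the split but would throttle either child afterward.
print(parent_share, child_share)        # 250.0 125.0
```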

Our initial response to these challenges was to add bursting and adaptive capacity (along with other features such as split for consumption) to DynamoDB. This line of work also led to the launch of on-demand tables.

Bursting is a way to absorb short-lived spikes in workloads at a partition level. It’s based on the observation that not all partitions hosted by a storage node use their allocated throughput simultaneously.

The idea is to let applications tap into unused capacity at a partition level on a best-effort basis to absorb short-lived spikes. DynamoDB still maintains workload isolation by ensuring that a partition can burst only if there is unused throughput at the node level.
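
In toy form, that admission rule looks something like the following; all names and numbers are invented, and the real accounting is time-based rather than a simple counter.

```python
def admit(cost, partition_tokens, node_unused_tokens):
    """Best-effort bursting, sketched: a partition may exceed its own
    allocation only while its host node has unused throughput to lend."""
    if partition_tokens >= cost:
        return True                    # within the partition's allocation
    return node_unused_tokens >= cost  # burst, best effort

print(admit(5, partition_tokens=10, node_unused_tokens=0))  # True: normal
print(admit(5, partition_tokens=0, node_unused_tokens=50))  # True: burst
print(admit(5, partition_tokens=0, node_unused_tokens=0))   # False: throttled
```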

DynamoDB also launched adaptive capacity to handle long-lived spikes that cannot be absorbed by the burst capacity. Adaptive capacity monitors traffic patterns and repartitions tables so that heavily accessed items reside on different nodes.

Both bursting and adaptive capacity had limitations, however. Bursting helped only with short-lived spikes in traffic, and it depended on the node’s having enough unused throughput to support it. Adaptive capacity was reactive, kicking in only after a customer’s traffic had already been throttled.

To address these limitations, the DynamoDB team replaced adaptive capacity with global admission control (GAC). GAC builds on the idea of token buckets, in which bandwidth is allocated to network nodes as tokens, and the nodes “cash in” tokens in order to transmit data. Each request router maintains a local token bucket and communicates with GAC to replenish tokens at regular intervals (on the order of every few seconds). For an extra layer of defense, DynamoDB also uses token buckets at the partition level.
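
Here is a minimal token-bucket sketch in Python. The capacities, costs, and grant sizes are invented, and the real GAC protocol tracks consumption rates and sizes its grants accordingly; the sketch shows only the local mechanics a request router would run.

```python
class TokenBucket:
    """Local token bucket of the kind described above (names hypothetical)."""

    def __init__(self, capacity, tokens):
        self.capacity = capacity   # maximum tokens the bucket can hold
        self.tokens = tokens       # current balance

    def try_consume(self, cost):
        # Admit the request only if enough tokens remain; otherwise throttle.
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

    def replenish(self, granted):
        # Called when GAC grants more tokens, on the order of every few seconds.
        self.tokens = min(self.capacity, self.tokens + granted)

bucket = TokenBucket(capacity=1000, tokens=1000)
for cost in [5, 5, 990, 5]:
    print(bucket.try_consume(cost))   # True, True, True, False
bucket.replenish(granted=500)
print(bucket.try_consume(5))          # True again after replenishment
```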

Continuous verification 

To provide durability and crash recovery, DynamoDB uses write-ahead logs, which record each write before it is applied. In the event of a crash, DynamoDB can replay the write-ahead logs to reconstruct any writes that had not yet been applied, bringing partitions up to date.
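
For intuition, here is a minimal write-ahead-logging sketch; DynamoDB’s actual log format, replication, and recovery protocol are far more involved.

```python
import json
import os

class WriteAheadLog:
    """Toy write-ahead log: every mutation is flushed to the log before it
    is applied, so a crash cannot lose an acknowledged write."""

    def __init__(self, path):
        self.path = path
        self.store = {}
        self._recover()

    def put(self, key, value):
        record = json.dumps({"key": key, "value": value})
        with open(self.path, "a") as log:
            log.write(record + "\n")
            log.flush()
            os.fsync(log.fileno())  # durable before we apply or acknowledge
        self.store[key] = value      # apply only after the log write succeeds

    def _recover(self):
        # After a crash, replay the log to reconstruct the store.
        if not os.path.exists(self.path):
            return
        with open(self.path) as log:
            for line in log:
                record = json.loads(line)
                self.store[record["key"]] = record["value"]
```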

Write-ahead logs are stored in all three replicas of a partition. For higher durability, the write-ahead logs are periodically archived to S3, an object store that is designed for 99.999999999% (11 nines) durability. Each replica contains the most recent write-ahead logs, which are usually waiting to be archived. The unarchived logs are typically a few hundred megabytes in size.

Figure: Healing a storage replica by copying the B-tree can take several minutes, while adding a log replica, which takes only a few seconds, ensures that there is no impact on durability.

DynamoDB continuously verifies data at rest. Our goal is to detect any silent data errors or “bit rot” — bit errors caused by degradation of the storage medium. An example of continuous verification is the scrub process.

The scrub process verifies two things: that all three copies in a replication group have the same data and that the live replicas match a reference replica built offline using the archived write-ahead-log entries.

The verification is done by computing the checksum of the live replica and comparing it with the checksum of a snapshot of the reference replica. A similar technique is used to verify replicas of global tables. Over the years, we have learned that continuous verification of data at rest is the most reliable method of protecting against hardware failures, silent data corruption, and even software bugs.
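
In sketch form, a scrub-style check might look like this. The helper names are hypothetical, the checksum here is order-independent over a replica’s items, and the real scrub operates on B-trees, snapshots, and archived write-ahead logs.

```python
import hashlib

def checksum(items):
    """Order-independent checksum over a replica's items (illustrative)."""
    digest = hashlib.sha256()
    for key in sorted(items):
        digest.update(f"{key}={items[key]}".encode())
    return digest.hexdigest()

def scrub(live_replicas, reference_replica):
    """Both scrub checks at once: if every live replica matches the
    reference built offline from archived logs, they also match each other."""
    reference = checksum(reference_replica)
    return all(checksum(replica) == reference for replica in live_replicas)

# A single bit flip in one replica is caught by the scrub.
healthy = {"a": 1, "b": 2}
corrupt = {"a": 1, "b": 3}
print(scrub([healthy, healthy, healthy], healthy))  # True
print(scrub([healthy, corrupt, healthy], healthy))  # False
```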

Availability

DynamoDB regularly tests its resilience to node, rack, and availability zone (AZ) failures. For example, to test the availability and durability of the overall service, DynamoDB performs power-off tests. Using realistic simulated traffic, a job scheduler powers off random nodes. At the end of all the power-off tests, the test tools verify that the data stored in the database is logically valid and not corrupted.

The first point about availability is that it needs to be measurable. DynamoDB is designed for 99.999% availability for global tables and 99.99% availability for regional tables. To ensure that these goals are being met, DynamoDB continuously monitors availability at the service and table levels. The tracked availability data is used to estimate customer-perceived availability trends and trigger alarms if the number of errors that customers see crosses a certain threshold.
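
To put those targets in perspective, here is the yearly downtime budget each implies; this is simple arithmetic, not a statement of DynamoDB’s SLA.

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, target in [("99.99% (regional tables)", 0.9999),
                      ("99.999% (global tables)", 0.99999)]:
    budget = (1 - target) * MINUTES_PER_YEAR
    print(f"{label}: ~{budget:.1f} minutes of downtime per year")

# 99.99% (regional tables): ~52.6 minutes of downtime per year
# 99.999% (global tables): ~5.3 minutes of downtime per year
```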

These alarms are called customer-facing alarms (CFAs). The goal of these alarms is to report any availability-related problems and proactively mitigate them either automatically or through operator intervention. The key point to note here is that availability is measured not only on the server side but also on the client side.

We also use two sets of clients to measure the user-perceived availability. The first set of clients is internal Amazon services using DynamoDB as the data store. These services share the availability metrics for DynamoDB API calls as observed by their software.

The second set of clients is our DynamoDB canary applications. These applications run from every AZ in the region, and they talk to DynamoDB through every public endpoint. Realistic application traffic allows us to reason about DynamoDB availability and latencies as our customers see them. The canary applications give us a good representation of what our customers might be experiencing, both short and long term.

The second point is that read and write availability need to be handled differently. A partition’s write availability depends on the health of its leader and of its write quorum, meaning two out of the three replicas from different AZs. A partition remains available as long as there are enough healthy replicas for a write quorum and a leader.
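
Stated as a predicate, the condition is simple; real health detection and leader leases are, of course, much subtler.

```python
def write_available(healthy_replicas, has_leader, quorum=2):
    """Writes proceed only with a leader and at least a write quorum
    (two of three replicas) healthy."""
    return has_leader and healthy_replicas >= quorum

print(write_available(3, True))  # True: fully healthy group
print(write_available(2, True))  # True: one replica down, quorum holds
print(write_available(1, True))  # False: quorum lost until the group heals
```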

In a large service, hardware failures such as memory and disk failures are common. When a node fails, all replication groups hosted on the node are down to two copies. The process of healing a storage replica can take several minutes, because the repair process involves copying the B-tree (the data structure that stores the partition’s key-value data) and the write-ahead logs.

Upon detecting an unhealthy storage replica, the leader of a replication group adds a log replica to ensure there is no impact on durability. Adding a log replica takes only a few seconds, because the system has to copy only the most recent write-ahead logs from a healthy replica; reconstructing the much larger B-tree can wait. Quick healing of affected replication groups with log replicas thus ensures the high durability of the most recent writes, and it is the fastest way to restore the group’s write quorum, minimizing disruption to write availability.

Introducing log replicas was a big change to the system, but the Paxos consensus protocol, which is formally provable, gave us the confidence to safely tweak and experiment with the system to achieve higher availability. With log replicas, we have been able to run millions of Paxos groups in a region. The leader replica serves strongly consistent reads, while eventually consistent reads can be served by any replica. If a leader fails, the other replicas detect the failure and elect a new leader, minimizing disruption to the availability of consistent reads.
