Amazon DynamoDB features

Why DynamoDB?

Amazon DynamoDB is a serverless NoSQL database service that supports key-value and document data models. Developers can use DynamoDB to build modern, serverless applications that can start small and scale globally. DynamoDB scales to support tables of virtually any size with automated horizontal scaling.

Availability, durability, and fault tolerance are built-in and cannot be turned off, removing the need to architect your applications for these capabilities.

DynamoDB is designed to run high-performance, internet-scale applications that would overburden traditional relational databases. With over ten years of pioneering innovation investment, DynamoDB offers limitless scalability with consistent single-digit millisecond performance and up to 99.999% availability.

To learn about new features and capabilities, visit DynamoDB What's New announcements.

Serverless performance with limitless scalability

DynamoDB supports both key-value and document data models. As a NoSQL database, DynamoDB has a flexible schema, so each item can have many different attributes. A flexible schema allows you to easily adapt as your business requirements change, without the burden of having to redefine the table schema as you would in relational databases.
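As an illustration of this flexible schema, here is a sketch of two items that could coexist in the same hypothetical `Users` table (table, key, and attribute names are illustrative, not from DynamoDB itself):

```python
# Two items for the same hypothetical "Users" table, expressed in the
# DynamoDB JSON wire format used by the low-level API. Only the primary
# key ("UserId") must appear on every item; all other attributes can vary
# freely from item to item, with no table-wide schema change required.
item_alice = {
    "UserId": {"S": "alice"},
    "Email": {"S": "alice@example.com"},
    "LoginCount": {"N": "12"},
}
item_bob = {
    "UserId": {"S": "bob"},
    "Nickname": {"S": "bobby"},
    "Preferences": {"M": {"theme": {"S": "dark"}}},
}

# With an assumed boto3 client, each item would be written with:
#   boto3.client("dynamodb").put_item(TableName="Users", Item=item_alice)

# The only attribute the two items are required to share is the key:
shared_keys = set(item_alice) & set(item_bob)
```

Note that the two items share only the primary key attribute; everything else differs without any schema migration.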

With DynamoDB, there are no servers to provision, patch, or manage, and no software to install, maintain, or operate. DynamoDB has no versions (major, minor, or patch) and no maintenance windows, and it provides zero-downtime maintenance. DynamoDB's on-demand mode offers pay-as-you-go pricing, scales to zero, and automatically adjusts table capacity to maintain performance with zero administration.

DynamoDB is built for mission-critical workloads, including support for atomicity, consistency, isolation, and durability (ACID) transactions for applications that require complex business logic. DynamoDB provides native, server-side support for transactions, simplifying the developer experience of making coordinated, all-or-nothing changes to multiple items both within and across tables.

DynamoDB supports 100 actions per transaction, improving developer productivity. With support for transactions, developers can extend the scale, performance, and enterprise benefits of DynamoDB to a broader set of mission-critical workloads.
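As a sketch of what a coordinated, all-or-nothing change looks like, the following builds a TransactWriteItems request that debits an account and records an order across two hypothetical tables (all names and values are illustrative):

```python
# Sketch of a TransactWriteItems request: either both writes succeed or
# neither does. The ConditionExpression guards against overdrawing the
# account; if it fails, the entire transaction is rolled back.
transact_request = {
    "TransactItems": [
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": "acct-123"}},
                "UpdateExpression": "SET Balance = Balance - :amt",
                "ConditionExpression": "Balance >= :amt",
                "ExpressionAttributeValues": {":amt": {"N": "25"}},
            }
        },
        {
            "Put": {
                "TableName": "Orders",
                "Item": {"OrderId": {"S": "order-789"}, "Amount": {"N": "25"}},
            }
        },
    ]
}

# Submitted with an assumed boto3 client:
#   boto3.client("dynamodb").transact_write_items(**transact_request)
```

A single transaction like this can contain up to 100 such actions, spanning multiple tables.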

DynamoDB global tables provide active-active replication of your data across your choice of AWS Regions with 99.999% availability. Global tables are multi-active, meaning you can read and write from any replica, and your globally distributed applications can locally access data in the selected Regions to get single-digit millisecond read and write performance.

Also, global tables automatically scale capacity to accommodate your multi-Region workloads. Global tables improve your application’s multi-Region resiliency and should be considered as part of your organization’s business continuity strategy.

DynamoDB Streams is a change data capture capability. Whenever an application creates, updates, or deletes items in a table, DynamoDB Streams records a time-ordered sequence of every item-level change in near real time, making it ideal for event-driven architectures that consume and act on the changes. All changes are deduplicated and stored for 24 hours.

Applications can also access this log and view the data items as they appeared before and after they were modified in near real time. DynamoDB Streams ensures that each stream record appears exactly once in the stream and, for each modified item, the stream records appear in the same sequence as the actual modifications to the item.
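For example, here is a sketch of handling one stream record (the kind delivered to a consumer such as an AWS Lambda function); the record shape shown assumes a stream configured with the NEW_AND_OLD_IMAGES view type, and the attribute names are illustrative:

```python
# Sketch of processing a DynamoDB Streams record. With the
# NEW_AND_OLD_IMAGES view type, each record carries the item as it looked
# both before and after the modification.
def summarize_record(record):
    change = record["dynamodb"]
    return {
        "event": record["eventName"],      # INSERT | MODIFY | REMOVE
        "before": change.get("OldImage"),  # absent for INSERT
        "after": change.get("NewImage"),   # absent for REMOVE
    }

# A hypothetical record for an item whose LoginCount went from 11 to 12:
sample = {
    "eventName": "MODIFY",
    "dynamodb": {
        "OldImage": {"UserId": {"S": "alice"}, "LoginCount": {"N": "11"}},
        "NewImage": {"UserId": {"S": "alice"}, "LoginCount": {"N": "12"}},
    },
}
summary = summarize_record(sample)
```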

Similar to all other database systems, you start by creating a table, which is a collection of items. With DynamoDB, each item in the table has its own primary key. Many applications can also benefit from having one or more secondary keys to more efficiently search data using other attributes. DynamoDB offers the option to create both global and local secondary indexes, which lets you query the data in the table using a secondary or alternate key.

Global secondary indexes can also act as sparse indexes: an item appears in the index only if it contains the index key attributes. In addition to giving you maximum flexibility in how you access your data, you can provision a global secondary index with lower write throughput than its base table, delivering excellent query performance at a lower cost.
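To make this concrete, here is a sketch of create-table parameters that define a global secondary index on a hypothetical `Orders` table so the data can also be queried by customer (all names are illustrative):

```python
# Sketch of create_table parameters for a table keyed by OrderId, with a
# global secondary index keyed by CustomerId. Queries against the index
# use CustomerId as an alternate key without scanning the base table.
create_table_params = {
    "TableName": "Orders",
    "AttributeDefinitions": [
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "CustomerId", "AttributeType": "S"},
    ],
    "KeySchema": [{"AttributeName": "OrderId", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "CustomerId-index",
            "KeySchema": [{"AttributeName": "CustomerId", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
}

# Created with an assumed boto3 client:
#   boto3.client("dynamodb").create_table(**create_table_params)
```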

Security and reliability

With DynamoDB, there are no usernames or passwords. DynamoDB uses AWS Identity and Access Management (IAM) to authenticate requests and control access to resources. You can specify IAM policies, resource-based policies, and conditions that allow fine-grained access, restricting read or write access down to specific items and attributes in a table based on the identity of the user. This allows customers to enforce security policies at the code level.
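As an example of such fine-grained access, here is a sketch of an IAM policy statement (expressed as a Python dict) that restricts a federated user to items whose partition key matches their own identity; the table ARN and identity variable are illustrative:

```python
# Sketch of an IAM policy statement using the dynamodb:LeadingKeys
# condition key: the caller may only read items whose partition key value
# equals their federated identity ID. ARN and identity source are
# placeholders for illustration.
policy_statement = {
    "Effect": "Allow",
    "Action": ["dynamodb:GetItem", "dynamodb:Query"],
    "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Users",
    "Condition": {
        "ForAllValues:StringEquals": {
            "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
        }
    },
}
```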

DynamoDB encrypts all customer data at rest by default. Encryption at rest enhances the security of your data by using encryption keys stored in AWS Key Management Service (AWS KMS). With the addition of AWS Database Encryption SDK, you can perform attribute-level encryption to further enforce granular access control on data within your table. DynamoDB helps you to build security-sensitive applications that meet strict encryption compliance and regulatory requirements.

Encryption keys provide an additional layer of data protection by securing your data from unauthorized access to the underlying storage. You can specify whether DynamoDB should use an AWS owned key (default encryption type), an AWS managed key, or a customer managed key to encrypt user data. The default encryption using AWS KMS keys is provided at no additional charge.
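For instance, switching a table's encryption at rest to a customer managed KMS key can be sketched as follows (the table name and key ARN are placeholders):

```python
# Sketch of update_table parameters that move a table's encryption at rest
# to a customer managed AWS KMS key. Omitting SSESpecification leaves the
# default AWS owned key in place.
update_params = {
    "TableName": "Users",
    "SSESpecification": {
        "Enabled": True,
        "SSEType": "KMS",
        "KMSMasterKeyId": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID",
    },
}

# Applied with an assumed boto3 client:
#   boto3.client("dynamodb").update_table(**update_params)
```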

Point-in-time recovery (PITR) helps protect your DynamoDB tables from accidental write or delete operations. PITR provides continuous backups of your DynamoDB table data, and you can restore that table to any point in time up to the second during the preceding 35 days.

PITR does not use provisioned capacity and has no impact on performance or availability of your applications. Enabling PITR or initiating backup and restore operations is as simple as a single step in the AWS Management Console or a single API call.
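The two PITR operations can be sketched as follows; the table names are illustrative, and the restore time can be any second within the preceding 35 days (here expressed relative to "now"):

```python
from datetime import datetime, timedelta, timezone

# Sketch of enabling point-in-time recovery on a table...
enable_pitr_params = {
    "TableName": "Users",
    "PointInTimeRecoverySpecification": {"PointInTimeRecoveryEnabled": True},
}

# ...and later restoring it into a new table as it looked one hour ago.
restore_params = {
    "SourceTableName": "Users",
    "TargetTableName": "Users-restored",
    "RestoreDateTime": datetime.now(timezone.utc) - timedelta(hours=1),
}

# With an assumed boto3 client:
#   client = boto3.client("dynamodb")
#   client.update_continuous_backups(**enable_pitr_params)
#   client.restore_table_to_point_in_time(**restore_params)
```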

On-demand backup and restore allows you to create full backups of your DynamoDB tables' data for data archiving, which can help you meet your corporate and governmental regulatory requirements. You can back up tables ranging from a few megabytes to hundreds of terabytes of data without affecting the performance or availability of your production applications. With AWS Backup integration, you can also copy on-demand backups cross-account and cross-Region, create cost allocation tagging for backups, and transition backups to cold storage.
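Creating such a backup is a single call; a sketch with illustrative names:

```python
# Sketch of an on-demand full backup of a hypothetical table. The backup
# completes without consuming table capacity or affecting availability.
backup_params = {
    "TableName": "Users",
    "BackupName": "Users-archive-2025-01",
}

# With an assumed boto3 client:
#   boto3.client("dynamodb").create_backup(**backup_params)
```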

DynamoDB supports gateway virtual private cloud (VPC) endpoints and interface VPC endpoints for connections within a VPC or from on-premises data centers. You can configure private network connectivity from your on-premises applications to DynamoDB through interface VPC endpoints enabled with AWS PrivateLink. This allows customers to simplify private connectivity to DynamoDB and maintain compliance.
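A gateway endpoint for DynamoDB can be sketched as follows, so that traffic from the VPC reaches DynamoDB without traversing the public internet (the VPC and route table IDs are placeholders):

```python
# Sketch of creating a gateway VPC endpoint for DynamoDB. Routes for the
# DynamoDB service prefix are added to the listed route tables, keeping
# traffic on the AWS network.
endpoint_params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0example",
    "ServiceName": "com.amazonaws.us-east-1.dynamodb",
    "RouteTableIds": ["rtb-0example"],
}

# With an assumed boto3 client:
#   boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
```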

Cost-effectiveness

DynamoDB provides two capacity modes for each table: on-demand and provisioned.

  • For workloads that are less predictable and for which you are unsure whether you'll have high utilization, on-demand capacity mode takes care of managing capacity for you, and you pay only for what you consume.
  • Tables using provisioned capacity mode require you to set read and write capacity. Provisioned capacity mode is more cost-effective when you're confident your workload will consistently use most of the capacity you specify.

For tables using on-demand capacity mode, DynamoDB instantly accommodates your workloads as they ramp up or down to any previously reached traffic level. If a workload’s traffic level hits a new peak, DynamoDB adapts rapidly to accommodate the workload. You also can optionally configure maximum read or write (or both) throughput for individual on-demand tables and associated secondary indexes, making it easy to balance costs and performance. You can use on-demand capacity mode for both new and existing tables, and you can continue using the existing DynamoDB APIs without changing code.
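An on-demand table with an optional throughput cap can be sketched as follows; the table name and limits are illustrative, and the `OnDemandThroughput` parameter assumes a recent SDK version:

```python
# Sketch of creating an on-demand table (PAY_PER_REQUEST billing) with an
# optional ceiling on read and write request units to bound costs.
on_demand_table = {
    "TableName": "Events",
    "AttributeDefinitions": [{"AttributeName": "EventId", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "EventId", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",
    "OnDemandThroughput": {
        "MaxReadRequestUnits": 1000,
        "MaxWriteRequestUnits": 500,
    },
}

# With an assumed boto3 client:
#   boto3.client("dynamodb").create_table(**on_demand_table)
```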

For data that is infrequently accessed, you can use the Amazon DynamoDB Standard-IA table class, which helps reduce your DynamoDB costs by up to 60%. The Standard-IA table class's lower storage cost is designed for long-term storage of data that is infrequently accessed, such as application logs, historical gaming data, and old social media posts. It has the same availability, durability, and performance as the Amazon DynamoDB Standard table class, which is the default and the most cost-effective option for most workloads.
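Moving an existing table to the Standard-IA table class is a single table update; a sketch with an illustrative table name:

```python
# Sketch of switching an existing table to the Standard-IA table class,
# trading a lower storage price for higher per-request pricing on
# infrequently accessed data.
update_class_params = {
    "TableName": "GameHistory",
    "TableClass": "STANDARD_INFREQUENT_ACCESS",
}

# With an assumed boto3 client:
#   boto3.client("dynamodb").update_table(**update_class_params)
```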

For tables using provisioned capacity, DynamoDB provides auto scaling of throughput and storage: it monitors your application's actual usage and adjusts provisioned capacity within the limits you set.

  • If your application traffic grows, DynamoDB increases throughput to accommodate the load.
  • If your application traffic shrinks, DynamoDB scales down so that you pay less for unused capacity.
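This scaling behavior is configured through Application Auto Scaling; here is a sketch for a table's read capacity, where the table name, bounds, and target utilization are illustrative:

```python
# Sketch of configuring auto scaling for a provisioned table's reads:
# register the table as a scalable target with capacity bounds, then
# attach a target-tracking policy aiming for ~70% utilization.
scalable_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/Users",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "MinCapacity": 5,
    "MaxCapacity": 500,
}
scaling_policy = {
    "PolicyName": "users-read-scaling",
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/Users",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
}

# With an assumed boto3 client:
#   client = boto3.client("application-autoscaling")
#   client.register_scalable_target(**scalable_target)
#   client.put_scaling_policy(**scaling_policy)
```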

Integrations with AWS services

Bulk import and export from Amazon Simple Storage Service (Amazon S3) helps you get more value from your data by removing the need to write code to move, transform, and copy your DynamoDB tables from one application, account, or Region to another. Bulk import and export does not use your table’s read or write capacity, so you don’t need to plan or provision additional capacity. The bulk import and export process is fully managed by DynamoDB.

Bulk imports from Amazon S3 allow you to import data at any scale, from megabytes to terabytes, using supported formats including CSV, DynamoDB JSON, and Amazon Ion. With bulk imports from Amazon S3, customers can save up to 66% compared to client-based writes using provisioned capacity.

With bulk exports to Amazon S3, you can export data from tables with PITR enabled for any point in time in the last 35 days, with a per-second granularity. Once you export data from DynamoDB to Amazon S3, you can use other AWS services such as Amazon Athena, Amazon SageMaker, and more to analyze your data and extract actionable insights.
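Both directions of the S3 integration can be sketched as follows; bucket names, table names, and the ARN are all placeholders:

```python
# Sketch of importing CSV data from S3 into a new table, and exporting a
# PITR-enabled table to S3. Neither operation consumes table capacity.
import_params = {
    "S3BucketSource": {"S3Bucket": "my-import-bucket", "S3KeyPrefix": "users/"},
    "InputFormat": "CSV",
    "TableCreationParameters": {
        "TableName": "UsersImported",
        "AttributeDefinitions": [{"AttributeName": "UserId", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "UserId", "KeyType": "HASH"}],
        "BillingMode": "PAY_PER_REQUEST",
    },
}
export_params = {
    "TableArn": "arn:aws:dynamodb:us-east-1:123456789012:table/Users",
    "S3Bucket": "my-export-bucket",
    "ExportFormat": "DYNAMODB_JSON",
}

# With an assumed boto3 client:
#   client = boto3.client("dynamodb")
#   client.import_table(**import_params)
#   client.export_table_to_point_in_time(**export_params)
```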

Amazon Kinesis Data Streams for DynamoDB captures item-level changes in your DynamoDB tables to power live dashboards, generate metrics, and deliver data into data lakes. Kinesis Data Streams enables you to build advanced streaming applications such as real-time log aggregation, real-time business analytics, and IoT data capture.

Through Kinesis Data Streams, you also can use Amazon Kinesis Data Firehose to automatically deliver DynamoDB data to other AWS services such as Amazon S3, Amazon OpenSearch Service, and Amazon Redshift.
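Turning on this integration for a table is a single call; a sketch with a placeholder table name and stream ARN:

```python
# Sketch of streaming a table's item-level changes into an existing
# Kinesis data stream, from which Kinesis Data Firehose can deliver the
# records to S3, OpenSearch Service, or Redshift.
kinesis_params = {
    "TableName": "Users",
    "StreamArn": "arn:aws:kinesis:us-east-1:123456789012:stream/users-changes",
}

# With an assumed boto3 client:
#   boto3.client("dynamodb").enable_kinesis_streaming_destination(**kinesis_params)
```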

To easily monitor your database performance, DynamoDB is integrated with Amazon CloudWatch, which collects and processes raw database performance data. You can use CloudWatch to create customized views and dashboards of metrics and alarms for your DynamoDB databases. This monitoring capability is offered by default at no additional charge. You can also create alarms that automatically notify you when a metric crosses a threshold you define.

Amazon CloudWatch Contributor Insights helps you to quickly identify who or what is impacting your databases and application performance. This capability makes it easier to quickly isolate, diagnose, and remediate issues during an operational event.
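As an example of such monitoring, here is a sketch of a CloudWatch alarm on a table's throttled requests, with an illustrative table and alarm name:

```python
# Sketch of a CloudWatch alarm that fires when any read or write request
# against the table is throttled within a one-minute period.
alarm_params = {
    "AlarmName": "users-throttled-requests",
    "Namespace": "AWS/DynamoDB",
    "MetricName": "ThrottledRequests",
    "Dimensions": [{"Name": "TableName", "Value": "Users"}],
    "Statistic": "Sum",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
}

# With an assumed boto3 client:
#   boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```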

FAQs

DynamoDB's unique advantages include being a proven, fully managed, scale-to-zero serverless database that provides single-digit millisecond performance and up to 99.999% availability. With its consistent performance at scale, DynamoDB also offers built-in security, durability, and the reliability required for global applications with the most stringent requirements.

With its ease of use, DynamoDB is often chosen for both new modern applications and established internet scale applications seeking consistent fast performance with limitless scalability.

DynamoDB is built for developers and, because it is serverless, it is straightforward to set up using our technical documentation.