Large object storage strategies for Amazon DynamoDB

{"value":"Customers across all industries use [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) as the primary database for their mission critical workloads. DynamoDB is designed to provide consistent performance at any scale. To take advantage of this performance, one of the most important tasks to complete is data modeling. Your success with DynamoDB depends on how well you [define and model your access patterns](https://docs.aws.amazon.com/prescriptive-guidance/latest/dynamodb-data-modeling/step3.html). One of the aspects of data modeling is how to treat large objects within DynamoDB. It’s important to have a defined strategy for how to handle items larger than the [maximum size of 400 KB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ServiceQuotas.html#limits-items) to prevent unexpected behavior and to ensure your solution remains performant as it scales.\n\nIn this post, I show you some different options for handling large objects within DynamoDB and the benefits and disadvantages of each approach. I provide some sample code for each option to help you get started with any of these approaches with your own workloads.\n\nIn DynamoDB, an item is a set of attributes. Each attribute has a name and a value. Both the attribute name and the value count toward the total item size. For the purposes of this post, large object refers to any item that exceeds the current maximum size for a single item, which is 400 KB. This item could contain long string attributes, a binary object, or [any other data type supported by DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.NamingRulesDataTypes.html#HowItWorks.DataTypes) that exceeds the maximum item size.\n\n\n#### **Solution overview**\n\n\nThis post covers multiple approaches that you can use to model large objects within your DynamoDB-backed application. You should have some familiarity with DynamoDB. If you’re just starting out, be sure to review [Getting started with Amazon DynamoDB](https://aws.amazon.com/blogs/database/getting-started-with-amazon-dynamodb/) first.\n\nThe solutions you’ll explore are:\n\n**Option 1:** Default behavior\n\n**Option 2:** Store large objects in [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3/) with a pointer in DynamoDB\n\n**Option 3:** Split large items into [item collections](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.Partitions.html#HowItWorks.Partitions.CompositeKey)\n\n**Option 4:** Compress large objects\n\n\n#### **Deploy the examples**\n\n\nFor examples of each solution, you can deploy the [Amazon Web Services Serverless Application Model (SAM)](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html)template that accompanies this post. You can use these examples as a reference for your own implementations. These examples are all written in Node.JS, but bindings exist for the techniques used in this post in most popular programming languages.\n\nTo deploy the SAM template, clone this [GitHub repository](https://github.com/aws-samples/aws-dynamodb-large-object-patterns) and follow the instructions in the repository.\n\n**Option 1: Default behavior**\n\nThe default behavior is to reject items that are over the maximum item size, which can be a perfectly valid design choice. 
**Option 1: Default behavior**

The default behavior is to reject items that are over the maximum item size, which can be a perfectly valid design choice. In that situation, you return an error message to the caller indicating that the item is too large, and it's up to the caller to implement the correct behavior, which could be to split the item into multiple parts, send it to a dead-letter queue, or surface an exception indicating that the item is too large.

In the example repository, you can observe this behavior by running the unencoded-write [AWS Lambda](https://aws.amazon.com/lambda/) function that was deployed by the template. The repository includes a large sample event that exceeds the maximum item size (420 KB, which is 20 KB over the limit). If you invoke the function with this payload, you get a ValidationException error from DynamoDB with a message that the item has exceeded the maximum allowed size, as shown in Figure 1 that follows.

![image.png](https://dev-media.amazoncloud.cn/44daec4d0cbd4fdf9a45bb257da310c7_image.png)

Figure 1: DynamoDB validation exception

This approach has the benefit that no custom server-side logic needs to be implemented, so costs and complexity remain low. In addition, requests that fail with this validation error don't consume any write capacity units (WCUs) from the table. The downside is that this approach might not satisfy your users' requirements, in which case implementing one of the other strategies in this post is recommended.
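The following is a minimal sketch of how a caller-facing function might surface this rejection, assuming the AWS SDK for JavaScript v3. The table name `LargeObjectTable` and the item shape are illustrative assumptions, not names from the sample repository.

```javascript
// Sketch: attempt the write and surface DynamoDB's size rejection to the caller.
// Assumes AWS SDK for JavaScript v3; "LargeObjectTable" is a placeholder name.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export async function writeItem(id, payload) {
  try {
    await ddb.send(new PutCommand({
      TableName: "LargeObjectTable",
      Item: { pk: id, data: payload },
    }));
    return { status: "OK" };
  } catch (err) {
    // DynamoDB rejects items over 400 KB with a ValidationException.
    if (err.name === "ValidationException") {
      return { status: "ITEM_TOO_LARGE", message: err.message };
    }
    throw err; // propagate any other error unchanged
  }
}
```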
**Option 2: Store large objects in Amazon S3 with a pointer in DynamoDB**

One strategy for storing a large object is to use an alternative storage platform and keep a pointer to the object in DynamoDB. [Amazon S3](https://aws.amazon.com/s3/) is well suited to storing this data because of its high durability and low cost. The implementation writes the large object to an S3 bucket, then creates a DynamoDB item with an attribute that points to the Amazon S3 URL of the object. You can then use this URL to [generate a pre-signed URL](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html) to return to the caller, which has the added benefit of saving compute resources by avoiding downloading the object server-side. This architecture is illustrated in Figure 2 that follows.

![image.png](https://dev-media.amazoncloud.cn/3a67f048d357419587cc71d7e8990fef_image.png)

Figure 2: Amazon S3 large object storage architecture

This pattern is common in content-indexing solutions where the content itself could be any size, is semi-structured, and is stored in an S3 bucket. DynamoDB then forms an index that provides a fast lookup of where the data is stored within the S3 bucket, as explored in [Building and maintaining an Amazon S3 metadata index](https://aws.amazon.com/blogs/big-data/building-and-maintaining-an-amazon-s3-metadata-index-without-servers/).

In the example repository, there are two functions that demonstrate this pattern: one that writes the object to the S3 bucket and the pointer item to DynamoDB, and one that reads the item and generates a pre-signed URL to return to the caller. This lets the client download the object securely without adding load to the server.
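Here is a minimal sketch of both sides of this pattern, assuming the AWS SDK for JavaScript v3. The bucket name `my-large-object-bucket`, the table name, and the attribute names are placeholders rather than the names used in the sample repository.

```javascript
// Sketch: store the large object in S3, keep a small pointer item in DynamoDB,
// and return a pre-signed URL on read. Resource names are placeholders.
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand, GetCommand } from "@aws-sdk/lib-dynamodb";

const s3 = new S3Client({});
const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const BUCKET = "my-large-object-bucket";
const TABLE = "LargeObjectTable";

export async function writeLargeObject(id, payload) {
  const key = `objects/${id}`;
  // 1. Write the large object to S3.
  await s3.send(new PutObjectCommand({ Bucket: BUCKET, Key: key, Body: payload }));
  // 2. Write a small pointer item to DynamoDB.
  await ddb.send(new PutCommand({
    TableName: TABLE,
    Item: { pk: id, s3Bucket: BUCKET, s3Key: key },
  }));
}

export async function readLargeObject(id) {
  // 1. Look up the pointer item.
  const { Item } = await ddb.send(new GetCommand({ TableName: TABLE, Key: { pk: id } }));
  if (!Item) return null;
  // 2. Generate a short-lived pre-signed URL that the caller downloads directly.
  return getSignedUrl(
    s3,
    new GetObjectCommand({ Bucket: Item.s3Bucket, Key: Item.s3Key }),
    { expiresIn: 300 } // URL valid for 5 minutes
  );
}
```

Returning the pre-signed URL, rather than the object body, keeps the server-side response small and pushes the actual download to the client.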
If you invoke the write function with the same payload as before, you'll see that this time the write succeeds, as shown in Figure 3 that follows:

![image.png](https://dev-media.amazoncloud.cn/18b6854e60b54e01b1654d27a2abe420_image.png)

Figure 3: Item written successfully

Then, if you run the corresponding read function, you get a pre-signed URL, as shown in Figure 4 that follows.

![image.png](https://dev-media.amazoncloud.cn/7bfa5cca386542089801c39cfca64e44_image.png)

Figure 4: Generation of pre-signed URL

This temporary URL provides access to a private resource stored in an S3 bucket and is a secure way to grant access to that resource. If you paste the URL into a browser, you retrieve the object that you stored in the S3 bucket, as shown in Figure 5 that follows:

![image.png](https://dev-media.amazoncloud.cn/bbaf80f51bdf4d0bb60c66ce8b1212bf_image.png)

Figure 5: Retrieval of object using pre-signed URL

One advantage of this approach is that you can store data of virtually any size in Amazon S3 (up to 5 TB per object). It also keeps your DynamoDB costs lower: storing just the pointer consumes less storage and fewer read capacity units (RCUs) and WCUs when reading and writing the item. The disadvantage is that a second call is required to retrieve the large object, which adds latency and client complexity. There are also costs associated with storage and retrieval from S3 (see the [S3 pricing page](https://aws.amazon.com/s3/pricing/) for more information).

**Option 3: Split large items into item collections**

Another strategy is to split a large item into a collection of items that share the same partition key. In this pattern, the partition key acts as a bucket that contains all the individual parts of the original large object as separate items.

There are multiple ways to split large items into item collections. The preferred method, if there are natural delineations within the object (for example, a large JSON object with many attributes), is to split the object so that different sets of attributes are stored as different items. This means you can selectively retrieve only the attributes you need, reducing I/O costs. For example, suppose the item is currently structured as shown in the following table:

![image.png](https://dev-media.amazoncloud.cn/ceb0ec640fa54f07ad7c5346377eeebd_image.png)

If this item is 10 KB, it takes 1.5 RCUs to make a single eventually consistent read of the item. The following table shows how this item could be split into an item collection:

![image.png](https://dev-media.amazoncloud.cn/d0a14afa246046a4a44e8427c1febbf0_image.png)

Using this structure, any single attribute can be retrieved while consuming only 0.5 RCU per eventually consistent read, potentially reducing the required capacity of the table and therefore the cost. The entire item collection can also still be retrieved via the partition key.

However, this structure could be costly if multiple items need to be frequently retrieved or updated, and the initial write cost could be high if there are many individual attributes. An optimization is to group attributes together based on how frequently they are updated. For example, if all the attributes describing the person are fairly static but the notes attribute changes frequently, you could structure the item collection as in the following table:

![image.png](https://dev-media.amazoncloud.cn/eb35bdf9d1a944588e5bd79071e7cf37_image.png)

Using this approach, all the less frequently changed attributes can be retrieved with a single request and 0.5 RCU, but can also be updated with a single WCU because they are cumulatively below 1 KB. If the original item had many hundreds or even thousands of attributes, you could create several of these attribute groups to help minimize the WCUs required.

If the notes field could still exceed the maximum item size, or there is no clear way to split the object, such as when it is a binary object, the other option is to split the object into as many parts as necessary so that each part is small enough to fit into a DynamoDB item. You then concatenate the parts back together at retrieval time to transparently return them to the caller as one object.

Referring back to the repository samples, the write function for splitting items takes the input string and splits it into as many parts as needed to fit each part within the maximum item size in DynamoDB. The function then creates a separate item for each part with the same partition key, uniquely identified by an incrementing part number as the sort key. The sort key is required to provide uniqueness; see [Using Sort Keys to Organize Data](https://aws.amazon.com/blogs/database/using-sort-keys-to-organize-data-in-amazon-dynamodb/) for more details.

In this example, each item is sent individually in a loop for simplicity. You could instead construct the item collection separately and write it to DynamoDB in a single operation using the [BatchWriteItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchWriteItem.html) API action, or, if you want all items to either succeed or fail atomically, the [TransactWriteItems](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_TransactWriteItems.html) API action. The caveat is that both API actions are limited to 25 items per request (and 16 MB aggregate size for BatchWriteItem or 4 MB aggregate size for TransactWriteItems), so you still need to incorporate iterator logic into your code if the item collection might exceed those limits.
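The following is a minimal sketch of the split-and-write step, assuming the AWS SDK for JavaScript v3. Unlike the repository sample, which splits the string itself, this sketch splits the UTF-8 bytes and stores each chunk as a binary attribute so the part sizes stay predictable; the 350 KB chunk size, the table name, and the key names are assumptions.

```javascript
// Sketch: split a large string into byte chunks and write each chunk as its own
// item under the same partition key, with an incrementing part number as the
// sort key. "LargeObjectTable", pk/part/data, and the 350 KB chunk size are
// placeholders chosen to stay safely under the 400 KB item limit.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = "LargeObjectTable";
const CHUNK_SIZE = 350 * 1024; // bytes, leaving headroom for keys and attribute names

export async function writeSplitObject(id, largeString) {
  const bytes = Buffer.from(largeString, "utf8");
  let part = 0;
  for (let offset = 0; offset < bytes.length; offset += CHUNK_SIZE) {
    await ddb.send(new PutCommand({
      TableName: TABLE,
      Item: {
        pk: id,                                            // partition key: object ID
        part,                                              // sort key: part number
        data: bytes.subarray(offset, offset + CHUNK_SIZE), // stored as a binary attribute
      },
    }));
    part += 1;
  }
  return part; // number of parts written
}
```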
As shown in Figure 6 that follows, the result of running this function is two items in the collection with the same partition key in DynamoDB:

![image.png](https://dev-media.amazoncloud.cn/dbb66e977af145aa96f84e39c6102a15_image.png)

Figure 6: DynamoDB items

To read the object, you have to recombine the parts. Retrieval is straightforward because DynamoDB lets you use the partition part of the composite key to retrieve all items with that partition key. By using the [Query API action](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html) to retrieve the results, you can guarantee that the parts are processed in order based on the numeric sort key of the table. You then join the parts back together server-side to abstract this implementation detail away from the caller.
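A matching read-side sketch, under the same placeholder assumptions as the write sketch above, queries the item collection and reassembles the parts in sort-key order before decoding:

```javascript
// Sketch: query all parts for a partition key and reassemble the original string.
// Assumes the items were written by the split sketch above; names are placeholders.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, QueryCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = "LargeObjectTable";

export async function readSplitObject(id) {
  const chunks = [];
  let lastKey;
  do {
    // Query returns items ordered by the numeric part sort key (ascending by
    // default) and paginates at 1 MB, so follow LastEvaluatedKey until done.
    const page = await ddb.send(new QueryCommand({
      TableName: TABLE,
      KeyConditionExpression: "pk = :pk",
      ExpressionAttributeValues: { ":pk": id },
      ExclusiveStartKey: lastKey,
    }));
    chunks.push(...page.Items.map((item) => item.data)); // binary chunks as Uint8Array
    lastKey = page.LastEvaluatedKey;
  } while (lastKey);
  // Concatenate the byte chunks, then decode once so multi-byte characters survive.
  return Buffer.concat(chunks).toString("utf8");
}
```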
This pattern also allows for previewing items: return the first element in the item collection (part 0 in this case) as a preview, with the option of choosing a button in the UI to view more, which then returns the whole string. This reduces the I/O, and therefore the cost, because only a preview of the data is loaded by default and the rest is loaded only if requested.

The disadvantages of this approach are that server-side complexity is high, because the parts must be split and recombined, so these functions must be well tested to avoid data loss. Another consideration is the extra RCUs needed to return all of the items that make up the original object. For larger objects, consider using an alternative data store, as discussed in Option 2, or combining this option with Option 4 to reduce the capacity required per item.

**Option 4: Compress large objects**

In this final section, you explore compressing large objects. You'll look at two compression algorithms that can be used to perform the compression and the benefits and drawbacks of each.

First, let's look at the native zlib compression built into the Node.js standard library. Within the samples repository there is a zlib folder containing two Lambda functions, one for writing the data and one for reading the data.

Using the same payload as earlier, you first invoke the write function, which uses the gzip function to compress the item. When the function completes, you get a success message as shown in Figure 7 that follows.

![image.png](https://dev-media.amazoncloud.cn/6929409f206e434a89bc706f51e42321_image.png)

Figure 7: Gzip write success

Then the read function decompresses this string back to the original verbose payload, which is returned when the read function completes, as shown in Figure 8 that follows.

![image.png](https://dev-media.amazoncloud.cn/3f53ff1d24ba4954a63d85130588f388_image.png)

Figure 8: Gzip read success
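A minimal sketch of this zlib approach, using Node.js's built-in zlib module and the AWS SDK for JavaScript v3, might look like the following; the table and attribute names are placeholders rather than the repository's.

```javascript
// Sketch: gzip a large string before writing it to DynamoDB as a binary
// attribute, and gunzip it on read. Names are placeholders.
import { gzipSync, gunzipSync } from "node:zlib";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand, GetCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = "LargeObjectTable";

export async function writeCompressed(id, largeString) {
  // gzip typically shrinks verbose JSON or text well; the result is stored as Binary.
  const compressed = gzipSync(Buffer.from(largeString, "utf8"));
  await ddb.send(new PutCommand({
    TableName: TABLE,
    Item: { pk: id, data: compressed },
  }));
}

export async function readCompressed(id) {
  const { Item } = await ddb.send(new GetCommand({ TableName: TABLE, Key: { pk: id } }));
  if (!Item) return null;
  // The binary attribute is returned as a Uint8Array, which gunzipSync accepts.
  return gunzipSync(Item.data).toString("utf8");
}
```

If the compressed payload could still exceed 400 KB, the put fails with the same ValidationException shown in Option 1, which is one reason to combine compression with the splitting approach from Option 3 for very large payloads.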
Next, try snappy compression. Again, under the snappy folder there are two functions: one to write, one to read.

Invoking the write function uses the snappy Node.js bindings to compress the large string and write the resulting smaller string to DynamoDB:

![image.png](https://dev-media.amazoncloud.cn/8a06f2744e0a4c8a8e32f93c2ed00d5d_image.png)

Figure 9: Snappy write success

Then the read function decompresses the string back to the original:

![image.png](https://dev-media.amazoncloud.cn/99dabd34d3f14abe9fb962e373bb2e4a_image.png)

Figure 10: Snappy read success

As you can see in the table that follows, snappy is faster, as advertised, averaging 180 ms per write invocation compared to 440 ms for zlib over the 100 executions I tried, but with a comparatively weak compression ratio of 50 percent (208 KB) compared to zlib's 66 percent (139 KB). This is fine for our use case, and might be for yours too. Remember that in a serverless environment such as this, you're billed by the millisecond, so if speed of execution is the primary concern, snappy might be a good fit. zlib, while providing slower compression, is a native implementation: it doesn't require any additional packages to be installed, so if you're working in an environment that prohibits third-party software, it provides a quick and simple solution that might be good enough for your use case. A comparison of how these differences affect the cost of your application is shown in the table that follows:

![image.png](https://dev-media.amazoncloud.cn/648b07bdb6144144b927b79fdf8f834b_image.png)

Based on 100 executions per second of a 512 MB Lambda function for one month in the eu-west-1 Region

There are other compression libraries you can use, such as LZO and Zstandard, that [offer different performance characteristics](https://www.percona.com/blog/2016/04/13/evaluating-database-compression-methods-update/). Keep in mind that licenses might be required for the algorithm libraries you adopt within your own projects.

One disadvantage of the compression strategy is that it adds some overhead to the compute time, regardless of the algorithm you use. This approach is also only suitable if the data can be compressed down to fit within the item size limit, which you might not know ahead of time. If the size of your data is predictable, or there is a defined upper limit that happens to fall within the compression ratio, then compression might be suitable as a standalone solution. For data that is orders of magnitude above the item limit, it's unlikely to compress down enough to fit; in that case, using compression in conjunction with the item-splitting solution discussed earlier is a viable option. The advantage of compression is simplicity: the implementation is transparent to the caller and reduces the need to restructure the item or use additional AWS services to supplement the storage.


#### **Clean-up**


The Lambda function invocations and DynamoDB usage described in this post should fall within the [AWS](https://aws.amazon.com/) free tier for those services, so you shouldn't incur any charges until you exceed the free usage tier. After that, pricing details for both services are found on the [DynamoDB pricing page](https://aws.amazon.com/dynamodb/pricing/) and the [Lambda pricing page](https://aws.amazon.com/lambda/pricing/).

If you deployed the example SAM template, then [delete the AWS CloudFormation stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html) from the CloudFormation console.


#### **Final considerations**

Depending on the access patterns and characteristics of your workload, any of these approaches could be suitable. Below are some final considerations to help you pick the right strategy for your workload.

**Storing large objects in Amazon S3**

- By distributing your workload across multiple data stores, you lose transactional consistency. [AWS Step Functions](https://aws.amazon.com/step-functions/) can be used to establish consistency in distributed systems, for example by [using the saga pattern](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-the-serverless-saga-pattern-by-using-aws-step-functions.html).
- If you're using DynamoDB global tables for multi-Region replication, consider replicating the S3 bucket to the other Regions, particularly for disaster recovery scenarios. This can be achieved using the [S3 Cross-Region Replication](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html) feature.

**Splitting large items into item collections**

- If you're using [global or local secondary indexes](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html) on your DynamoDB table, consider whether the large attributes need to be projected to the secondary indexes. Using the base table for lookups of the large object parts, and secondary indexes without these attributes projected for other access patterns, can save on storage and on the write capacity required to propagate changes to the index.
- [Queries](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html) and [scans](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Scan.html) in DynamoDB retrieve a maximum of 1 MB of data per call. If the total result size is greater than this, you have to make multiple paginated calls or use the [BatchGetItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html) API.

**Compression**

If you need to use [filter expressions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html#Query.FilterExpression) in your queries, this won't be possible against a compressed attribute, because DynamoDB cannot natively decompress the object.


#### **Conclusion**


For many DynamoDB use cases, you have to consider how you treat large objects. Making a conscious design decision up front allows the limits and access patterns of the system to be understood and provides a scalable and performant solution as the data size grows. To understand more design considerations when adopting DynamoDB, see [How to determine if Amazon DynamoDB is appropriate for your needs](https://aws.amazon.com/blogs/database/how-to-determine-if-amazon-dynamodb-is-appropriate-for-your-needs-and-then-plan-your-migration/). For more data modeling tips, see [Data modeling with NoSQL Workbench for Amazon DynamoDB](https://aws.amazon.com/blogs/database/data-modeling-with-nosql-workbench-for-amazon-dynamodb/).

Test the options discussed in this post by deploying the code samples yourself, and let me know your thoughts in the comments section below.


#### **About the Author**


![image.png](https://dev-media.amazoncloud.cn/ca67001b1f0b4654a25ff575d6794897_image.png)

**Josh Hart** is a Senior Solutions Architect at Amazon Web Services. He works with ISV customers in the UK to help them build and modernize their SaaS applications on AWS.