Amazon AWS Certified Database Specialty – Amazon DynamoDB and DAX Part 2
So now, DynamoDB throughput. Let’s look at how the throughput is calculated in both of these modes. The first one is the provisioned capacity mode. We know that this uses capacity units. One capacity unit is equivalent to one request per second, so one read request or one write request per second is one capacity unit. RCUs, or read capacity units, are calculated in blocks of 4 KB, and the last block is always rounded up. One strongly consistent table read per second is one RCU. If you use eventual consistency, then two eventually consistent table reads are equal to one RCU. And if you use transactional consistency, then one transactional read requires two RCUs. We’re going to calculate this, and we’re going to see some examples.
Don’t worry if this goes over your head right now; we’re going to see some examples, and those will make it super clear. WCUs are calculated in blocks of 1 KB, and the last block is always rounded up. One table write is one WCU, and one transactional write uses two WCUs. Then, in the on-demand mode, we use request units instead of capacity units, but they work the same way for calculation purposes. So capacity units and request units behave identically when you do the math. Read request units, or RRUs, are also calculated in blocks of 4 KB with the last block rounded up. One strongly consistent table read is one RRU, two eventually consistent reads are one RRU, and one transactional read is two RRUs. Similarly, write request units are calculated in blocks of 1 KB, just like WCUs, with the last block always rounded up. One table write consumes one WRU, while one transactional write consumes two WRUs.
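To make these rounding rules concrete, here’s a minimal Python sketch. The function names are my own, not an AWS API; they simply encode the block sizes and multipliers we just discussed.

```python
import math

def read_capacity_units(item_size_kb, consistency="strong"):
    """Capacity units to read one item per second.

    Reads are metered in 4 KB blocks, with the last block rounded up.
    """
    blocks = math.ceil(item_size_kb / 4)   # 4 KB read blocks, rounded up
    if consistency == "strong":
        return blocks                      # 1 RCU per 4 KB block
    if consistency == "eventual":
        return math.ceil(blocks / 2)       # half of strong consistency, rounded up
    if consistency == "transactional":
        return blocks * 2                  # twice the strong-consistency cost
    raise ValueError(consistency)

def write_capacity_units(item_size_kb, transactional=False):
    """Capacity units to write one item per second (1 KB blocks, rounded up)."""
    blocks = math.ceil(item_size_kb / 1)   # 1 KB write blocks, rounded up
    return blocks * 2 if transactional else blocks
```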
Now, the provisioned capacity mode is typically used in production environments, and you should use it when you have predictable traffic to your DynamoDB table. You can also use reserved capacity along with it if you have steady and predictable traffic, and this is going to give you a lot of cost savings. But remember that this can result in throttling if your consumption shoots up. There is a remedy for that as well: you can use auto scaling to reduce throttling. Provisioned capacity mode often tends to be cost-effective as compared to on-demand capacity mode. The on-demand capacity mode is typically used in dev/test environments or in small applications. Having said that, this doesn’t mean that you cannot use on-demand mode for larger applications.
You can definitely do that. And you should use on-demand mode when you have variable or unpredictable traffic; it really works well in that case. On-demand mode can instantly accommodate up to 2x the previous peak on your table. For example, if the maximum capacity that your table has consumed is about 100 capacity units, then on-demand capacity mode can instantly provide you with up to 200 capacity units, twice the previous maximum. But if you exceed twice the previous peak within 30 minutes, then there could be throttling. It’s always recommended to space your traffic growth over a 30-minute period before driving more than 2x the previous peak to your on-demand table. All right.
Now let’s see some examples of calculating capacity units. And you should really pay attention here because this is very easy. And if you do get a question on the exam, then you have easy points here. So, calculating capacity units, let’s see our first example.
We have to calculate the capacity units to read and write a 15 KB item. For RCUs with strong consistency, the 15 KB item will be divided into 4 KB chunks. So you get 3.75, which rounds up to four RCUs. So with strong consistency, reading a 15 KB item will consume four RCUs. With eventual consistency, however, you’re going to consume half of this, that is, two RCUs. And if you use transactional consistency, then you’re going to consume twice as much as strong consistency: two times four RCUs, so eight RCUs to read a 15 KB item with transactional consistency. And for WCUs, we have blocks of 1 KB, so a 15 KB item will require 15 WCUs to write. And if you use a transactional write, then you’re going to require twice as much, that is, 30 WCUs. So it’s very easy.
Just keep the calculations in mind; if you do get a question, those are easy points for you. Let’s take another example: calculate the capacity units to read and write a 1.5 KB item. With strong consistency, you will require 1.5 divided by 4, that’s 0.375 units, so rounded up you will require one RCU to read a 1.5 KB item. Similarly, if you use eventual consistency, then half of one RCU is 0.5, which rounded up also comes to one RCU. For smaller items, strong consistency and eventual consistency cost more or less the same. And if you use a transactional read, then you’re going to use twice as much as strong consistency: two times one RCU, so you’re going to need two RCUs. For WCUs, you have 1 KB chunks, so 1.5 KB rounded up comes to two WCUs. And with a transactional write you’re going to need twice as much, so two times two, that is, four WCUs.
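If you run the helper functions from the earlier sketch against these two items, they should reproduce the same numbers:

```python
# Reproducing the worked examples above:
print(read_capacity_units(15, "strong"))              # 4 RCUs
print(read_capacity_units(15, "eventual"))            # 2 RCUs
print(read_capacity_units(15, "transactional"))       # 8 RCUs
print(write_capacity_units(15))                       # 15 WCUs
print(write_capacity_units(15, transactional=True))   # 30 WCUs

print(read_capacity_units(1.5, "strong"))             # 1 RCU
print(read_capacity_units(1.5, "eventual"))           # 1 RCU
print(read_capacity_units(1.5, "transactional"))      # 2 RCUs
print(write_capacity_units(1.5))                      # 2 WCUs
print(write_capacity_units(1.5, transactional=True))  # 4 WCUs
```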
Now, let’s take one more example, the last one. Here we’re going to calculate throughput instead of capacity units. So you’ll be given the capacity units, and you have to calculate the amount of throughput your application can deliver. A DynamoDB table has a provisioned capacity of ten RCUs and ten WCUs; calculate the throughput that your application can support. The read throughput with strong consistency will be 4 KB times ten RCUs, that is, 40 KB per second, because we use 4 KB chunks with read capacity units. If you use eventual consistency, that’s going to double your throughput, so it will be two times 40 KB, that is, 80 KB per second. If you use transactional reads, that will be half of strong consistency, so the throughput reduces by half and your application can deliver 20 KB per second. And the write throughput again will be in chunks of 1 KB, so it will be 1 KB times ten WCUs, that is, 10 KB per second. If you use transactional writes, that will be half of this, so 5 KB per second. So the throughput calculation works in the inverse way to the calculation of capacity units.
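And the inverse calculation, going from capacity units to throughput, could be sketched like this (again, these helpers are my own, not an AWS API):

```python
def read_throughput_kb_per_sec(rcus, consistency="strong"):
    """Max read throughput (KB/s) that `rcus` provisioned RCUs can support."""
    base = rcus * 4          # each RCU covers one 4 KB block per second
    if consistency == "strong":
        return base          # 10 RCUs -> 40 KB/s
    if consistency == "eventual":
        return base * 2      # doubles the throughput -> 80 KB/s
    if consistency == "transactional":
        return base / 2      # halves the throughput -> 20 KB/s
    raise ValueError(consistency)

def write_throughput_kb_per_sec(wcus, transactional=False):
    """Max write throughput (KB/s): 1 KB per WCU, halved for transactions."""
    return wcus / 2 if transactional else wcus * 1   # 10 WCUs -> 5 or 10 KB/s
```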
Now, when you use provisioned capacity, you might run the risk of throttling errors if you consume more capacity than what you have provisioned. But I say you might run into such errors; you don’t often run into them, because DynamoDB comes to your rescue. There are two features in DynamoDB that help you accommodate occasional bursts in capacity: the first one is DynamoDB burst capacity and the second one is adaptive capacity. So let’s look at what DynamoDB burst capacity is. Here we have a graph that shows what you have provisioned: the horizontal red line shows what you have provisioned, and the curve shows your consumption. There are times when you are underutilizing your provisioned capacity, and there is one instance where you are exceeding it. DynamoDB burst capacity comes to your rescue here, and it helps you accommodate occasional bursts in your consumption. The way it works is that DynamoDB preserves about 5 minutes, or 300 seconds, worth of your unused read and write capacity. You can see the shaded portion here; that is your unused capacity.
So DynamoDB uses this as the potential for your burst capacity: 300 seconds of unused provisioned capacity is stored for future use, and it gets utilized whenever you exceed your provisioned capacity. The spike you see here, that’s where the burst capacity can get used, so you don’t see throttling errors. But remember that this burst capacity can get consumed very quickly, and your application must not rely on DynamoDB burst capacity.
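As a rough back-of-the-envelope sketch (the numbers here are purely illustrative, not from AWS):

```python
# Burst pool sketch: DynamoDB retains up to 300 seconds' worth of
# unused provisioned capacity. All numbers below are made up.
provisioned_rcu = 100
avg_consumed_rcu = 60                        # suppose you average 60 RCUs
unused_per_second = provisioned_rcu - avg_consumed_rcu
burst_pool = 300 * unused_per_second         # up to 12,000 read units banked
print(burst_pool)                            # 12000
```

The second feature that helps you accommodate occasional bursts in traffic is DynamoDB adaptive capacity.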
Here you can see that we have three partitions. DynamoDB data is always stored in different partitions; one partition is about ten gigs in size, so 10 GB SSDs are used to store DynamoDB data. We’re going to talk about that later, but for now, remember that whatever capacity you provision on your table is divided equally between all the partitions used by your table. In this example, we have three partitions: P1, P2, and P3. Let’s say we have provisioned about 600 WCUs per second on this table, so each of the three partitions is going to receive a third, that is, about 200 WCUs each. So the provisioned capacity line is at 200 WCUs, and the blue portion you can see is the consumed capacity for each partition.
We can see here that the application is sending more write requests to partition P2, while partitions P1 and P3 are not being utilized as much. So what happens is DynamoDB looks at the unused capacity in the different partitions. For example, we see that about 100 WCUs of capacity is unused in partition P1 and in partition P3. This unused capacity can then be utilized by other partitions that are consuming more than what has been provisioned for them. In this case, partition P2 is a hot partition, because it’s using more than what is provisioned for it: it’s using about 400 WCUs when it only has about 200 WCUs provisioned. That difference of 200 WCUs can be accommodated from the unused capacity of partitions P1 and P3. So the hot partitions can consume the unused portions.
And this is what is called adaptive capacity. DynamoDB comes to your rescue here: it gathers the unused capacity from other partitions and helps the hot partitions use more than their provisioned capacity. But if the unused capacity runs out, then your application is going to see throttling; any consumption beyond this unused capacity is going to result in throttling. What you see in dark red here, that’s where your application is going to receive throttling errors, the provisioned throughput exceeded exception.
This is very useful for non-uniform workloads, like the hot partitions here, where one partition is used more than the others. That’s where adaptive capacity comes into play, and it’s really useful. It works automatically, but there are no guarantees; your application, again, must not rely on adaptive capacity or on burst capacity. Just know that these features are available and they come into play automatically. Your application should still make the right choices, like using auto scaling or on-demand mode, to get the best performance from your DynamoDB table. All right.
Now let’s look at secondary indexes. In DynamoDB, we have two types: the local secondary index and the global secondary index. Let’s first look at local secondary indexes, that is, LSIs. You can define up to five local secondary indexes, and local secondary indexes are indexes where the partition key attribute is the same as that of the table’s primary index.
So for example, in our gameplay table, we had user ID as the primary partition key. If you create a local secondary index, then that index will also have user ID as its partition key. The local secondary index always has the same partition key as that of the table; the range key, of course, is going to be different from the primary sort key. So here we have our gameplay table with the partition key on the user ID attribute, and if we create an LSI, the LSI will also have the same partition key. And you cannot have a simple key on a local secondary index; you must use a range key, so you have to use a combination of a partition key and a sort key.
So for example, in this gameplay table, user ID and game ID form the primary key, user ID being the partition key and game ID being the sort key. Examples of LSIs could be keyed on user ID and game timestamp, or on user ID and result: if you create an index with user ID as the partition key and game timestamp as the sort key, or user ID as the partition key and result as the sort key, then these will be local secondary indexes. Now, there are some limits on local secondary indexes. We already know that we can create up to five LSIs, and the indexed items must be under ten gigs. The reason for this is that the local secondary index sits on the same physical partition as the original table, and we have seen that a partition is at maximum about ten gigs.
You have to keep your local secondary index data within the size of the partition; it cannot go beyond that. And local secondary indexes can only be created at the time of creating the table, and you cannot delete them later on. This is an important thing to remember. With local secondary indexes, you can only query a single partition; you cannot query across partitions. LSIs support all the different consistency models, that is, eventual, strong, or transactional consistency. And another good thing is that LSIs consume the provisioned capacity that’s allocated to the underlying table, so you don’t have to pay for additional capacity consumption.
It uses the same capacity that’s allocated to the table. You can query any table attributes, even if those attributes are not projected onto the index. What I mean by this is that when you create an index, you have the option of projecting non-key attributes onto the index. So apart from the partition key and sort key, you can ask DynamoDB to add additional attributes. For example, if I create an LSI on user ID and game timestamp, I can ask DynamoDB to also store the result and duration attributes along with the index.
This is going to increase the size of the index, but it’s going to speed up your queries. If your queries require additional attributes, you can project them onto the index, and you can project up to 20 such attributes. If your table has more than 20 attributes and your application requires more than 20 of them, then you can project all attributes onto the index. If you want to project fewer than 20, then you can name the attributes that you want to project. So that’s about the local secondary indexes; a quick sketch of creating one follows.
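Here’s a hedged boto3 sketch of creating the gameplay table with such an LSI. The attribute names follow our example; the index name and the projected attributes are my own illustration.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="gameplay",
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "game_id", "AttributeType": "S"},
        {"AttributeName": "game_timestamp", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},    # partition key
        {"AttributeName": "game_id", "KeyType": "RANGE"},   # sort key
    ],
    # LSIs must be declared here; they cannot be added after table creation.
    LocalSecondaryIndexes=[
        {
            "IndexName": "user_id-game_timestamp-index",
            "KeySchema": [
                # Same partition key as the table, different sort key.
                {"AttributeName": "user_id", "KeyType": "HASH"},
                {"AttributeName": "game_timestamp", "KeyType": "RANGE"},
            ],
            # Project a couple of non-key attributes onto the index.
            "Projection": {
                "ProjectionType": "INCLUDE",
                "NonKeyAttributes": ["result", "duration"],
            },
        }
    ],
    # The LSI shares this capacity with the table.
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
)
```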
Now let’s look at the global secondary indexes. You can define up to 20 global secondary indexes, or GSIs; again, this is a soft limit. What’s different with GSIs is that they are not restricted to using the partition key of the table. You’re free to choose your partition key and sort key as you please, so depending on your application’s access patterns, you can choose how to create a global secondary index. This is our gameplay table, and we have a primary key on user ID and game ID.
And these are some examples of GSIs. You can have a GSI on just user ID, so that’s only going to index the user ID data, or the game ID data can be indexed as well. So you can have simple as well as composite keys, and you can use the same partition key as that of the table, or you can choose a different one.
For example, here we can have a GSI on user ID and result, or a GSI on game timestamp and game ID, or on game timestamp and duration, where game timestamp is the partition key and game ID is the sort key. We can even have the same set of keys as the table. So again, there is no restriction on your choice of the partition and sort key. You can also omit the sort key, or range key: in a local secondary index you cannot omit the sort key, but in a GSI there is no such restriction.
And there is no restriction on the size, either. Local secondary indexes are limited to ten gigs, but there is no such size limit on global secondary indexes. Also worth remembering is that a GSI can be created or deleted at any time. You can delete only one global secondary index at a time, but you can create and delete them anytime, unlike local secondary indexes.
And when you query GSIs, you can query across partitions; in other words, you get to query over the entire table. The only thing you must remember is that GSIs only support eventual consistency. You cannot have strongly consistent operations with global secondary indexes. Another super important thing to note is that GSIs have their own provisioned throughput, so you have to provision the throughput for global secondary indexes separately; this applies if you’re using the provisioned capacity mode. And another important thing is that you can only query the projected attributes, that is, the attributes that you chose to include in the index.
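As a hedged sketch, here’s how you might add such a GSI, with its own throughput, to an existing table using boto3 (the index name and the capacity numbers are my own illustration):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# GSIs can be added or deleted at any time, one at a time, and each one
# carries its own provisioned throughput, separate from the table's.
dynamodb.update_table(
    TableName="gameplay",
    AttributeDefinitions=[
        {"AttributeName": "game_id", "AttributeType": "S"},
        {"AttributeName": "score", "AttributeType": "N"},
    ],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "game_id-score-index",
                "KeySchema": [
                    # A different partition key than the table's.
                    {"AttributeName": "game_id", "KeyType": "HASH"},
                    {"AttributeName": "score", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
                # Separate throughput: if writes to this index get throttled,
                # writes to the base table get throttled too.
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 5,
                    "WriteCapacityUnits": 5,
                },
            }
        }
    ],
)
```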
Now, let’s quickly look at when to choose which index. Here we have our gameplay table and some examples of LSIs and GSIs. First, the local secondary indexes. You use local secondary indexes when your application requires the same partition key as the table. Since local secondary indexes use the same provisioned capacity as the table, you choose LSIs when you need to avoid additional costs. And when your application needs strongly consistent reads, then again you will use the local secondary indexes.
Then what’s the use case for global secondary indexes? When your application requires a different partition key than the table, you use the global secondary indexes; you can also use them when your partition key is the same as that of the table. When your application requires finer throughput control, you can use global secondary indexes, since they have their own throughput. And when your application only requires eventually consistent reads, that’s when global secondary indexes should be used, because they do not support strongly consistent reads. All right, then: DynamoDB indexes and throttling. Let’s look at the throttling considerations for the two indexes. First, the local secondary indexes: these use the same capacity units as the main table.
So we don’t have any special throttling considerations with local secondary indexes. But in the case of global secondary indexes, you have to remember this, and it’s super important: if the writes on the GSI get throttled, then the main table will be throttled as well, even if the WCUs on the main table are fine. That means even if your main table is underutilizing its WCUs, if your GSI is overutilizing its WCUs, then your main table is also going to be throttled.
So it’s very important that you provision the capacity units on your global secondary indexes thoughtfully, and of course you should make sure that the partition keys are chosen carefully. Choose your partition keys based on the access patterns of your application, and provision the WCUs accordingly so that they don’t result in throttling, because the main table can also get throttled if your GSIs are throttled. All right?
Now let’s look at some simple design patterns with DynamoDB. You can model different entity relationships with DynamoDB, for example one-to-one, one-to-N, and M-to-N relationships. I know I have said that you cannot have relationships between tables in DynamoDB, but you can still model such relationships; that’s what we are going to look at now. We have our gameplay table here to store players’ game states, and we can use it for one-to-one or one-to-N modeling. For example, if you want to get the game data of a particular player, then you can query using the user ID as the partition key, and you get the data of a particular game.
If you provide the user ID and the game ID, then that’s one-to-one modeling. And if you query using just the user ID, then you’re going to get all the games played by that particular user; that’s an example of one-to-N modeling, since you can get multiple results back, with user ID as the partition key and game ID as the sort key. The second access pattern could be to get a player’s gaming history. This is also one-to-N modeling: with user ID as the partition key and game timestamp as the sort key, you can get all the games played by a particular user in a particular time period. And the third access pattern could be a gaming leaderboard, which would be M-to-N modeling, because there could be multiple users and multiple sessions.
For the leaderboard, we could have a global secondary index, because we don’t use the same partition key as the table: we could have game ID as the partition key and score as the sort key. So for any particular game, we can get the top scores from different players. That’s the way you create a gaming leaderboard. So these are some of the simple design patterns that you can create with DynamoDB; a sketch of these query patterns follows.
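Here’s a hedged boto3 sketch of the three access patterns. The table, index, and attribute names follow our gameplay example; the concrete key values are made up for illustration.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("gameplay")

# One-to-one: the full primary key pins down a single game for a single user.
one_game = table.get_item(Key={"user_id": "user123", "game_id": "game42"})

# One-to-N: the partition key alone returns all games for that user.
all_games = table.query(KeyConditionExpression=Key("user_id").eq("user123"))

# Leaderboard: query the GSI keyed on game_id + score, highest scores first.
top_scores = table.query(
    IndexName="game_id-score-index",
    KeyConditionExpression=Key("game_id").eq("game42"),
    ScanIndexForward=False,  # descending on the sort key (score)
    Limit=10,
)
```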
Now let’s look at another concept, called write sharding. This is very important when you have very few partition keys. For example, imagine you have a voting application with just two candidates. So essentially there are only two partition keys here, candidate A and candidate B. And remember that there is a limit of 10 GB per partition. So if we use the candidate ID as the partition key, we are going to run into partition issues, as we have very few partitions, just two. So what is the solution? The solution is to add a suffix to the candidate ID.
For example, you can add any random suffix. So candidate A could become candidate A_1 or candidate A_2, and candidate B could become candidate B_1, and so on. And when you want to count your votes, you combine the data from all the partition keys that begin with candidate A as the votes for candidate A, and you do the same thing for candidate B. This is how you allow DynamoDB to use more partitions and help it perform better, rather than creating hot partitions; a sketch follows.
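Here’s what write sharding could look like in boto3. The votes table schema, with candidate_id as the partition key and a vote_id sort key, and the shard count are my own assumptions for illustration.

```python
import random
import uuid

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("votes")  # hypothetical table
NUM_SHARDS = 10                                    # illustrative shard count

def record_vote(candidate: str) -> None:
    """Spread writes over candidate_A_1 .. candidate_A_10 instead of one key."""
    shard = random.randint(1, NUM_SHARDS)
    table.put_item(Item={
        "candidate_id": f"{candidate}_{shard}",    # e.g. "candidate_A_7"
        "vote_id": str(uuid.uuid4()),              # assumed sort key
    })

def count_votes(candidate: str) -> int:
    """Combine the counts from every shard of this candidate.

    (Pagination is omitted in this sketch.)
    """
    total = 0
    for shard in range(1, NUM_SHARDS + 1):
        resp = table.query(
            KeyConditionExpression=Key("candidate_id").eq(f"{candidate}_{shard}"),
            Select="COUNT",
        )
        total += resp["Count"]
    return total

record_vote("candidate_A")
print(count_votes("candidate_A"))
```

All right, let’s continue to the next lecture.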
Now, here we are quickly going to see some of the errors and exceptions in DynamoDB. There are two types of errors: server-side errors and client-side errors. You might already know that server-side errors are categorized with HTTP 500 codes, so 5xx, for example Service Unavailable, Internal Server Error, and things like that. And client-side errors are 400 errors, like an authentication failure, missing required parameters, or the 403 Forbidden error; those kinds of errors are client-side errors. Both of these are ways DynamoDB returns errors. DynamoDB also returns some exceptions; for example, here are some common ones.
I’m not really going to go into these. You just have to know that these are some of the exceptions, and you can make out what they mean from the name of the exception. So an access denied exception means you have a client-side error: your application doesn’t have the right permissions to access the DynamoDB table. And at the end, you see the provisioned throughput exceeded exception. This occurs when you consume more than what you have provisioned. If you’re using the AWS SDKs to communicate with DynamoDB from your application, then these SDKs are designed to automatically retry whenever they see the provisioned throughput exceeded exception.
If you’re not using the AWS SDKs, you can still code your application to retry on such exceptions, and more often than not, when you retry, you might not run into the exception again. So this is one way to resolve the provisioned throughput exceeded exception. Another option is to use exponential backoff. Both of these options, error retries and exponential backoff, are built into the AWS SDKs. What exponential backoff does is, for example, you make a request and you get a provisioned throughput exceeded exception; then exponential backoff is going to wait for, let’s say, 100 milliseconds or so before it retries. On the retry, if it still gets the same exception, then this time it is going to wait a little longer.
So it might wait about 200 milliseconds before retrying. And then again, if it gets the same exception, it’s going to wait even longer, for example 400 milliseconds, before retrying. And if you continue to get the provisioned throughput exceeded exception, then at some point it will stop retrying and back off. This is how exponential backoff works, and it’s built into the AWS SDKs. If you’re not using these SDKs, then you can code your applications in a similar fashion.
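The SDKs do this for you, but a hand-rolled sketch of the same idea might look like this (the helper name, retry count, and delays are my own illustration):

```python
import random
import time

import botocore.exceptions

def with_backoff(call, max_retries=5, base_delay=0.1):
    """Retry `call` with exponentially growing waits: 100 ms, 200 ms, 400 ms, ..."""
    for attempt in range(max_retries):
        try:
            return call()
        except botocore.exceptions.ClientError as err:
            if err.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise                                  # not a throttling error, re-raise
            # Double the wait on every retry, plus a little random jitter.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))
    raise RuntimeError("still throttled after retries; backing off")

# Usage might look like:
#   with_backoff(lambda: table.get_item(Key={"user_id": "user123", "game_id": "game42"}))
```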