Amazon AWS Certified Database Specialty – Amazon RDS and Aurora Part 2
Now let’s look at scaling in Aurora. So you have three types of scaling. In Aurora, we have storage scaling, we have compute scaling, and we also have auto scaling options. Let’s look at storage scaling first. This is built in and automatic. And we already know that storage scales from 10 GB up to 64 TB, and it scales in 10 GB increments, okay? And of course the 64 TB is a soft limit. You can request AWS Support to increase this limit if you need it. Then let’s look at compute scaling. There are different compute scaling options in Aurora. You can use instance scaling. So this is like vertical scaling. You can scale up or scale down the instance size, and you can have minimal downtime if you use replica promotion. This is also called a forced failover. So what you do is, when you want to scale up your instance, for example, instead of scaling the master instance, you scale up a replica. And once your replica is scaled up, you fail over to that replica.
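Just to make the increment behavior concrete, here is a minimal sketch modeling how storage grows in 10 GB steps up to the 64 TB soft limit. This is purely illustrative; Aurora manages the allocation internally:

```python
import math

def allocated_storage_gib(used_gib: float, cap_gib: int = 64 * 1024) -> int:
    """Illustrative model: storage starts at 10 GiB, grows in
    10 GiB increments, and is capped at the 64 TiB soft limit."""
    needed = max(10, math.ceil(used_gib / 10) * 10)
    return min(needed, cap_gib)

print(allocated_storage_gib(3))    # 10 (minimum allocation)
print(allocated_storage_gib(101))  # 110 (next 10 GiB increment)
```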
And the failover typically takes just about 30 seconds, as we already know. So you can really scale up your instances with just about 30 seconds of downtime, right? Then we also have read scaling. This is the horizontal scaling, or scale-out, operation. Here we simply add more read replicas to your cluster, and you can add up to 15 replicas. And the third option of scaling is setting a higher value for the max_connections parameter in the instance-level parameter group. So essentially, you are increasing the number of connections that your instance can support; you are scaling your instance capacity, right? And then we also have auto scaling with Aurora. With auto scaling, you can set scaling policies. You simply define scaling policies from the AWS console: from the Actions menu, choose the option that says "Add replica auto scaling", and you can set the min and max number of replicas and the scaling conditions.
So the Aurora cluster will scale between the min and max number of replicas depending on your scaling conditions. And this condition is defined using a target metric, a CloudWatch metric, either CPU utilization or the number of connections. These are the two metrics that you can use. This simply makes use of CloudWatch metrics and alarms. And you also have to define a corresponding IAM role and a cooldown period. The cooldown period is something that prevents frequent scaling operations. So you define your cooldown period, and Aurora ensures that if there was a scaling operation, then at least the cooldown period elapses before the next scaling operation can occur. All right, now let’s look at auto scaling in Aurora Serverless.
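Under the hood, Aurora replica auto scaling goes through the Application Auto Scaling service. As a hedged sketch, the setup just described roughly maps to the RegisterScalableTarget and PutScalingPolicy request shapes below; the cluster name, target value, and cooldown numbers are made-up examples, not recommendations:

```python
# Illustrative request payloads for Aurora replica auto scaling
# (the kind of dicts you would pass to an application-autoscaling client).
scalable_target = {
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:my-aurora-cluster",   # hypothetical cluster name
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "MinCapacity": 1,                            # min replicas
    "MaxCapacity": 15,                           # Aurora supports up to 15 replicas
}

scaling_policy = {
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        # One of the two supported target metrics: average CPU or connections
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "TargetValue": 60.0,     # e.g. keep average reader CPU around 60%
        "ScaleInCooldown": 300,  # cooldown to prevent frequent scaling
        "ScaleOutCooldown": 300,
    },
}

print(scaling_policy["PolicyType"])
```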
Aurora Serverless has automatic scaling, so it scales up and down based on the load. Load is again defined by the CPU utilization and the number of connections. After a scale-up operation, there is a 15-minute cooldown period before a subsequent scale-down operation. And after a scale-down, there is about a 310-second cooldown period before another scale-down. Okay? And it’s important to note here that there is no cooldown period for scaling up. The cooldown period applies only to the subsequent scale-down operations after a scale-up or after a scale-down. So there is no cooldown period for scaling up; a scale-up operation can happen anytime, as and when it is necessary. And it’s also important to note that scaling cannot happen in certain situations. For example, if there are long-running queries or transactions in progress, or if there are any temporary tables or table locks in use, then auto scaling cannot happen. All right, that’s about it. Let’s continue to the next lecture.
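The cooldown rules above can be encoded in a small sketch; the timings below just restate the lecture’s numbers (15 minutes after a scale-up, 310 seconds after a scale-down, no cooldown for scale-ups), so treat it as a mental model rather than the service’s actual implementation:

```python
SCALE_UP_TO_DOWN_COOLDOWN = 15 * 60   # 15 minutes before scale-down after a scale-up
SCALE_DOWN_TO_DOWN_COOLDOWN = 310     # 310 seconds between consecutive scale-downs

def can_scale(direction, last_op, seconds_since_last):
    """Return True if a scaling operation is allowed under the cooldown rules."""
    if direction == "up":
        return True  # no cooldown ever applies to scaling up
    if last_op == "up":
        return seconds_since_last >= SCALE_UP_TO_DOWN_COOLDOWN
    if last_op == "down":
        return seconds_since_last >= SCALE_DOWN_TO_DOWN_COOLDOWN
    return True  # no previous scaling operation

print(can_scale("up", "down", 0))      # True: scale-up is always allowed
print(can_scale("down", "up", 600))    # False: needs 900 seconds
print(can_scale("down", "down", 400))  # True: 310 seconds have elapsed
```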
Now let’s look at Aurora monitoring. Monitoring in Aurora is the same as in RDS. It uses the same RDS monitoring infrastructure, so there is no difference as such. You can use RDS notifications and log exports to CloudWatch Logs or to S3, and all the metrics are provided by CloudWatch. Just like RDS, you have Enhanced Monitoring, you have Performance Insights, you have RDS recommendations. The recommendations are the periodic automated suggestions that RDS comes up with for your instances, for your parameter groups, for your replicas, and so on. And you have CloudTrail for auditing purposes. You can also use AWS Trusted Advisor for any security or optimization recommendations as they relate to Aurora.
And in addition to this, Aurora also supports something called Advanced Auditing. Okay, so Advanced Auditing is available only with Aurora MySQL, and you typically use it to audit your database activity. All right, so this is similar to the pgAudit extension that you have in RDS for PostgreSQL, and it’s also similar to the MariaDB Audit Plugin that you use with MariaDB or with MySQL on RDS. Okay, so you use the Advanced Auditing option to monitor your database activity, or to monitor the different queries happening on your database cluster. You can view the Advanced Auditing logs in the Logs section of your database in the RDS console. You enable it with the server_audit_logging parameter, and there’s another parameter called server_audit_events, which you can use to choose which events to log. So there are different events that you can log with Advanced Auditing.
These are the events, and they are pretty self-explanatory: CONNECT, QUERY, QUERY_DCL, QUERY_DDL, QUERY_DML, and TABLE. CONNECT is what you use for logging the connections and disconnections to your database, irrespective of whether those connections are successful or failed. It also logs the user information, so you can monitor the logins, logouts, or even the failed login attempts to your database.
The QUERY event logs all the queries in plain text, including the failed queries. QUERY_DCL is for logging the DCL queries, or Data Control Language queries; these typically contain permission changes like GRANT or REVOKE statements. The QUERY_DDL audit event logs the DDL queries, or the queries that change table schemas, like CREATE TABLE, ALTER TABLE, et cetera. And QUERY_DML logs the DML queries, or Data Manipulation Language queries. These correspond to the regular SQL queries that change your data, or the queries that you use for reads. So the INSERT and UPDATE statements, or even the SELECT statements that you use for reading, will be logged by the QUERY_DML event. And you also have the TABLE event, which logs the tables that were affected by query execution. All right, let’s continue.
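To tie the parameter names together, here is a hedged sketch of what the relevant cluster parameter group settings look like; the particular event selection below is just an example, so pick the events you actually need:

```python
# Illustrative Aurora MySQL cluster parameter group settings
# for Advanced Auditing.
audit_parameters = {
    "server_audit_logging": "1",  # turn Advanced Auditing on
    # Comma-separated list chosen from:
    # CONNECT, QUERY, QUERY_DCL, QUERY_DDL, QUERY_DML, TABLE
    "server_audit_events": "CONNECT,QUERY_DCL,QUERY_DDL,QUERY_DML",
}

enabled_events = audit_parameters["server_audit_events"].split(",")
print(enabled_events)
```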
All right, in this demo, I’m going to quickly show you the monitoring options in the RDS console. The monitoring features in RDS as well as in Aurora are exactly the same, so this demo applies to RDS as well as to Aurora databases. All right, so I have this Aurora MySQL cluster here, and let’s find out how to use monitoring with this cluster. Let’s select the database cluster by clicking on the cluster name. This is the Connectivity & security tab. We’re going to go to the Monitoring tab, and you can see different CloudWatch metrics here, like CPU Utilization, DB Connections, Freeable Memory, Network Throughput, and so on. These metrics are provided by CloudWatch, and you can click on any of these graphs to enlarge them and study them further.
So this is the CPU Utilization graph, and you can use this screen to monitor the CPU utilization of your database cluster. You can see two graphs here, one in orange and one in blue; these correspond to the two instances that we have in our cluster. And you can also change the period here. The maximum granularity that CloudWatch metrics offer is 1 minute. Now let’s look at how to access Enhanced Monitoring. From the Monitoring dropdown, you can choose the option that says Enhanced Monitoring to switch to the Enhanced Monitoring graphs. Here you can see a couple of OS-level metrics like free memory, active memory, CPU user, CPU total, and so on. You cannot zoom into these metrics. And remember that Enhanced Monitoring offers higher granularity than the CloudWatch metrics, and you can change the granularity of Enhanced Monitoring by modifying your database instance.
You can choose granularity as high as 1 second, while with regular CloudWatch metrics you only get 60-second granularity. All right, and now let’s go to the OS process list. Here you can monitor different OS-level processes of your database cluster. All right, and now let’s go over to Performance Insights. Here you can see the performance insights of the two instances that we have in our cluster. And of course, there is not much to see here, because I have just launched these instances and there are no queries being made on this database. You can change the duration here, you can look at the load by waits, and you can look at the SQL statements, hosts, and users. Similarly, you have the metrics for the other instance as well. You can zoom in or zoom out by time to analyze the database load on these instances. So that’s about the monitoring features in RDS and Aurora. Let’s continue to the next lecture.
Now, exporting Aurora logs. Logs can be exported to CloudWatch Logs or to S3 in exactly the same way as with any other RDS database. Okay, so you simply export the logs to a CloudWatch Logs log group, and from there, you can export that data to Amazon S3.
And another way to publish logs to CloudWatch Logs is to enable the Advanced Auditing feature that we just saw and set the cluster-level parameter server_audit_logs_upload to 1. All right, and apart from this, you can also use the AWS SDK or Lambda functions to read the logs from Aurora and write them into an S3 bucket. All right, so that’s about it. Let’s continue.
Now, Database Activity Streams. A database activity stream is something that is available in addition to the Advanced Auditing feature. Database activity streams are a near-real-time data stream of your database activity, so you can use them for monitoring purposes, and even for auditing and compliance purposes. What Aurora does is create a Kinesis data stream and push your database activity to this stream.
All right, so you have your Aurora cluster that publishes the database activity stream to a Kinesis data stream, and you can monitor your database activity in near real time from the Kinesis console. You can also consume this data stream using a Lambda function or using Kinesis Data Firehose. All right?
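As a rough sketch of what a Lambda consumer might do, the snippet below filters query events out of an activity payload. Note the record shape here is a simplified, hypothetical stand-in; real activity-stream records arrive KMS-encrypted and carry more fields, so this only illustrates the consumption idea:

```python
import json

# Hypothetical, simplified stand-in for a decrypted activity-stream payload.
sample_payload = json.dumps({
    "databaseActivityEventList": [
        {"command": "QUERY", "commandText": "SELECT 1"},
        {"command": "CONNECT", "commandText": ""},
    ]
})

def extract_queries(payload):
    """Pull just the query events out of a (decrypted) activity payload."""
    events = json.loads(payload)["databaseActivityEventList"]
    return [e["commandText"] for e in events if e["command"] == "QUERY"]

print(extract_queries(sample_payload))  # ['SELECT 1']
```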
Troubleshooting storage issues in Aurora. When using Aurora, you could see different errors, for example: "could not write block XXX of temporary file: no space left on device", or "the free storage capacity for the DB instance is low", or error 1114 ("the table is full"), and so on. Errors like these typically relate to the local storage on the compute nodes. We already know that the shared cluster storage in Aurora is used for storing your persistent data.
And local storage is something that is used for temporary data and for logs. This is the local storage on the compute instance, or at the compute layer, and its size of course depends on the instance class that you are using for your Aurora cluster. So what you can do is monitor the CloudWatch metrics FreeLocalStorage and FreeableMemory to see the available local storage and freeable memory. In addition to this, you can also use Performance Insights: you can monitor the native counters in Performance Insights to identify the frequently failing queries that are consuming excessive storage, because when queries fail, they generate logs, and that adds to your local storage usage.
So you can use Performance Insights to identify the failing queries, and then you can fix those queries to prevent excessive use of your local storage. Apart from that, you can also tune certain parameters, like max_heap_table_size or tmp_table_size. What this is going to do is make your temporary tables consume more memory on the instance and thereby reduce the local storage used. And if none of these works, you always have the option to scale up your DB instance class, right? In addition, if you’re using PostgreSQL, you can enable the log_temp_files parameter to monitor the temporary tables and files, and you can set the rds.log_retention_period parameter to a lower value to reclaim additional storage space. All right, so that’s how you troubleshoot local storage issues in Aurora.
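Summarizing the knobs just mentioned as a hedged sketch; the numeric values below are placeholders for illustration, not tuning recommendations:

```python
# Illustrative parameter tweaks for Aurora local-storage pressure.
aurora_mysql_params = {
    # Let in-memory temporary tables grow larger before spilling to
    # local disk (values in bytes; placeholder sizes).
    "max_heap_table_size": str(64 * 1024 * 1024),
    "tmp_table_size": str(64 * 1024 * 1024),
}

aurora_postgres_params = {
    # Log temporary files/tables larger than this many kB.
    "log_temp_files": "1024",
    # Retain logs for fewer minutes to reclaim local storage sooner
    # (placeholder: 1440 minutes = 1 day).
    "rds.log_retention_period": "1440",
}

print(sorted(aurora_mysql_params))
```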
Now, let’s look at Aurora benchmarking. AWS claims that Aurora provides about five times the performance of typical MySQL databases and three times the performance of standard PostgreSQL databases. To substantiate this claim, AWS provides us with a benchmarking guide. Okay, so this benchmarking guide comes with a CloudFormation template that you can use to replicate, or to validate, this claim. And AWS recommends using the sysbench benchmarking tool for this purpose: you install sysbench on an EC2 instance, and then you perform your Aurora benchmarking activity. All right, let’s continue.
We already looked at how to export Aurora logs: you export your logs to CloudWatch Logs, and from there you export them to S3. Now let’s look at how to export your data from Aurora into S3. So, you have your Aurora database. You have to create an IAM role with a policy that allows Aurora to write to the given S3 bucket, and then you attach this role to your Aurora cluster. The way to do this is to add the role ARN to the cluster parameter group and attach that parameter group to your cluster. There are two parameters that you can use for this purpose.
You can set aurora_select_into_s3_role with the value of the IAM role ARN, or you can use aws_default_s3_role with the value of the IAM role ARN. All right. And once you do that, you can export your data from the Aurora database using simple SQL queries with SELECT ... INTO OUTFILE S3. So you simply use a SELECT ... INTO OUTFILE S3 query, and your data is exported to S3. For example, you can run a query like SELECT * FROM table_name INTO OUTFILE S3, and you provide the S3 file path. Further, you can add options like FIELDS TERMINATED BY ',' and LINES TERMINATED BY '\n'. So essentially we are creating a CSV, or comma-separated, file to dump our data into S3. So that’s about exporting data from Aurora to S3. Let’s continue to the next lecture.
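The export query just described can be sketched like this; I’m only assembling the SQL string in Python for illustration, and the table name and bucket path are made-up examples:

```python
def build_export_sql(table, s3_path):
    """Build an Aurora MySQL SELECT ... INTO OUTFILE S3 statement
    that produces comma-separated (CSV-style) output."""
    return (
        f"SELECT * FROM {table} "
        f"INTO OUTFILE S3 '{s3_path}' "
        "FIELDS TERMINATED BY ',' "
        "LINES TERMINATED BY '\\n'"
    )

# Hypothetical table and bucket path for the demo.
sql = build_export_sql("orders", "s3://my-bucket/exports/orders")
print(sql)
```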
In this lecture, let’s talk about Aurora backups and backtracking. Aurora backups and snapshots are the same as in RDS; there is no change. The only thing you should remember is that, unlike RDS, you cannot disable automatic backups in Aurora, so the minimum retention period is one day. You have the daily automatic backups with a 30-minute backup window, just like you have in RDS, and this also allows for PITR, or point-in-time recovery. And you can, of course, create snapshots as well. In addition to this, Aurora also offers something called Backtrack. The backtrack feature allows you to rewind your database in place; it’s kind of an in-place restore. All right, so let’s look at the backtrack feature. This is only supported with the MySQL flavor of Aurora. Backtrack allows you to quickly rewind your database cluster to a specific point in time, and unlike other restore processes, backtrack performs an in-place restore.
So you don’t have to create a new instance to restore your database; the same database will be restored in place. This allows you to quickly recover from a user error. A typical restore process without backtrack generally takes a few hours; with backtrack, the restore can happen within a few minutes. You can enable backtrack when creating a new database cluster, or when you’re restoring from a snapshot. The backtrack feature allows up to 72 hours of rewind; this is the target backtrack window. And remember, you can repeatedly backtrack backward as well as forward in time.
So if you have previously backtracked backward, then you can definitely backtrack forward as well. And remember, you can backtrack only the entire database cluster, not individual database instances or individual tables. Then, remember that the DB cluster remains unavailable while a backtrack operation is running, which means a few minutes of downtime. And backtrack, promising as it is, is not a replacement for your good old backups. You should still continue to back up your data using automatic backups as well as manual snapshots. And remember, you enable backtrack when you create or when you restore a database cluster. Then, cross-region replication must be disabled when you use backtrack; backtrack is not supported with cross-region replication. And when a backtrack operation runs, it causes a brief instance disruption, of course, because there are a few minutes of downtime, so you must stop or pause your applications before you perform a backtrack. And again, just like cross-region replication, backtrack is also not supported with multi-master clusters. All right.
Alright. Now let’s quickly compare the three backup options: backups, snapshots, and backtrack. Backups are automated, snapshots are manually triggered, and backtrack is automated as well. With backups, you can only restore to a new instance, and it takes hours. It’s a similar case with snapshots: you can only restore to a new instance, and it will take a couple of hours. Backtrack, on the other hand, supports in-place restore, and this happens within minutes. Automated backups support PITR within the backup retention period, which is a maximum of 35 days. Snapshots do not support PITR, and backtrack supports PITR up to a maximum of 72 hours. Backups are great for unexpected failures; snapshots are great for known database events, like database upgrades, for example.
And the backtrack option is great for undoing your mistakes, for quick restores. For example, if you make a mistake during testing or development, you can use backtrack to undo it. You can also use it to explore earlier data changes, and so on. And remember, as we have discussed already, you can repeatedly backtrack backward or forward in time. Of course, you cannot backtrack forward into the future; you can only backtrack forward if you have previously backtracked backward. Right? That makes sense, right? All right.
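A small sketch of the 72-hour window rule discussed above; this is purely illustrative, since the service enforces the window itself:

```python
from datetime import datetime, timedelta

BACKTRACK_WINDOW = timedelta(hours=72)  # maximum target backtrack window

def can_backtrack_to(target, now):
    """A backtrack target is valid if it's in the past and
    within the 72-hour backtrack window."""
    return now - BACKTRACK_WINDOW <= target <= now

now = datetime(2023, 6, 1, 12, 0)
print(can_backtrack_to(now - timedelta(hours=24), now))  # True
print(can_backtrack_to(now - timedelta(hours=96), now))  # False: outside the window
print(can_backtrack_to(now + timedelta(hours=1), now))   # False: can't rewind into the future
```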
In this lecture, let’s look at the backup and restore options with Aurora. With Aurora, you already have the automated backups that you can access from this particular link here. So automated backups will show up here; since we have just created this cluster, we don’t see any backups. Going back to the Databases screen, you can select any of the instances, and then from the Actions menu you can take a manual snapshot. So this is the primary master node, and this is the read replica. With the read replica also, you can take a snapshot. And for restoring from a snapshot, you go to Snapshots; the snapshots will show up here, you select the snapshot, and then you can restore from here. You can also copy your snapshots, share them, migrate them, export them to S3, or delete them.
Okay, this is not an Aurora snapshot, but the process is exactly the same with Aurora snapshots as well. All right, that’s how you restore from a snapshot or from an automated backup. Now, if you want to do a point-in-time restore, you have to select the cluster, and then from under Actions, you choose the option to restore to a point in time. When you choose this option, it will show you the latest restorable time, or you can provide a custom restoration time. The rest of the options are the same as when you create a new database.
So the point-in-time recovery is always to a new database instance, all right? I’m not going to do that; I’m just going to cancel out of here. So that was about point-in-time recovery. Now let’s look at the backtrack options. You select your cluster, and you can choose the option to backtrack. Remember, you can backtrack only the entire cluster; if you choose an individual instance, you cannot backtrack it, okay? When you use backtrack, the entire cluster will be backtracked. So choose the backtrack option, and here you can specify the time to which you want to backtrack. The earliest restorable time is shown here, so you can choose any time after this. So, for example, this is the earliest restorable time, so you can choose anything after that. Let’s say I want to restore to 4:42 and 0 seconds, okay? Just an arbitrary time for our demo. And then you can choose the option to backtrack the DB cluster.
And what this is going to do is backtrack the same cluster in place; it’s not going to create a new instance. So simply hit "Backtrack DB cluster", and it’s going to backtrack your cluster to the point in time that we specified. You can see that it says it’s backtracking the DB cluster, right? I’m going to pause the video here and come back after this is done. And now you can see that the cluster is available again; it took one or two minutes. So that’s how you would perform a backtrack operation with your Aurora clusters. All right, let’s continue to the next lecture.