PDFs and exam guides are not always the most efficient way to study. Prepare for your Amazon examination with our training course instead. The AWS Certified Database - Specialty course contains a complete set of videos that will give you profound, thorough knowledge for the Amazon certification exam, so you can pass the Amazon AWS Certified Database - Specialty test with flying colors.
Curriculum for AWS Certified Database - Specialty Certification Video Course
Name of Video | Time |
---|---|
1. Purpose Built Databases | 4:00 |
Name of Video | Time |
---|---|
1. Types of data | 5:00 |
2. Relational databases | 6:00 |
3. Non-relational databases | 7:00 |
Name of Video | Time |
---|---|
1. Amazon RDS overview | 4:00 |
2. RDS pricing model | 1:00 |
3. Instance type, storage type and storage auto scaling for RDS | 4:00 |
4. RDS parameter groups | 4:00 |
5. RDS option groups | 2:00 |
6. RDS - Hands on | 14:00 |
7. RDS security - Network | 3:00 |
8. RDS security - IAM | 6:00 |
9. Rotating RDS DB credentials | 1:00 |
10. Windows authentication in RDS for SQL Server | 3:00 |
11. RDS encryption in transit and at rest | 5:00 |
12. RDS backups | 4:00 |
13. Copying and sharing RDS snapshots | 2:00 |
14. How to encrypt an unencrypted RDS database | 2:00 |
15. DB restore options in RDS | 4:00 |
16. Exporting RDS DB snapshot to S3 | 2:00 |
17. RDS backup and restore - Hands on | 5:00 |
18. RDS Multi-AZ deployments and read replicas | 9:00 |
19. Read replica use cases | 1:00 |
20. Promoting a read replica to a standalone DB instance | 3:00 |
21. RDS Multi-AZ failover and replica promotion - Demo | 14:00 |
22. Enabling writes on a read replica | 1:00 |
23. RDS read replica capabilities | 2:00 |
24. RDS Second-tier replicas and replica promotion - Demo | 7:00 |
25. Cross-region read replicas in RDS | 1:00 |
26. RDS replicas with an external database | 2:00 |
27. RDS disaster recovery strategies | 6:00 |
28. Troubleshooting replica issues in RDS | 6:00 |
29. Performance hit on new read replicas | 2:00 |
30. Scaling and sharding in RDS | 3:00 |
31. RDS monitoring | 4:00 |
32. RDS event subscriptions, recommendations and logs | 5:00 |
33. Exporting RDS logs to S3 | 2:00 |
34. RDS Enhanced Monitoring | 2:00 |
35. RDS Performance Insights | 10:00 |
36. CloudWatch Application Insights | 1:00 |
37. RDS on VMware | 1:00 |
38. RDS - Good things to know | 2:00 |
39. Aurora overview | 4:00 |
40. Aurora architecture | 8:00 |
41. Aurora Parallel Query | 3:00 |
42. Aurora Serverless | 5:00 |
43. Data API for Aurora Serverless | 2:00 |
44. Aurora multi-master | 2:00 |
45. Aurora cross region replicas and Aurora Global database | 2:00 |
46. Reliability features in Aurora | 3:00 |
47. Aurora pricing model | 3:00 |
48. Aurora security - Network, IAM and encryption | 3:00 |
49. Parameter groups in Aurora and Aurora Serverless | 4:00 |
50. Creating an Aurora database - Hands on | 9:00 |
51. Creating an Aurora Serverless database - Hands on | 2:00 |
52. Using Data API with Aurora Serverless database - Hands on | 2:00 |
53. Scaling options in Aurora | 5:00 |
54. Aurora monitoring and advanced auditing | 4:00 |
55. Monitoring in RDS and Aurora - Demo | 4:00 |
56. Exporting Aurora logs | 1:00 |
57. Database activity streams in Aurora | 1:00 |
58. Troubleshooting storage issues in Aurora | 3:00 |
59. Aurora benchmarking | 1:00 |
60. Exporting data from Aurora into S3 | 2:00 |
61. Aurora backups and backtracking | 4:00 |
62. Aurora backups vs snapshots vs backtrack | 2:00 |
63. Aurora backup, restore, PITR and backtrack - Demo | 3:00 |
64. Cloning databases in Aurora | 3:00 |
65. Aurora failovers | 4:00 |
66. Cluster Cache Management (CCM) in Aurora PostgreSQL | 3:00 |
67. Simulating fault tolerance or resiliency in Aurora | 6:00 |
68. Simulating failovers in Aurora - Demo | 1:00 |
69. Aurora failover priorities in action - Demo | 10:00 |
70. Fast failover in Aurora PostgreSQL | 6:00 |
71. Cluster replication options for Aurora MySQL | 5:00 |
72. Aurora replicas vs RDS MySQL replicas | 3:00 |
73. Comparison of RDS deployments | 5:00 |
74. How to invoke Lambda functions from Aurora MySQL | 2:00 |
75. How to load data from S3 into Aurora MySQL | 2:00 |
76. RDS / Aurora – Good things to know | 3:00 |
Name of Video | Time |
---|---|
1. DynamoDB overview | 3:00 |
2. Working with DynamoDB - Hands on | 11:00 |
3. DynamoDB basics | 7:00 |
4. DynamoDB consistency | 5:00 |
5. DynamoDB pricing model | 6:00 |
6. DynamoDB throughput | 4:00 |
7. Calculating capacity units | 4:00 |
8. DynamoDB burst capacity and adaptive capacity | 6:00 |
9. DynamoDB local secondary index (LSI) | 4:00 |
10. DynamoDB global secondary index (GSI) | 3:00 |
11. Choosing between LSI and GSI | 3:00 |
12. Simple design patterns with DynamoDB | 4:00 |
13. Errors and exceptions in DynamoDB | 3:00 |
14. DynamoDB partitions | 5:00 |
15. DynamoDB partition behavior example | 6:00 |
16. Scaling options in DynamoDB | 4:00 |
17. DynamoDB scaling and partition behavior | 4:00 |
18. DynamoDB best practices | 4:00 |
19. DynamoDB best practices (contd.) | 4:00 |
20. Large object patterns and table operations | 5:00 |
21. DynamoDB accelerator (DAX) | 3:00 |
22. DAX architecture | 2:00 |
23. DAX operations | 4:00 |
24. Implementing DAX - Hands on | 6:00 |
25. DynamoDB backup and restore | 7:00 |
26. DynamoDB backup and restore - Hands on | 3:00 |
27. Continuous backup with PITR | 2:00 |
28. Continuous backup with PITR - Hands on | 4:00 |
29. DynamoDB encryption | 4:00 |
30. DynamoDB streams | 3:00 |
31. DynamoDB TTL | 2:00 |
32. DynamoDB TTL - Hands on | 5:00 |
33. TTL use cases | 1:00 |
34. DynamoDB global tables | 5:00 |
35. DynamoDB global tables - Hands on | 6:00 |
36. Fine-grained access control and Web-identity federation in DynamoDB | 6:00 |
37. CloudWatch contributor insights for DynamoDB | 1:00 |
Name of Video | Time |
---|---|
1. Redshift overview | 3:00 |
2. Creating a Redshift cluster - Hands on | 9:00 |
3. Redshift architecture | 2:00 |
4. Loading data into Redshift | 3:00 |
5. Loading data from S3 into Redshift - Hands on | 7:00 |
6. More ways to load data into Redshift | 2:00 |
7. Redshift Spectrum | 3:00 |
8. Querying S3 data with Redshift Spectrum - Hands on | 8:00 |
9. Redshift federated query | 1:00 |
10. Star schema in data warehouses | 2:00 |
11. Redshift fundamentals | 9:00 |
12. Redshift Workload Management (WLM) | 3:00 |
13. Redshift concurrency scaling | 2:00 |
14. Redshift scaling | 2:00 |
15. Redshift backup, restore and cross-region snapshots | 3:00 |
16. Redshift Multi-AZ deployment alternative | 4:00 |
17. Redshift availability and durability | 2:00 |
18. Redshift security | 3:00 |
19. Enhanced VPC routing in Redshift | 4:00 |
20. Redshift monitoring | 3:00 |
21. Redshift pricing | 1:00 |
22. Redshift related services - Athena and Quicksight | 3:00 |
23. AQUA for Redshift | 2:00 |
Name of Video | Time |
---|---|
1. ElastiCache overview | 5:00 |
2. Caching strategies | 5:00 |
3. Redis architecture and Multi-AZ auto-failover | 5:00 |
4. Redis backup and restore | 2:00 |
5. Redis scaling and replication | 6:00 |
6. Creating a Redis cluster - Hands on | 5:00 |
7. Redis global datastore | 3:00 |
8. Redis - Good things to know | 2:00 |
9. Redis best practices | 2:00 |
10. Redis use cases | 3:00 |
11. Memcached overview | 2:00 |
12. Memcached architecture | 1:00 |
13. Memcached auto discovery | 2:00 |
14. Memcached scaling | 2:00 |
15. Creating a Memcached cluster - Hands on | 4:00 |
16. Choosing between Redis and Memcached | 2:00 |
17. ElastiCache security | 4:00 |
18. ElastiCache logging and monitoring | 1:00 |
19. ElastiCache pricing | 1:00 |
Name of Video | Time |
---|---|
1. DocumentDB overview | 2:00 |
2. What and why about document databases | 2:00 |
3. DocumentDB architecture | 3:00 |
4. DocumentDB backup and restore | 2:00 |
5. DocumentDB scaling | 1:00 |
6. DocumentDB security | 2:00 |
7. DocumentDB pricing | 1:00 |
8. DocumentDB monitoring | 3:00 |
9. DocumentDB performance management | 1:00 |
10. Creating a DocumentDB cluster - Hands on | 5:00 |
Name of Video | Time |
---|---|
1. Neptune overview | 5:00 |
2. Neptune architecture | 3:00 |
3. Creating a Neptune cluster - Hands on | 13:00 |
4. Bulk loading graph data into Neptune | 2:00 |
5. Bulk loading graph data into Neptune from S3 - Hands on | 14:00 |
6. Neptune Workbench | 1:00 |
7. Querying Neptune via Jupyter Notebooks - Hands on | 5:00 |
8. Neptune replication and high availability | 2:00 |
9. Neptune backup and restore | 2:00 |
10. Database cloning in Neptune | 2:00 |
11. Neptune security | 2:00 |
12. Neptune monitoring | 1:00 |
13. Query queuing in Neptune | 1:00 |
14. Neptune service errors | 2:00 |
15. SPARQL federated query | 1:00 |
16. Neptune streams | 2:00 |
17. Neptune pricing | 1:00 |
Name of Video | Time |
---|---|
1. Amazon Elasticsearch Service overview | 3:00 |
2. ElasticSearch Service patterns | 2:00 |
3. Elasticsearch Service - Multi-AZ | 1:00 |
4. Logging options in Elasticsearch Service | 1:00 |
5. ElasticSearch Service pricing | 1:00 |
6. ElasticSearch Service - Hands on | 4:00 |
Name of Video | Time |
---|---|
1. Timestream overview | 4:00 |
2. Timestream pricing | 1:00 |
Name of Video | Time |
---|---|
1. QLDB overview | 2:00 |
2. QLDB architecture | 3:00 |
3. QLDB views | 2:00 |
4. Working with QLDB | 2:00 |
5. Data verification in QLDB | 2:00 |
6. Creating a QLDB ledger - Hands on | 12:00 |
7. Data verification in QLDB - Hands on | 3:00 |
8. QLDB backup and restore (an alternative) | 1:00 |
9. QLDB streams | 1:00 |
10. QLDB high availability, durability and an alternative to CRR | 1:00 |
11. QLDB security | 1:00 |
12. QLDB monitoring | 1:00 |
13. QLDB pricing | 1:00 |
Name of Video | Time |
---|---|
1. Keyspaces overview | 3:00 |
2. Migrating from Cassandra to Keyspaces | 1:00 |
3. Read and write consistency in Keyspaces | 1:00 |
4. Keyspaces pricing | 3:00 |
5. Working with Keyspaces - Hands on | 5:00 |
Name of Video | Time |
---|---|
1. Comparison of AWS Databases | 6:00 |
Name of Video | Time |
---|---|
1. Database migration overview | 5:00 |
2. DMS sources and targets | 2:00 |
3. DMS architecture and overview | 4:00 |
4. Migration with DMS in action - Hands on | 12:00 |
5. SCT overview | 3:00 |
6. Workload Qualification Framework (WQF) | 1:00 |
7. DMS tasks and task assessment reports | 2:00 |
8. DMS migration types | 1:00 |
9. DMS - Good things to know | 1:00 |
10. Migrating large tables and LOBs with DMS | 5:00 |
11. DW migration with SCT | 5:00 |
12. Migration playbooks | 3:00 |
13. DMS monitoring | 2:00 |
14. DMS validation | 2:00 |
15. DMS statistics and control tables | 3:00 |
16. DMS security - IAM, encryption and networking | 5:00 |
17. DMS pricing | 1:00 |
18. DMS general best practices | 1:00 |
19. DMS migration architectures to minimize downtime | 6:00 |
20. Migrating large databases | 2:00 |
21. Migrating to RDS databases | 8:00 |
22. Migrating to Aurora | 6:00 |
23. Migrating Redis workloads to ElastiCache | 4:00 |
24. Migrating to DocumentDB | 7:00 |
25. Streaming use cases for DMS | 4:00 |
Name of Video | Time |
---|---|
1. Encryption and Snapshots | 3:00 |
2. Database Logging | 5:00 |
3. Secrets Manager | 1:00 |
4. Active Directory with RDS Microsoft SQL Server | 2:00 |
Name of Video | Time |
---|---|
1. CloudFormation Overview | 7:00 |
2. CloudFormation Create Stack Hands On | 6:00 |
3. CloudFormation Update and Delete Stack Hands On | 8:00 |
4. YAML Crash Course | 4:00 |
5. CloudFormation Resources | 6:00 |
6. CloudFormation Parameters | 5:00 |
7. CloudFormation Mappings | 3:00 |
8. CloudFormation Outputs | 3:00 |
9. CloudFormation Conditions | 2:00 |
10. CloudFormation Intrinsic Functions | 6:00 |
Name of Video | Time |
---|---|
1. VPC Section Structure | 1:00 |
2. VPC, Subnets, IGW and NAT | 5:00 |
3. NACL, SG, VPC Flow Logs | 5:00 |
4. VPC Peering, Endpoints, VPN, DX | 6:00 |
5. VPC Cheat Sheet & Closing Comments | 3:00 |
Name of Video | Time |
---|---|
1. AWS Lambda Architectures | 4:00 |
2. Server Migration Service | 2:00 |
3. EBS-optimized instances | 2:00 |
4. Transferring large amount of data into AWS | 2:00 |
5. Disaster Recovery | 12:00 |
Name of Video | Time |
---|---|
1. Exam Guide & Sample Questions | 2:00 |
2. Sample question 1 | 4:00 |
3. Sample question 2 | 4:00 |
4. Sample question 3 | 3:00 |
5. Sample question 4 | 2:00 |
6. Sample question 5 | 3:00 |
7. Sample question 6 | 2:00 |
8. Sample question 7 | 4:00 |
9. Sample question 8 | 4:00 |
10. Sample question 9 | 5:00 |
11. Sample question 10 | 4:00 |
12. Exam Strategy: How to tackle exam questions | 1:00 |
13. Additional Resources | 1:00 |
100% Latest & Updated Amazon AWS Certified Database - Specialty Practice Test Questions, Exam Dumps & Verified Answers!
30 Days Free Updates, Instant Download!
AWS Certified Database - Specialty Premium File
AWS Certified Database - Specialty Training Course
AWS Certified Database - Specialty Study Guide
Free AWS Certified Database - Specialty Exam Questions & AWS Certified Database - Specialty Dumps
File Name | Size | Votes |
---|---|---|
amazon.braindumps.aws certified database - specialty.v2024-03-02.by.axel.137q.vce | 5.25 MB | 1 |
amazon.realtests.aws certified database - specialty.v2021-12-28.by.christopher.126q.vce | 3.85 MB | 1 |
amazon.passcertification.aws certified database - specialty.v2021-10-13.by.austin.111q.vce | 4.06 MB | 1 |
amazon.selftesttraining.aws certified database - specialty.v2021-08-31.by.aiden.80q.vce | 263.95 KB | 1 |
amazon.certkey.aws certified database - specialty.v2021-05-28.by.austin.94q.vce | 232.55 KB | 1 |
amazon.selftestengine.aws certified database - specialty.v2021-02-26.by.alexander.87q.vce | 210.71 KB | 2 |
Amazon AWS Certified Database - Specialty Training Course
Want verified and proven knowledge for the AWS Certified Database - Specialty exam? It's easy when you have ExamSnap's AWS Certified Database - Specialty certification video training course by your side, which, together with our Amazon AWS Certified Database - Specialty exam dumps and practice test questions, provides a complete solution for passing your exam.
Replica lag can also increase when the replica's instance class or storage is not sufficient, and running parallel queries on the primary instance can push the lag higher as well. Now let's look at how to troubleshoot replication errors. What are the recommendations? AWS recommends that your replica should match the source database: the storage size as well as the instance class should be the same as on the source. You should also use compatible database parameter group settings for the source and the replica. For example, the maximum allowed packet size on the read replica must match that of the source database instance. That's just one example, but in general the parameter group settings for the source and the replica should be the same or at least compatible. You can monitor the replication state of your replica instance, and if it shows an error, the details appear in the replication error field. Apart from this, you can subscribe to RDS events to get alerts on any replica issues. Then there is writing to tables on a read replica. As we discussed before, you can set the read_only parameter to 0 to make your read replica writable, but remember to use this only for maintenance purposes.
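To make that replication-state check concrete, here is a minimal boto3 sketch; the replica identifier and region are placeholders, not values from the course.

```python
# Minimal boto3 sketch: print a read replica's replication state and any
# replication error message. Identifier and region are hypothetical.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

replica = rds.describe_db_instances(DBInstanceIdentifier="mydb-replica-1")["DBInstances"][0]
for status in replica.get("StatusInfos", []):
    # StatusType is "read replication"; Status is e.g. "replicating" or "error".
    print(status["StatusType"], status["Status"], status.get("Message", ""))
```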
For example, say you want to create some indexes only on the replica. You set read_only to 0 to make the replica writable, and once you're done with the maintenance task, remember to set it back to 1. Another important thing to note is that if you write to tables on the read replica, you might make them incompatible with the source database and break replication. So, as I said, set the read_only parameter back to 1 immediately after completing your maintenance tasks. Also, replication is only supported with transactional storage engines like InnoDB, the storage engine used in MySQL. Another engine that MySQL provides is MyISAM, which is not compatible with replication and is likely to cause replication errors. InnoDB is the more common choice, and in general, if you want to use replication, you should use InnoDB. Unsafe, non-deterministic queries such as SYSDATE() can also break replication, so you should avoid such queries. Finally, after you review the replication errors, if you find that they are not major and not a cause for concern, you can simply skip those errors.
Or another option is to delete and recreate the replica. All right, now how do you troubleshoot read replica issues? In MySQL, replica issues are errors or data inconsistencies between the source and the replica. These can happen when binlog events or InnoDB redo logs aren't flushed during a replica or source instance failure. If something like this happens, the only remedy is to delete and recreate the replica. There are some recommendations to prevent these issues from happening: you can set certain parameters, for example sync_binlog = 1, innodb_flush_log_at_trx_commit = 1, and innodb_support_xa = 1. These settings are the AWS recommendations, but they can affect the performance of your database, so it's a good idea to test them before implementing them in production. From an exam perspective, you don't have to memorize the names of these parameters; just remember that you have options to address replica issues in MySQL. All right, let's continue.
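As a rough illustration of how you might flip these parameters with boto3, here is a sketch assuming the replica already uses a custom DB parameter group; the group name is hypothetical, and any change like this should be tested outside production first.

```python
# Sketch, assuming the replica uses a custom DB parameter group named
# "replica-mysql-params" (name is hypothetical).
import boto3

rds = boto3.client("rds", region_name="us-east-1")
PARAM_GROUP = "replica-mysql-params"

def set_parameters(params):
    """Apply a dict of parameter name -> value to the replica's parameter group."""
    rds.modify_db_parameter_group(
        DBParameterGroupName=PARAM_GROUP,
        Parameters=[
            {"ParameterName": name, "ParameterValue": value, "ApplyMethod": "immediate"}
            for name, value in params.items()
        ],
    )

# Make the replica writable for a maintenance task (e.g. building an index) ...
set_parameters({"read_only": "0"})
# ... do the maintenance work, then lock it down again immediately.
set_parameters({"read_only": "1"})

# Durability settings recommended to reduce replica inconsistencies; test the
# performance impact first. innodb_support_xa applies only to older MySQL
# versions (it was removed in MySQL 8.0), so it is omitted here.
set_parameters({
    "sync_binlog": "1",
    "innodb_flush_log_at_trx_commit": "1",
})
```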
Now, this is an important thing to know: when you create a new replica, you might experience slow performance on it. This happens because RDS snapshots are EBS snapshots stored in S3, and when you create a new replica, the EBS volumes are loaded lazily in the background. Only when you read certain data does that data get loaded into the database; the rest still resides in S3 and isn't loaded until you specifically query it. This is called the first-touch penalty: the first time you query any piece of data, it takes longer to retrieve. So how do you work around this? The simple option, if your database is small, is to run a SELECT * against each table so all the data gets pulled from S3 into your database. Another option is to initiate a full table scan, for example with VACUUM ANALYZE, which is supported in PostgreSQL. Besides this lazy loading, another reason for performance issues is an empty buffer pool. The buffer pool is the cache for table and index data; the primary instance always has this cache warmed up, but when you create a new read replica, its cache starts out empty. That's another reason why a new read replica can be slow at first.
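A rough warm-up sketch along those lines, assuming a PostgreSQL replica and the psycopg2 driver (the endpoint, database name, and credentials are placeholders): it scans every user table once so the lazily loaded blocks are read.

```python
# Sketch: warm up a freshly created PostgreSQL read replica to soften the
# first-touch penalty by scanning each user table once. Assumes psycopg2 is
# installed; connection details are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="mydb-replica-1.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="appdb",
    user="readonly_user",
    password="change-me",
)

with conn.cursor() as cur:
    cur.execute("""
        SELECT schemaname, tablename
        FROM pg_tables
        WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    """)
    tables = cur.fetchall()

for schema, table in tables:
    with conn.cursor() as cur:
        # A full table scan forces every block to be read once ("first touch").
        cur.execute(f'SELECT count(*) FROM "{schema}"."{table}"')
        print(f"{schema}.{table}: {cur.fetchone()[0]} rows warmed")

conn.close()
```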
Let's look at scaling in RDS. You can have vertical scaling or horizontal scaling. Vertical scaling means scaling up: you take your RDS instance and increase its size. If you have a single-AZ instance, it won't be available during the scaling operation. But with a Multi-AZ setup, you will have minimal downtime, because the scaling operation happens first on the standby, then there is a failover to the standby, and after that the former primary instance is upgraded. The other way to scale is horizontal scaling, also called scaling out. This simply means creating additional read replicas for your database instance, so adding more read capacity is how you scale horizontally. This is useful for read-heavy workloads, and the replicas also act as DR targets, so you can use them for disaster recovery. Apart from this, there is one more option I'd like to discuss, and that is sharding. So what exactly is sharding? Sharding is horizontal partitioning: you distribute the data across different databases. You can think of it as breaking your database down into smaller chunks.
You simply split and distribute your data across multiple databases, and each database is called a "shard". Your data is split across multiple chunks, and your application can then read from or write to the individual databases. You have to manage this at the application level; RDS doesn't inherently support sharding, so the mapping or routing logic has to be maintained in the application tier. For example, you can say that shard number one contains the data for customers 1 to 10,000, shard number two contains customers 10,001 to 20,000, and so on. In addition to splitting the load, this also offers a degree of fault tolerance, because there is no single point of failure: if one shard fails, the other shards are unaffected. All right, that's about it. Let's continue to the next lecture.
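Here is a toy Python sketch of the kind of routing logic the application tier would own; the shard names, key ranges, and endpoints are made up purely for illustration.

```python
# Toy sketch of application-level shard routing: RDS has no notion of shards,
# so the application keeps the mapping from key range to database endpoint.
from dataclasses import dataclass

@dataclass
class Shard:
    name: str
    endpoint: str
    first_customer_id: int
    last_customer_id: int

SHARDS = [
    Shard("shard-1", "shard1.xxxxxxxx.us-east-1.rds.amazonaws.com", 1, 10_000),
    Shard("shard-2", "shard2.xxxxxxxx.us-east-1.rds.amazonaws.com", 10_001, 20_000),
    Shard("shard-3", "shard3.xxxxxxxx.us-east-1.rds.amazonaws.com", 20_001, 30_000),
]

def shard_for_customer(customer_id: int) -> Shard:
    """Route a customer ID to the shard that owns its key range."""
    for shard in SHARDS:
        if shard.first_customer_id <= customer_id <= shard.last_customer_id:
            return shard
    raise KeyError(f"No shard configured for customer {customer_id}")

# The application opens its connection against the routed endpoint.
print(shard_for_customer(14_250).endpoint)  # -> the shard-2 endpoint
```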
Now let's look at RDS monitoring. Monitoring in RDS is integrated with CloudWatch, so you can do the monitoring from the RDS dashboard as well as from the CloudWatch dashboard. Some of the common metrics you can monitor are CPU, RAM, disk space consumption, network traffic, database connections, IOPS metrics, and so on. You can also use native database engine logs or database extensions. For example, you can use the pgAudit extension in PostgreSQL for auditing purposes. When you enable this extension, you can monitor the database activity happening on your PostgreSQL database: DML, DCL, and DDL queries can all be monitored.
DML is the data manipulation language, so any inserts and updates happening in your tables correspond to DML. DCL is the data control language: any access control changes in your database, like grants and revokes, fall under DCL. And DDL is the data definition language, so any table structure or schema changes fall under DDL. You can use extensions like pgAudit to keep track of this database activity. Then there are manual monitoring tools like Trusted Advisor and CloudWatch. You can certainly monitor through the RDS console, since all the CloudWatch metrics can be viewed there. AWS Trusted Advisor is a tool you can use for cost optimization, security, fault tolerance, and performance improvement checks; it gives you recommendations that you can implement for your database. And you can also monitor the RDS service health status.
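As a concrete illustration of pulling one of those CloudWatch metrics yourself, here is a hedged boto3 sketch; the instance identifier, region, and time window are placeholders.

```python
# Sketch: fetch a common RDS CloudWatch metric (CPUUtilization) for one
# instance over the last 3 hours, in 15-minute buckets.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.datetime.utcnow()

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb-instance"}],
    StartTime=now - datetime.timedelta(hours=3),
    EndTime=now,
    Period=900,
    Statistics=["Average", "Maximum"],
    Unit="Percent",
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"avg={point['Average']:.1f}%", f"max={point['Maximum']:.1f}%")
```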
And apart from this, you also have automated tools for monitoring purposes, like RDS event notifications. You can subscribe to RDS events so that whenever your database is modified, updated, or deleted, whatever happens to it, you get a notification. You can also enable the database engine logs and export them to CloudWatch Logs; again, CloudWatch is an automated tool. Then you have something called Enhanced Monitoring, which provides a real-time dashboard of operating system metrics such as CPU. Then there is Performance Insights, a really handy tool for identifying performance bottlenecks. We're going to go into detail about these tools shortly; I'm just listing them here. Then you have RDS recommendations, which are generated automatically for your databases and can be seen in the RDS console. And you also have CloudTrail integrated with RDS, just like with any other AWS service; it captures the RDS API calls, which are viewable in the CloudTrail console, or you can deliver them as a trail to an S3 bucket. You can view up to 90 days of account activity within the CloudTrail console, but if you want to store it for longer, you should create a trail and deliver it to S3, so you can then access the audit logs in S3. All right, that's about it. Let's continue.
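For the CloudTrail side, a small boto3 sketch along these lines can list recent RDS API calls from the 90-day event history; the region and time window are arbitrary choices for the example.

```python
# Sketch: list recent RDS API calls recorded in CloudTrail event history
# (no trail required for the last 90 days).
import datetime
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
now = datetime.datetime.utcnow()

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventSource", "AttributeValue": "rds.amazonaws.com"}],
    StartTime=now - datetime.timedelta(days=7),
    EndTime=now,
)

for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username", "-"))
```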
Now let's look at RDS notifications, also called event subscriptions. These are available within the RDS console. We'll go through a hands-on and look at everything, but for now, just know that you can find them in the RDS console.
This feature lets you create CloudWatch alarms that notify you whenever certain metric data crosses a threshold. It's integrated with CloudWatch and SNS, so you get the notifications through SNS: you send your alarm notifications to an SNS topic, and you can receive an email or a text message whenever these events occur. All you have to do is subscribe to whichever RDS events you need notifications for. Event sources can be DB instances, snapshots, security groups, parameter groups, clusters, cluster snapshots, and so on; for almost anything in the RDS architecture, you can get event notifications. You can have notifications on events like database instance creation, deletion, shutdown, restart, backup, recovery, failure, and so on, so you can be notified about whatever is relevant to your use case. Right, then let's look at the RDS recommendations. These are periodic, automated suggestions generated within your RDS console.
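A minimal boto3 sketch of creating such an event subscription, assuming an SNS topic already exists; the topic ARN, subscription name, and instance identifier are placeholders.

```python
# Sketch: subscribe to a few RDS event categories for one DB instance and
# send the notifications to an SNS topic.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_event_subscription(
    SubscriptionName="mydb-critical-events",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:rds-alerts",  # hypothetical topic
    SourceType="db-instance",
    SourceIds=["mydb-instance"],
    EventCategories=["failure", "failover", "backup", "recovery", "low storage"],
    Enabled=True,
)
```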
You will see recommendations like "Enhanced Monitoring is not enabled" or "database cluster with only one DB instance", where the suggestion is to use multiple instances or a Multi-AZ setup for improved performance and availability. So these are the kinds of recommendations available, and they are generated automatically within the RDS console. All right, now let's look at the logs, the RDS logs. You can view, watch, or download these logs from the RDS console, and you can export them to CloudWatch Logs. The types of logs available are determined by the database engine. Another important thing to remember is that, by default, CloudWatch Logs never expire; if you do want them to expire, you can set the log group retention policy anywhere from one day to about ten years. And these logs remain accessible from the RDS console even if you don't enable log export to CloudWatch Logs. Generally, though, it's a good idea to enable log export.
So the logs are exported from RDS to CloudWatch Logs, and the log types that can be exported vary by database engine. You have an alert log for Oracle, and an audit log for Oracle, MariaDB, and MySQL. For MariaDB and MySQL there is the MariaDB Audit Plugin, which you enable using an option group and which helps you audit database activity: any DML, DDL, or DCL queries running on your MySQL or MariaDB database can be audited with it. Similarly, the pgAudit extension, as we just discussed, can audit a PostgreSQL database. So the auditing mechanism depends on the type of database engine.
You may want to remember this, as it can sometimes help in your exam. Then you have a listener log in Oracle, a trace log in SQL Server, and error logs in MySQL, MariaDB, and PostgreSQL. PostgreSQL has a PostgreSQL log, which also contains the audit entries, as well as an upgrade log. MariaDB and MySQL have a general log, and they also provide a slow query log. Before we close, one thing I want to emphasise here is that you don't have to know which database engine provides which log to the letter, but it's a good idea to have these things in the back of your mind. They'll help you choose the right responses when questions related to these topics come up in the examination. All right, that's about it. Let's continue.
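To tie the log discussion together, here is a hedged boto3 sketch that enables log export for a MySQL instance and then sets a retention policy on one of the resulting log groups; the identifiers are placeholders, the exportable log types vary by engine, and the audit log assumes the MariaDB Audit Plugin option is enabled.

```python
# Sketch: export a MySQL instance's logs to CloudWatch Logs, then give one of
# the resulting log groups a retention policy so it eventually expires.
import boto3

rds = boto3.client("rds", region_name="us-east-1")
logs = boto3.client("logs", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="mydb-instance",
    CloudwatchLogsExportConfiguration={
        # "audit" requires the MariaDB Audit Plugin option group on MySQL/MariaDB.
        "EnableLogTypes": ["error", "general", "slowquery", "audit"],
    },
    ApplyImmediately=True,
)

# RDS writes to log groups named /aws/rds/instance/<id>/<log type>; the group
# exists once logs start flowing. Default retention is "never expire".
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/mydb-instance/error",
    retentionInDays=90,
)
```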
Prepared by top experts, the leading IT trainers: when it comes to your IT exam prep, you can count on the ExamSnap AWS Certified Database - Specialty certification video training course, which goes hand in hand with the corresponding Amazon AWS Certified Database - Specialty exam dumps, study guide, and practice test questions & answers.