PDFs and exam guides are not so efficient, right? Prepare for your Microsoft examination with our training course. The DP-203 course contains a complete batch of videos that will provide you with profound and thorough knowledge related to the Microsoft certification exam. Pass the Microsoft DP-203 test with flying colors.
Curriculum for DP-203 Certification Video Course
Name of Video | Time |
---|---|
1. IMPORTANT - How we are going to approach the exam objectives | 3:00 |
2. OPTIONAL - Overview of Azure | 2:00 |
3. OPTIONAL - Concepts in Azure | 4:00 |
4. Azure Free Account | 5:00 |
5. Creating an Azure Free Account | 5:00 |
6. OPTIONAL - Quick tour of the Azure Portal | 6:00 |
Name of Video | Time |
---|---|
1. Section Introduction | 2:00 |
2. Understanding data | 4:00 |
3. Example of data storage | 2:00 |
4. Lab - Azure Storage accounts | 6:00 |
5. Lab - Azure SQL databases | 15:00 |
6. A quick note when it comes to the Azure Free Account | 4:00 |
7. Lab - Application connecting to Azure Storage and SQL database | 11:00 |
8. Different file formats | 7:00 |
9. Azure Data Lake Gen-2 storage accounts | 3:00 |
10. Lab - Creating an Azure Data Lake Gen-2 storage account | 9:00 |
11. Using PowerBI to view your data | 7:00 |
12. Lab - Authorizing to Azure Data Lake Gen 2 - Access Keys - Storage Explorer | 6:00 |
13. Lab - Authorizing to Azure Data Lake Gen 2 - Shared Access Signatures | 8:00 |
14. Azure Storage Account - Redundancy | 11:00 |
15. Azure Storage Account - Access tiers | 9:00 |
16. Azure Storage Account - Lifecycle policy | 3:00 |
17. Note on Costing | 5:00 |
Name of Video | Time |
---|---|
1. Section Introduction | 2:00 |
2. The internals of a database engine | 4:00 |
3. Lab - Setting up a new Azure SQL database | 3:00 |
4. Lab - T-SQL - SELECT clause | 3:00 |
5. Lab - T-SQL - WHERE clause | 3:00 |
6. Lab - T-SQL - ORDER BY clause | 1:00 |
7. Lab - T-SQL - Aggregate Functions | 1:00 |
8. Lab - T-SQL - GROUP BY clause | 4:00 |
9. Lab - T-SQL - HAVING clause | 1:00 |
10. Quick Review on Primary and Foreign Keys | 4:00 |
11. Lab - T-SQL - Creating Tables with Keys | 3:00 |
12. Lab - T-SQL - Table Joins | 5:00 |
Name of Video | Time |
---|---|
1. Section Introduction | 2:00 |
2. Why do we need a data warehouse | 10:00 |
3. Welcome to Azure Synapse Analytics | 2:00 |
4. Lab - Let's create an Azure Synapse workspace | 3:00 |
5. Azure Synapse - Compute options | 3:00 |
6. Using External tables | 4:00 |
7. Lab - Using External tables - Part 1 | 9:00 |
8. Lab - Using External tables - Part 2 | 12:00 |
9. Lab - Creating a SQL pool | 7:00 |
10. Lab - SQL Pool - External Tables - CSV | 9:00 |
11. Data Cleansing | 4:00 |
12. Lab - SQL Pool - External Tables - CSV with formatted data | 3:00 |
13. Lab - SQL Pool - External Tables - Parquet - Part 1 | 4:00 |
14. Lab - SQL Pool - External Tables - Parquet - Part 2 | 7:00 |
15. Loading data into the Dedicated SQL Pool | 2:00 |
16. Lab - Loading data into a table - COPY Command - CSV | 11:00 |
17. Lab - Loading data into a table - COPY Command - Parquet | 3:00 |
18. Pausing the Dedicated SQL pool | 3:00 |
19. Lab - Loading data using PolyBase | 5:00 |
20. Lab - BULK INSERT from Azure Synapse | 6:00 |
21. My own experience | 6:00 |
22. Designing a data warehouse | 11:00 |
23. More on dimension tables | 5:00 |
24. Lab - Building a data warehouse - Setting up the database | 6:00 |
25. Lab - Building a Fact Table | 8:00 |
26. Lab - Building a dimension table | 6:00 |
27. Lab - Transfer data to our SQL Pool | 15:00 |
28. Other points in the copy activity | 2:00 |
29. Lab - Using Power BI for Star Schema | 6:00 |
30. Understanding Azure Synapse Architecture | 7:00 |
31. Understanding table types | 7:00 |
32. Understanding Round-Robin tables | 5:00 |
33. Lab - Creating Hash-distributed Tables | 5:00 |
34. Note on creating replicated tables | 1:00 |
35. Designing your tables | 4:00 |
36. Designing tables - Review | 4:00 |
37. Lab - Example when using the right distributions for your tables | 10:00 |
38. Points on tables in Azure Synapse | 2:00 |
39. Lab - Windowing Functions | 4:00 |
40. Lab - Reading JSON files | 5:00 |
41. Lab - Surrogate keys for dimension tables | 6:00 |
42. Slowly Changing dimensions | 4:00 |
43. Type 3 Slowly Changing Dimension | 2:00 |
44. Creating a heap table | 3:00 |
45. Snowflake schema | 1:00 |
46. Lab - CASE statement | 6:00 |
47. Partitions in Azure Synapse | 2:00 |
48. Lab - Creating a table with partitions | 11:00 |
49. Lab - Switching partitions | 7:00 |
50. Indexes | 6:00 |
51. Quick Note - Modern Data Warehouse Architecture | 2:00 |
52. Quick Note on what we are taking forward to the next sections | 2:00 |
53. What about the Spark Pool | 2:00 |
Name of Video | Time |
---|---|
1. Section Introduction | 1:00 |
2. Extract, Transform and Load | 2:00 |
3. What is Azure Data Factory | 5:00 |
4. Starting with Azure Data Factory | 2:00 |
5. Lab - Azure Data Lake to Azure Synapse - Log.csv file | 13:00 |
6. Lab - Azure Data Lake to Azure Synapse - Parquet files | 13:00 |
7. Lab - The case with escape characters | 8:00 |
8. Review on what has been done so far | 6:00 |
9. Lab - Generating a Parquet file | 5:00 |
10. Lab - What about using a query for data transfer | 6:00 |
11. Deleting artefacts in Azure Data Factory | 3:00 |
12. Mapping Data Flow | 5:00 |
13. Lab - Mapping Data Flow - Fact Table | 14:00 |
14. Lab - Mapping Data Flow - Dimension Table - DimCustomer | 15:00 |
15. Lab - Mapping Data Flow - Dimension Table - DimProduct | 10:00 |
16. Lab - Surrogate Keys - Dimension tables | 4:00 |
17. Lab - Using Cache sink | 9:00 |
18. Lab - Handling Duplicate rows | 8:00 |
19. Note - What happens if we don't have any data in our DimProduct table | 4:00 |
20. Changing connection details | 1:00 |
21. Lab - Changing the Time column data in our Log.csv file | 8:00 |
22. Lab - Convert Parquet to JSON | 5:00 |
23. Lab - Loading JSON into SQL Pool | 5:00 |
24. Self-Hosted Integration Runtime | 3:00 |
25. Lab - Self-Hosted Runtime - Setting up nginx | 9:00 |
26. Lab - Self-Hosted Runtime - Setting up the runtime | 7:00 |
27. Lab - Self-Hosted Runtime - Copy Activity | 7:00 |
28. Lab - Self-Hosted Runtime - Mapping Data Flow | 16:00 |
29. Lab - Processing JSON Arrays | 8:00 |
30. Lab - Processing JSON Objects | 6:00 |
31. Lab - Conditional Split | 6:00 |
32. Lab - Schema Drift | 12:00 |
33. Lab - Metadata activity | 14:00 |
34. Lab - Azure DevOps - Git configuration | 11:00 |
35. Lab - Azure DevOps - Release configuration | 11:00 |
36. What resources are we taking forward | 1:00 |
Name of Video | Time |
---|---|
1. Batch and Real-Time Processing | 5:00 |
2. What are Azure Event Hubs | 5:00 |
3. Lab - Creating an instance of Event hub | 7:00 |
4. Lab - Sending and Receiving Events | 10:00 |
5. What is Azure Stream Analytics | 2:00 |
6. Lab - Creating a Stream Analytics job | 4:00 |
7. Lab - Azure Stream Analytics - Defining the job | 10:00 |
8. Review on what we have seen so far | 8:00 |
9. Lab - Reading database diagnostic data - Setup | 4:00 |
10. Lab - Reading data from a JSON file - Setup | 6:00 |
11. Lab - Reading data from a JSON file - Implementation | 5:00 |
12. Lab - Reading data from the Event Hub - Setup | 7:00 |
13. Lab - Reading data from the Event Hub - Implementation | 8:00 |
14. Lab - Timing windows | 10:00 |
15. Lab - Adding multiple outputs | 4:00 |
16. Lab - Reference data | 5:00 |
17. Lab - OVER clause | 8:00 |
18. Lab - Power BI Output | 10:00 |
19. Lab - Reading Network Security Group Logs - Server Setup | 3:00 |
20. Lab - Reading Network Security Group Logs - Enabling NSG Flow Logs | 8:00 |
21. Lab - Reading Network Security Group Logs - Processing the data | 13:00 |
22. Lab - User Defined Functions | 9:00 |
23. Custom Serialization Formats | 3:00 |
24. Lab - Azure Event Hubs - Capture Feature | 7:00 |
25. Lab - Azure Data Factory - Incremental Data Copy | 11:00 |
26. Demo on Azure IoT Devkit | 5:00 |
27. What resources are we taking forward | 1:00 |
Name of Video | Time |
---|---|
1. Section Introduction | 2:00 |
2. Introduction to Scala | 2:00 |
3. Installing Scala | 6:00 |
4. Scala - Playing with values | 3:00 |
5. Scala - Installing IntelliJ IDE | 5:00 |
6. Scala - If construct | 3:00 |
7. Scala - for construct | 1:00 |
8. Scala - while construct | 1:00 |
9. Scala - case construct | 1:00 |
10. Scala - Functions | 2:00 |
11. Scala - List collection | 4:00 |
12. Starting with Python | 3:00 |
13. Python - A simple program | 2:00 |
14. Python - If construct | 1:00 |
15. Python - while construct | 1:00 |
16. Python - List collection | 2:00 |
17. Python - Functions | 2:00 |
18. Quick look at Jupyter Notebook | 4:00 |
19. Lab - Azure Synapse - Creating a Spark pool | 8:00 |
20. Lab - Spark Pool - Starting out with Notebooks | 9:00 |
21. Lab - Spark Pool - Spark DataFrames | 4:00 |
22. Lab - Spark Pool - Sorting data | 6:00 |
23. Lab - Spark Pool - Load data | 8:00 |
24. Lab - Spark Pool - Removing NULL values | 8:00 |
25. Lab - Spark Pool - Using SQL statements | 3:00 |
26. Lab - Spark Pool - Write data to Azure Synapse | 11:00 |
27. Spark Pool - Combined Power | 2:00 |
28. Lab - Spark Pool - Sharing tables | 4:00 |
29. Lab - Spark Pool - Creating tables | 5:00 |
30. Lab - Spark Pool - JSON files | 6:00 |
Name of Video | Time |
---|---|
1. What is Azure Databricks | 4:00 |
2. Clusters in Azure Databricks | 6:00 |
3. Lab - Creating a workspace | 3:00 |
4. Lab - Creating a cluster | 14:00 |
5. Lab - Simple notebook | 3:00 |
6. Lab - Using DataFrames | 4:00 |
7. Lab - Reading a CSV file | 4:00 |
8. Databricks File System | 2:00 |
9. Lab - The SQL Data Frame | 3:00 |
10. Visualizations | 1:00 |
11. Lab - Few functions on dates | 2:00 |
12. Lab - Filtering on NULL values | 2:00 |
13. Lab - Parquet-based files | 2:00 |
14. Lab - JSON-based files | 3:00 |
15. Lab - Structured Streaming - Let's first understand our data | 3:00 |
16. Lab - Structured Streaming - Streaming from Azure Event Hubs - Initial steps | 8:00 |
17. Lab - Structured Streaming - Streaming from Azure Event Hubs - Implementation | 10:00 |
18. Lab - Getting data from Azure Data Lake - Setup | 7:00 |
19. Lab - Getting data from Azure Data Lake - Implementation | 5:00 |
20. Lab - Writing data to Azure Synapse SQL Dedicated Pool | 5:00 |
21. Lab - Stream and write to Azure Synapse SQL Dedicated Pool | 5:00 |
22. Lab - Azure Data Lake Storage Credential Passthrough | 10:00 |
23. Lab - Running an automated job | 6:00 |
24. Autoscaling a cluster | 2:00 |
25. Lab - Removing duplicate rows | 3:00 |
26. Lab - Using the PIVOT command | 4:00 |
27. Lab - Azure Databricks Table | 5:00 |
28. Lab - Azure Data Factory - Running a notebook | 6:00 |
29. Delta Lake Introduction | 2:00 |
30. Lab - Creating a Delta Table | 5:00 |
31. Lab - Streaming data into the table | 3:00 |
32. Lab - Time Travel | 2:00 |
33. Quick note on deciding between Azure Synapse and Azure Databricks | 2:00 |
34. What resources are we taking forward | 1:00 |
Name of Video | Time |
---|---|
1. Section Introduction | 1:00 |
2. What is the Azure Key Vault service | 5:00 |
3. Azure Data Factory - Encryption | 5:00 |
4. Azure Synapse - Customer Managed Keys | 3:00 |
5. Azure Dedicated SQL Pool - Transparent Data Encryption | 2:00 |
6. Lab - Azure Synapse - Data Masking | 10:00 |
7. Lab - Azure Synapse - Auditing | 6:00 |
8. Azure Synapse - Data Discovery and Classification | 4:00 |
9. Azure Synapse - Azure AD Authentication | 3:00 |
10. Lab - Azure Synapse - Azure AD Authentication - Setting the admin | 4:00 |
11. Lab - Azure Synapse - Azure AD Authentication - Creating a user | 8:00 |
12. Lab - Azure Synapse - Row-Level Security | 7:00 |
13. Lab - Azure Synapse - Column-Level Security | 4:00 |
14. Lab - Azure Data Lake - Role Based Access Control | 7:00 |
15. Lab - Azure Data Lake - Access Control Lists | 7:00 |
16. Lab - Azure Synapse - External Tables Authorization via Managed Identity | 8:00 |
17. Lab - Azure Synapse - External Tables Authorization via Azure AD Authentication | 5:00 |
18. Lab - Azure Synapse - Firewall | 7:00 |
19. Lab - Azure Data Lake - Virtual Network Service Endpoint | 7:00 |
20. Lab - Azure Data Lake - Managed Identity - Data Factory | 6:00 |
Name of Video | Time |
---|---|
1. Best practices for structuring files in your data lake | 3:00 |
2. Azure Storage accounts - Query acceleration | 2:00 |
3. View on Azure Monitor | 7:00 |
4. Azure Monitor - Alerts | 8:00 |
5. Azure Synapse - System Views | 2:00 |
6. Azure Synapse - Result set caching | 6:00 |
7. Azure Synapse - Workload Management | 4:00 |
8. Azure Synapse - Retention points | 2:00 |
9. Lab - Azure Data Factory - Monitoring | 7:00 |
10. Azure Data Factory - Monitoring - Alerts and Metrics | 4:00 |
11. Lab - Azure Data Factory - Annotations | 3:00 |
12. Azure Data Factory - Integration Runtime - Note | 7:00 |
13. Azure Data Factory - Pipeline Failures | 3:00 |
14. Azure Key Vault - High Availability | 2:00 |
15. Azure Stream Analytics - Metrics | 3:00 |
16. Azure Stream Analytics - Streaming Units | 2:00 |
17. Azure Stream Analytics - An example on monitoring the stream analytics job | 11:00 |
18. Azure Stream Analytics - The importance of time | 7:00 |
19. Azure Stream Analytics - More on the time aspect | 6:00 |
20. Azure Event Hubs and Stream Analytics - Partitions | 5:00 |
21. Azure Stream Analytics - An example on multiple partitions | 7:00 |
22. Azure Stream Analytics - More on partitions | 4:00 |
23. Azure Stream Analytics - An example on diagnosing errors | 4:00 |
24. Azure Stream Analytics - Diagnostics setting | 6:00 |
25. Azure Databricks - Monitoring | 7:00 |
26. Azure Databricks - Sending logs to Azure Monitor | 3:00 |
27. Azure Event Hubs - High Availability | 6:00 |
100% Latest & Updated Microsoft Azure DP-203 Practice Test Questions, Exam Dumps & Verified Answers!
30 Days Free Updates, Instant Download!
Free DP-203 Exam Questions & DP-203 Dumps
File Name | Size | Votes |
---|---|---|
microsoft.examlabs.dp-203.v2024-09-11.by.annabelle.126q.vce | 2.48 MB | 1 |
microsoft.realtests.dp-203.v2022-01-21.by.leja.124q.vce | 2.59 MB | 1 |
microsoft.braindumps.dp-203.v2021-11-02.by.zeynep.105q.vce | 2.51 MB | 1 |
microsoft.examlabs.dp-203.v2021-08-10.by.rory.64q.vce | 1.74 MB | 1 |
microsoft.test-king.dp-203.v2021-07-23.by.hunter.25q.vce | 1.13 MB | 1 |
microsoft.prep4sure.dp-203.v2021-04-16.by.maria.36q.vce | 1.3 MB | 2 |
Microsoft DP-203 Training Course
Want verified and proven knowledge for Data Engineering on Microsoft Azure? It's easy when you have ExamSnap's Data Engineering on Microsoft Azure certification video training course by your side, which, along with our Microsoft DP-203 Exam Dumps & Practice Test questions, provides a complete solution to pass your exam.
Let's go to the next SQL file. So this is the ORDER BY clause. Now, the ORDER BY clause is used to sort the result set in either ascending or descending order. By default, the records are sorted in ascending order. So here I am selecting all the rows from the SalesLT.Product table, but I'm ordering them by the list price. I'll just replace the query over here and hit Execute, and I can see everything being sorted as per the list price. If you want to list it in descending order, just add the DESC keyword over here and hit Execute. And now the rows are being displayed to you in descending order of the list price. So in this chapter, I just wanted to quickly go through the ORDER BY clause.
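For reference, here is a minimal sketch of the two queries walked through above, assuming the AdventureWorksLT sample schema (a SalesLT.Product table with a ListPrice column):

```sql
-- Ascending order is the default
SELECT *
FROM SalesLT.Product
ORDER BY ListPrice;

-- Add DESC to sort from the highest list price down
SELECT *
FROM SalesLT.Product
ORDER BY ListPrice DESC;
```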
Let's go on to our next SQL file. Now, here I am looking at aggregate functions. Aggregation is very important when it comes to analysis of your data. So, more often than not, you will actually be performing aggregation of your data to look at statistics about the data itself. Here, let's say I want to find the number of products; I'm using the Product ID over here, where the product name is like '%Silver%', that is, where it contains the string 'Silver'. You can go ahead and copy this, and I can execute it over here. So I can see there are three products that actually fulfil this condition. If you want to find the maximum Product ID, you can go ahead and execute that as well; the maximum Product ID is 988. Similarly, you can also go ahead and find the minimum Product ID and hit Execute, so you get the minimum Product ID wherein this condition is being fulfilled. If you want to find the sum, or if you want to find the average, you can go ahead and use those aggregate functions as well. We'll also be using these aggregate functions in Azure Stream Analytics, because data engineers will normally use aggregate functions when it comes to their streaming data.
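A minimal sketch of those aggregate queries, again assuming the SalesLT.Product table from the AdventureWorksLT sample:

```sql
-- Number of products whose name contains the string 'Silver'
SELECT COUNT(ProductID)
FROM SalesLT.Product
WHERE Name LIKE '%Silver%';

-- Highest and lowest ProductID among those same products
SELECT MAX(ProductID) FROM SalesLT.Product WHERE Name LIKE '%Silver%';
SELECT MIN(ProductID) FROM SalesLT.Product WHERE Name LIKE '%Silver%';

-- SUM and AVG work the same way, for example on the ListPrice column
SELECT SUM(ListPrice), AVG(ListPrice)
FROM SalesLT.Product
WHERE Name LIKE '%Silver%';
```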
Now let's go on to the next SQL statement. This is the GROUP BY clause, probably one of the most important clauses. So the GROUP BY keyword is used to group the rows into summary rows, and it is normally used along with aggregate functions. So over here, I want to see the count of the Product ID from the sales table, and I'm grouping it by the Color column value. If I go ahead and copy this, replace it over here, and hit Execute, I can see the number of product IDs grouped by the color. I can go a step further and give an alias to the count of the Product ID, because as you can see over here, there is no column name. So I can take this, copy it over here, and execute it, and now a name is being given to our column: the product ID count. I can take it even a step further and select the color as well, so I can see a better representation of my data. I'll hit Execute. So now I can see that for the color black, let's say, we have 89 products that match. We can see all of these details. Here you can also see that there are products which have no color at all: NULL values. Again, a very important question when you're working with your data is how you actually handle rows that have NULL values. Sometimes you might want to put a default value over there if there is a NULL value for that particular column. Or you might totally discard the rows that have a NULL value for a particular column; you might consider discarding those rows if they don't add any meaning to the final result set on which you are trying to perform an analysis. Giving a default value is also sometimes a risk, because it can skew your data. So you should always think very carefully about your data before performing any analysis.
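The progression described above could look like this, assuming the same SalesLT.Product table (the alias name ProductIDCount is illustrative):

```sql
-- Count of products per color; the count column has no name yet
SELECT COUNT(ProductID)
FROM SalesLT.Product
GROUP BY Color;

-- The same query with an alias, plus the Color column itself
SELECT Color, COUNT(ProductID) AS ProductIDCount
FROM SalesLT.Product
GROUP BY Color;
```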
What are you going to do for those rows that actually have a NULL value? So over here, if you now only want to see a count of the products wherein the color is not NULL, we can go ahead and execute the statement, and we're getting the results as desired. Now, the GROUP BY clause is again a very important concept when it comes to the SQL data warehouse. And over there, when you want to increase the efficiency of the GROUP BY clause, you will actually be using different types of tables. So we have hash-distributed tables, we have something known as replicated tables, and we also have round-robin tables. And when it comes to the GROUP BY clause, as you are going to see later on, we are going to be working with something known as fact tables and dimension tables. Normally you use grouping when you are working with your fact and dimension tables, and at that point in time, to have an effective distribution of data, you'll actually create these types of tables. So in the SQL data warehouse, you have hash-distributed table types and replicated table types. We will be looking at all of these table types when we are looking at the SQL data warehouse.
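To make the NULL filter and the distribution idea concrete, here is a hedged sketch: the first query adds an IS NOT NULL check to the grouping query, and the CREATE TABLE shows the dedicated SQL pool distribution syntax covered later in the course (the FactSales table and its columns are purely illustrative):

```sql
-- Only count products that actually have a color
SELECT Color, COUNT(ProductID) AS ProductIDCount
FROM SalesLT.Product
WHERE Color IS NOT NULL
GROUP BY Color;

-- In a dedicated SQL pool, a fact table is typically hash-distributed
-- on a column used for joins and grouping (illustrative schema)
CREATE TABLE FactSales
(
    ProductID INT NOT NULL,
    Quantity  INT,
    Amount    DECIMAL(10, 2)
)
WITH
(
    DISTRIBUTION = HASH (ProductID)
);
```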
So I'll go on to the next file. This is just the HAVING clause. Here, aggregate functions such as SUM and COUNT cannot be used along with the WHERE clause, but you can actually use them in the HAVING clause. The only difference here is that I only want to have a result set wherein the number of products that actually match a particular color is greater than ten. That's it. If I take the statement, place it over here, and hit Execute, we are only getting those rows where the count of the products for a particular color is greater than ten.
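A minimal sketch of that query against the same assumed sample table:

```sql
-- Aggregate conditions belong in HAVING, not in WHERE
SELECT Color, COUNT(ProductID) AS ProductIDCount
FROM SalesLT.Product
GROUP BY Color
HAVING COUNT(ProductID) > 10;
```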
Now in this chapter, I just want to have a quick review when it comes to primary keys and foreign keys. So, primary keys are used to uniquely identify rows within a particular table. If you are certain that you won't have duplicate rows in place, you can go in and enforce a primary key constraint on a particular table. This is a feature that is available in most SQL database systems. And then you have the foreign key: if you have one table that has a dependency on another table, you can have a foreign key constraint. If I go on to my Object Explorer and look at the sample tables that we have over here, I can see that there are relationships between the tables.
So if I consider the product, the product category, the description, and the model tables, I want to see what the relationship between these tables is. If I, for example, go on to the product table and then on to Keys, I can see that there is a primary key that is based on the Product ID. This means that the Product ID value is unique for every row; that is, you can't have two rows that have the same Product ID. I can also see that there are foreign key constraints that map onto other tables. So there is a relationship between the product table and the product category table, and there is a relationship between the product table and the product model table. If you want to see a visualization of the relationships, you can actually do it from SQL Server Management Studio: you can right-click on Database Diagrams and click on New Database Diagram. Here, it will give you a message that one or more of the support objects required is not in place.
Do you want to create them? I'll click Yes. Then it gives me a screen wherein I can choose which tables I want to add on to my database diagram. So I'll choose my product table, my product category table, and my product model table. I'll click on Add, and it will add all of these tables onto my database diagram. Once I have the tables in place, I'll close this. So here you can see that we have a primary key that is defined on the product table.
There is also a primary key on the product category table and a primary key on the product model table. There is also a relationship between the product table and the product category table, and a relationship between the product table and the product model table. Here are the columns that actually map the tables: we have the ProductCategoryID column in the product table.
This is actually being mapped onto the ProductCategoryID column that we have in the product category table. And then we have the ProductModelID column, which is mapped onto the ProductModelID column in the product model table. When it comes to understanding the relationships between tables, this is going to be a very important concept that we will be learning in the SQL data warehouse. Based on the relationships and the columns, you can actually construct something known as dimension tables, which will be used in something known as the star schema; that's something we're going to look at in detail later on. At this point in time, this was just a quick review to keep in mind that you have your primary keys, and you have the foreign keys that are used to depict the relationships between the tables.
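To round off the review, here is a hedged sketch of how a primary key and a foreign key can be declared in T-SQL. The table and column names mirror the relationships discussed above, but the DDL itself is illustrative rather than the sample database's actual definition:

```sql
-- Parent table: each category is uniquely identified by its ID
CREATE TABLE ProductCategory
(
    ProductCategoryID INT PRIMARY KEY,
    Name              NVARCHAR(50) NOT NULL
);

-- Child table: every ProductCategoryID value must match a row
-- in the ProductCategory table
CREATE TABLE Product
(
    ProductID         INT PRIMARY KEY,
    Name              NVARCHAR(50) NOT NULL,
    ListPrice         MONEY,
    ProductCategoryID INT,
    CONSTRAINT FK_Product_ProductCategory
        FOREIGN KEY (ProductCategoryID)
        REFERENCES ProductCategory (ProductCategoryID)
);
```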
Prepared by top experts, the top IT trainers ensure that when it comes to your IT exam prep, you can count on ExamSnap's Data Engineering on Microsoft Azure certification video training course, which goes in line with the corresponding Microsoft DP-203 exam dumps, study guide, and practice test questions & answers.