Training Video Course

Professional Data Engineer: Professional Data Engineer on Google Cloud Platform

PDFs and exam guides are not so efficient, right? Prepare for your Google examination with our training course. The Professional Data Engineer course contains a complete batch of videos that will provide you with profound and thorough knowledge related to the Google certification exam. Pass the Google Professional Data Engineer test with flying colors.

Rating: 4.41
Students: 110
Duration: 03:47:10 h
Price: $16.49 $14.99

Curriculum for Professional Data Engineer Certification Video Course

1. You, This Course and Us (02:01)

1. Theory, Practice and Tests (10:26)
2. Lab: Setting Up A GCP Account (07:00)
3. Lab: Using The Cloud Shell (06:01)

1. Compute Options (09:16)
2. Google Compute Engine (GCE) (07:38)
3. Lab: Creating a VM Instance (05:59)
4. More GCE (08:12)
5. Lab: Editing a VM Instance (04:45)
6. Lab: Creating a VM Instance Using The Command Line (04:43)
7. Lab: Creating And Attaching A Persistent Disk (04:00)
8. Google Container Engine - Kubernetes (GKE) (10:33)
9. More GKE (09:54)
10. Lab: Creating A Kubernetes Cluster And Deploying A Wordpress Container (06:55)
11. App Engine (06:48)
12. Contrasting App Engine, Compute Engine and Container Engine (06:03)
13. Lab: Deploy And Run An App Engine App (07:29)

1. Storage Options (09:48)
2. Quick Take (13:41)
3. Cloud Storage (10:37)
4. Lab: Working With Cloud Storage Buckets (05:25)
5. Lab: Bucket And Object Permissions (03:52)
6. Lab: Life cycle Management On Buckets (03:12)
7. Lab: Running A Program On a VM Instance And Storing Results on Cloud Storage (07:09)
8. Transfer Service (05:07)
9. Lab: Migrating Data Using The Transfer Service (05:32)
10. Lab: Cloud Storage ACLs and API access with Service Account (07:50)
11. Lab: Cloud Storage Customer-Supplied Encryption Keys and Life-Cycle Management (09:28)
12. Lab: Cloud Storage Versioning, Directory Sync (08:42)

1. Cloud SQL (07:40)
2. Lab: Creating A Cloud SQL Instance (07:55)
3. Lab: Running Commands On Cloud SQL Instance (06:31)
4. Lab: Bulk Loading Data Into Cloud SQL Tables (09:09)
5. Cloud Spanner (07:25)
6. More Cloud Spanner (09:18)
7. Lab: Working With Cloud Spanner (06:49)

1. BigTable Intro (07:57)
2. Columnar Store (08:12)
3. Denormalised (09:02)
4. Column Families (08:10)
5. BigTable Performance (13:19)
6. Lab: BigTable demo (07:39)

1. Datastore (14:10)
2. Lab: Datastore demo (06:42)

1. BigQuery Intro (11:03)
2. BigQuery Advanced (09:59)
3. Lab: Loading CSV Data Into Big Query (09:04)
4. Lab: Running Queries On Big Query (05:26)
5. Lab: Loading JSON Data With Nested Tables (07:28)
6. Lab: Public Datasets In Big Query (08:16)
7. Lab: Using Big Query Via The Command Line (07:45)
8. Lab: Aggregations And Conditionals In Aggregations (09:51)
9. Lab: Subqueries And Joins (05:44)
10. Lab: Regular Expressions In Legacy SQL (05:36)
11. Lab: Using The With Statement For SubQueries (10:45)

1. Data Flow Intro (11:04)
2. Apache Beam (03:42)
3. Lab: Running A Python Data flow Program (12:56)
4. Lab: Running A Java Data flow Program (13:42)
5. Lab: Implementing Word Count In Dataflow Java (11:17)
6. Lab: Executing The Word Count Dataflow (04:37)
7. Lab: Executing MapReduce In Dataflow In Python (09:50)
8. Lab: Executing MapReduce In Dataflow In Java (06:08)
9. Lab: Dataflow With Big Query As Source And Side Inputs (15:50)
10. Lab: Dataflow With Big Query As Source And Side Inputs 2 (06:28)

1. Data Proc (08:28)
2. Lab: Creating And Managing A Dataproc Cluster (08:11)
3. Lab: Creating A Firewall Rule To Access Dataproc (08:25)
4. Lab: Running A PySpark Job On Dataproc (07:39)
5. Lab: Running The PySpark REPL Shell And Pig Scripts On Dataproc (08:44)
6. Lab: Submitting A Spark Jar To Dataproc (02:10)
7. Lab: Working With Dataproc Using The GCloud CLI (08:19)

1. Pub Sub (08:23)
2. Lab: Working With Pubsub On The Command Line (05:35)
3. Lab: Working With PubSub Using The Web Console (04:40)
4. Lab: Setting Up A Pubsub Publisher Using The Python Library (05:52)
5. Lab: Setting Up A Pubsub Subscriber Using The Python Library (04:08)
6. Lab: Publishing Streaming Data Into Pubsub (08:18)
7. Lab: Reading Streaming Data From PubSub And Writing To BigQuery (10:14)
8. Lab: Executing A Pipeline To Read Streaming Data And Write To BigQuery (05:54)
9. Lab: Pubsub Source BigQuery Sink (10:20)

1. Data Lab (03:00)
2. Lab: Creating And Working On A Datalab Instance (04:01)
3. Lab: Importing And Exporting Data Using Datalab (12:14)
4. Lab: Using The Charting API In Datalab (06:43)

1. Introducing Machine Learning (08:04)
2. Representation Learning (10:27)
3. NN Introduced (07:35)
4. Introducing TF (07:16)
5. Lab: Simple Math Operations (08:46)
6. Computation Graph (10:17)
7. Tensors (09:02)
8. Lab: Tensors (05:03)
9. Linear Regression Intro (09:57)
10. Placeholders and Variables (08:44)
11. Lab: Placeholders (06:36)
12. Lab: Variables (07:49)
13. Lab: Linear Regression with Made-up Data (04:52)
14. Image Processing (08:05)
15. Images As Tensors (08:16)
16. Lab: Reading and Working with Images (08:06)
17. Lab: Image Transformations (06:37)
18. Introducing MNIST (04:13)
19. K-Nearest Neighbors (07:42)
20. One-hot Notation and L1 Distance (07:31)
21. Steps in the K-Nearest-Neighbors Implementation (09:32)
22. Lab: K-Nearest-Neighbors (14:14)
23. Learning Algorithm (10:58)
24. Individual Neuron (09:52)
25. Learning Regression (07:51)
26. Learning XOR (10:27)
27. XOR Trained (11:11)

1. Lab: Access Data from Yahoo Finance (02:49)
2. Non TensorFlow Regression (05:53)
3. Lab: Linear Regression - Setting Up a Baseline (11:19)
4. Gradient Descent (09:56)
5. Lab: Linear Regression (14:42)
6. Lab: Multiple Regression in TensorFlow (09:15)
7. Logistic Regression Introduced (10:16)
8. Linear Classification (05:25)
9. Lab: Logistic Regression - Setting Up a Baseline (07:33)
10. Logit (08:33)
11. Softmax (11:55)
12. Argmax (12:13)
13. Lab: Logistic Regression (16:56)
14. Estimators (04:10)
15. Lab: Linear Regression using Estimators (07:49)
16. Lab: Logistic Regression using Estimators (04:54)

1. Lab: Taxicab Prediction - Setting up the dataset (14:38)
2. Lab: Taxicab Prediction - Training and Running the model (11:22)
3. Lab: The Vision, Translate, NLP and Speech API (10:54)
4. Lab: The Vision API for Label and Landmark Detection (07:00)

1. Live Migration (10:17)
2. Machine Types and Billing (09:21)
3. Sustained Use and Committed Use Discounts (07:03)
4. Rightsizing Recommendations (02:22)
5. RAM Disk (02:07)
6. Images (07:45)
7. Startup Scripts And Baked Images (07:31)

1. VPCs And Subnets (11:14)
2. Global VPCs, Regional Subnets (11:19)
3. IP Addresses (11:39)
4. Lab: Working with Static IP Addresses (05:46)
5. Routes (07:36)
6. Firewall Rules (15:33)
7. Lab: Working with Firewalls (07:05)
8. Lab: Working with Auto Mode and Custom Mode Networks (19:32)
9. Lab: Bastion Host (07:10)
10. Cloud VPN (07:27)
11. Lab: Working with Cloud VPN (11:11)
12. Cloud Router (10:31)
13. Lab: Using Cloud Routers for Dynamic Routing (14:07)
14. Dedicated Interconnect Direct and Carrier Peering (08:10)
15. Shared VPCs (10:11)
16. Lab: Shared VPCs (06:17)
17. VPC Network Peering (10:10)
18. Lab: VPC Peering (07:17)
19. Cloud DNS And Legacy Networks (05:19)

1. Managed and Unmanaged Instance Groups (10:53)
2. Types of Load Balancing (05:46)
3. Overview of HTTP(S) Load Balancing (09:20)
4. Forwarding Rules Target Proxy and Url Maps (08:31)
5. Backend Service and Backends (09:28)
6. Load Distribution and Firewall Rules (04:28)
7. Lab: HTTP(S) Load Balancing (11:21)
8. Lab: Content Based Load Balancing (07:06)
9. SSL Proxy and TCP Proxy Load Balancing (05:06)
10. Lab: SSL Proxy Load Balancing (07:49)
11. Network Load Balancing (05:08)
12. Internal Load Balancing (07:16)
13. Autoscalers (11:52)
14. Lab: Autoscaling with Managed Instance Groups (12:22)

1. StackDriver (12:08)
2. StackDriver Logging (07:39)
3. Lab: Stackdriver Resource Monitoring (08:12)
4. Lab: Stackdriver Error Reporting and Debugging (05:52)
5. Cloud Deployment Manager (06:05)
6. Lab: Using Deployment Manager (05:10)
7. Lab: Deployment Manager and Stackdriver (08:27)
8. Cloud Endpoints (03:48)
9. Cloud IAM: User accounts, Service accounts, API Credentials (08:53)
10. Cloud IAM: Roles, Identity-Aware Proxy, Best Practices (09:31)
11. Lab: Cloud IAM (11:57)
12. Data Protection (12:02)

1. Introducing the Hadoop Ecosystem (01:34)
2. Hadoop (09:43)
3. HDFS (10:55)
4. MapReduce (10:34)
5. Yarn (05:29)
6. Hive (07:19)
7. Hive vs. RDBMS (07:10)
8. HQL vs. SQL (07:36)
9. OLAP in Hive (07:34)
10. Windowing Hive (08:22)
11. Pig (08:04)
12. More Pig (06:38)
13. Spark (08:54)
14. More Spark (11:45)
15. Streams Intro (07:44)
16. Microbatches (05:40)
17. Window Types (05:46)

Google Professional Data Engineer Exam Dumps, Practice Test Questions

100% Latest & Updated Google Professional Data Engineer Practice Test Questions, Exam Dumps & Verified Answers!
30 Days Free Updates, Instant Download!

Google Professional Data Engineer Premium Bundle
$69.97
$49.99

  • Premium File: 311 Questions & Answers. Last update: Nov 14, 2024
  • Training Course: 201 Video Lectures
  • Study Guide: 543 Pages
  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates

Free Professional Data Engineer Exam Questions & Professional Data Engineer Dumps

File Name (Size, Votes)
  • google.actualtests.professional data engineer.v2024-10-12.by.benjamin.109q.vce (1.37 MB, 1 vote)
  • google.certkiller.professional data engineer.v2021-11-19.by.theodore.99q.vce (281.81 KB, 1 vote)
  • google.testking.professional data engineer.v2021-08-01.by.arabella.93q.vce (406.26 KB, 1 vote)
  • google.certkey.professional data engineer.v2021-04-30.by.adrian.103q.vce (398.73 KB, 2 votes)

Google Professional Data Engineer Training Course

Want verified and proven knowledge for Professional Data Engineer on Google Cloud Platform? It's easy when you have ExamSnap's Professional Data Engineer on Google Cloud Platform certification video training course by your side, which, along with our Google Professional Data Engineer Exam Dumps & Practice Test questions, provides a complete solution to pass your exam.

Storage

10. Lab: Cloud Storage ACLs and API access with Service Account

In this demo for Cloud Storage, we will be creating some Cloud Storage buckets and then playing around with some of the available features, including access control lists, encryption, lifecycle management, versioning, directory synchronization, and cross-project sharing of buckets. To start with, though, let us go ahead and create a service account which we will use to access the bucket.

So we navigate to IAM & Admin and then Service Accounts, and click to create a new service account. We name this one storecore, assign it the role of Editor, which means that it will also have the ability to edit buckets, and furnish a new private key in the form of a .json file. And once we hit Create, it will download our JSON file. So now that we have our service account along with its private key in a JSON file, let us go ahead and create a Cloud Storage bucket. We navigate in the menu to Storage and then Browser, and here we create a new bucket. Since the name of each bucket needs to be unique globally, choose something that is available. For us, loonycon-bucket-4 is available. We then hit Create, leaving all the default settings in place. And so we now have a bucket ready. The next step for us is to provision a VM instance, which we will use to access this bucket. For that, we navigate to Compute Engine and VM instances and create a new VM instance, which we also call storecore.
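For reference, the same resources can also be created from the command line. This is only a rough sketch of the console steps described in this and the next paragraph; the service account name, project ID, bucket name, and zone below are the demo's example values (or assumed ones), so adjust them to whatever is available to you.

    # Service account with an Editor role, plus a downloaded JSON key (example names)
    gcloud iam service-accounts create storecore --display-name "storecore"
    gcloud projects add-iam-policy-binding loonycon-project-06 \
        --member serviceAccount:storecore@loonycon-project-06.iam.gserviceaccount.com \
        --role roles/editor
    gcloud iam service-accounts keys create credentials.json \
        --iam-account storecore@loonycon-project-06.iam.gserviceaccount.com

    # A globally unique bucket, and a VM from which we will access it
    gsutil mb gs://loonycon-bucket-4/
    gcloud compute instances create storecore --zone us-central1-c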

We create it in the US Central region, leave all the other settings as default, and hit Create. And now, with our virtual machine provisioned, we SSH into it. Once the terminal is up, we copy over the contents of the JSON file containing our service account private key to this VM instance. For that, let us first create a credentials.json file over here; you can use any text editor for it. We then go back to our own machine, copy the contents of the JSON key file, and paste them into the file on our VM instance. And now that we have this file in place, let's use it to interact with our GCP project from the command line. We first run the gcloud auth activate-service-account command to allow our VM to interact with the Google Cloud API using the service account credentials. Once we have executed this command, the next step for us is to reset the local profile on this instance and initialise the API, for which we run gcloud init. When we run that command, we first choose option one to re-initialize the current configuration rather than create a new one. In the second set of options, we choose to use the new service account we created in order to interact with the API.
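On the VM, the two commands being described look roughly like this (credentials.json is the key file we just pasted in):

    # Authorise this VM to call Google Cloud APIs as the service account
    gcloud auth activate-service-account --key-file credentials.json

    # Reset the local profile; gcloud then prompts for configuration, account, project and zone
    gcloud init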

So, choose option two. The project to use should just be the one where we have provisioned our service account, bucket, and VM instance. In my case, it is Loonycon Project 06; at the next prompt, just hit yes to confirm. And when prompted for the zone, I'm just going to select us-central1-c, which is the location of this VM instance. Now that we have configured this instance to interact with our project from the command line using our service account, let us go ahead and get some files which we will use to upload into our cloud storage bucket. Or rather, let us just get one file and make copies of it. The file I'm downloading is a Hadoop setup file (setup.html), and in order to have three files to upload to our bucket, let us just cheat and create two copies of the same file. We create the first copy, and we now have a second copy as well. So, with three files in place, let us copy one of the files into our cloud storage bucket. This is the command to execute: we just grab our bucket name, and once we have that, we upload the setup.html file. Okay, so that was successful. With the file uploaded into our cloud storage bucket, let us take a look at its default ACL, or access control list.

We do this by running the gsutil acl get command, and once the ACL is downloaded, taking a look at it, we see that there are a number of permissions assigned over here. So let us try to constrain it by setting this file to be private. For that, we just run gsutil acl set private, and once that command has been executed, let us get the new ACL for it. And we see here that there is just one entry: our service account is granted the owner role on this file. Next, let us loosen up the permissions on this file and grant read access to pretty much everyone. For that, we run the gsutil acl ch command and specify that all users should have read access to this file. Once the command has been executed, we download the new ACL and take a look. And now we see that there is one additional entry in the ACL which grants all users read access. Now let us navigate to the console and take a quick look at the file from over there. We go into our bucket (we may need to refresh) and we see that the setup.html file is over here, and it also has a public link, because we have just set all users to have read access to it. Let us now head back to the SSH terminal of our instance, where we will delete the setup.html file from our local storage. So far we have uploaded a file and modified its permissions, but let us also test downloading a file from our bucket. So we get rid of the local file and then try to download the file from the bucket using the gsutil cp command, and we see that the download is successful.
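Putting the ACL-related steps of this lab together, the commands look roughly as follows (bucket and file names are the demo's examples):

    gsutil cp setup.html gs://loonycon-bucket-4/                   # upload the file
    gsutil acl get gs://loonycon-bucket-4/setup.html               # inspect the default ACL
    gsutil acl set private gs://loonycon-bucket-4/setup.html       # restrict to owner-only access
    gsutil acl ch -u AllUsers:R gs://loonycon-bucket-4/setup.html  # add public read access
    rm setup.html                                                  # remove the local copy ...
    gsutil cp gs://loonycon-bucket-4/setup.html .                  # ... and download it back from the bucket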

11. Lab: Cloud Storage Customer-Supplied Encryption Keys and Life-Cycle Management

One thing we have not explicitly seen so far is that Google Cloud Storage always encrypts the file on the server side before it is written to disk. This is so that if someone malicious were to gain access to the disk, they would still not be able to read the contents of the file. However, GCP also provides the option of the customer supplying an encryption key which will be used for server-side encryption. So let us see how that works. The first thing to do is to generate an encryption key, and for that, let us just use a Python one-liner as an example; we will generate what is called a customer-supplied encryption key, or CSEK. After running the command, we get an encryption key, and in order to use it for Google Cloud Storage, we are going to edit a config file called .boto. We should find this in our workspace, and once we have located it, we edit it in our text editor.
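A one-liner along these lines produces a valid CSEK, which is simply 32 random bytes encoded in base64 (the exact Python call used in the video may differ slightly):

    # Print a base64-encoded 256-bit key to use as the customer-supplied encryption key
    python -c "import base64, os; print(base64.b64encode(os.urandom(32)).decode())"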

We find that there is a field called encryption_key, and over here, let us just append the encryption key we just created. With that added, we save the file. We can now upload a new file, which will be encrypted using our new encryption key. So, just as before, we use the gsutil cp command, but this time we upload the second file, setup2.html. Let's go ahead and upload setup3.html as well. And now, with two new files in our bucket, we navigate to the console and see what our bucket looks like. Once we go to our bucket, we see that the two new files appear over here, and it also states that these files have been encrypted with a customer-supplied key. Let us now try to download all of these files onto our instance and see if we are able to view them. First, we just delete all our local copies of the files.
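Sketched as commands, the upload-and-redownload sequence described here looks roughly like this; the encryption_key line lives in the [GSUtil] section of ~/.boto, and the names are the demo's examples:

    # In ~/.boto, under [GSUtil]:
    #   encryption_key = <base64 CSEK generated above>

    gsutil cp setup2.html gs://loonycon-bucket-4/   # uploaded with the customer-supplied key
    gsutil cp setup3.html gs://loonycon-bucket-4/
    rm setup*.html                                  # drop the local copies ...
    gsutil cp gs://loonycon-bucket-4/setup*.html .  # ... and download all three back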

So all three setup HTML files are removed. And now we download the files from the bucket; we just download all the setup HTML files. Now let us see if we're able to read the contents. The first HTML file was encrypted automatically by GCP for us, so this is not a key which we supplied, and we're able to view it. Let us take a look at the other files. So, setup2.html, and we're able to read this one as well. And finally, we move over to the last setup file, and we can see that all of these files are readable, even though the first one is encrypted with a Google-supplied key and the last two are encrypted with our own customer-supplied key. For our next step, let us perform a key rotation and see what a possible side effect is if certain steps are not followed when performing this rotation. So first, we edit the .boto file again and comment out the encryption key, which will no longer be used for encryption. We will soon be replacing it with a new one, but let us reuse this old encryption key as a decryption key. So we copy over this value, uncomment the decryption_key1 field, and use the old key there for decryption. Now we save the file, and next let us go and create a new encryption key.

So we use Python once again, and when the key is generated, we copy it over and add it to our .boto file. For the encryption_key property, we just uncomment the line first, then remove the old encryption key value and add the new one. And just to note, the old encryption key is now the decryption key. What we will do next is rewrite the key for our setup2.html file in the cloud storage bucket. For this we use the gsutil rewrite command. What this essentially does is decrypt the file using the decryption key specified in our .boto config, then encrypt the file using the new encryption key value specified in the same config file. So we have now rotated our encryption keys and applied that to the setup2.html file in our bucket. Let us now go into our .boto config and comment out the decryption key property. Once that is complete, we try to download the setup2.html and setup3.html files and see how that works. We have downloaded the setup2.html file, so let us try to do the same for setup3.html. And this download fails, because we did not perform a rewrite of setup3.html using our new encryption key, and we also commented out the decryption key from the config file, which could have been used to decrypt this file.
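In outline, the rotation is a .boto edit followed by a gsutil rewrite; the key values below are placeholders:

    # In ~/.boto during the rotation:
    #   decryption_key1 = <old CSEK>    (used to decrypt existing objects)
    #   encryption_key  = <new CSEK>    (used to encrypt from now on)

    # Re-encrypt setup2.html with the new key; setup3.html is deliberately skipped here
    gsutil rewrite -k gs://loonycon-bucket-4/setup2.html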

As a result, whenever a key rotation is performed, it is best to rewrite the files with the new keys. Now that we have some familiarity with encryption for cloud storage buckets, let us move along to lifecycle management. First, let's take a look at the lifecycle policy on the bucket we just created, and we see that, by default, there is no lifecycle policy set. So let us go ahead and add one. For that, we create a JSON file, and within the file we specify a rule. What this says is that the action to be performed is to delete files in the bucket on the condition that their age has exceeded 31 days. Once we have saved the file, we apply this rule to our bucket by executing the gsutil lifecycle set command. And once that has been executed, let us just verify that the policy has been applied correctly: we run the gsutil lifecycle get command and see that our JSON rule has indeed been loaded. So the lifecycle policy has been set for our bucket. From lifecycle management, let us move along to versioning. We run the gsutil versioning get command and see that, by default, versioning is disabled. So let us activate it for our bucket.
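The lifecycle rule and the first versioning check being described look roughly like this (the 31-day delete rule matches the one in the demo; the bucket name is the demo's example):

    # lifecycle.json:
    #   {"rule": [{"action": {"type": "Delete"}, "condition": {"age": 31}}]}

    gsutil lifecycle set lifecycle.json gs://loonycon-bucket-4
    gsutil lifecycle get gs://loonycon-bucket-4     # confirm the rule was applied

    gsutil versioning get gs://loonycon-bucket-4    # reports "Suspended" until versioning is turned on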

So we do a gsutil versioning set on, and we see that versioning has now been enabled for the bucket. We run versioning get again for a quick confirmation that versioning is indeed enabled. Let us now test versioning by creating a few copies of the setup.html file. For the original copy, we just take a look and see that the size is about 53.8 KB. We edit this file and remove a few lines; once that is complete, we save this version and upload it into our bucket. We expect this version to be a little smaller than the previous one. Now let us create a third version of the file. We do that by just editing setup.html locally again and removing a few more lines, so it's going to be even smaller. Once we have completed that, we save the file and upload a third version of it. Let us now take a look at the different versions of the file in our bucket. We just do a gsutil ls and see the three different versions listed here. Let us now grab the oldest version of the file, which is also the original copy. This is the one which appears right at the top of our list, and we download it as recover.txt. Now let us compare the relative sizes of the original version and the latest version, which is on our file system. We can see that the latest version is a shade under 53 KB, while the original version is about 53.8 KB. So, using versioning, we have been able to recover the original file.
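The versioning workflow just described can be reproduced with commands along these lines; GENERATION is a placeholder for whichever archived generation number you want back:

    gsutil versioning set on gs://loonycon-bucket-4
    gsutil cp setup.html gs://loonycon-bucket-4/     # repeat after each local edit to add versions
    gsutil ls -a gs://loonycon-bucket-4/setup.html   # -a also lists archived (older) generations

    # Recover the oldest generation into recover.txt and compare sizes
    gsutil cp gs://loonycon-bucket-4/setup.html#GENERATION recover.txt
    ls -al setup.html recover.txt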

12. Lab: Cloud Storage Versioning, Directory Sync

Moving along now from versioning, let us try to set up synchronisation between a local folder in our file system and our remote cloud storage bucket. To do this, we create two levels of directories; we call them the first level and the second level. Let us copy the setup.html file into our first-level directory, and copy it into the second-level directory as well. Now we set up the syncing of our local file system directory with our cloud storage bucket by using the gsutil rsync command. Over here, we're just going to sync our entire home directory with our bucket.
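The directory setup and the sync, sketched as commands (the directory names are assumed; the bucket name is the demo's example):

    mkdir -p firstlevel/secondlevel
    cp setup.html firstlevel/
    cp setup.html firstlevel/secondlevel/

    # Recursively sync the home directory up to the bucket
    gsutil rsync -r ~ gs://loonycon-bucket-4

    # Later, list the synced directory recursively from the command line
    gsutil ls -r gs://loonycon-bucket-4/firstlevel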

So once we execute the command, we see that a whole bunch of files have been uploaded into the bucket. And once this is complete, let us navigate to the console and view the contents of our bucket from over there. Navigating to our bucket, we see that all our files are indeed present here, including all our config files. Specifically, we see that the first-level directory is also visible. Let us navigate into that, and we see, as expected, that setup.html as well as the second-level directory are present here. And inside the second-level directory, we see setup.html, just as we expected. As a quick verification, let us go back to the command line and see if we can list the contents of our remote bucket from over here, specifically the directory which we just created. We recursively list the contents of the first-level directory and see that the files and directory structure can be viewed from here. Moving on now to our final exercise in this lab on cloud storage: this is going to involve the sharing of buckets across projects. To see what that means, let us go ahead and create a new bucket in a different project.

So we navigate away from Project 06, in my case, and I'm going to select another project within my organisation, called Loonycon Project 07. Once inside, I navigate to the Storage browser and create a new bucket over here. Just to reiterate, the name of the bucket must be unique globally, so you may not always get the bucket name you want; in my case, I'm getting loony-bucket. Leaving all the other values as default, I create the bucket. And once it is ready, let us upload a file. In my case, I'm just going to pick a file from my file system, and once this file has been uploaded, let me just rename it to something a little shorter: I'm going to call it sample.png. And now we have a bucket with a file in it. The next step is to create a service account for this project. So we navigate to IAM and Service Accounts, and when we create this new service account, let us call it cross-project-storage. The role we will assign is that of a Storage Object Viewer. We simply navigate through the menu and furnish a new private key in JSON format, after which the key file is downloaded to our file system.

And now we are set up with a service account along with a new private key for it. For our next step, let's navigate back to the original project. We just select the project from the menu and, once inside, we provision a second VM instance. Going to the VM instances page, we will call this one cross-project. It is going to be in the Europe West region, and the machine type is going to be a micro instance. With that ready, let us just hit Create, and we now have two instances in this project. Let us SSH into the newly created instance once our terminal is up and try to list the contents of our newly created bucket from the command line. The bucket is in a different project, you will recall, and we also haven't done any configuration on this instance to use the Google Cloud API. Running the gsutil ls command, we see that we get an access denied message. That is because this instance is using the default service account linked with this project, and it does not have the permissions to access the bucket created in the other project. However, the new service account that we created does have the required permissions.

So let us configure this instance to use that service account. We create a new credentials.json file and copy over the contents from the private key file of our new service account. Once that is done, we authorise this VM to use the service account credentials when accessing the Google Cloud API: we run the gcloud auth activate-service-account command and then the gcloud init command, which will reset the local profile and initialise the Google Cloud API from here. When we run the command, we first select option one to re-initialize the configuration. We then select the service account which we would like to use, so we hit option two. If asked whether to enable the API, we just say yes. And finally, when asked to specify the project which we would like to access, we enter the ID of the second project, since that is where our bucket is. In my case, it's Loonycon Project 07.

This instance, which is in our original project, has now been set up to access the second project using the service account credentials from there. So let us just check whether we're able to list the contents of our new bucket from here. We once again run the command and see that the ls was successful this time. Now let us try to write something into that bucket. We have one file here, which is our credentials.json file, so let us just use that; and we see that we do not have permission to write to this bucket. If you remember, this is because the new service account that we created had only the Storage Object Viewer role. So let us go ahead and modify that. We navigate to the IAM page for the second project, and for our service account over here, we see that we have just the Storage Object Viewer role attached to it. Let us go ahead and add another one; this time, let us add the Storage Object Admin role as well. Once that is done, we see the service account has multiple roles assigned to it. As a final test, let us head back to our instance and see if the upload to the bucket works now. We just run the same command, and this time the upload is successful. So we have been able to configure this instance to access a bucket created in a different project; in other words, we have enabled cross-project sharing.
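The cross-project pieces of this exercise, sketched with the demo's example names (the role names are roles/storage.objectViewer and roles/storage.objectAdmin):

    # On the cross-project VM, switch to the new service account and point gcloud at the second project
    gcloud auth activate-service-account --key-file credentials.json
    gcloud init

    gsutil ls gs://loony-bucket                      # works with Storage Object Viewer
    gsutil cp credentials.json gs://loony-bucket     # fails until write access is granted

    # Grant write access in the second project, then retry the copy
    gcloud projects add-iam-policy-binding loonycon-project-07 \
        --member serviceAccount:cross-project-storage@loonycon-project-07.iam.gserviceaccount.com \
        --role roles/storage.objectAdmin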

Cloud SQL, Cloud Spanner ~ OLTP ~ RDBMS

1. Cloud SQL

Here is a question that I'd like you to try to answer, or at least keep in the back of your mind, as we discuss Cloud SQL in the remainder of this video. The question is: does it make sense to use the Cloud SQL Proxy to access Cloud SQL? In other words, "using a cloud proxy to access a Cloud SQL instance is a good idea": is that statement true or false? The next technology that we are going to discuss is Cloud SQL. This is a relational database, as its name would suggest. Because relational databases are used for transaction processing, that is, OLTP applications, they need to make very strong commitments regarding transactions. So the ACID properties of atomicity, consistency, isolation, and durability must be enforced by the RDBMS.

Relational databases like Cloud SQL and Cloud Spanner need to work with extremely structured data and enforce constraints on that data. As a result, they are unsuitable for OLAP, analytics, or BI (business intelligence) applications. Those applications tend to deal with much larger datasets, particularly in these days of big data, and such large datasets cannot be efficiently handled by relational databases like Cloud SQL. The transaction support provided by relational databases is also overkill for OLAP applications, which typically do not require very strict write consistency. We won't spend a whole lot of time talking about relational data and the relational data model that underlies an RDBMS; let's just very quickly discuss the basic idea.

These databases represent data as tables, or relations. These tables tend to have rows and columns; the number and type of the columns are strictly defined and are known as the schema. If you find yourself wondering what exactly a schema is or what a primary key is, my suggestion is that you take some time off to very quickly brush up on those fundamentals of databases before picking up here. Once again, the relational data model works extremely well when we have data that is very well structured but does not exhibit particularly strong hierarchical relationships; this is data without a lot of missing values. As we shall see, when there are a lot of nulls or missing values, columnar databases tend to work a bit better. Let's quickly review the differences between Cloud SQL and Cloud Spanner, which we have discussed previously. Cloud Spanner is Google proprietary, and it's also more advanced than Cloud SQL. We shall discuss Cloud Spanner in a lot of detail, but for now, do keep in mind that, unlike Cloud SQL, Cloud Spanner offers horizontal scaling, which means that one can increase the number of instances of one's database application and thereby improve its performance.

Cloud SQL supports two open-source SQL implementations: MySQL, which is the familiar, fast, and efficient implementation, as well as a beta version of PostgreSQL (still in beta as of July 2017). The very broad-brush generalisation is that PostgreSQL functions better when you have very complex SQL queries. Because the PostgreSQL version is in beta, let's mostly restrict our conversation to MySQL; most of this will also give you an idea of how Cloud SQL works in general. Cloud SQL is not serverless: we need to explicitly instantiate a Cloud SQL instance, which is typically the case with any technology that provides transaction support. While creating a Cloud SQL instance, we need to specify the region. We also need to pick between first- and second-generation instances. Second-generation instances have a slew of new features, including a high-availability configuration, proxy support, and the ability to remain operational during maintenance. For all of these reasons, we should prefer second generation over first generation, which is pretty intuitive.
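As a point of reference, creating a second-generation MySQL instance from the command line looks roughly like this; the instance name, tier, and region below are placeholders for illustration, not values from the video:

    gcloud sql instances create my-mysql-instance \
        --database-version=MYSQL_5_7 \
        --tier=db-n1-standard-1 \
        --region=us-central1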

Let's talk a little bit about the high-availability configuration, because this is conceptually interesting. This is a feature of second-generation MySQL instances. Here, an instance has a failover replica, and that failover replica must be in a different zone than the original instance, which is called the master. By the way, this gives us a little bit of a hint about how Cloud SQL works under the hood. Any changes made to data on the master, whether to user or system tables, are reflected in the failover replica via a process known as semi-synchronous replication.

Later on in this course, when we discuss networking in detail, we will be quite interested in proxies of all varieties. This is important if you are looking to pass the Cloud Architect certification. So let's also talk a bit about the Cloud SQL Proxy. In general, proxies are a security feature because they help shield the real IP addresses of your instances. For instance, you can use the Cloud SQL Proxy in order to give secure access to your instances without having to whitelist the individual IP addresses of the clients that are going to connect to them. And this proxy is a secure one because it sets up a secure, encrypted connection and uses SSL certificates to verify the identities of the client and the server. This also makes for easier connection management. By the way, security and easier connection management are general features of proxies. The proxy also eliminates the need for static IP addresses on your server.

Let's take a very quick look at how the Cloud SQL Proxy works. Here is a diagram taken from the GCP docs. It illustrates how a client application running on a client machine can communicate with Cloud SQL via a secure TCP tunnel. The Cloud SQL Proxy works by having a local client, called the proxy client, running on the client machine. The client application only communicates with the proxy; it does not talk directly to Cloud SQL, and as a result there is a great deal of additional security. The client app uses something like ODBC or JDBC to connect to this proxy client, and the proxy, in turn, uses a secure TCP tunnel to communicate with Cloud SQL. Like all proxies, this requires a proxy server running at the other end with which the proxy client can communicate.

And when we fire up the proxy on the client side, we need to tell the proxy client where its corresponding proxy server is, i.e., to which Cloud SQL instances it should be connecting. We also need to tell it where to listen for data coming from the client application that is destined for Cloud SQL. This is important because that data will not be sent along the standard TCP port, which is one of the reasons why this setup is more secure. And lastly, the proxy client needs to be told where to find the credentials that will be used to authenticate the client application with Cloud SQL running in the cloud. Let's now come back to the question posed at the start of the video. The question was whether using the Cloud SQL Proxy to access Cloud SQL is a good idea, true or false? Well, this statement is true. It is a good idea to use the Cloud SQL Proxy to access Cloud SQL, and this has to do with security. Using the proxy allows your applications to connect to Cloud SQL via a secure tunnel. In the absence of the proxy, you would either need to whitelist some IP addresses, which is typically a bad idea from a security point of view, or you would need to set up SSL yourself, which would have its own administrative overhead. So using the Cloud SQL Proxy with Cloud SQL is a good idea.
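For illustration, a typical invocation of the classic cloud_sql_proxy binary supplies exactly those three pieces of information: the instance connection name, a local port to listen on, and a credentials file. The names below are placeholders:

    ./cloud_sql_proxy \
        -instances=my-project:us-central1:my-mysql-instance=tcp:3306 \
        -credential_file=credentials.json

    # The client application then connects to the proxy as if MySQL were local
    mysql --host 127.0.0.1 --port 3306 -u root -p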

Prepared by top experts and IT trainers, the ExamSnap Professional Data Engineer on Google Cloud Platform certification video training course is something you can count on for your IT exam prep; it goes in line with the corresponding Google Professional Data Engineer exam dumps, study guide, and practice test questions & answers.


Purchase Individually

  • Professional Data Engineer Premium File: 311 Q&A, $43.99 $39.99
  • Professional Data Engineer Training Course: 201 Lectures, $16.49 $14.99
  • Professional Data Engineer Study Guide: 543 Pages, $16.49 $14.99
