Organizations are still working out the details of getting to the cloud. With all of the hardware and servers running in datacenters and in colocation spaces, moving to the cloud still takes a bit of effort.
Architecting solutions in Azure is not just development or infrastructure management in the cloud. It's much more than that: the Azure resources an organization needs to operate will sometimes be centered in development and sometimes in infrastructure, and it's up to you to know enough about both areas to be effective.
This chapter helps you understand how you can bring your existing workloads to Azure by allowing the use of some familiar resources (IaaS Virtual Machines) and others that may be new (serverless computing) to your environment. In addition, the use of Multi-Factor Authentication (MFA) is covered here to ensure your cloud environment is as secure as possible. An Azure Solutions Architect may face all these situations in day-to-day work life and needs to be ready for each of them.
Skills covered in this chapter:
Because most organizations have been operating on infrastructure running in house, there is a significant opportunity to help them migrate these workloads to Azure, which may save costs and provide efficiencies that their own datacenters cannot. Or they might want to get out of the datacenter business entirely. How can you help your organization or customer move out of a datacenter and into the Azure cloud?
The recommended option for this is Azure Site Recovery (ASR), which offers different options depending on the type of workload you’re migrating (physical or virtual).
This skill covers how to:
Azure Site Recovery provides a way to bring your servers into Azure while allowing them to be failed back to your on-premises datacenter should the need arise as part of a business continuity and disaster recovery (BCDR) scenario. An increasingly common practice is to make the failover one-way and use ASR to move servers to Azure. Then you switch off their local counterparts, effectively migrating your environment to Azure.
Follow these steps to configure the Azure resources to migrate existing servers to Azure:
Note
Consider Creating the Azure Resources First
Creating the Azure resources first prepares the destination and ensures that nothing is missed. Because the process moves files into Azure, identifying the target resources up front can minimize issues when the transfer begins.
Name Choose a unique name for your Recovery Services vault.
Subscription Specify an active Azure subscription.
Resource Group Create a new or select an existing resource group for the Recovery Services vault.
Location Select the region to use for the Recovery Services vault.
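If you prefer scripting the setup, the same vault can be created with Azure PowerShell. The following is a minimal sketch using assumed names and an assumed region (the Az.RecoveryServices module provides the cmdlets):

# Create a resource group and a Recovery Services vault (names and region are examples)
New-AzResourceGroup -Name Az300-Migration-RG -Location eastus
New-AzRecoveryServicesVault -Name Az300MigrationVault -ResourceGroupName Az300-Migration-RG -Location eastus
# Set the vault context for subsequent Site Recovery cmdlets
$vault = Get-AzRecoveryServicesVault -Name Az300MigrationVault -ResourceGroupName Az300-Migration-RG
Set-AzRecoveryServicesVaultContext -Vault $vault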
Note
Feature Name Changes Happen at Cloud Speed, Too
Backup and Site Recovery (OMS) is the new name for the Recovery Services vault resource. As of this writing, the names have not been updated throughout the portal.
Once the Recovery Services vault is ready, open the overview page by clicking the resource within the resource group. This page provides some high-level information, including anything new related to the Recovery Services vault.
Use the following steps to get started with a site recovery (migration in this case):
Where are your machines located? On-premises.
Where do you want to replicate your machines to? To Azure.
Are your machines virtualized? Select the appropriate response:
Yes, with VMware.
Yes, with Hyper-V.
Other/Not virtualized.
Note
About Physical Servers
Migrating Physical Servers using P2V, which is covered later in this chapter, uses the Physical/Other option of the Azure Site Recovery configuration mentioned here. Aside from this step, the Azure configuration is the same as discussed here.
Note
About Hyper-V
When Hyper-V is selected, you see an additional question about the use of System Center as well.
Step 2 of infrastructure preparation is deployment planning, which helps to ensure that you have enough bandwidth to complete the transfer of virtualized workloads to Azure. This provides an estimate of the time needed to transfer the workloads to Azure based on the machines found in your environment.
Click the Download link for the deployment planner, located in the middle pane of the Deployment planning step, to download a zip file to get started.
This zip file includes a template that will help in collecting information about the virtualized environment as well as a command-line tool to scan the virtualized environment to determine a baseline for the migration. The tool requires network access to the Hyper-V or VMware environment (or direct access to the VM hosts where the VMs are running). The command-line tool provides a report about throughput available to help determine the time it would take to move the scanned resources to Azure.
Note
Ensure RDP is Enabled Before Migration
Ensuring the local system is configured to allow remote desktop connections before migrating it to Azure is worth the prerequisite check. If this isn't done before migration, there will be considerable work to do afterward, including standing up a jump box on the migrated VM's virtual network. It's likely that this will be configured already, but it's never a bad idea to double-check.
After the tool has been run, in the Azure portal, specify that the deployment planner has been completed and click OK.
Next the virtualization environment will be provided to Azure by adding the Hyper-V site and server(s).
Note
All Hypervisors Welcome
At the time of this writing, the lab used for the examples consists of Hyper-V infrastructure. The examples provided will use Hyper-V as the on-premises source, but ASR is compatible with VMware as well.
To add a Hyper-V server, download the Azure Site Recovery Provider and the vault registration key (see Figure 2-5), and install them on the Hyper-V server. The vault registration info is necessary because ASR needs to know which recovery vault the VMs belong to once they are ready to migrate to Azure.
Install the Site Recovery Provider on the virtualization host if you're using Hyper-V, as shown in Figure 2-6.
After installation and registration, it may take some time for Azure to find the server that has been registered with Site Recovery vault.
Proceed with infrastructure prep by completing the Target section of the wizard as shown in Figure 2-7.
Select the subscription and the deployment model used. (Generally, this will be Resource Manager.)
Note
Ensure that Storage and Networking are Available
A storage account and network are necessary within the specified subscription in the same Azure region as the Recovery Services vault. If this exists when you reach this step, you can select the resources. If the storage account and network don’t exist, you can create them at this step.
Click the Storage Account button at the top of the Target blade to add a storage account.
Provide the following storage account details:
Storage account name
Replication settings
Storage account type
When this storage account is created, it will be placed in the same region as the Recovery Services vault.
If a network in the same region as the vault isn't found, you can click the Add Network button at the top of the Target blade to create one. Much like storage, the network's region will match the vault; other settings, including the address range and name, are available for configuration.
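If the storage account and network don't already exist, they can also be created ahead of time with PowerShell. Here is a sketch with example names (the region must match the vault):

# Storage account used as the replication target (name must be globally unique)
New-AzStorageAccount -ResourceGroupName Az300-Migration-RG -Name az300migrationstore -Location eastus -SkuName Standard_LRS -Kind StorageV2
# Virtual network and subnet that failed-over VMs will connect to
$subnet = New-AzVirtualNetworkSubnetConfig -Name Servers -AddressPrefix 10.10.1.0/24
New-AzVirtualNetwork -Name Az300-Migration-VNet -ResourceGroupName Az300-Migration-RG -Location eastus -AddressPrefix 10.10.0.0/16 -Subnet $subnet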
The last requirement for preparing infrastructure is to configure a replication policy. Complete the following steps to create a replication policy:
Name The name of the replication policy.
Source Type This should be prepopulated based on previous
settings.
Target Type This should be prepopulated based on previous settings.
Copy Frequency Enter the replication frequency for subsequent copies to be captured.
Recovery Point Retention In Hours How much retention is needed for this server.
App Consistent Snapshot Frequency In Hours How often an app-consistent snapshot will be captured.
Initial Replication Start Time Enter a time for the initial replication to begin.
Associated Hyper-V Site Filled in based on previous settings.
After the completion of the on-premises settings, you return to the Site Recovery blade to continue configuration.
To enable replication, complete the following steps:
OS Type Whether the OS is Linux or Windows (available as default and per VM).
OS Disk Select the name of the OS Disk for the VM (available per
VM).
Disks to replicate Select the disks attached to the VM to replicate (available per VM).
With replication options configured, the last part of the configuration to complete is the recovery plan. To configure the recovery plan, use the following steps:
This overview screen shows the number of items in the recovery plan in both the source and target, as shown in Figure 2-8.
To test the configuration, click the Test Failover button at the top of the Site Recovery Plan blade and complete the following steps:
Once the failover completes, the VM should appear in the resource group that was specified for post failover use as shown in Figure 2-9.
Once the test failover has completed, your VM is running in Azure, and you can see that things are as expected. When you're happy with the result of the running VM, you can complete a cleanup of the test, which will delete any resources created as part of the test failover. Do this by selecting the item(s) in the replicated items list and choosing the option to clean up the test failover. When you're ready to migrate, perform an actual failover by completing the following steps:
Following the failover of the VM to Azure, cleanup of the on-premises environment happens as part of the completion of the migration to Azure. This ensures that the restore points for the migrated VM are cleaned up and that the source machine can be removed because it’ll be unprotected after these tasks have been completed.
Once the system has landed, you may need to tweak settings to optimize performance and ensure that remote management is configured, such as switching to managed disks (the disks used in failover are standard disks).
There may also be some networking considerations after migrating the VM. External connectivity may require network security groups to ensure that RDP or SSH is active to allow connections. Remember that any firewall rules that were configured on premises will not necessarily be completely configured post migration in Azure.
After verification that the migrated resource is operating as needed, the last step of the migration is to remove the on-premises resources. In terms of Azure, the resources are still in a failover state because the process was to fail them over with the intention of failing back. An Azure Site Recovery migration is really a one-way failover.
Need More Review?
Azure Migration Resources
Check out the following articles for additional material:
“Prepare Azure resources for disaster recovery of on-premises machines” https://docs.microsoft.com/en-us/azure/site-recovery/tutorial-prepare-azure
“Migrate on-premises machines to Azure” https://docs.microsoft.com/en-us/azure/site-recovery/migrate-tutorial-on-premises-azure
In the age of the cloud, even using servers is considered legacy technology in some instances because there are platform-based services that will run the code provided rather than deploying applications, functions, or other units of work to a server. The cloud provider—Azure, in this case—takes care of the workings under the hood, and the customer needs to worry only about the code to be executed.
There are more than a few resources in Azure that run without infrastructure—or serverless:
Azure Storage
Azure Functions
Azure Cosmos DB
Azure Active Directory
These are just a few of the services that are available for serverless compute. Serverless resources are the managed services of Azure. They’re not quite platform as a service (PaaS), but they’re not all software as a service (SaaS), either. They’re somewhere in between.
Serverless objects are the individual serverless resources used in an architecture. These are the building blocks of a solution, and several types may be created depending on the solution being presented.
Two of the most popular serverless technologies supported by Azure are logic apps and function apps. The details of configuring these are discussed in turn.
A logic app is a serverless component that handles business logic and integrations between components—much like Microsoft Flow, but with full customization and development available.
This skill covers how to:
To build a simple logic app that watches for files in a OneDrive folder and sends an email when they’re found, complete the following steps:
Name Provide a name for the logic app.
Subscription Choose the subscription where the resource should be created.
Resource Group Select Create or Use Existing to choose the resource group where the logic app should be created. If you select Use Existing, choose the appropriate resource group from the drop-down menu.
Location Select the region where the logic app should be created.
Log Analytics Specify if log analytics should be enabled or disabled for this resource.
Note
Log Analytics Workspace is Required
To enable the log analytics feature for a logic app, ensure that the log analytics workspace that will collect the information exists beforehand.
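A logic app resource can also be created from PowerShell when the workflow definition is available as JSON. This is a minimal sketch, assuming the Az.LogicApp module and a hypothetical definition file named onedrive-to-email.json:

# Create a logic app from an existing workflow definition file (file name is an example)
New-AzLogicApp -ResourceGroupName Az300-Serverless-RG -Name OneDriveWatcher -Location eastus -DefinitionFilePath .\onedrive-to-email.json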
Once a logic app resource exists, you can make it act on resources by starting from a predefined template, a custom template, or a blank app to which you add the steps the application should perform.
To add code to copy Azure storage blobs from one account to another, complete the following steps:
Note
Connect to OneDrive
A connection to OneDrive will be needed to use this template; clicking to connect a OneDrive account will prompt for login to the account.
Azure Functions allows the execution of code on demand, with no infrastructure to provision. Whereas logic apps provide integration between services, function apps run any piece of code on demand. How they’re triggered can be as versatile as the functions themselves.
As of this writing, Azure Functions support the following runtime environments:
.NET
JavaScript
Java
PowerShell (which is currently in preview)
To create a function app, complete the following steps:
App Name Enter the name of the function app.
Subscription Enter the subscription that will house the resource.
Resource Group Create or select the Resource Group that will contain this resource.
OS Select the operating system that the function will use (Windows or Linux).
Hosting Plan Select the pricing model used for the app: Consumption (pay as you go) or App Service (specifically sized app service).
Note
New App Service Plan if Needed
If you select the App Service hosting plan, a prompt to select/create it will be added.
Location Select the Azure region where the resource will be located.
Runtime Stack Select the runtime environment for the function app.
Storage Create or select the storage account that the function app will use.
Application Insights Create or select an Application Insights resource for tracking usage and other statistics about this function app.
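The same kind of function app can be created with the Az.Functions PowerShell module. The sketch below uses example names and the consumption (pay-as-you-go) plan; adjust the runtime and versions to match your needs:

# Create a function app on the consumption plan with the PowerShell runtime
New-AzFunctionApp -Name az300-demo-functions -ResourceGroupName Az300-Serverless-RG -StorageAccountName az300funcstorage -Location eastus -Runtime PowerShell -OSType Windows -FunctionsVersion 3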
In the resource group where you created the function app, select the function to view the settings and management options for it.
The Overview blade for the function app provides the URL, app service, and subscription information along with the status of the function (see Figure 2-15).
Function apps are built to listen for events that kick off code execution. Some of the events that functions listen for are
HTTP Trigger
Timer Trigger
Azure Queue Storage
Azure Service Bus Queue trigger
Azure Service Bus Topic trigger
Important
Multiple Types of Authentication Possible
When configuring a function for the HTTP Trigger, you need to choose the Authorization level to determine whether an API key will be needed to allow execution. If another Azure service trigger is used, you may need an extension to allow the function to communicate with other Azure resources.
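For reference, this is roughly what the body (run.ps1) of an HTTP-triggered PowerShell function looks like, based on the standard template; the binding name Response and the greeting text are placeholders:

# run.ps1 for an HTTP-triggered PowerShell function
using namespace System.Net
param($Request, $TriggerMetadata)
$name = $Request.Query.Name
if (-not $name) { $name = $Request.Body.Name }
$body = "Hello, $name. This HTTP-triggered function executed successfully."
# Write the HTTP response to the output binding named Response
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = $body
})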
In addition to the Overview blade, there is a blade for platform features.
These are the configuration items for the App Service plan and other parts of Azure’s serverless configuration for this function. Here, you configure things like networking, SSL, scaling, and custom domains, as shown in Figure 2-16.
Within the App Settings blade for function apps is the Kudu console, listed as Advanced Tools (Kudu). This console operates much like being logged into the back end of the system or app. Because this is a serverless application, there is no back end to be managed; this tool is used for troubleshooting a function app that isn’t performing as needed. Figure 2-17 shows the Kudu back end.
Note
Azure has a Custom Console for Troubleshooting
You can access the Kudu console by inserting .scm. into the URL of the Azure function; for example, https://myfunction.azurewebsites.net becomes https://myfunction.scm.azurewebsites.net.
Need More Review?
Azure Functions Creation and Troubleshooting
Check out the articles at the following URLs for additional information:
“Using Kudu and deploying apps into Azure” https://blogs.msdn.microsoft.com/uk_faculty_connection/2017/0 kudu-and-deploying-apps-into-azure/
“Azure Functions Documentation” https://docs.microsoft.com/en-us/azure/azure-functions/
“Execute an Azure Function with triggers” https://docs.microsoft.com/en-us/learn/modules/execute-azure-function-with-triggers
Event Grid is an event-consumption service that relies on publish/subscription to pass information between services. Suppose I have an on-premises application that outputs log data and an Azure function that’s waiting to know what log data has been created by the on-premises application. The on-premises application would publish the log data to a topic in Azure Event Grid. The Azure function app would subscribe to the topic to be notified as the information lands in Event Grid.
The goal of Event Grid is to loosely couple services, allowing them to communicate, using an intermediate queue that can be checked for new data as necessary. The consumer app listens for the queue and is not connected to the publishing app directly.
To get started with Event Grid, complete the following steps:
Once the registration completes, you can begin using Event Grid by navigating to the Event Grid Topics services in the portal, as shown in Figure 2-18.
Once a subscription has topics created, each topic will have specific properties related to the subscription. Click the Event Grid topic in the list. From the Topic Overview blade, the URL for the topic endpoint, the status, and the general subscription information are available. You can manage the following items from this point:
Access Control The Azure IAM/Role-Based configuration for which Azure users can read, edit, and update the topic. Access Control is discussed later in this chapter.
Access Keys Security keys used to authenticate applications publishing events to this topic.
Requiring the applications that push information to this topic to present a key helps control the amount of noise sent to the topic, although a key alone won't reduce the noise if an application sends an overly chatty amount of information.
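As a concrete (hypothetical) example of the publish side, the following sketch creates a custom topic, retrieves an access key, and posts a single event to the topic endpoint using the Event Grid event schema; all names are examples:

# Create a custom Event Grid topic (names and region are examples)
New-AzEventGridTopic -ResourceGroupName Az300-Events-RG -Name app-log-topic -Location eastus
$topic = Get-AzEventGridTopic -ResourceGroupName Az300-Events-RG -Name app-log-topic
$keys = Get-AzEventGridTopicKey -ResourceGroupName Az300-Events-RG -Name app-log-topic
# Build one event using the Event Grid schema and post it to the topic endpoint
$events = @(@{
    id = [guid]::NewGuid().ToString()
    eventType = "contoso.logs.newEntry"
    subject = "onpremises/app1/logs"
    eventTime = (Get-Date).ToUniversalTime().ToString("o")
    data = @{ message = "Log entry created" }
    dataVersion = "1.0"
})
Invoke-RestMethod -Uri $topic.Endpoint -Method Post -Headers @{ "aeg-sas-key" = $keys.Key1 } -Body (ConvertTo-Json -InputObject $events -Depth 5)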
Important
Security Item
To ensure the access keys for a topic are secured and kept safe, consider placing them in a Key Vault as secrets. This way, the application that needs them can refer to the secret endpoint and avoid storing the application keys for the topic in any code. This prevents the keys from being visible in plain text and makes them available only to applications that have access to the Key Vault.
With a topic created and collecting information, consuming services that require this information need to subscribe to these events and an endpoint for the subscription. In this case, an endpoint is an app service with a URL that the subscriber services will hit to interact with the topic.
Event subscriptions can collect any and all information sent to a topic, or they can be filtered in the following ways:
By Subject Allows filtering by the subject of messages sent to the topic—for example, only messages with .jpg images in them
Advanced Filter A key-value pair one level deep
In addition to filtering the information collected for a subscription, the Additional Features tab shown when you're creating an event subscription exposes other configurable features, including the following:
Max Event Delivery Attempts How many retries there will be.
Event Time To Live The number of days, hours, minutes, and seconds the event will be retried.
Dead-Lettering Select whether the messages that cannot be delivered should be placed in storage.
Event Subscription Expiration Time When the subscription will automatically expire.
Labels Any labels that might help identify the subscription.
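A filtered subscription on the topic can also be created in PowerShell. This sketch assumes a webhook endpoint URL (hypothetical) and filters to events whose subject ends in .jpg; verify the filter parameter names against your module version:

# Subscribe a webhook endpoint to the topic, filtering on the event subject
New-AzEventGridSubscription -ResourceGroupName Az300-Events-RG -TopicName app-log-topic -EventSubscriptionName jpg-only-sub -Endpoint "https://myfunction.azurewebsites.net/api/handler" -SubjectEndsWith ".jpg"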
Need More Review?
Working with Event Grid
Check out the articles at the following URLs for additional information:
“Concepts in Azure Event Grid” https://docs.microsoft.com/en-us/azure/event-grid/concepts
“Understand event filtering for Event Grid subscriptions” https://docs.microsoft.com/en-us/azure/event-grid/event-filtering
“Event sources in Azure Event Grid” https://docs.microsoft.com/en-us/azure/event-grid/event-sources
Azure Service Bus is a multi-tenant asynchronous messaging service that can operate with first in, first out (FIFO) queuing or publish/subscribe information exchange. Using queues, the message bus service will exchange messages with one partner service. If using the publish/subscribe (pub/sub) model, the sender can push information to any number of subscribed services.
A service bus namespace has several properties and options that can be managed for each instance:
Shared Access Policies The keys and connection strings available for accessing the resource. The permission levels (manage, send, and listen) are configured here because they're part of the connection string.
Scale The tier of service the messaging service uses: Basic or Standard.
Note
A Note About SKU
A namespace can be configured with a premium SKU, which allows geo recovery in the event of a disaster in the region where the service bus exists. Selection of a premium SKU is available only at creation time.
Geo-Recovery Disaster recovery settings that are available with a Premium namespace.
Export Template An ARM automation template for the service bus resource.
Queues The messaging queues used by the service bus.
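To experiment with these settings, a namespace and queue can be created with the Az.ServiceBus module. The following is a minimal sketch; the names are examples, and the parameter names shown here are as I recall them and may differ slightly between module versions:

# Create a Standard-tier Service Bus namespace and a queue in it
New-AzServiceBusNamespace -ResourceGroupName Az300-Messaging-RG -Name az300-demo-sbns -Location eastus -SkuName Standard
New-AzServiceBusQueue -ResourceGroupName Az300-Messaging-RG -NamespaceName az300-demo-sbns -Name orders-queue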
Each configured queue displays the queue URL, max size, and current counts about the following message types:
Active Messages currently in the queue
Scheduled Messages sent to the queue by scheduled jobs or on a general schedule
Dead-Letter Messages that are undeliverable to any receiver
Transfer Messages pending transfer to another queue
Transfer Dead-Letter Messages that failed to transfer to another queue
In addition to viewing the number of messages in the queue, you can create shared access permissions for the queue. This will allow permissions for manage, send, and listen to be assigned. Also, this provides a connection string leveraging the assigned permissions that the listener application will use as the endpoint when collecting information from the queue.
In the properties blade of the selected message queue, the following settings can be updated:
Message time to live default
Lock duration
Duplicate detection history
Maximum delivery count
Maximum queue size
Queue state (is the queue active or disabled)
Move expired messages to the dead-letter subqueue; keep undeliverable messages in a subqueue to keep this message queue clean
The settings for a message queue are similar to those discussed earlier in the section about Event Grid because they serve a similar purpose for the configured queues.
Need More Review?
More Information About Service Bus Messaging
Check out the articles at the following URLs for additional information:
“What is Azure Service Bus” https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview
“Choose between Azure messaging services—Event Grid, Event Hubs, and Service Bus” https://docs.microsoft.com/en-us/azure/event-grid/compare-messaging-services
“Choose a messaging model in Azure to loosely connect your services” https://docs.microsoft.com/en-
Azure has a couple of different options for load balancing: the Azure Load Balancer, which operates at the transport layer (layer 4) of the networking stack, and the Application Gateway, which builds on layer 4 load balancing and adds layer 7 (HTTP) load balancing on top of that configuration using additional rules.
This skill covers how to:
An Application Gateway has the following settings that you can configure to tune the resource to meet the needs of an organization:
Configuration Settings for updating the tier, SKU size, and instance count, and for indicating whether HTTP/2 is enabled.
Web Application Firewall Allows adjustment of the firewall tier for the device (Standard or WAF) and whether the firewall settings for the gateway are enabled or disabled.
Enabling the WAF on a gateway sets the resource to the Medium tier by default.
If the Firewall Status is enabled, the gateway evaluates all traffic except the items excluded in a defined list (see Figure 2-19). The firewall/WAF settings allow the gateway to be configured for detection only (logging) or prevention.
Note
Auditing at the Firewall Requires Diagnostics
When using the Firewall Settings in WAF mode, enabling detection mode requires diagnostics to be enabled to review the logged settings.
Backend Pools The nodes or applications to which the application gateway will send traffic.
The pools can be added by FQDN or IP address, virtual machine, VMSS, or App Service. For target nodes not hosted in Azure, the FQDN/IP address method allows external back-end services to be used.
HTTP Settings These are the port settings for the back-end pools. If you configured the gateway with HTTPS and certificates during setup, this defaults to 443; otherwise, it starts with port 80. Other HTTP-related settings managed here are
Cookie Based Affinity (sticky sessions)
Connection Draining—ensuring sessions in flight at the time a back-end service is removed will be allowed to complete
Override paths for backend services—allows specified directories or services to be rerouted as they pass through the gateway
Listeners These determine which IP addresses are used for the frontend services managed by this gateway. Traffic hits the front end of the gateway and is processed by configured rules as it moves through the application gateway. Listeners are configured for IP address and port pairings.
Rules The rules for the gateway connect listeners to backend pools, allowing the gateway to route traffic landing on a specific listener to a backend pool using the specified HTTP settings.
Even though each of these items is configured separately in the application gateway, rules bring these items together to ensure traffic is routed as expected for an app service.
Health Probes are used to ensure the services managed by the gateway are online. If there are issues with one of the configured back-end services, the application gateway removes that resource from the back-end pool, making it less likely that the gateway will serve error pages from resources that are down.
Important
At Least One Back-End Service is Needed
If all back-end services are unhealthy, the application gateway is unable to route around the issue.
The interval at which health probes are evaluated, the timeout period, and retry threshold can all be configured to suit the needs of the back-end applications as shown in Figure 2-20.
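A custom probe can be added to an existing gateway with PowerShell. The following sketch uses the same example gateway names as the PowerShell sample later in this skill, with assumed values for the host, path, and thresholds:

$gateway = Get-AzApplicationGateway -ResourceGroupName Az-300-RG-Gateway -Name AppGateway
# Probe the /health path every 30 seconds; mark the back end unhealthy after 3 failures
Add-AzApplicationGatewayProbeConfig -ApplicationGateway $gateway -Name healthProbe -Protocol Http -HostName "contoso.com" -Path "/health" -Interval 30 -Timeout 30 -UnhealthyThreshold 3
Set-AzApplicationGateway -ApplicationGateway $gateway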
Exam Tip
Azure supports different types of load balancing services that can be used in concert with one another. Be sure to understand when to use an application gateway and when to use a network load balancer.
An application gateway defaults to a front-end configuration using a public IP address, but you can configure it to use a private IP address for the front end. This might be useful in a multitiered application configuration, for example, using one application gateway to direct traffic from the Internet to an “internal” gateway that has a private front-end configuration.
Configuring virtual IP addresses (VIPs) happens in the settings for the application gateway in the Front-End IP Configuration section shown in Figure 2-21.
When you set the front-end configuration, the default public settings include a configured listener. Each configuration needs a listener to allow it to properly distribute traffic to back-end resources.
Setting up a private front-end configuration requires a name and a private IP address to be specified if the address needs to be set to a known value rather than assigned dynamically.
Note
Update Time May be Required
Saving settings in some areas of the application gateway resource may take longer than expected to complete.
The application gateway handles load balancing at layer 7 (the application layer) of the OSI model. This means it handles load balancing techniques using the following methods:
Cookie-Based Affinity This will always route traffic during a session to the same back-end application where the session began. The cookie-based method works well if there is state-based information that needs to be maintained throughout a session. For client computers to leverage this load balancing type, the browser used needs to allow cookies.
Cookie-Based Affinity management happens in the HTTP Settings/Backend HTTP Settings blade of the resource (see Figure 2-22).
Connection Draining Enable this setting to ensure that any connections that are being routed to a resource will be completed before the resource is removed from a backend pool. In addition, enter the number of seconds to wait for the connection to timeout.
Protocol Set HTTP or HTTPS here. If you choose HTTPS, you need to upload a certificate to the application gateway.
URL Path-Based Routing uses a configuration called a URL Path Map to control which inbound requests reaching the gateway are sent to which backend resources. There are a few components within the Application Gateway needed to take advantage of URL Path-Based Routing:
URL Path Map The mapping of requests to back-end resources
Backend Listener Specifies the front-end IP configuration and port that the routing rules will be watching
Routing Rules The rules associate the URL Path Map and the listener to ensure that specific requests are routed to the correct backend pool.
PowerShell is needed to add the settings required for URL Path-Based Routing to an application gateway.
Exam Tip
Leveraging the examples to help create a PowerShell script that works in your environment is advisable. When reviewing code supplied by others, be sure to look over it in an editor that supports the language—like Visual Studio Code—to help you understand what the code does before you run it in your environment.
A usable example of the following code is at https://docs.microsoft.com/en-us/azure/application-gateway/tutorial-url-route-powershell:
#Configure Images and Video backend pools
$gateway = Get-AzApplicationGateway `
-ResourceGroupName Az-300-RG-Gateway `
-Name AppGateway
Add-AzApplicationGatewayBackendAddressPool `
-ApplicationGateway $gateway `
-Name imagesPool
Add-AzApplicationGatewayBackendAddressPool `
-ApplicationGateway $gateway `
-Name videoPool
Add-AzApplicationGatewayFrontendPort `
-ApplicationGateway $gateway `
-Name InboundBEPort `
-Port 8080
$backendPort = Get-AzApplicationGatewayFrontendPort `
-ApplicationGateway $gateway `
-Name InboundBEPort
#configure a backend Listener
$fipconfig = Get-AzApplicationGatewayFrontendIPConfig `
-ApplicationGateway $gateway
Add-AzApplicationGatewayHttpListener `
-ApplicationGateway $gateway `
-Name backendListener `
-Protocol Http `
-FrontendIPConfiguration $fipconfig `
-FrontendPort $backendPort
#Configure the URL Mapping
$poolSettings = Get-AzApplicationGatewayBackendHttpSettings `
-ApplicationGateway $gateway `
-Name myPoolSettings
$imagePool = Get-AzApplicationGatewayBackendAddressPool `
-ApplicationGateway $gateway `
-Name imagesPool
$videoPool = Get-AzApplicationGatewayBackendAddressPool `
-ApplicationGateway $gateway `
-Name videoPool
$defaultPool = Get-AzApplicationGatewayBackendAddressPool `
-ApplicationGateway $gateway `
-Name appGatewayBackendPool
$imagePathRule = New-AzApplicationGatewayPathRuleConfig `
-Name imagePathRule `
-Paths "/images/*" `
-BackendAddressPool $imagePool `
-BackendHttpSettings $poolSettings
$videoPathRule = New-AzApplicationGatewayPathRuleConfig `
-Name videoPathRule `
-Paths "/video/*" `
-BackendAddressPool $videoPool `
-BackendHttpSettings $poolSettings
Add-AzApplicationGatewayUrlPathMapConfig `
-ApplicationGateway $gateway `
-Name urlpathmap `
-PathRules $imagePathRule, $videoPathRule `
-DefaultBackendAddressPool $defaultPool `
-DefaultBackendHttpSettings $poolSettings
#Add the Routing Rule(s)
$backendlistener = Get-AzApplicationGatewayHttpListener `
-ApplicationGateway $gateway `
-Name backendListener
$urlPathMap = Get-AzApplicationGatewayUrlPathMapConfig `
-ApplicationGateway $gateway `
-Name urlpathmap
Add-AzApplicationGatewayRequestRoutingRule `
-ApplicationGateway $gateway `
-Name rule2 `
-RuleType PathBasedRouting `
-HttpListener $backendlistener `
-UrlPathMap $urlPathMap
#Update the Application gateway
Set-AzApplicationGateway -ApplicationGateway $gateway
Important
Be Patient when Updating Application Gateway
An update to the application gateway can take up to 20 minutes.
Exam Tip
Remember to work with the Azure Command-Line Interface (CLI) to understand how the commands work and that they differ from PowerShell. Although PowerShell can handle the command-line work in Azure, there may be some significant Azure CLI items on the exam, and it’s good to know your way around.
Once the URL map is configured and applied to the gateway, traffic is routed to the example pools (images and videos) as it arrives. This is not traditional load balancing, where traffic is routed based on the load of each device or a certain percentage of traffic goes to pool one and the rest to pool two. In this case, the path of the requested content drives where the incoming traffic is sent.
Need More Review?
Additional Resources for Load Balancing Options
Check out the articles at the following URLs for additional information:
“What is Azure Application Gateway” https://docs.microsoft.com/en-us/azure/application-gateway/overview
“Custom rules for Web Application Firewall v2” https://docs.microsoft.com/en-us/azure/application-gateway/custom-waf-rules-overview
“Load balance your web service traffic with Application Gateway” https://docs.microsoft.com/en-us/learn/modules/load-balance-web-traffic-with-application-gateway
You also can review the Azure CLI documentation at https://docs.microsoft.com/en-us/cli/azure/get-started-with-azure-cli?view=azure-cli-latest.
Azure supports connectivity to external or on-premises networks via two methods:
VPN An encrypted connection between two networks via the public Internet
ExpressRoute A private circuit-based connection between an organization’s network and Azure
Note
Security Details
The connection made by ExpressRoute runs over private circuits between an organization and Azure. No other traffic traverses these circuits, but the traffic is not encrypted on the wire by default. Some organizations may choose or be required to encrypt this traffic with a VPN.
This skill covers how to:
The virtual network gateway is a router endpoint specifically designed to manage inbound private connections. The resource requires the existence of a dedicated subnet, called the gateway subnet, for use by the VPN.
To add a gateway subnet to a virtual network, complete the following steps:
Important
About Networking
Be mindful of the address space used for the virtual networks you create: any subnets, including the gateway subnet, must fit within the address space and must not overlap.
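If you'd rather script it, a gateway subnet can be added to an existing virtual network as sketched below; the names and address prefix are examples, and the subnet must be named GatewaySubnet:

# Add the dedicated GatewaySubnet to an existing virtual network
$vnet = Get-AzVirtualNetwork -Name Az300-VNet -ResourceGroupName Az300-Network-RG
Add-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet -AddressPrefix 10.10.255.0/27
Set-AzVirtualNetwork -VirtualNetwork $vnet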
To create a virtual network gateway, complete the following steps:
Subscription The Azure subscription that will contain the virtual network gateway resource.
Name The name of the virtual network gateway.
Region The region for the virtual network gateway. There must be a virtual network in the region where the virtual network gateway is created.
Gateway Type ExpressRoute or VPN.
VPN Type Route-Based or Policy-Based.
SKU The resource size and price point for the gateway.
Virtual Network The network to which the gateway will be attached.
Public IP address The external IP address for the gateway (new or existing).
Enable Active-Active mode Allow active/active connection management.
Enable BGP/ASN Allow BGP route broadcasting for this gateway.
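The equivalent PowerShell for a route-based VPN gateway looks roughly like the following sketch (names, SKU, and region are examples); as the next note explains, expect provisioning to take a while:

# Public IP address and IP configuration for the gateway
$gwpip = New-AzPublicIpAddress -Name Az300-VPN-PIP -ResourceGroupName Az300-Network-RG -Location eastus -AllocationMethod Dynamic
$vnet = Get-AzVirtualNetwork -Name Az300-VNet -ResourceGroupName Az300-Network-RG
$gwsubnet = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet
$ipconfig = New-AzVirtualNetworkGatewayIpConfig -Name gwipconfig -SubnetId $gwsubnet.Id -PublicIpAddressId $gwpip.Id
# Create the route-based VPN gateway (this can take 15-45 minutes)
New-AzVirtualNetworkGateway -Name Az300-VPN-GW -ResourceGroupName Az300-Network-RG -Location eastus -IpConfigurations $ipconfig -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1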
Note
Provisioning Time
Virtual network gateways can take anywhere from 15 to 45 minutes to be created. In addition, any updates to the gateway also can take between 15 and 45 minutes to complete.
Important
About Networking Resources
When you configure networking resources, there's no way to deprovision them. Virtual machines can be turned off, but networking resources are always on and continue to incur charges as long as they exist.
Create and configure site-to-site VPN
Once the virtual network gateway(s) are configured, you can begin configuring the connection between them or between one gateway and a local device.
There are three types of connections available using the connection resource in Azure:
Vnet to Vnet Connecting two virtual networks in Azure—across regions perhaps
Site to Site An IPSec tunnel between two sites: an on-premises datacenter and Azure
ExpressRoute A dedicated circuit-based connection to Azure, which we discuss later in this chapter
For a site-to-site configuration, complete the following steps:
Important
Keep it Together
The resource group and subscription for connections and other related resources should be the same as the configuration for the virtual network gateway.
Virtual Network Gateway Choose the available virtual network gateway based on subscription and resource group settings already selected.
Local Network Gateway Select or create a local network gateway. This will be the endpoint for any on-premises devices being connected to this VPN.
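As a sketch, the local network gateway and the site-to-site connection can also be created in PowerShell; the public IP, on-premises address space, and shared key below are placeholders:

# Represent the on-premises VPN device and address space
$local = New-AzLocalNetworkGateway -Name OnPrem-GW -ResourceGroupName Az300-Network-RG -Location eastus -GatewayIpAddress 203.0.113.10 -AddressPrefix 192.168.0.0/24
$azuregw = Get-AzVirtualNetworkGateway -Name Az300-VPN-GW -ResourceGroupName Az300-Network-RG
# Create the IPsec site-to-site connection using a pre-shared key
New-AzVirtualNetworkGatewayConnection -Name OnPrem-To-Azure -ResourceGroupName Az300-Network-RG -Location eastus -VirtualNetworkGateway1 $azuregw -LocalNetworkGateway2 $local -ConnectionType IPsec -SharedKey 'ReplaceWithASharedKey'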
Once the site-to-site VPN configuration has been completed, verify that the connection works. If you have everything configured correctly, accessing resources in Azure should work like accessing other local resources.
Connecting to the machines connected to the Azure virtual network using local IP addresses should confirm that the VPN is connected, as the ping test shows in Figure 2-25.
In addition to the ping testing and connections between systems on these networks, the Summary blade for the local connection in Azure shows traffic across the VPN. This is shown in Figure 2-26.
In many cases, VPN connections to Azure will be low maintenance once they are connected and in use. There may be times, though, that certain connectivity might need restrictions placed on it—for example, if a server in Azure should be accessed through a load balancer or be accessible only from the local network.
Azure allows these resources to be created without public IP addresses, making them accessible only across the VPN. This is part of the management of these resources; simply removing the public IP takes the machine off the Internet, but an organization may have additional requirements, such as systems in a production environment not being allowed to talk directly to systems in a nonproduction environment. This segregation can be handled via network security groups and routing table entries.
A network security group serves as an access control list (ACL) for allowing or denying access to resources, so it can help open or block ports to and from certain machines.
Figure 2-27 shows a simple network security group where port 22 is allowed but only from a source tagged as a virtual network. This allows other resources on Azure virtual networks to reach the device, but nothing from the Internet can connect directly.
You can use network security groups at the network interface level for a virtual machine or at the subnet level.
Exam Tip
Configuring network security groups at the subnet level ensures uniform rule behavior across any devices in the planned subnet and makes management of connectivity much less complicated.
Note
Security
If your organization has requirements for one-to-one access and connectivity, a network security group configured at the interface level for the VM might be necessary to ensure restricted access from one host to another.
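Here is a sketch of a rule like the one in Figure 2-27, created and bound to a subnet with PowerShell; the resource names, subnet, and address prefix are examples:

# Allow SSH (port 22) only from sources inside the virtual network
$rule = New-AzNetworkSecurityRuleConfig -Name Allow-SSH-From-VNet -Protocol Tcp -Direction Inbound -Priority 100 -SourceAddressPrefix VirtualNetwork -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 22 -Access Allow
$nsg = New-AzNetworkSecurityGroup -Name Servers-NSG -ResourceGroupName Az300-Network-RG -Location eastus -SecurityRules $rule
# Associate the NSG at the subnet level for uniform behavior
$vnet = Get-AzVirtualNetwork -Name Az300-VNet -ResourceGroupName Az300-Network-RG
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name Servers -AddressPrefix 10.10.1.0/24 -NetworkSecurityGroup $nsg
Set-AzVirtualNetwork -VirtualNetwork $vnet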
Network security groups also allow the collection of flow logs that capture information about the traffic entering and leaving the network via configured network security groups. To enable this, you need two additional resources for all features, as shown in Figure 2-28:
A storage account to collect the flow log data
A Log Analytics workspace for traffic analysis
In addition to network security groups, route table entries can be used to control traffic flow between network resources. With a route table entry, you can force all the traffic between subnets to pass through a specific network or virtual network appliance that handles all the rules and access controls. There are reference architectures for this type of configuration in the Azure documentation that walk through configuring this type of network topology.
Before you can use ExpressRoute as a VPN connection type, you need to configure it and prepare it as an Azure resource. Complete the following steps to configure the ExpressRoute Circuit resource in Azure:
Circuit Name The name of the circuit resource.
Provider Select the name of the provider delivering the circuit.
Peering Location The location where your circuit is terminated; if you’re using a partner like Equinix in their Chicago location, you would use Chicago for the Peering Location.
Bandwidth The bandwidth provided by the provider for this connection.
SKU This determines the level of ExpressRoute you are provisioning.
Data Metering This is for the level of billing and can be updated from metered to unlimited but not from unlimited to metered.
Subscription The Azure subscription associated with this resource.
Resource Group The Azure resource group associated with this resource.
Location The Azure region associated with this resource; this is different from the peering location.
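Creating the circuit resource with PowerShell looks roughly like the following sketch; the provider, peering location, and bandwidth values are examples, and the service key is issued once the resource is created:

# Create the ExpressRoute circuit resource; billing starts when the service key is issued
New-AzExpressRouteCircuit -Name Az300-ER-Circuit -ResourceGroupName Az300-Network-RG -Location eastus -SkuTier Standard -SkuFamily MeteredData -ServiceProviderName "Equinix" -PeeringLocation "Chicago" -BandwidthInMbps 200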
Note
Costs and Billing
When you configure ExpressRoute in Azure, you receive a service key. When Azure issues the service key, billing for the circuit begins. Wait to configure this until your service provider is prepared with the circuit that will be paired with ExpressRoute to avoid charges while you’re waiting for other components.
Once the service key is issued and your circuit has been provisioned by a provider, you provide the key to the carrier to complete the process. Private peering needs to be configured and BGP allowed for ExpressRoute to work.
ExpressRoute also requires the virtual network gateway to be configured for it. To do this, when creating a virtual network gateway, select ExpressRoute as the gateway type (as shown in Figure 2-29).
Configuring the peering settings for ExpressRoute happens from within the ExpressRoute configuration settings once the circuit has been set up in Azure. From there you see three types of peerings:
Azure Public This has been deprecated; use Microsoft peering instead.
Azure Private Peering with virtual networks inside subscriptions managed by your organization.
Microsoft Peering directly with Microsoft for the use of public services like Dynamics and Office 365.
You need to meet the following requirements for peering:
A /30 subnet for the primary link.
A /30 subnet for the secondary link.
A valid VLAN ID to build peering on; no other circuit-based connections can use this VLAN ID. The primary and secondary links for ExpressRoute must use this VLAN ID.
An AS number for peering (2 byte and 4 byte are permitted).
Advertised prefixes, which is a list of all prefixes to be advertised over BGP.
Optionally, you can provide a customer ASN if prefixes that do not belong to you are used, a routing registry name if the AS number is not registered as owned by you, and an MD5 hash.
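A sketch of adding Azure private peering to the circuit once the provider has provisioned it follows; the /30 subnets, VLAN ID, and AS number are placeholders that must match what was agreed with the provider:

$circuit = Get-AzExpressRouteCircuit -Name Az300-ER-Circuit -ResourceGroupName Az300-Network-RG
# Configure private peering with /30 subnets for the primary and secondary links
Add-AzExpressRouteCircuitPeeringConfig -Name AzurePrivatePeering -ExpressRouteCircuit $circuit -PeeringType AzurePrivatePeering -PeerASN 65010 -PrimaryPeerAddressPrefix "192.168.10.0/30" -SecondaryPeerAddressPrefix "192.168.10.4/30" -VlanId 200
Set-AzExpressRouteCircuit -ExpressRouteCircuit $circuit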
Review the peering information and complete the following steps to finish configuring ExpressRoute:
Important
Validation
Microsoft may require you to specify proof of ownership. If you see validation needed on the connection, you need to open a ticket with support to provide the needed information before the peer can be established. You can do this from the portal.
Once you have successfully configured the connection, the details screen shows a status of configured.
Linking (or creating a connection to) ExpressRoute also happens from within the ExpressRoute resource. Choose the Connections option within the settings for ExpressRoute and provide the following:
Name The name of the connection.
Connection Type ExpressRoute.
Virtual Network Gateway Select the gateway with which to link ExpressRoute.
ExpressRoute Circuit Select the configured circuit with which to connect.
Subscription Select the subscription containing the resources used in this connection.
Resource Group Select the resource group containing the resources used in this connection.
Location Select the Azure region where the resources used in this connection are located.
This is like creating a site-to-site connection, as described earlier, but it uses different resources as part of the connection.
Exam Tip
ExpressRoute is a private connection to Azure from a given location, and it requires high-end connectivity. Much of the discussion of ExpressRoute presented here relies on Microsoft Documentation because we don’t currently have access to an ExpressRoute circuit.
The settings and configurations discussed are high level, but we’ve provided an overview of the concepts for ExpressRoute for the exam.
Role-Based Access Control (RBAC) provides a manageable way to assign access to resources in Azure by allowing permissions to be assigned across job roles. If you're a server operator, you may be able to start and restart VMs but not power off or delete them. Because every resource in Azure requires permission to access, consolidating permissions into roles helps keep things organized.
This skill covers how to:
While Azure provides roles for certain activities—like contributor and reader, which provide edit and read access respectively—there may be job roles within an organization that don’t fit nicely into these predefined items. Custom roles can be built to best suit the needs of an organization. To create a custom role, complete the following steps:
Before creating a custom role, it's a good idea to check the access for the user or group the custom role will include. In addition to determining the need for a custom role, this check helps ensure existing access is known and can be updated after custom roles are created.
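One quick way to do this check is with PowerShell. The sketch below lists current role assignments for a hypothetical user, including assignments inherited through group membership:

# List existing role assignments for a user (sign-in name is an example)
Get-AzRoleAssignment -SignInName fred@contoso.com -ExpandPrincipalGroups | Select-Object DisplayName, RoleDefinitionName, Scope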
The creation of custom roles happens through the Azure CLI or Azure PowerShell because there’s no portal-based method to build roles as of this writing. To create a custom role using PowerShell, complete the following steps:
$CustomRole = Get-AzRoleDefinition | where {$_.Name -eq "Virtual Machine Contributor"}
$CustomRole.Actions
To keep the custom role creation fairly simple, create a role for VM operators that can manage and access virtual machines. The role called out earlier is allowed to manage but not access machines. The Virtual Machine Administrator Login role allows log in but no management of the machine.
$AdminRole = Get-AzRoleDefinition | where {$_.Name -eq "Virtual Machine Administrator Login"}
At this point, the $CustomRole variable should contain an object for the Virtual Machine Contributor role, and $AdminRole should contain an object for the Virtual Machine Administrator Login role.
As you can see from Figure 2-31, the actions allowing access to the VMs are missing from the Virtual Machine Contributor Role.
$customRole = Get-AzRoleDefinition | where { $_.Name -eq "Virtual Machine Contributor" }
$customRole.Id = $null
$customRole.Name = "Custom - Virtual Machine Administrator"
$customRole.Description = "Can manage and access virtual machines"
$customRole.Actions.Add("Microsoft.Compute/VirtualMachines/*/read")
$customRole.AssignableScopes.Clear()
$customRole.AssignableScopes.Add("/subscriptions/<your subscription id>/resourceGroups/<Resource Group for role>")
New-AzRoleDefinition -Role $customRole
This will create a custom role called Custom - Virtual Machine Administrator that includes all the actions from the Virtual Machine Contributor role plus the ability to log in to Azure Virtual Machines.
The role will be scoped to the supplied resource ID for the chosen resource group. This way, the added permissions are applicable only to the resource group(s) that need them, perhaps the Servers resource group.
Figure 2-32 shows the output of the command to create this custom role, with sensitive information redacted.
Previously, a custom role was created to allow management of and access to virtual machines within an Azure Resource Group. Because the custom role was scoped at the resource group level, it will only be assignable to resource groups.
To make use of the custom role and any built-in roles, the roles need to be assigned to users or groups, which makes them able to leverage these access rights.
To assign the newly created custom role to a group, complete the following steps:
Note
Custom Role Naming
Although the type of any custom role is set to CustomRole when the role is added, we've found that prefixing the name with "Custom -" or following a naming standard predefined by your organization may make custom roles easier to find when searching for them later.
Azure AD user, group, or service principal
User Assigned Managed Identity
System Assigned Managed Identity
App Service
Container Instance
Function App
Logic App
Virtual Machine
Virtual Machine Scale set
Because virtual machine administrators are generally people, keep the Azure AD user, group, or service principal selected.
Important
About Groups
Keep in mind that using a group for role assignments is much lower maintenance than individually assigning users to roles.
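The assignment can also be scripted. This sketch assigns the custom role created earlier to a hypothetical Azure AD group at the resource group scope:

# Assign the custom role to an Azure AD group scoped to a resource group
$group = Get-AzADGroup -DisplayName "VM Administrators"
New-AzRoleAssignment -ObjectId $group.Id -RoleDefinitionName "Custom - Virtual Machine Administrator" -ResourceGroupName Servers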
The user (or users, if a group was assigned) has been granted new access and may need to sign out of the portal or PowerShell and sign back in (or reconnect) to see the new access rights.
Like access to resources running in Azure, access to the platform itself is controlled using RBAC. There are some roles dedicated to the management of Azure resources at a very high level—think management groups and subscriptions.
A management group in Azure is a resource that can cross subscription boundaries and allow a single point of management across subscriptions.
If an organization has multiple subscriptions, they can use management groups to control access to subscriptions that may have similar access needs.
For example, if there are three projects going on within an organization that have distinctly different billing needs—each managed by different departments—access to these subscriptions can be handled by management groups, allowing all three subscriptions to be managed together with less effort and administrative overhead.
When you use RBAC roles, the method of assigning access to subscriptions or management groups is the same as other resources, but the roles specific to management and where they’re assigned are different. These would be set at the subscription or management group level.
Important
Cumulative by Default
RBAC access is cumulative by default, meaning contributor access at the subscription level is inherited by resource groups and resources housed within a subscription. Inheritance is not required because permission can be granted at lower levels within a subscription all the way down to the specific resource level. In addition, permission can also be denied at any level; doing so prevents access to resources where permission was denied. If denial of permissions happens at a parent resource level, any resources underneath the parent will inherit the denial.
There will always be an entity in Azure that is the overall subscription admin or owner. Usually this is the account that created the subscription, but it can (and should) be changed to a group to ensure that more than one person has top-level access to the subscription. In addition, this change accounts for job changes and staff turnover and reduces the likelihood that top-level access to Azure is forgotten during those transitions.
To configure access to Azure at the subscription level, complete the following steps:
The users or groups that have specific roles assigned are displayed. At the subscription level, there should be few roles assigned, as shown in Figure 2-33. Most access happens at the resource group or resource level.
The group has owner access at the subscription level. This access allows members of the group to create, modify, and remove any resources within the selected subscription.
Note
Adding a Coadministrator
This is only necessary if Classic deployments are being used (the classic portal). Assigning Owner RBAC rights in the Resource Management portal achieves the same result.
As mentioned previously, management groups allow RBAC configurations to cross subscription boundaries. Using the scope of a management group for high-level administrative access will consolidate visibility of multiple subscriptions without needing to configure RBAC settings in each of multiple subscriptions. This way, the admins group can be assigned owner access in a management group that contains all the subscriptions for an organization, simplifying the configuration a bit further as shown in Figure 2-34.
Management groups are organized under a top-level root group scoped to the Azure AD tenant. Administrative users can't see this group with the usual administrative or owner RBAC permissions. To allow this visibility, assign the User Access Administrator role to the group that will be working with management groups.
To add subscriptions or management groups to a management group, complete the following steps:
There will likely be very little information visible when viewing a management group. Click the details link next to the name of the group to see more information and take action on the management group, including Adding Management Groups and Subscriptions.
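Management groups can also be created and populated from PowerShell. A minimal sketch with example names follows; cmdlet parameter names have shifted between Az module versions, so treat these as illustrative:

# Create a management group and move a subscription into it
New-AzManagementGroup -GroupName "Contoso-Production" -DisplayName "Contoso Production"
New-AzManagementGroupSubscription -GroupName "Contoso-Production" -SubscriptionId "00000000-0000-0000-0000-000000000000"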
Management groups can be nested to consolidate resource management. This should be used carefully because doing so can complicate management of subscriptions and resources further than necessary.
Important
Changing Management Groups may Require Permission Review
When moving items from one management group to another, permissions can be affected negatively. Be sure to understand the impact of any changes before they are made to avoid removing necessary access to Azure resources.
Identifying the cause of problems with RBAC may require some digging to understand why a user is unable to perform an action. When you’re assigning access through RBAC, be sure to keep a group of users configured for owner access. In addition to a group, consider enabling an online-only user as an owner as well. This way, if there is an issue with Active Directory, not all user accounts will be unable to access Azure.
Because Role-Based Access Control (RBAC) is central to resource access in Azure, using RBAC carefully is paramount in working with Azure. Like permissions in Windows before it, Azure RBAC brings a fair amount of trial and error to the table when assigning access. Also, because Azure is constantly evolving, there may be times when a permission just doesn’t work as stated.
The main panel of the IAM blade has improved considerably recently by providing a quick way to check access up front. If someone questions their access, an administrator or other team member can simply enter the user or group name in question and see which role assignments are currently held. There is no more sifting through the role assignments list to determine whether Fred has contributor or reader access to the new resource group. This is one of the key tools in troubleshooting: being able to see who has what level of access.
During the times when Fred should have access to a particular resource, but claims to be missing access while Azure shows the correct role assignments, the Roles tab on the IAM blade shown in Figure 2-35 can help determine whether all the needed permissions are available. Sometimes they won’t be.
Looking at the list of roles is only somewhat helpful. If Fred claims that he can't read a resource, but he's listed as having the reader role for the resource, there is likely something going on behind the role. To see the permissions assigned to the listed role, click the name of the role.
On the top of the listed assignments page for the role, click Permissions to see the list of permissions that make up the role.
You will see, as shown in Figure 2-36, the list of resource providers that the role covers, whether the role has partial or full access to each provider, and what data access the role has for each provider.
Selecting a provider name from this view displays the components used by this role within a given provider and the permissions assigned, as shown for the Azure Data Box provider in Figure 2-37.
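To inspect the same detail outside the portal, a hedged sketch like the following dumps the management-plane and data-plane operations behind a built-in role; the Reader role is used here only as an example.

import json
import subprocess

out = subprocess.run(
    ["az", "role", "definition", "list", "--name", "Reader", "--output", "json"],
    check=True, capture_output=True, text=True,
)

# Each role definition lists the management-plane (actions) and data-plane
# (dataActions) operations it allows, along with any exclusions.
for role in json.loads(out.stdout):
    for perm in role.get("permissions", []):
        print("Actions:       ", perm.get("actions"))
        print("NotActions:    ", perm.get("notActions"))
        print("DataActions:   ", perm.get("dataActions"))
        print("NotDataActions:", perm.get("notDataActions"))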
In addition to investigating which permissions are assigned with certain roles, changing roles for certain users or groups to see how access changes is another useful method for working through access issues.
There also can be times when changes to RBAC are being cached, where settings changes just aren't appearing once they've been made. In the Azure portal, changes can take up to 30 minutes to be reflected. In the Azure CLI or a PowerShell console, signing out and signing in again will force settings to refresh after making changes to RBAC. Similarly, when using REST APIs to manage permissions, updating the current access token will refresh permissions.
There are also times when certain resources may require permissions that are higher than the stated need. For example, working in an App Service may require write permission to the underlying storage account to ensure performance monitoring is visible; otherwise, it returns an error. In cases like this, elevated access (contributor in this case) might be preferable for a time to allow monitoring. This way, the developers get access to the items they need, but the access doesn't need to remain assigned long term.
Azure Policy provides a way to enforce and audit standards and governance throughout an Azure environment. Using this configuration involves two top-level steps: defining (or selecting) the policy definitions you need, and assigning them to a scope.
Using policy can streamline auditing and compliance within an Azure environment. However, it can also prevent certain resources from being created depending on the policy definition settings.
Important
Remember to Communicate
Although the intent may be to ensure, for example, that all resources are created in a specified region within Azure, remember to overcommunicate any enforcement changes to those using Azure. The enforcement of policy generally happens when the Create button is clicked, not when the resource is discovered to be in an unsupported region.
Collections of policy definitions, called initiatives, group related policy definitions to help achieve a larger governance goal. Rather than assigning 10 policy definitions separately, they can be grouped into, and assigned as, one initiative.
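A hedged sketch of building an initiative from the command line follows. It looks up two built-in definitions by display name and groups them into one policy set; the display names, the initiative name, and the choice of definitions are examples only, and any definitions requiring parameters without defaults would also need a parameter mapping.

import json
import subprocess

def az_json(*args):
    # Run an Azure CLI command and return its JSON output.
    out = subprocess.run(["az", *args, "--output", "json"],
                         check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

# Example built-in policies to group; adjust to your organization's governance goals.
wanted = ["Audit VMs that do not use managed disks",
          "Secure transfer to storage accounts should be enabled"]

definitions = []
for display_name in wanted:
    matches = az_json("policy", "definition", "list",
                      "--query", f"[?displayName=='{display_name}']")
    definitions.append({"policyDefinitionId": matches[0]["id"]})

subprocess.run([
    "az", "policy", "set-definition", "create",
    "--name", "contoso-baseline",
    "--display-name", "Contoso governance baseline",
    "--definitions", json.dumps(definitions),
], check=True)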
To assign a policy, complete the following steps; a command-line sketch of a policy assignment follows the list of settings:
Scope Select the scope at which the chosen policy will be configured.
Exclusions Select any resources that will be exempt from the policy assignment.
Policy definition Select the policy definition to be assigned.
Assignment Name Enter the name for this policy assignment.
Description Enter a description for the expected outcome of the policy assignment.
Assigned By The name of the Azure logged-in user who is assigning the policy will be listed.
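The sketch below assigns the built-in Allowed locations policy at subscription scope while excluding a sandbox resource group. The subscription ID, resource group name, assignment name, display name, and region are placeholders; listOfAllowedLocations is the parameter name used by the built-in definition.

import json
import subprocess

def az_json(*args):
    out = subprocess.run(["az", *args, "--output", "json"],
                         check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

sub = "11111111-1111-1111-1111-111111111111"   # placeholder subscription ID

# Look up the built-in "Allowed locations" definition by display name.
definition = az_json("policy", "definition", "list",
                     "--query", "[?displayName=='Allowed locations']")[0]

subprocess.run([
    "az", "policy", "assignment", "create",
    "--name", "allowed-locations-eastus",
    "--display-name", "Resources must be created in East US",
    "--policy", definition["name"],
    "--scope", f"/subscriptions/{sub}",
    "--not-scopes", f"/subscriptions/{sub}/resourceGroups/rg-sandbox",   # exclusion
    "--params", json.dumps({"listOfAllowedLocations": {"value": ["eastus"]}}),
], check=True)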
When selecting from the list of available definitions, shown in Figure 2-39, pay attention to the name of the policy. Audit policies are used to capture information about what would happen if the policy were enforced. These will not introduce any breaking changes. Policies that aren’t labeled audit may introduce breaking changes.
Once a policy has been assigned, its compliance state may show Not Started because the policy is new and has not yet been evaluated against resources. Click Refresh to monitor the state of compliance; it may take some time for the state change to be reflected.
If a policy runs against a scope and finds items in noncompliance, remediation tasks may need to be performed. These tasks are listed under the Remediation section of the Policy blade, and they apply only to policies that deploy resources when those resources are not found.
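To review compliance without the portal, a hedged sketch using the Azure CLI policy state commands might look like the following; the OData filter expression and output format are just one way to slice the data.

import subprocess

# Per-resource compliance records, filtered to resources that are out of compliance.
subprocess.run([
    "az", "policy", "state", "list",
    "--filter", "ComplianceState eq 'NonCompliant'",
    "--output", "table",
], check=True)

# Roll-up summary of compliance results across the current subscription.
subprocess.run(["az", "policy", "state", "summarize", "--output", "json"], check=True)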
Multi-Factor Authentication (MFA) is becoming more and more necessary with the increasing number of breaches and security concerns in information technology these days. MFA ensures that a login to a site or application goes a step beyond the username and password by requiring the user to provide something they have (usually in the form of a token received via SMS or an authenticator app).
This skill covers how to:
To enable MFA for Azure AD, complete the following steps:
Important
Upgraded SKU Requirements
The use of MFA in Azure AD requires a paid SKU for Azure AD. If your Azure AD tenant is not at a high enough SKU, you may be able to enable a Premium trial to continue the configuration.
Account Lockout Settings to ensure that incorrect MFA attempts trigger the account to be locked out, with options for the lockout to reset on its own.
Block/Unblock Users Add users to the block list to prevent them from logging in. All authentication attempts for a blocked user are denied, and the block remains in place for 90 days from the first blocked login.
Fraud Alert Configure fraud reporting for users who receive a second-factor request they aren't expecting, and configure whether users are blocked automatically when fraud is reported.
Notifications Enter the email address(es) that will receive alerts generated by MFA. The best practice here is to use a distribution group for notification.
OATH Tokens Configure the secret keys (via upload) for associated hardware tokens to allow use of a hardware token as a second factor.
Phone Call Settings Configure the caller ID number (United States only), whether an operator is required to transfer for extensions, the number of pin attempts allowed per call, and greetings used with the phone calling service.
Providers The provider settings were disabled as of September 1, 2018.
MFA Server Settings These settings bring the functionality of Azure AD MFA to an on-premises datacenter.
Server Settings The number of seconds before the two-factor code times out.
One-Time Bypass A list of user accounts that will bypass MFA authentication. The default bypass is for 300 seconds.
Caching Rules Configure MFA so that consecutive authentications don’t repeat MFA authentication.
Server Status Reports the status of an on-premises MFA server.
Activity Report Shows activity against Azure AD MFA and a connected MFA server.
Configure user accounts for use with MFA
MFA is configured per user account, meaning it can be enabled for one user or all user accounts in an organization. To set up a user account to use MFA, complete the following steps:
Require Selected User(s) To Provide Contact Methods Again Re-collect the additional email addresses and phone numbers used by MFA.
Delete All Existing Application Passwords Generated By Selected Users Any app-level passwords these users have configured as a secure way to authenticate specific applications or devices will be deleted.
Restore Multi-Factor Authentication On All Remembered Devices For devices the user(s) have chosen to be remembered, this resets the remembered state so that reauthentication is required.
The last setting available on this page is Enforce. If this is configured for a user, they will be required to use MFA at sign-in. If the MFA settings are enabled but not enforced, the user will be able to decide if MFA should be used.
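When checking what a user has actually registered once MFA is turned on, one hedged option is the Microsoft Graph authentication methods API. The sketch below assumes you already have a Graph access token with the UserAuthenticationMethod.Read.All permission; the user principal name and the token value are placeholders.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token-with-UserAuthenticationMethod.Read.All>"   # placeholder

resp = requests.get(
    f"{GRAPH}/users/fred@contoso.com/authentication/methods",     # placeholder user
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

# Each entry's @odata.type indicates the method kind (authenticator app, phone, and so on).
for method in resp.json().get("value", []):
    print(method.get("@odata.type"), method.get("id"))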
While MFA provides extra security, many organizations like to reduce the amount of two-factor logins needed by users when they connect from known IP addresses. An example of this might be in the corporate or satellite offices owned and managed by the company. Because the network and systems are managed by a known IT staff, these networks can generally be trusted to not require MFA.
Your mileage may vary because this is configurable. Some organizations may choose to constantly require the use of MFA or only allow the IP of the corporate office to be trusted. This will likely depend on the industry and amount of sensitive information handled by the organization.
To add trusted IPs, complete the following steps:
Exam Tip
Configuring trusted IPs for known locations is great. Pay attention to areas where the location might not have a static IP address (a home office, for example). Configuring an IP that may change, even if it doesn’t change often, as trusted could cause issues for users expecting no MFA at their location.
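The trusted IPs feature described above is configured through the MFA service settings page in the portal. As an alternative, scriptable route, Azure AD Conditional Access supports trusted named locations through Microsoft Graph; the sketch below is a hedged example of creating one, assuming a Graph token with the Policy.ReadWrite.ConditionalAccess permission, with the display name and CIDR range as placeholders.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token-with-Policy.ReadWrite.ConditionalAccess>"   # placeholder

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/namedLocations",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "@odata.type": "#microsoft.graph.ipNamedLocation",
        "displayName": "Corporate office",                          # placeholder
        "isTrusted": True,
        "ipRanges": [
            {"@odata.type": "#microsoft.graph.iPv4CidrRange",
             "cidrAddress": "203.0.113.0/24"},                       # placeholder range
        ],
    },
)
resp.raise_for_status()
print(resp.json().get("id"))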
Note
Remember Me
Another option available on the app settings page for MFA allows users to have their devices remembered by MFA for a set number of days. Check the box at the bottom of the app settings screen and set the number of days (between 1 and 60).
Fraud alerts help keep corporate information safe and help employees ensure their access does not fall into the wrong hands. With alerts enabled, anytime a user receives an MFA prompt they did not initiate, it can be reported as fraudulent, which can prompt the account to be locked to prevent further attempts. To configure fraud alerts for MFA, complete the following steps:
Note
Update the Default
Consider the option to change the default fraud code, 0#. If it is modified, a custom greeting is necessary to ensure the call recipients know that they can enter a different code before pressing # to report fraud.
When using MFA, there may be a use case to allow some employees to bypass MFA in certain situations. This is outside of the configuration of trusted IPs because it bypasses all MFA for the configured accounts for the preset time limit, which has a default of five minutes.
From the MFA blade of Azure Active Directory, select One-Time Bypass to display the bypass list shown in Figure 2-44. Then complete the following steps:
There are several ways that users can verify their identity when MFA is configured. These include:
Microsoft Authenticator App A phone app that delivers a one-time use token to provide the second factor of authentication.
SMS Users can opt to receive a text message with a code to prove their identity.
Voice Call Users can opt to be called by the MFA service to prove their identity.
Other options are available for use with self-service password reset only:
Email Address A link for password reset is delivered via email.
Security Questions A series of questions are configured and answered during onboarding. They are stored with the Azure Active Directory user object and not visible to any other users.
Note
OATH in Preview
OATH tokens are hardware tokens that are shipped with a way to generate the secret keys for each token. Once generated, the keys and users are uploaded to MFA for use. This feature is in preview at the time of this writing.
Each of these options’ configuration steps will be discussed in turn.
Exam Tip
Although you may primarily use the authenticator app or SMS for the second factor, there are other methods, such as voice calls and verification codes generated by the Microsoft (or another) authenticator application. These are configured in a similar way, but be aware that they exist.
Note
MFA Requires More Info Once Enabled
When MFA is enabled, the next sign-in for the account will require additional details from the user.
To configure the Microsoft Authenticator App, complete the following steps:
Receive notifications
Use verification code
Advanced MFA settings are managed through the user account page. To change or add verification methods once MFA is enabled and the initial sign in has completed, complete the following steps:
On the Additional Security Verification screen, select the default method of contact from the What's Your Preferred Option drop-down list: Authenticator App notification, SMS message, or phone call to the desired number.
Also, on this screen, set the authentication phone number for texts and phone calls.
Virtual machines from on-premises datacenters or other cloud environments, as well as physical servers, can be migrated to Azure.
Application load balancing and network load balancing work in tandem to ensure an all-around solution.
Serverless compute resources take the management of the servers and move it into the platform, reducing the organization’s server management footprint.
Logic Apps perform custom integrations between applications and services both inside and outside of Azure.
Virtual network peering allows communication between networks in Azure without the need for a VPN, whereas Site-to-Site VPNs connect Azure to existing on-premises networks and ExpressRoute provides completely private connections to Microsoft services from an on-premises environment.
Role-Based Access Control aligns user access to Azure resources more closely with job roles. Keep in mind that this alignment is not always perfect and multiple roles may be necessary to provide the correct access.
Policies in Azure help to ensure that resources can be audited for compliance and deployment controlled as required by the organization.
Multi-Factor Authentication should be used wherever possible, especially for sensitive (think administrative) access. Configurations like trusted IPs and One-Time Bypass will help reduce the number of MFA prompts from known networks and simplify troubleshooting of MFA. While SMS-based authentication is better than not using MFA at all, be aware that it can be spoofed more easily than an app configured for use with a specific account.
In this thought experiment, demonstrate your skills and knowledge of the topics covered in this chapter. You can find the answers to thought experiment questions in the next section.
You’re an Azure solutions architect hired by Tailspin Toys to help them with security configuration and Multi-Factor Authentication setup for their planned migration to the Azure cloud platform.
When discussing the configuration plan with Tailspin, you discovered the following items as requirements:
Some virtualized workloads running Linux or Windows will be migrated to Azure.
Access to the migrated servers is limited to the help desk being able to reboot servers as needed and the server admins team getting full control of the machines.
Admin-level users in Azure need to use strong passwords and another level of authentication. All the admins are smartphone users.
Considering the discovered requirements, answer the following questions:
This section contains the solution to the thought experiment for this chapter.
Please keep in mind there may be other ways to achieve the desired result. Each answer explains why the answer is correct.
Assigning the help desk group a role that allows only viewing and restarting virtual machines, while the server admins group maintains full control over the migrated Azure machines, meets the access requirement. Using groups in this situation as the entities to which the roles are assigned further simplifies management and maintenance because it allows users to be moved from one level of access to another with little administrative effort.
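As a hedged sketch of what that could look like from the command line (the subscription ID and group object ID are placeholders, and the custom role name is an example), a restart-only custom role might be defined and assigned like this:

import json
import subprocess

sub = "11111111-1111-1111-1111-111111111111"   # placeholder subscription ID

# Custom role limited to viewing and restarting virtual machines.
role = {
    "Name": "Virtual Machine Restart Operator",
    "Description": "Can view and restart virtual machines.",
    "Actions": [
        "Microsoft.Compute/virtualMachines/read",
        "Microsoft.Compute/virtualMachines/restart/action",
    ],
    "NotActions": [],
    "AssignableScopes": [f"/subscriptions/{sub}"],
}

subprocess.run(
    ["az", "role", "definition", "create", "--role-definition", json.dumps(role)],
    check=True,
)

# Assign the new role to the help desk group; the server admins group would get
# a broader role (such as Contributor or Owner) at the same scope.
subprocess.run([
    "az", "role", "assignment", "create",
    "--assignee-object-id", "<help-desk-group-object-id>",   # placeholder group object ID
    "--assignee-principal-type", "Group",
    "--role", "Virtual Machine Restart Operator",
    "--scope", f"/subscriptions/{sub}",
], check=True)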