
Exam AZ-100: Microsoft Azure Infrastructure and Deployment

Prepare for Exam AZ-100: Microsoft Azure Infrastructure and Deployment. Free demo questions with answers and explanations.


Question 1

You have an Azure subscription named Sub1. Sub1 contains two resource groups named RG1 and RG2.

You need to ensure that Global Administrators can manage all resources contained in RG1 and RG2.

Solution: From the Azure Active Directory Properties blade, you enable Access management for Azure resources.

Does this solution meet the goal?

Explanation

This solution does meet the goal. The Access management for Azure resources property, located in the Azure Active Directory (Azure AD) tenant's settings, ensures that Azure AD users assigned to the Global Administrator role maintain full control over all subscription resources in the event that the identity is removed from Azure resource-level access lists. In keeping with least-privilege security, Microsoft recommends that you enable this property only when necessary.


Question 2

You have an Azure subscription named Sub1. Sub1 contains two resource groups named RG1 and RG2.

You need to ensure that Global Administrators can manage all resources contained in RG1 and RG2.

Solution: From the subscription's Access control (IAM) blade, you click Add role assignment.

Does this solution meet the goal?

Explanation

This solution does not meet the goal. Azure Active Directory (Azure AD) permissions are distinct from Azure resource permissions. In this case, you should enable the Access management for Azure resources property from the Azure AD tenant's Properties blade. When enabled, this property ensures that Azure AD users assigned to the Global Administrator role maintain full resource access even if their account is removed from resource-level access control lists (ACLs). The Add role assignment button adds an entry to that scope's ACL. For instance, you may need to add a new Azure administrator to the Owner role for a subscription, resource group, or resource.
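The Add role assignment action can also be scripted. The following is a minimal sketch using the AzureRM cmdlets that this guide's other code samples use; the sign-in name and subscription ID are placeholders:

```powershell
# Hypothetical values: replace the sign-in name and subscription ID with your own.
$subId = "00000000-0000-0000-0000-000000000000"

# Grant the Owner role at the subscription scope, which adds an
# entry to that scope's access control list (ACL).
New-AzureRmRoleAssignment -SignInName "admin@company.com" `
    -RoleDefinitionName "Owner" `
    -Scope "/subscriptions/$subId"
```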


Question 3

You have an Azure subscription named Sub1. Sub1 contains two resource groups named RG1 and RG2.

You need to ensure that Global Administrators can manage all resources contained in RG1 and RG2.

Solution: From the Azure Active Directory Roles and administrators blade, you modify the Global administrator role properties.

Does this solution meet the goal?

Explanation

This solution does not meet the goal. To ensure that Global Administrators maintain full access to Azure resources, you need to enable the Access management for Azure resources property from the Azure AD tenant's Properties blade.

From the Roles and administrators blade, you can view the Global Administrator role's description and manage its assignments, but the definitions of built-in Azure AD roles cannot be modified. None of these options accomplishes the scenario requirement.


Question 4

You create a Windows Server virtual machine (VM) in an Azure resource group named iaas-rg. You plan to generalize the operating system and capture an image for use in future deployments.

You need to ensure that other administrators make no changes to the virtual machine configuration until you complete the image capture process. You need to enact your solution as quickly as possible.

What should you do?

Explanation

Because time is of the essence, you should set a Read only lock at the resource group level. Resource locks in Azure allow you to prevent unwanted changes to Azure resources no matter what the user's privilege level is. For example, even subscription Owners would not be able to resize a VM if the resource has a Read only lock applied to it.

By setting the lock at the VM's parent resource group level, you ensure that other administrators can make no changes to the VM's entire configuration environment, including virtual network interfaces (NICs), virtual hard disks (VHDs), and so forth.
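As a sketch, the lock could be applied with the AzureRM module (the resource group name is taken from the scenario; the lock name is hypothetical):

```powershell
# Apply a ReadOnly lock to the entire resource group; even subscription
# Owners cannot modify locked resources until the lock is removed.
New-AzureRmResourceLock -LockName "capture-freeze" `
    -LockLevel ReadOnly `
    -ResourceGroupName "iaas-rg"

# After the image capture completes, remove the lock.
Remove-AzureRmResourceLock -LockName "capture-freeze" `
    -ResourceGroupName "iaas-rg" -Force
```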

We should not set a Delete lock at the VM level for two reasons. First, the Delete resource lock prevents only delete operations, so administrators would be able to undertake other management actions on the VM. Second, a resource-level lock does not affect related VM assets contained in the same resource group.

You should not edit the RBAC permissions at either the resource group or the VM level because the scenario states that you need to enact your solution as quickly as possible. Furthermore, by restricting other administrators' RBAC access, you potentially restrict them from undertaking actions on other VMs to which they should have management access.


Question 5

You manage a Windows Server virtual machine (VM) in Azure named prod-vm1. The VM uses managed disk storage, runs Windows Server 2012 R2, and resides in a resource group named prod-west-rg located in the West US region.

You need to move prod-vm1 to a resource group named prod-east-rg located in the East US region.

What should you do?

Explanation

You should back up prod-vm1, restore the VM to the prod-east-rg resource group, and then delete the original VM instance. Unfortunately, managed disks are one of the few Azure resources that cannot be moved between resource groups or subscriptions. Because the VM in Azure has so many dependencies, this managed disk restriction means that you are unable to move the entire VM without redeploying the disks and configuration into the new resource group.

You cannot move prod-vm1 to the prod-east-rg resource group by using the Move-AzureRmResource PowerShell cmdlet because the scenario states that the VM uses managed disk storage. If the VM used unmanaged disk storage, the Move-AzureRmResource command could move the VM to another resource group or even another Azure subscription.
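For reference, if prod-vm1 had used unmanaged disks, the move could have been sketched as follows (resource names are taken from the scenario; note that moving a resource changes its resource group, not its physical region):

```powershell
# Sketch only: Move-AzureRmResource applies to unmanaged-disk VMs here.
$vm = Get-AzureRmResource -ResourceName "prod-vm1" `
    -ResourceGroupName "prod-west-rg"

# Move the VM resource into the destination resource group.
Move-AzureRmResource -DestinationResourceGroupName "prod-east-rg" `
    -ResourceId $vm.ResourceId
```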

You cannot use AzCopy to copy prod-vm1 to the prod-east-rg resource group. AzCopy is a cross-platform command-line tool with which you can copy or move binary large object (blob) data between storage accounts. In this case, the VM in question uses managed disk storage. Moreover, AzCopy can migrate only virtual hard disks (VHDs), not VM configuration.

You cannot author an Azure Resource Manager (ARM) template that moves prod-vm1 to the prod-east-rg resource group because it uses managed disk storage.


Question 6

You deploy an application in a resource group named App-RG01 in your Azure subscription.

App-RG01 contains the following components:

* Two App Services, each with an SSL certificate

* A peered virtual network (VNet)

* Redis cache deployed in the VNet

* Standard Load Balancer

You need to move all resources in App-RG01 to a new resource group named App-RG02.

Choose all that apply:

Explanation

You need to delete the SSL certificate from each App Service before moving it to the new resource group. You cannot move an App Service that has an SSL certificate configured. To move it, you must delete the certificate, move the App Service, and then upload the certificate again.

You cannot move the Load Balancer within the same subscription. A Standard Load Balancer cannot be moved either within the same subscription or between subscriptions.

You need to disable the peer before moving the VNet. When you want to move a VNet with a peer configured, you need to disable it before moving the VNet. When you move a VNet, you need to move all of its dependent resources.

You can only move the VNet within the same subscription. When you want to move a VNet, you also need to move all of its dependent resources. In this case, you also need to move the Redis cache, which can be moved only within the same subscription. Because you want to move the resources from App-RG01 to App-RG02, which is in the same subscription, you can move the VNet with no problem.


Question 7

You deploy a Storage Account named store01 in your Azure subscription.

You grant some users the Contributor role on store01. The users work on an application that will use the storage account for storing information.

The users report that they are not able to list the storage account keys for connecting their application to the storage account.

You need to identify the root cause of the issue.

What is the most probable cause?

Explanation

The reason that the users are not able to list the storage account keys is that you configured a ReadOnly lock. Locks apply to any control-plane operation, that is, any request sent to https://management.azure.com. When you apply a ReadOnly lock, you can unintentionally block access to other operations. In this case, you are blocking access to the keys because the list keys operation is handled through a POST request and the returned keys can be used for write operations.

Configuring a CanNotDelete lock does not affect the list keys operation. The CanNotDelete lock prevents a user from deleting a resource but still allows users to modify and read resources in the resource group.

You do not need to grant your users the Owner or Storage Account Key Operator Service Role roles. When you configure a lock on a resource or resource group, it takes precedence over any assigned role, even Owner. To remove the lock, a user needs access to the Microsoft.Authorization/* or Microsoft.Authorization/locks/* actions. Only the Owner and User Access Administrator roles have sufficient privileges to manage locks. In this scenario, granting the Owner role to your users would enable them to remove the ReadOnly lock on their own, which defeats the purpose of the lock.
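As a sketch, a user with sufficient rights could find and remove the offending lock with the AzureRM module (the storage account name comes from the scenario; the resource group name is hypothetical):

```powershell
# Look up the lock scoped to the storage account, then remove it by ID.
$lock = Get-AzureRmResourceLock -ResourceGroupName "store-rg" `
    -ResourceName "store01" `
    -ResourceType "Microsoft.Storage/storageAccounts"

Remove-AzureRmResourceLock -LockId $lock.LockId -Force
```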


Question 8

You are the owner of your organization's Microsoft Azure subscription. You hire a new administrator to help you manage a virtual network that contains nine Windows Server virtual machines (VMs). The deployment is contained in a resource group named prod-rg.

You need to provide the administrator with least-privilege access only to the prod-rg resource group. The administrator should be allowed to manage all aspects of the Azure VMs. Your solution should minimize management effort.

What should you do?

Explanation

You should assign the administrator to the Contributor role at the resource group scope. The Contributor role-based access control (RBAC) role provides the new administrator with full read/write privileges at that scope. Inheritance ensures that the permissions cascade to the VMs within the prod-rg resource group and minimizes management overhead.
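A single role assignment at the resource group scope can be sketched with the AzureRM module (the sign-in name is hypothetical; the resource group name comes from the scenario):

```powershell
# Scope the Contributor role to the prod-rg resource group so that
# permissions cascade by inheritance to every VM inside it.
New-AzureRmRoleAssignment -SignInName "newadmin@company.com" `
    -RoleDefinitionName "Contributor" `
    -ResourceGroupName "prod-rg"
```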

You should not assign the administrator to the Virtual Machine Operator role at the virtual machine scope. The Virtual Machine Operator role does not grant the administrator full access to all resources contained on the virtual network. Moreover, making multiple RBAC assignments requires much more management effort than making a single role assignment at a parent scope.

You should not assign the Allowed virtual machine SKUs Azure Policy at the resource group scope. Doing so only restricts the administrator from selecting VM instance stock-keeping units (SKUs) that are defined in the Azure Policy. The scenario states only that the administrator should be able to fully manage existing VMs within the prod-rg resource group.

You should not assign a custom Azure Policy at the management group scope. Azure Policy is a governance feature that restricts the types of resources administrators can select in Azure Resource Manager. In other words, Azure Policy is fundamentally different from RBAC: RBAC limits the actions administrators can take in the first place, while Azure Policy constrains the properties of the resources they deploy.


Question 9

You determine that business units have Azure resources spread across different Azure resource groups.

You need to make sure that resources are assigned to their proper cost centers.

What should you do?

Explanation

You should create taxonomic tags and assign them at the resource level. Tags in Azure are key-value string pairs that administrators can associate with Azure resources for logical organization. Identifying cost centers is an excellent use case for tags. Because the business units own Azure resources spread across different Azure resource groups, you have to assign cost center tags at the resource level. Wherever possible, it is best practice to organize related resources into the same resource group, because you can then bulk-assign taxonomic tags to all contained resources in a single operation.

You should not create taxonomic tags and assign them at the resource group level. The scenario states that business units have resources spread across different resource groups. If you assign a particular cost center tag at the resource group level, then you likely will mis-tag contained resources owned by another business unit.

You should not deploy the Enforce tag and its value Azure Policy. Doing so enforces the presence of a single specified tag and value pair. In this case, the scenario states that the organization has more than one cost center and therefore needs more than one taxonomic tag.

You should not deploy the Enforce tag and its value on resource groups Azure Policy for the same reasons. The company has more than one cost center, and the business units have resources spread across multiple resource groups.


Question 10

You are the cloud operations lead for your company's Microsoft Azure subscription. Your team consists of eight administrators who co-manage all Azure-deployed resources.

The corporate governance team mandates that all future Azure resources be deployed only within certain regions.

You need to meet the compliance requirement.

Which Azure feature should you use?

Explanation

To meet the new compliance requirement, you should deploy Azure Policy. Azure Policy is a governance feature that allows you to enforce requirements at several scopes, including the management group, subscription, and resource group. For example, you can require that all deployments are constrained to particular regions, or that only certain virtual machine (VM) sizes are allowed.

You should not use RBAC. RBAC focuses on user actions at different scopes. For example, a user may be restricted with RBAC from creating VMs in any Azure region. By contrast, Azure Policy customizes the properties a user can choose during resource deployment.

You should not use taxonomic tags. These key-value pairs are useful for organizing Azure resources (for instance, to identify different cost centers). However, tags have no authorization capability on their own.

You should not use Activity Log Analytics. This management solution aggregates Azure activity log data in a Log Analytics workspace. Specifically, the activity log records control plane activities such as resource creation, but does not enforce authorization.


Question 11

You use taxonomic tags to logically organize resources and to make billing reporting easier.

You use Azure PowerShell to append an additional tag on a storage account named corpstorage99. The code is as follows:

$r = Get-AzureRmResource -ResourceName "corpstorage99" -ResourceGroupName "prod-rg"

Set-AzureRmResource -Tag @{Dept="IT"} -ResourceId $r.ResourceId -Force

The code returns unexpected results.

You need to append the additional tag as quickly as possible.

What should you do?

Explanation

You should call the Add() method on the storage account resource as shown in the second line of this refactored Azure PowerShell code:

$r = Get-AzureRmResource -ResourceName "corpstorage99" -ResourceGroupName "prod-rg"

$r.Tags.Add("Dept", "IT")

Set-AzureRmResource -Tag $r.Tags -ResourceId $r.ResourceId -Force

Unless you call the Add() method, the Set-AzureRmResource cmdlet will overwrite any existing taxonomic tags on the resource. The Add() method preserves existing tags and adds one or more tags to the resource tag list.

You should not deploy the tag by using an Azure Resource Manager template. Doing so is unnecessary in this case because the Azure PowerShell code is mostly complete as-is. Furthermore, you must find the solution as quickly as possible.

You should not assign the Enforce tag and its value Azure Policy to the resource group. Azure Policy is a governance feature that helps businesses enforce compliance in resource creation. In this case, the solution involves too much administrative overhead to be a viable option. Moreover, the scenario makes no mention of the need for governance policy in specific terms.

You should not refactor the code by using the Azure Command-Line Interface (CLI). Either Azure PowerShell or the Azure CLI can be used to implement this solution, but it makes no sense to switch tools given that you have already completed most of the code in PowerShell.


Question 12

Your company has an Azure Subscription with several resources deployed. The subscription is managed by a Cloud Service Provider.

The accounting department is currently granted the billing reader role, so they are able to see cost-related information. They need to get a better understanding of the costs so they can assign them to the correct cost center.

You need to provide cost center information. Your solution should minimize the administrative effort.

What two actions should you perform? Each correct answer presents part of the solution.

Explanation

You should create a tag named CostCenter and assign it to each resource group. Creating a tag and assigning it to each resource group allows you to easily identify the cost center associated with each resource group. When you associate a tag with a resource or resource group, you need to provide a value to that tag. You can instruct the accounting department to use the Azure Cost Management tool to review the costs associated with each cost center by filtering by the newly created tag.

You should also create a tag named CostCenter and assign it to each resource. If you apply a tag to a resource group, that tag is not inherited by the resources in the resource group. You need to manually configure the tag for each resource that you want to include in the cost center. You can automate this action by using a PowerShell or Azure CLI script.
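The per-resource tagging can be automated. The following is a minimal PowerShell sketch using the AzureRM cmdlets this guide's code already uses; the resource group name comes from the scenario, and the cost center value is a placeholder:

```powershell
# Apply a CostCenter tag to every resource in the group, preserving
# any tags each resource already carries.
$group = "App-RG01"
foreach ($r in Get-AzureRmResource -ResourceGroupName $group) {
    $tags = $r.Tags
    if ($null -eq $tags) { $tags = @{} }      # resource had no tags yet
    $tags["CostCenter"] = "Finance"           # sets or overwrites only this key
    Set-AzureRmResource -Tag $tags -ResourceId $r.ResourceId -Force
}
```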

You should not instruct the accounting department to use either the Cost Analysis blade in the subscription panel or the Azure Account Center. Because your subscription is managed by a Cloud Service Provider, you can get that information from your provider. You can also get this information by using the Azure Cost Management tool.


Question 13

Your company requires all resources deployed in Azure to be assigned to a cost center.

You use a tag named CostCenter to assign each resource to the correct cost center. This tag has a set of valid values assigned.

Some of the resources deployed in your subscription already have a value assigned to the CostCenter tag.

You decide to deploy a subscription policy to verify that all resources in the subscription have a valid value assigned.

Choose all that apply:

Explanation

The Deny effect is not evaluated first. When a policy is evaluated, the Disabled effect is always evaluated first to decide whether the rule should be evaluated at all. The correct order of evaluation of the policy effects is: Disabled, Append, Deny, and Audit.

The Append effect does not modify the value of an existing field in a resource. The Append effect adds additional fields during the creation or update of a resource. If the field already exists in the resource and the values in the resource and the policy are different, then the policy acts as a deny and rejects the request.

The Audit effect will create a warning event in the activity log for non-compliant resources. The audit effect is evaluated last before the Resource Provider handles a create or update request. You typically use the audit effect when you want to track non-compliant resources.

The DeployIfNotExists effect is only evaluated if the request executed by the Resource Provider returns a success status code. Once the effect has been evaluated, it is triggered if the resource does not exist or the resource defined by ExistenceCondition is evaluated to false.


Question 14

You are the lead architect for your company's Microsoft Azure infrastructure.

To maintain corporate compliance certifications, you need to ensure that any virtual machines (VMs) are created only in approved Azure regions.

What should you do?

Explanation

You should define and deploy a custom Azure Policy by using JSON and Azure PowerShell. Azure Resource Manager includes a number of predefined policy templates that cover various governance use cases. However, you can also build a custom template and upload it to Azure to make it available in your subscriptions.
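A minimal sketch of such a custom policy follows: the JSON rule denies any deployment outside approved regions, and Azure PowerShell (AzureRM module) registers and assigns it. The policy name, region list, and subscription ID are hypothetical:

```powershell
# Hypothetical policy rule: deny resources created outside approved regions.
$rule = @'
{
  "if": {
    "not": { "field": "location", "in": [ "eastus", "westus2" ] }
  },
  "then": { "effect": "deny" }
}
'@

# Register the custom definition, then assign it at the subscription scope.
$definition = New-AzureRmPolicyDefinition -Name "approved-regions" `
    -DisplayName "Approved regions only" -Policy $rule

New-AzureRmPolicyAssignment -Name "approved-regions-sub" `
    -PolicyDefinition $definition `
    -Scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```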

You should not define and deploy an Azure Automation DSC configuration. Azure Automation DSC prevents configuration drift on newly deployed or existing Azure or on-premises nodes. This scenario requires that you enforce compliance on VM locations at deployment time.

You should not deploy a management group. A management group is a scope level above the Azure subscription that allows you to assign Azure Policy that affects multiple subscriptions simultaneously. In your case, you need to define a policy in the first place, and then you can optionally scope the new custom policy to a management group.

You should not enforce conditional access policy on Azure Active Directory. This feature affects user accounts, not VMs deployed in Azure. Conditional access allows you to specify requirements for your users to access Azure AD-protected apps. For instance, you might require that users can only authenticate to an app if they are connecting from a corporate IP address.


Question 15

Your company is developing a line-of-business (LOB) application that uses Azure IoT Hub to gather information from Internet of Things (IoT) devices.

The LOB application uses the IoT Hub Service SDK to read device telemetry from the IoT Hub.

You need to monitor device telemetry and be able to configure alerts based on device telemetry values. Your solution should require the least administrative effort.

What should you do?

Explanation

You should enable Azure Monitor resource diagnostics logs on the IoT Hub. Resource-level diagnostics logs allow you to monitor events that happen inside the resource. Each resource type provides its own event categories. For the IoT Hub, the DeviceTelemetry event category fits your needs.

You should not use Azure Activity Logs. This service provides information about the actions performed on the resources in a subscription while using Resource Manager. Creating an IoT Hub, listing keys from a storage account, or starting a virtual machine are some examples of the type of activity logging information provided by Activity Logs.

You should not use Azure Resource Health. This service provides information about the high-level health status of the resource or if there is a regional outage. You would use this service to know if the IoT Hub is running or not.

You should not use Azure Application Insights with the LOB application. Application Insights provides information about the application performance while the application is running.


Question 16

Your company has a line-of-business (LOB) application that uses Azure SQL Database for storing transactional information. Your company also has deployed System Center Service Manager.

You need to configure an alert when the database reaches 70% CPU usage. When this alert fires, you need to notify several users by email and by SMS. You also need to automatically create a ticket in the ITSM system. Your solution should require the minimum administrative effort.

Which two actions should you perform? Each correct answer presents part of the solution.

Explanation

You should configure an IT Service Management Connector (ITSMC). You need to configure an ITSM connector to connect Azure with your System Center Service Manager service. Using this connector, you can create work items in the ITSM system based on alerts.

You should also configure one Action Group with three actions: one for email notification, one for SMS notification, and one for ITSM ticket creation. An action group defines the actions that run once the alert fires. You can configure several types of actions, such as Azure App Push, Email, SMS, Voice, Runbooks, Logic Apps, ITSM, or Webhooks, and you can add several actions to the same action group.

You should not configure two Action Groups. Although you can create two separate action groups with different actions and attach them to the same alert, this would require more administrative effort.

You should not configure System Center Service Manager with Azure Automation. You could configure an Azure Automation Hybrid Worker for running Azure Automation runbooks to create tasks in System Center Service Manager, but this solution would require much more effort.


Question 17

Your company has an Azure subscription that hosts all services that the company uses in production. The Finance department notices that the bills related to Azure are increasing. The company wants to keep the costs of this Azure subscription under control.

After reviewing the cost analysis reports, you realize that there are several virtual machines that are consuming more resources than expected.

You need to inform management when the spend for these resources is unusual.

What should you do?

Explanation

You should configure a report schedule in the Cost Management portal. The Cost Management portal allows you to perform detailed cost analysis of your resources. When running a cost report, you can filter for the virtual machines that are consuming more resources than expected. You can then schedule that report to run periodically and send it to a list of recipients, or save it as a JSON or CSV file in a storage account. The report supports three alert levels (green, yellow, and red); you need to set the cost thresholds for the yellow and red levels.

You should not configure a billing alert in the subscription page of the account portal. This allows you to create alerts based on billing totals or monetary credits that apply to the entire subscription. You cannot configure alerts for specific resources in the subscription.

You should not configure the Power BI content pack for Azure Enterprise. This option allows you to connect your Enterprise Agreement subscription with Power BI for cost analysis. You can create alerts in Power BI if you are using a Power BI Pro license. You also need an Enterprise Agreement subscription.

You should not use the costs-by-service blade in the cost analysis section of the subscription. You can use this section for reviewing the cost analysis per service, but you cannot configure any alert for the service consumption on this page.


Question 18

Your company has a line-of-business (LOB) application that uses Azure SQL Database for storing transactional information. The LOB application also uses Windows and Linux virtual machines for business and presentation application layers.

Some users are reporting errors in the application.

You need to be alerted every time an exception arises in any part of the application. Your solution should require minimal administrative effort.

Which two actions should you perform? Each correct answer presents part of the solution.

Explanation

You should create an alert using a search query that looks for exceptions in Windows servers. You need to use the Log Analytics or Application Insights resource types and the Log signal type. Then you can write a search query that gets all Windows Event messages that contain the word "Exception".

You should also create an alert using a search query that looks for exceptions in Linux servers. You use the same configuration as for Windows Events, but you will use Syslog messages.

You should not create an alert using a search query that looks for exceptions in business and presentation layer virtual machines. When creating an alert, you can only select one target type. This means that you can only get information from Windows Events or Syslog, so you will not be able to query both data sources at the same time.

You should not create an alert using a search query that looks for exceptions in application layer servers or in the business layer. You need to query specific log data sources. Application and business layers are concepts from an application design pattern; each layer can be composed of Windows virtual machines, Linux virtual machines, or other Azure services.


Question 19

You have a Microsoft Azure subscription that has 8 virtual machines (VMs).

You need to configure monitoring such that when either CPU usage or available memory reaches a threshold value, Azure both notifies administrators via email and creates a new issue in your corporate issue tracker.

What is the minimum number of Azure alerts and action groups you need to meet these requirements?

Explanation

You should create one alert and one action group. A single alert can contain more than one metric-based condition. By contrast, if you needed alert conditions based on Activity Log events, as of this writing a single alert can contain no more than one Activity Log condition.

A single action group can contain more than one notification or remediation step. An action group is a sequence of actions that Azure takes in response to an alert condition. The Azure ITSM connector links Azure Monitor to your Information Technology Service Management (ITSM) solution to allow Azure to create issue tickets automatically.


Question 20

You have 20 Azure subscriptions. All subscriptions are linked to the same Azure Active Directory (Azure AD) tenant named company.com.

You plan to generate detailed usage and spend reports across all Azure subscriptions.

You need to incorporate resource optimization suggestions into your reports.

What should you do?

Explanation

You should use Cloudyn reports. Cloudyn is a software-as-a-service (SaaS) product integrated into Azure that enables you to track resource expenditures. Cloudyn also offers in-depth guidance to help you reduce your monthly spend. Cloudyn is an extension of Azure Cost Management, Microsoft's native cost management solution in Azure.

You should not run interactive queries in Azure Log Analytics because the scenario does not state that Azure resources are configured to write their diagnostics data to an Azure Log Analytics workspace. If this were so, then you could indeed use KQL to generate resource cost data. However, Azure Log Analytics does not offer optimization suggestions.

You should not design metrics charts in Azure Monitor because the metrics charting capability does not support ad-hoc queries. Furthermore, you would use Azure Cost Management or Azure Advisor, not Azure Monitor, to retrieve cost optimization advice directly from the Azure platform.

You should not create a Stream Analytics job in the Azure portal. Azure Stream Analytics is an event-processing engine that uses a Structured Query Language (SQL)-like syntax to filter and process data extracted from Azure Event Hubs or IoT Hubs.


Question 21

You have an Azure resource group named RG1. RG1 contains a Windows Server virtual machine (VM) named VM1.

You plan to use Azure Monitor to configure an alert rule for VM1.

You need to configure an alert that notifies you whenever the VM is restarted.

What should you do?

Explanation

You should define an Activity Log alert condition. The Azure Activity Log tracks all control-plane operations that occur within your subscriptions. You can define Azure alert conditions that fire when a particular Azure Activity log event transpires. In this case, you would add the Restart Virtual Machine signal to your alert condition list.

You should not define a metric-based alert condition. Metric-based alerts are triggered when diagnostic measurement data exceeds a given threshold. For instance, you might trigger an alert when the VM's average CPU consumption exceeds 75 percent over a 10-minute period.

You should not define an action group with either a webhook or an IT Service Management (ITSM) action type. Action groups define how Azure responds whenever an alert condition is triggered. In this case you need only a single notification action group to inform you whenever a VM is restarted.


Question 22

You have a website hosted in Azure App Services that is used globally within your company. The website contains a mixture of dynamic and static content.

You are asked to put a Content Delivery Network (CDN) in place to optimize the experience for the end users.

You need to configure the CDN and web app to optimize both dynamic and static content where possible.

What two actions should you perform? Each correct answer presents part of the solution.

Explanation

You should implement Dynamic Site Acceleration (DSA) on the CDN. DSA adds support for route optimization, TCP optimizations, object prefetch, and adaptive image compression, all of which provide improved performance for dynamically generated content.

You should also implement custom caching rules on the CDN to distinguish static content from dynamic content. DSA does not cache content, because the content it accelerates is by nature dynamic. If you implement DSA on the CDN, you therefore need custom caching rules to identify the static content, which can be cached within the CDN.

You should not implement general web delivery on the CDN. This would cache the static web content but would not improve delivery of the dynamically generated content.

You should not implement CORS on the website. This allows scripting elements such as JavaScript to interact with the backend platforms in your environment.


Question 23

You are configuring the XML file that specifies the data paths to use. This file will be supplied to the export job to control which data is exported. Your file currently looks like this:

<?xml version="1.0" encoding="utf-8"?>

<BlobList>

<BlobPath>pictures/animals/kangaroo.jpg</BlobPath>

<BlobPathPrefix>/vhds/</BlobPathPrefix>

<BlobPathPrefix>/movies/dramas</BlobPathPrefix>

</BlobList>

What will be exported based on the current XML file?

Choose all that apply:

Explanation

The Azure Import/Export service provides an executable, WAImportExport.exe, that can be used to configure import and export jobs and that accepts an XML file as input. Given the file supplied, the behavior is as follows.

The <BlobPath> option is used to specify the exact path to a blob, in this case kangaroo.jpg. This file will be exported.

The <BlobPathPrefix> option covers two different scenarios. /vhds/ has a trailing slash, which means that everything inside the vhds folder will be exported, without exclusion: all of its files and folders.

The /movies/dramas path does not have a trailing slash. This syntax means that everything in the movies folder whose name starts with the prefix dramas will be exported, whether file or folder.
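These matching rules can be sketched in a few lines of Python. This is an illustration of the path semantics only, not the actual WAImportExport.exe implementation; the function and variable names are hypothetical.

```python
def is_exported(blob_name, blob_paths, blob_path_prefixes):
    """Illustrative sketch of <BlobPath>/<BlobPathPrefix> matching.

    blob_paths         -- exact blob names (<BlobPath> entries)
    blob_path_prefixes -- prefix strings (<BlobPathPrefix> entries);
                          a trailing slash matches everything under that
                          folder, while a bare prefix matches any blob
                          whose name starts with it.
    """
    if blob_name in blob_paths:
        return True
    return any(blob_name.startswith(p) for p in blob_path_prefixes)

paths = ["pictures/animals/kangaroo.jpg"]
prefixes = ["/vhds/", "/movies/dramas"]

print(is_exported("pictures/animals/kangaroo.jpg", paths, prefixes))  # True
print(is_exported("/vhds/disk0.vhd", paths, prefixes))                # True
print(is_exported("/movies/dramas-2018/film.mp4", paths, prefixes))   # True
print(is_exported("/movies/comedies/film.mp4", paths, prefixes))      # False
```

Note that both prefix forms reduce to a starts-with test; the trailing slash simply constrains matches to the contents of that folder.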


Question 24

Your company has developed a web application that serves dynamic and static content to users. The application is deployed in several Azure Web Apps in different Azure regions to achieve the best performance.

The Support department for the web application receives complaints from users about poor performance of the application.

You review the performance of all components of the application and determine that you need to deploy a Content Delivery Network (CDN).

You need to configure a CDN to achieve the best performance.

What are two ways that you can configure the CDN? Each correct answer presents a complete solution.

Explanation

You should configure a single Azure CDN Standard from Akamai or Azure CDN Standard from Verizon endpoint, configure dynamic site acceleration (DSA), and configure caching rules. Dynamic site acceleration improves the performance when delivering dynamic content to end users. For static content, you can create cache rules only for the static content. Enabling caching rules for dynamic content may negatively impact dynamic content. Alternatively, you could create two different CDN endpoints, one endpoint optimized with DSA and another endpoint optimized for static content.

You should not configure a single Azure CDN Premium from Verizon endpoint, configure dynamic site acceleration, and configure caching rules. Although you can use this hybrid approach with Azure CDN Premium from Verizon endpoints, caching is configured using a rules engine instead of caching rules.

You should not configure a single Azure CDN Standard from Microsoft endpoint, configure dynamic site acceleration, and configure caching rules. This type of CDN endpoint does not offer dynamic site acceleration features.


Question 25

Your company has a line-of-business (LOB) application deployed in Azure. This LOB application creates a large amount of information that is stored in a storage account.

To optimize the costs for storage, the LOB application changes the storage tier from hot to archive for those blobs that will not be needed anymore.

You are asked to retrieve the information that the LOB application archived. You decide to create an Azure Export job to get the archived information.

When creating the export job, you are not able to see the storage account in the list of storage accounts where the data resides.

Why are you not able to see the storage account in the list?

Explanation

You cannot see the storage account in the list because you are using a General Purpose V2 storage account. Only this kind of storage account offers three different storage tiers: hot, cool, and archive. General Purpose V2 storage accounts are not supported by the Azure Import/Export service.

General Purpose V1 and Blob storage accounts are supported sources of information when configuring an export job.

You are not using Azure Files. The LOB application changes the storage tier from hot to archive. This feature is only available on General Purpose V2 storage accounts. Azure Files is not a valid origin of information when configuring an Export job.


Question 26

Your on-premises datacenter has a mixture of servers running Windows Server 2012 R2 Datacenter edition and Windows Server 2016 Datacenter edition.

You need to configure Azure Sync Service between the Azure Files service and the servers in the datacenter.

Which two activities must you complete to ensure that the service will operate successfully on your servers? Each correct answer presents part of the solution.

Explanation

To enable Azure File Sync, you must disable Internet Explorer Enhanced Security Configuration for all admin and user accounts.

Azure File Sync requires PowerShell version 5.1 or later. Windows Server 2016 includes PowerShell 5.1 by default, but on Windows Server 2012 R2 servers it may have to be installed manually.

The Windows Identity Foundation and Azure Active Directory Connect do not need to be installed on the file servers in the environment. Azure Active Directory Connect is used to synchronize on-premises identities to Azure Active Directory (Azure AD), so it is needed in the overall environment, but not on the file servers.


Question 27

You are configuring the Azure File Sync service to synchronize data from your Windows Server failover cluster to Azure Files. Your Windows Server failover cluster is currently configured to support the Scale-Out file server for application data operational mode. The Failover Cluster is set up with data deduplication. The server endpoint is located on the system drive.

The Azure File Sync service fails to operate on the failover cluster.

You need to rectify the situation.

What two actions should you perform? Each correct answer presents part of the solution.

Explanation

With the Azure File Sync service, only certain cluster configuration types are supported. The File Server for General Use type must be configured on the Windows failover cluster. Scale-Out file server is not supported.

The deduplication feature of the Windows clustered file server must also not be enabled, because it is incompatible with the Azure File Sync service.

Cluster Shared Volumes must not be enabled, because these are also incompatible with the Azure File Sync service.

The server endpoint being mounted on the system volume is not a problem in this scenario. This would only be a problem if cloud tiering were a requirement or if rapid namespace restore were needed. Neither is relevant to the question, so moving the endpoint from the system volume would have no effect here.


Question 28

Your company has a file server that stores important information. The operating system for this file server is Windows Server 2012 R2 Standard Edition. The information is stored in a separate volume from the system volume. To improve security, the volume that stores corporate information is encrypted using BitLocker.

Your company wants to centralize the storage of information and improve the flexibility for accessing the information. You decide to use Azure File Sync for achieving this goal.

You configure an Azure File share and the appropriate firewall rules for allowing access from your company offices.

After configuring the Sync group, you receive an error about the cloud endpoint creation.

What is the most likely cause of the error?

Explanation

You are getting the error while creating the cloud endpoint because you are using firewall rules in the storage account. This is not a supported configuration. You cannot use firewall rules or virtual network restrictions with the storage account that will host the data synced from your on-premises file servers.

You are not getting the error because you forgot to register the file server with Azure File Sync. You can download and install the Azure Storage Sync agent on the file server after configuring the cloud endpoint. Once you install the agent, you need to register the file server with the Azure File Sync service before you can create a server endpoint.

You are not getting the error because you are using Windows Server 2012 R2 Standard Edition. Windows Server 2012 R2 Standard and Datacenter editions as well as Windows Server 2016 Standard and Datacenter editions are supported operating systems for working with the Azure File Sync service.

You are not getting the error because you are trying to sync an encrypted volume. Using encrypted disks with BitLocker is a supported configuration.


Question 29

Your company deploys an Azure File Sync service. This service syncs with an on-premises file server located in your office. The server stores the information synced with Azure in a volume separate from the system volume. The file server has an antivirus solution installed.

You notice that some infrequently accessed files are downloaded to the file server. After monitoring file system access, you determine that no user is accessing the affected files.

You need to troubleshoot what is happening with those files.

What are two ways of meeting your goal? Each correct answer presents a complete solution.

Explanation

You should review the Applications and Services Logs\Microsoft\FileSync\Agent event logs. These diagnostic and operational event logs gather information about sync, recall, and tiering issues. Because infrequently accessed files are being downloaded to the file server, cloud tiering must be enabled for this server. When cloud tiering is enabled, the Azure File Sync file system filter replaces the actual file with a pointer to the file in the Azure file share where all the data is stored. When a user accesses a tiered file, the file is transparently downloaded to the server. This issue can occur when an antivirus solution is not aware of the offline NTFS attribute on the file. This attribute is set so that third-party applications can identify tiered files.

You should not run the fltmc command at an elevated command prompt. You use the fltmc command to list all filesystem filters loaded in the file server. If the StorageSync.sys and StorageSyncGuard.sys file system filter drivers are not loaded, tiered files are not recalled and downloaded again to the file server. This is not the observed behavior.

You should not run the Test-NetConnection -ComputerName storage-account-name.file.core.windows.net -Port 443 PowerShell cmdlet. You use the Test-NetConnection cmdlet to check the connectivity with a computer. If you use the Fully Qualified Domain Name, you are also checking the DNS resolution. You can check the connectivity to a TCP port if you use the -Port parameter.

You should not run the Set-AzureRmStorageSyncServerEndpoint -Id serverendpointid -CloudTiering true -VolumeFreeSpacePercent 60 PowerShell cmdlet. You use this cmdlet to enable cloud tiering on a server endpoint. You have already enabled cloud tiering on this server.


Question 30

You have a Windows Server 2012 R2 file server deployed in your on-premises infrastructure. You want to deploy a file server hybrid solution. You decide to use Azure File Sync.

Choose all that apply:

Explanation

You cannot use cloud tiering with server endpoints on the system volume. You can create endpoints on the system volume, but those files will not be tiered. This means that all files in the server endpoint will be synced with the configured cloud endpoint.

The data tiering free space policy does not apply to each server endpoint individually. You can configure a policy for each server endpoint individually, but the most restrictive free space policy applies to the entire volume. This means that if you configure two server endpoints on the same volume with two distinct policies, for example 20% and 40%, the 40% free space policy will be applied. The free space tiering policy forces the sync system to start tiering, or moving data to the cloud, when the free space limit is reached. When the sync system tiers a file, it creates a pointer in the file system, and the actual data is moved to Azure. You can still list the tiered file, but the real data is no longer stored on your local disk.

For tiered files, media file types can be partially downloaded as needed. When you try to access a tiered file, the entire file is automatically and transparently downloaded. The exception is file types that can be read before the data has been completely downloaded, like media files or zip files.

The free space policy takes precedence over any other policy. You can configure date and free space policies on the same server endpoint, but the free space policy will always have precedence over the date policy. This means that if you configure a 60-day date policy and a 50% free space policy for the same server endpoint, and the volume reaches 50% of free space, the sync system will tier the files that have been unmodified for the longest time (the coolest files), even if they were modified fewer than 60 days ago.

You cannot sync files in a mount point inside a server endpoint. You can use a mount point as a server endpoint, but you cannot have mount points inside a server endpoint. In this case, all files in the server endpoint will be synced except those files stored inside each mount point in the endpoint.
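The interaction between the free space and date policies described above can be sketched as a small Python model. This is a simplification for illustration only; the function names, and the assumption that the sync engine tiers every file coolest-first when the free-space target is breached, are assumptions rather than Azure File Sync's actual algorithm.

```python
from datetime import datetime, timedelta

def effective_free_space_policy(endpoint_policies):
    # The most restrictive (largest) free-space percentage configured
    # on any server endpoint applies to the entire volume.
    return max(endpoint_policies)

def files_to_tier(files, now, date_policy_days, volume_free_pct, target_free_pct):
    """Pick files to tier; 'files' is a list of (name, last_modified).

    The free-space policy takes precedence: if the volume is below the
    free-space target, the coolest (least recently modified) files are
    tiered first, even when they fall inside the date-policy window.
    """
    if volume_free_pct < target_free_pct:
        return [name for name, mtime in sorted(files, key=lambda f: f[1])]
    cutoff = now - timedelta(days=date_policy_days)
    return [name for name, mtime in files if mtime < cutoff]

# Two endpoints on one volume with 20% and 40% policies: 40% wins.
print(effective_free_space_policy([20, 40]))  # 40

now = datetime(2019, 1, 1)
files = [("budget.xlsx", now - timedelta(days=90)),
         ("notes.txt", now - timedelta(days=10))]
# Volume at 40% free against a 50% target: free-space policy wins,
# coolest files go first, regardless of the 60-day date policy.
print(files_to_tier(files, now, 60, 40, 50))  # ['budget.xlsx', 'notes.txt']
# Volume at 60% free: only the 60-day date policy applies.
print(files_to_tier(files, now, 60, 60, 50))  # ['budget.xlsx']
```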


Question 31

You have several Windows Server 2012 R2 file servers deployed in your on-premises infrastructure. You want to deploy a file server hybrid solution. You decide to use Azure File Sync with some of your file servers.

You configure two Azure File Storage accounts for this purpose. You are configuring the Azure File Sync.

Choose all that apply:

Explanation

You cannot use more than one Azure file share in the same sync group. An Azure file share is represented by a cloud endpoint. You can only have one cloud endpoint per sync group. You can add as many server endpoints as you want. You should think of sync groups as the replication hub in the sync process.

A server can sync with multiple sync groups. You can configure as many server endpoints as you need in a single server and each endpoint can be synced with different sync groups. This means that you can have the same server synced with a different sync group as long as you use different server endpoints. Remember that you cannot use NAS or mounted shares as server endpoints, and tiering will be applied only to those endpoints that are not stored in a system volume.

Changes made directly on the file share can take up to 24 hours to be synced. You can make changes directly on an Azure file share that is a member of a sync group, but you should bear in mind that the change will not be effective until it is discovered by the Azure File Sync change detection job, which runs every 24 hours. This means that, in the worst case, a change made directly on the Azure file share can take up to 24 hours to be synced.

Pre-seeding is not the best approach for doing the first synchronization. When you onboard with Azure File Sync you typically prefer to have a zero-downtime synchronization. You can achieve this by using only one server that hosts the dataset that you will sync and perform the first synchronization with this server. Once this first synchronization is done, you can add any additional server to the sync group. If you use pre-seeding, you need to manually copy all the datasets to the Azure file share using a mechanism like SMB copy or Robocopy. If you decide to use this method, you need to ensure that you can afford the downtime and that there will not be any changes to the dataset.


Question 32

You have an Azure subscription that contains a storage account.

Your on-premises environment includes six file servers that host a total of 12 file shares.

You need to meet the following technical requirements:

* Requirement 1: Reduce the storage footprint of the on-premises file servers.

* Requirement 2: Provide fault tolerance for the on-premises file shares.

* Requirement 3: Secure the hybrid cloud connection with IPSec.

You plan to configure Azure File Sync.

Choose all that apply:

Explanation

Azure File Sync meets technical requirement 1. Azure File Sync reduces the storage footprint of the on-premises file servers. Cloud tiering is an Azure File Sync feature that generates a heat map of on-premises file share data and archives infrequently accessed files to the cloud endpoint, thus freeing up local disk storage on your file servers.

Azure File Sync meets technical requirement 2. Azure File Sync provides fault tolerance for the on-premises file shares. If a file server goes offline, you can easily restore its file shares to another file server simply by reconfiguring your Azure File Sync sync group.

Azure File Sync does not meet technical requirement 3. Azure File Sync does not secure the hybrid cloud connection with IPSec. Instead, Azure File Sync communicates over Transmission Control Protocol (TCP) 443 using Secure Sockets Layer (SSL). By contrast, IPSec is used with Azure site-to-site virtual private network (VPN) connections.


Question 33

You have a Microsoft Azure subscription that contains a storage account.

Your on-premises environment includes six file servers that host a total of 12 file shares. These file shares are consolidated in a Distributed File System Replication (DFS-R) configuration.

You plan to deploy Azure File Sync to centralize the distributed file shares in Azure and to enable cloud tiering. You configure Azure File Sync as follows:

* Two Storage Sync Service instances with 6 file servers in each instance

* Four Sync Groups

* Two cloud endpoints

Choose all that apply:

Explanation

All servers in the topology cannot sync with each other because your topology includes two Storage Sync Service instances. Only servers registered within a single Storage Sync Service instance and Sync Group can sync with each other.

The topology requires six registered servers. You need to install the Azure File Sync agent on every local file server and register each with its respective Sync Group.

You do not need to decommission the DFS-R environment before enabling Azure File Sync because Azure File Sync supports DFS-R environments.
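The constraint that servers can only sync with each other through a shared Sync Group inside a single Storage Sync Service instance can be expressed as a small model. The server and group names below are hypothetical, chosen purely for illustration.

```python
# server -> (storage_sync_service, set_of_sync_groups) — hypothetical topology
servers = {
    "FS01": ("SyncService1", {"GroupA"}),
    "FS02": ("SyncService1", {"GroupA", "GroupB"}),
    "FS07": ("SyncService2", {"GroupC"}),
}

def can_sync(a, b):
    svc_a, groups_a = servers[a]
    svc_b, groups_b = servers[b]
    # Servers sync only through a shared Sync Group, and a Sync Group
    # lives inside exactly one Storage Sync Service instance.
    return svc_a == svc_b and bool(groups_a & groups_b)

print(can_sync("FS01", "FS02"))  # True  (same service, shared group)
print(can_sync("FS01", "FS07"))  # False (different Storage Sync Services)
```

In the question's topology, the two Storage Sync Service instances partition the six servers into groups that can never sync across the boundary, which is why all twelve shares cannot converge on a single namespace.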


Question 34

You are asked to configure an Azure storage account to be accessible from only one specific Virtual Network in an Azure Virtual Network (VNet). It must not be accessible from any other network or region in use across your company's Azure subscription.

You need to implement this requirement.

What should you do?

Explanation

You should implement a VNet service endpoint. Service endpoints are used to limit the network access to a specific set of resources. To meet the requirement, you can implement a storage endpoint on an Azure Resource Manager deployed storage account to restrict the access to a specific VNet and exclude access from all other resources including the Internet and on-premises connected resources.

You should not add a network security group. A network security group limits access to the resources within a VNet by implementing rules that filter traffic by source and destination IP address, port, and protocol. It cannot restrict access to a storage account by itself.

You should not deploy Azure Traffic Manager. Traffic Manager is a DNS-based traffic routing service that directs client requests to the most appropriate service endpoint. It cannot restrict access to a storage account by itself.

You should not activate the Secure transfer required option. This feature forces all the traffic into and out of the storage account to be secured over HTTPS instead of allowing fallback to HTTP.


Question 35

You manage several Windows Server virtual machines (VMs) located in a virtual network (VNet) named prod-vnet. These VMs are used internally by development staff and are not accessible from the Internet.

You need to provide your development staff with secure access to object and table data to support their Azure-based applications. The storage account data reside in Azure, but must not be exposed to the Internet.

What two actions should you perform? Each correct answer presents part of the solution.

Explanation

You should deploy a general-purpose storage account, and then configure a service endpoint. A general-purpose storage account consists of four services, two of which are called for in the scenario:

* Binary large object (blob) object storage

* Table (key-value pair) storage

* Queue (messaging) storage

* File (Server Message Block (SMB) file share) storage

Service endpoints allow you to bind certain Azure services, including storage accounts and Azure SQL Databases, to a VNet in order to restrict their access. In this scenario, you would create a service endpoint on prod-vnet to allow the Microsoft.Storage resource provider access. You would then complete the configuration by associating the storage account with the target VNet.

You should not deploy a blob storage account because a blob storage account offers only one service, and the scenario requires both object and table storage to support your developers. The blob storage account includes access tiers that reduce costs for infrequently accessed (cool) and archival block blob data such as documents or media files.

You should not deploy an Azure File Sync sync group. Azure File Sync is a mechanism to offer tiered and synchronized storage for on-premises Server Message Block (SMB) file shares. This feature meets none of the scenario's requirements.

You should not configure a CDN profile. CDN profiles are used in conjunction with Azure App Service web applications to deliver static website assets to worldwide customers with low latency.

You should not configure a point-to-site (P2S) VPN. A P2S VPN is appropriate when you need to give individual users a secure connection to an Azure VNet. In this case you are concerned with providing secure access from the VNet to a storage account.


Question 36

You create a binary large object (blob) storage account named reportstorage99 that contains archival reports from past corporate board meetings.

A board member requests access to a specific report. The member does not have an Azure Active Directory (Azure AD) user account. Moreover, he has access only to a web browser on his Google Chromebook device.

You need to provide the board member with least-privilege access to the requested report while maintaining security compliance and minimizing administrative overhead.

What should you do?

Explanation

You should generate a shared access signature (SAS) token for the report and share the URL with the board member. SAS enables you to define time-limited read-only or read-write access to Azure storage account resources. It is important that you set the time restriction properly because the SAS includes no authentication: any person with access to the URL can access the target resource(s) within the token's lifetime. In this case, you both minimize administrative effort and maintain security compliance because the SAS token points only to a single file, not the entire blob container that hosts the requested report.

You should not create an Azure AD account for the board member and grant him RBAC access to the storage account. First, it requires significant management overhead to create and manage Azure AD accounts, even for external (guest) users. Second, SAS and not RBAC is the way Azure provides screened access to individual storage account resources. You can use RBAC roles only at the storage account scope.

You should not copy the report to an Azure File Service share and provide the board member with a PowerShell connection script. Here you create security and governance problems by creating multiple copies of the source report, as well as producing unnecessary administrative complexity.

You should not deploy a point-to-site (P2S) VPN connection on the board member's Chromebook and grant the board member RBAC access to the report. The scenario stipulates that the board member is limited to using a web browser on his Chromebook. Furthermore, the Azure P2S VPN client is supported only on Windows, macOS, and endorsed Linux distributions. Chrome OS is not supported.


Question 37

The development team asks you to provision an Azure storage account for their use.

To remain in compliance with IT security policy, you need to ensure that the new Azure storage account meets the following requirements:

* Data must be encrypted at rest.

* Access keys must facilitate automatic rotation.

* The company must manage the access keys.

What should you do?

Explanation

You should configure the storage account to store its keys in Azure Key Vault. Azure Key Vault provides a mechanism to store secrets, such as storage account keys, user credentials, and digital certificates, securely in the Microsoft Azure cloud. You can access the underlying Representational State Transfer (REST) application programming interface (API) to rotate or retrieve the secrets in your source code.

You should not enable Storage Service Encryption (SSE) on the storage account for two reasons. First, SSE is enabled automatically on all Azure storage accounts and encrypts all storage account data at rest. Second, SSE in its native form uses Microsoft-managed access keys, which violates the scenario constraint for customer-managed keys.

You should not require secure transfer for the storage account. Secure transfer forces all REST API calls to use HTTPS instead of HTTP. This feature has nothing to do with either access keys or their management and rotation.

You should not create a service endpoint between the storage account and a VNet. A service endpoint allows you to limit traffic to a storage account to resources residing on an Azure VNet.


Question 38

Your company is developing a .NET application that stores part of the information in an Azure Storage Account. The application will be installed on end user computers.

You need to ensure that the information stored in the Storage Account is accessed in a secure way. You ask the developers to use a shared access signature (SAS) when accessing the information in the Storage Account. You need to make the required configurations on the storage account to follow security best practices.

Choose all that apply:

Explanation

You need to configure a stored access policy. When you use SAS, you have two different options: you can either use an ad-hoc SAS or configure a stored access policy. With an ad-hoc SAS, you specify the start time, expiration time, and permissions in the URI. Anyone who copies this URI has the same level of access as the intended user, so this type of SAS can be used by anyone in the world. By configuring a stored access policy, you define the start time, expiration time, and permissions in the policy and then associate a SAS with the policy. You can associate more than one SAS with the same policy.

You should not set the SAS start time to now. When you set the start time of a SAS to now, there can be subtle differences in the clock of the servers that host the Storage Account. These differences could lead to an access problem for a few minutes after the configuration. If you need your SAS to be available as soon as possible, you should set the start time 15 minutes before the current time, or you can just not set the start time. Not setting the start time parameter means that the SAS will be active immediately.

You should validate data written using SAS. The information written to the storage account when the user uses a SAS can cause problems, such as communication issues or corruption. Because of this, it is a best practice to validate the data written to the storage account after it is written and before the information is used by any other service or application.

You can revoke a SAS by deleting a stored access policy. If you associate a SAS with a stored access policy, the start time, expiration time, and permission are inherited from the policy. If you remove the policy, you are invalidating the SAS and thus making it unusable. Keep in mind that if you remove a stored access policy with associated SAS and then create another stored access policy with the exact same name as the original policy, the associated SAS will be enabled again.
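These best practices can be sketched with the classic Azure PowerShell storage cmdlets. The account, container, and policy names below are hypothetical, and the snippet assumes you already hold the account key:

```powershell
# Hypothetical account/container/policy names; $key holds the account key.
$ctx = New-AzureStorageContext -StorageAccountName "stcontoso" -StorageAccountKey $key

# Define start time, expiry, and permissions in a stored access policy rather
# than in the SAS URI. Backdating the start time by 15 minutes guards against
# clock skew between clients and the storage service.
New-AzureStorageContainerStoredAccessPolicy -Container "appdata" -Policy "read-policy" `
    -Permission rl -StartTime (Get-Date).AddMinutes(-15) `
    -ExpiryTime (Get-Date).AddHours(8) -Context $ctx

# Issue a SAS token bound to the policy; deleting the policy later revokes
# every SAS associated with it.
New-AzureStorageContainerSASToken -Name "appdata" -Policy "read-policy" -Context $ctx
```

Because the SAS inherits its constraints from the policy, removing the policy (for example, with Remove-AzureStorageContainerStoredAccessPolicy) immediately invalidates all associated tokens.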

References (click to expand)


Home / Microsoft / AZ-100 / Question 39

Question 39

Your company wants to configure a storage account.

You need to ensure that the storage is available in case of failure of an entire datacenter. You also need to move the data to different access tiers depending on the frequency of access. Your solution needs to be the most cost-effective.

What type of storage should you configure?

Answers


Explanation (click to expand)

You should configure a storage account with ZRS replication. This type of replication makes a synchronous copy of the data between three different availability zones in the same region. Each availability zone is autonomous from the others and has separate networking and utility features. Because you also need to be able to move data between different access tiers based on the data access frequency, you need to configure a General Purpose v2 storage account. This is the most cost-effective solution.

You should not configure the storage account with LRS replication. This type of replication makes a copy of the data between different storage scale units inside the same datacenter. This type of replication is not resilient to a datacenter failure.

You should not configure the storage account with GRS or RA-GRS replication. These replication types make your data available in case of a datacenter or regional failure, so they satisfy the resiliency requirement, but they are not the most cost-effective solution.
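Assuming the classic AzureRM PowerShell module, such an account might be created as follows (resource group, account name, and region are illustrative):

```powershell
# General Purpose v2 account with zone-redundant storage and a default
# access tier; GPv2 is what enables moving blobs between Hot/Cool/Archive.
New-AzureRmStorageAccount -ResourceGroupName "RG1" -Name "stcontoso" `
    -Location "eastus2" -SkuName Standard_ZRS -Kind StorageV2 -AccessTier Hot
```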

References (click to expand)


Home / Microsoft / AZ-100 / Question 40

Question 40

You have performed a lift-and-shift migration of several Windows Servers to Azure Infrastructure as a Service (IaaS).

You need to configure the servers to support Azure Backup.

What are two ways of achieving your goal? Each correct answer presents a complete solution.

Answers


Explanation (click to expand)

When you lift and shift VMs from on-premises to Azure IaaS, the VMs do not have the prerequisites for backup operations in Azure. Before you can set up backup operations, you have to perform one of two actions.

You can install the Azure VM Agent on the migrated VMs. This will also deploy the Azure VM Backup Agent on the VMs by default. You can also manually deploy the Azure VM Backup Agent on the migrated VMs. Both options will result in a VM that can be configured for Azure VM Backup protection.

You should not enable Backup via the Backup blade in the Azure VM configuration panel. This option cannot be used unless the VM already has the Backup Agent deployed.

You should not execute the Backup-AzureRmBackupItem cmdlet. This cmdlet can run a non-policy based backup activity but also requires the Azure Backup Agent to have been deployed on the VM being targeted for backup.

References (click to expand)

Similar questions


Home / Microsoft / AZ-100 / Question 41

Question 41

You are tasked with managing the corporate Microsoft Azure subscription. Presently, a site-to-site virtual private network (VPN) connects the company's on-premises network infrastructure to a virtual network (VNet) named prod-vnet.

You need to implement a backup strategy for nine virtual machines (VMs) located on prod-vnet.

What should you do first?

Answers


Explanation (click to expand)

The first thing you should do is create a Recovery Services vault in Azure. The Recovery Services vault is a Microsoft-managed backup repository that is used by both Azure VM Backup and Azure Site Recovery (ASR).
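With the AzureRM PowerShell module, creating the vault might look like the following sketch (the resource group, vault name, and location are hypothetical):

```powershell
# Create a resource group to hold the vault, then the vault itself.
New-AzureRmResourceGroup -Name "RG-Backup" -Location "eastus"
New-AzureRmRecoveryServicesVault -Name "vault-prod" `
    -ResourceGroupName "RG-Backup" -Location "eastus"
```

Once the vault exists, you would enable protection for each VM, for example with Enable-AzureRmRecoveryServicesBackupProtection and a backup policy.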

You do not need to install the VM Backup extension on the Azure-based VMs. Azure automatically distributes the VM extension to any VMs that you associate with a Recovery Services vault.

You should not deploy Azure Backup Server in your on-premises environment. Azure Backup Server is a specialized edition of System Center Data Protection Manager (DPM) and is used to back up on-premises workloads to Azure.

You should not define an ASR recovery plan. ASR is a migration and data replication feature and is not used for a basic backup-and-restore disaster recovery (DR) scenario.

References (click to expand)


Home / Microsoft / AZ-100 / Question 42

Question 42

You back up all Azure-based virtual machines (VMs) to a Recovery Services vault. One of these VMs is a Windows Server 2016 domain member server named app1 that hosts an internally developed line of business (LOB) web application.

A developer informs you that she needs to review three-month-old log files stored on app1. You need to retrieve these files as efficiently as possible.

What should you do?

Answers


Explanation (click to expand)

To retrieve the old log files as efficiently as possible, you should use Azure File Recovery to mount the backed-up VHDs as drives on your administrative workstation. The process works like this:

* In the Recovery Services vault, start File Recovery.

* Select the appropriate recovery point.

* Download and run the provided PowerShell script. The script mounts the operating system and data VHDs as local drives on your administrative system.

* Retrieve the necessary files from the app1 file system.

* Disconnect the network drive mappings.

You should not download the appropriate VHD files from the Recovery Services vault to your administrative workstation. Azure File Recovery makes this step unnecessary. Likewise, you should not have to restore the VHDs or the entire VM to an alternate location in order to recover individual files from its file system.

You should not retrieve the files from the appropriate backed-up VHDs by using Azure Storage Explorer. Storage Explorer is a cross-platform desktop application that makes it easy to interact with Azure storage accounts. Recovery Services vaults do not expose their contents to Storage Explorer.

You should not make an RDP connection to app1 and use the Previous Versions feature to restore the requested log files. First, you need to work as efficiently as possible. Second, there is no guarantee that the server's current run state has three months' worth of log files.

References (click to expand)


Home / Microsoft / AZ-100 / Question 43

Question 43

You have an Azure resource group named RG1. RG1 contains a Linux virtual machine (VM) named VM1.

You need to automate the deployment of 20 additional Linux VMs. The new VMs should be based upon VM1's configuration.

Solution: From the virtual machine's Automation script blade, you click Deploy.

Does this solution meet the goal?

Answers


Explanation (click to expand)

The solution meets the goal. Every deployment in Azure is described in a template in JavaScript Object Notation (JSON) format. You can access the underlying template from the Automation script blade of the VM resource, and can then deploy multiple new instances of a resource by modifying the template parameters. Optionally, you can store customized Azure Resource Manager templates directly in the Azure portal from the Templates blade.
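Redeploying from the exported template can be sketched as follows. This assumes the template from the Automation script blade has been saved locally and exposes a `vmName` parameter; the parameter name and file paths are hypothetical:

```powershell
# Deploy 20 additional VMs from the exported template, overriding only the
# VM name parameter on each iteration.
1..20 | ForEach-Object {
    New-AzureRmResourceGroupDeployment -ResourceGroupName "RG1" `
        -TemplateFile ".\template.json" `
        -TemplateParameterObject @{ vmName = "VM$($_ + 1)" }
}
```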

References (click to expand)

Similar questions


Home / Microsoft / AZ-100 / Question 44

Question 44

You have an Azure resource group named RG1. RG1 contains a Linux virtual machine (VM) named VM1.

You need to automate the deployment of 20 additional Linux VMs. The new VMs should be based upon VM1's configuration.

Solution: From the Templates blade, you click Add.

Does this solution meet the goal?

Answers


Explanation (click to expand)

The solution meets the goal. The Templates blade in the Azure portal enables you to store JavaScript Object Notation (JSON) documents that automate Azure resource deployment. In this case, you could store the Linux VM properties in a template and deploy the 20 additional VMs simply by editing the template parameter values for each additional VM.

References (click to expand)

Similar questions


Home / Microsoft / AZ-100 / Question 45

Question 45

You have an Azure resource group named RG1. RG1 contains a Linux virtual machine (VM) named VM1.

You need to automate the deployment of 20 additional Linux VMs. The new VMs should be based upon VM1's configuration.

Solution: From the resource group's Policies blade, you click Assign policy.

Does this solution meet the goal?

Answers


Explanation (click to expand)

This solution does not meet the goal. To automate the deployment of the 20 additional VMs, you should access the virtual machine's underlying JavaScript Object Notation (JSON) template and deploy the new resources by using the template and custom deployment parameters. By contrast, Azure Policy is a governance product that makes it easier for Azure administrators to constrain deployments to meet organizational requirements. For example, you could deploy an Azure policy that requires all resource deployments to occur within only company-authorized geographic locations.

References (click to expand)

Similar questions


Home / Microsoft / AZ-100 / Question 46

Question 46

You manage a Windows Server 2016 virtual machine (VM) named VM1.

You need to configure an additional public IPv4 address for VM1.

Solution: From the VM's Networking blade, you click Attach network interface.

Does this solution meet the goal?

Complete the Case Study

Answers


Explanation (click to expand)

This solution does not meet the goal. You can configure multiple public and private IP addresses for an Azure VM by modifying the IP configuration of its attached virtual network interface. Because a network interface can have more than one IP configuration, in this case you should add a second configuration and assign a new static public IP address to it. Simply attaching another network interface does not guarantee that the additional interface has a public IP address associated with it.

References (click to expand)

Similar questions


Home / Microsoft / AZ-100 / Question 47

Question 47

You manage a Windows Server 2016 virtual machine (VM) named VM1.

You need to configure an additional public IPv4 address for VM1.

Solution: From the network interface's IP configurations blade, you click Add.

Does this solution meet the goal?

Answers


Explanation (click to expand)

This solution meets the goal. You configure IP addresses for Azure VMs by modifying the IP configuration(s) of one or more associated virtual network interfaces. Each network interface has a primary IP configuration, but you can add more configurations with additional public and private IP addresses if the VM has more complex virtual network addressing or routing requirements. Azure allows both Windows Server and Linux VMs to have more than one virtual network card and therefore reside on multiple subnets within the same virtual network.
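Using the AzureRM module, adding a second IP configuration with a new public IP might be sketched as follows (the NIC, VNet, and address names are hypothetical):

```powershell
# Fetch the VM's NIC and create a new static public IP address.
$nic = Get-AzureRmNetworkInterface -Name "vm1-nic" -ResourceGroupName "RG1"
$pip = New-AzureRmPublicIpAddress -Name "vm1-pip2" -ResourceGroupName "RG1" `
    -Location "eastus" -AllocationMethod Static

# Add a secondary IP configuration on the same subnet, then persist the change.
$subnet = (Get-AzureRmVirtualNetwork -Name "vnet1" -ResourceGroupName "RG1").Subnets[0]
Add-AzureRmNetworkInterfaceIpConfig -Name "ipconfig2" -NetworkInterface $nic `
    -PublicIpAddress $pip -Subnet $subnet
Set-AzureRmNetworkInterface -NetworkInterface $nic
```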

References (click to expand)

Similar questions


Home / Microsoft / AZ-100 / Question 48

Question 48

You manage a Windows Server 2016 virtual machine (VM) named VM1.

You need to configure an additional public IPv4 address for VM1.

Solution: From the virtual machine's Extensions blade, you click Add.

Does this solution meet the goal?

Answers


Explanation (click to expand)

This solution does not meet the goal. To add another public IP address to the VM, you should add a second IP configuration to the network interface associated with VM1 and configure a public IP address. By contrast, VM extensions are software agents that broaden the capabilities of Windows Server and Linux VMs running in Azure. Extensions are not related to IP address assignment.

References (click to expand)

Similar questions


Home / Microsoft / AZ-100 / Question 49

Question 49

Your company has two Azure subscriptions, subsA and subsB, for different lines of business. Each subscription has its own Azure Active Directory (Azure AD) tenant assigned.

You have a virtual machine (VM) deployed in the subsA subscription, in a resource group named RG-A1. You attempt to move the VM to another resource group named RG-B2 that is configured in the subsB subscription.

While you are trying to move the VM, you get an error.

You need to identify the cause of the error so you can move the VM.

What is the most likely cause?

Answers


Explanation (click to expand)

You cannot move the VM because the subscriptions are in different Azure AD tenants. One of the prerequisites for being able to move a resource between different subscriptions or resource groups is that the source and destination subscriptions need to exist in the same Azure AD tenant. You need to transfer the ownership of one of the subscriptions to the other Azure AD tenant before you will be able to move the VM.

You can move classic VMs between resource groups or subscriptions, with some restrictions: the cloud service associated with the VM must be moved with it, and all VMs in that cloud service must be moved together. However, moving classic VMs is still technically possible.

It is true that you cannot move a VM if it has managed disks configured, but in this situation, you cannot make the move because of the different Azure AD tenants, not the managed disks.

You can move resources between resources groups in different subscriptions. As long as the source and destination subscriptions exist within the same Azure AD tenant, and the destination resource group exists prior to the move, this operation is allowed.
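Once both subscriptions are in the same Azure AD tenant, the move itself can be sketched with the AzureRM module (resource names match the scenario; `$subsBId` is assumed to hold the subsB subscription ID):

```powershell
# Look up the VM resource, then move it to RG-B2 in the subsB subscription.
$vm = Get-AzureRmResource -ResourceGroupName "RG-A1" -ResourceName "VM1" `
    -ResourceType "Microsoft.Compute/virtualMachines"
Move-AzureRmResource -DestinationSubscriptionId $subsBId `
    -DestinationResourceGroupName "RG-B2" -ResourceId $vm.ResourceId
```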

References (click to expand)


Home / Microsoft / AZ-100 / Question 50

Question 50

Your company has an Azure subscription with some virtual machines (VMs) deployed. One of these VMs is used by the development team for testing purposes.

You receive a call from the development team stating that they are not able to access the VM. After doing some troubleshooting and resetting the Remote Desktop Protocol (RDP) configuration, you decide to redeploy the VM.

You need to use PowerShell to redeploy the VM.

Which cmdlet should you use?

Answers


Explanation (click to expand)

You should use the Set-AzureRmVM cmdlet with the -Redeploy switch. You need to provide the resource group name and the VM name that you want to redeploy.
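As a sketch, with hypothetical resource group and VM names:

```powershell
# Redeploy the VM to a new host node in the Azure fabric.
Set-AzureRmVM -Redeploy -ResourceGroupName "RG-Dev" -Name "DevVM1"
```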

You should not use the Update-AzureRmVM cmdlet. You use this cmdlet to update the state of a VM to the values stored in a VM object, which you typically retrieve by using the Get-AzureRmVM cmdlet.

You should not use the Restart-AzureRmVM cmdlet. This cmdlet will restart the VM. Restarting a VM does not redeploy it to a new host.

You should not use the New-AzureRmVMConfig cmdlet. This cmdlet is intended for creating a new VM configuration object that you can use with other cmdlets.

You should not use the Remove-AzureRmVM cmdlet. You typically use this cmdlet when you want to remove a VM from Azure.

References (click to expand)


Home / Microsoft / AZ-100 / Question 51

Question 51

You have a Windows Server 2012 R2 virtual machine (VM) that is experiencing connectivity issues. You are not able to connect to the VM using Remote Desktop (RDP).

You need to move the VM to a different node inside the Azure infrastructure.

Which two commands can you use? Each correct answer presents a complete solution.

Answers


Explanation (click to expand)

You can use either the az vm redeploy command or the Set-AzureRmVM cmdlet to redeploy a VM. When you redeploy a VM, Azure tries to gracefully shut down the VM and move it to another node inside the Azure infrastructure. Azure copies all the current settings for the VM to the new location. This operation helps when you are experiencing connectivity issues with your VM and are not able to connect to it using RDP or SSH.

You should not use the az vm deallocate Azure CLI command. This command shuts down the VM and disconnects all compute resources from the VM. You are not billed for deallocated VMs.

You should not use the az vm convert Azure CLI command. You use this command when you want to convert unmanaged disks in a VM to managed disks.

You should not use the Update-AzureRmVM cmdlet. This cmdlet updates the state of a VM. You use this cmdlet when you want to update the properties of the VM. This cmdlet does not redeploy the VM.

You should not use the New-AzureRmVM cmdlet. This cmdlet creates a new VM but does not redeploy an existing VM to a new node in the Azure infrastructure.

References (click to expand)


Home / Microsoft / AZ-100 / Question 52

Question 52

Your company purchases a new application and is planning to deploy it in Azure. The application requires Windows Server 2016. It also requires high availability, so it will be deployed using a virtual machine scale set.

You are asked to prepare the virtual machine (VM) to automatically deploy all needed requirements for the application to run. You decide to use a custom script extension.

Before deploying the custom script, you test it and ensure that the script runs with no errors in the local environment. You store the script and some dependencies needed for the application in a blob storage account.

While you are testing automatic deployment, you realize that the custom script is not running.

What is the reason for the custom script not running?

Answers


Explanation (click to expand)

Your custom script is failing to run because you need to add an entry in the NSG. By default, communication with external systems is restricted. If you store your script in an external resource, such as Azure Storage, GitHub, or a local server, you need to configure the firewall/NSG accordingly.

Your custom script is not failing to run because you need to sign the script. You can invoke your custom script by using the following command:

powershell -ExecutionPolicy Unrestricted -File your-custom-script.ps1

Your script is failing to run because Resource Manager is not able to access it due to communication restrictions.

Your custom script is not failing to run because the operation is taking more than 90 minutes. The issue is that the script is not able to run at all because the resource manager is not able to access it due to communication restrictions.

Your custom script is not failing to run because you need to configure a proxy server for the custom script. A custom script does not support proxy settings. If you need to use a proxy for connecting to an external resource, you can use third-party applications like curl.

References (click to expand)


Home / Microsoft / AZ-100 / Question 53

Question 53

Last month you deployed an Ubuntu Linux server virtual machine (VM) named linux1 to a virtual network (VNet) in Azure.

Today, you need to perform emergency remote management of linux1 from your Windows 10 Enterprise Edition workstation. Your solution must minimize both setup time and administrative effort.

What should you do?

Answers


Explanation (click to expand)

You should connect to the VM by using SSH and Azure Cloud Shell. SSH is the default protocol for remote Linux server management. Azure Cloud Shell is a browser-based command shell that gives you access to SSH and a variety of other Azure management tools. In this situation time is of the essence. Therefore, logging into the Azure portal, starting a Cloud Shell, and making an SSH-based connection to linux1 meets all scenario requirements.

You should not connect to the VM by using SSH and Windows Subsystem for Linux. The feature is available in Windows 10 Pro, Education, or Enterprise, and allows you to run native Linux commands directly on Windows. However, Windows Subsystem for Linux is not installed by default, and taking the time to install and configure the feature violates the scenario requirements for minimized setup time and administrative overhead.

You should not connect to the VM by using RDP and Remote Desktop Connection. While it is true that you can install an RDP server on Linux VMs running in Azure, this is not a default configuration and requires too much time and management effort.

You should not connect to the VM by using RDP and PowerShell Core 6.0. PowerShell Core 6.0 is available on Linux VMs deployed in Azure from the Azure Marketplace. However, the two in-box PowerShell remote management protocols are Web Services-Management (WS-Man) and SSH, not RDP.

References (click to expand)


Home / Microsoft / AZ-100 / Question 54

Question 54

Your company's Azure environment consists of the following resources:

* 4 virtual networks (VNets)

* 48 Windows Server and Linux virtual machines (VMs)

* 6 general purpose storage accounts

You need to design a universal monitoring solution that enables you to query across all diagnostic and telemetry data emitted by your resources.

What should you do first?

Answers


Explanation (click to expand)

You should create a Log Analytics workspace. Azure Log Analytics is the central resource monitoring platform in Azure. The Log Analytics workspace is the data warehouse to which associated resources send their telemetry data. Azure Log Analytics has its own query language with which you can generate reports that stretch across all your Azure deployments and management solutions.

You should not install the Microsoft Monitoring Agent (MMA). This agent is indeed required to connect Windows physical and virtual servers (both on-premises and in Azure) to the workspace. However, Log Analytics automatically deploys the MMA to your Azure virtual machines when you onboard them to your Log Analytics workspace.

You should not enable Network Watcher. Network Watcher is a virtual network diagnostics platform. While you can link Network Watcher to Azure Log Analytics, you still need to create the Log Analytics workspace first.

You should not activate resource diagnostic settings. Before Microsoft developed Log Analytics, administrators were required to configure diagnostic settings on a per-resource level. This is no longer necessary because the Microsoft Monitoring Agent configures nodes to send their diagnostic logs to a Log Analytics workspace.

References (click to expand)


Home / Microsoft / AZ-100 / Question 55

Question 55

Your company's Azure environment consists of two virtual networks (VNets) arranged in the following topology:

* prod-vnet: 9 virtual machines (VMs)

* dev-vnet: 9 virtual machines (VMs)

The VMs in the prod-vnet should run continuously. The VMs in dev-vnet are used only between 7:00 A.M. and 7:00 P.M. local time.

You need to automate the shutdown and startup of the dev-vnet VMs to reduce the organization's monthly Azure spend.

Which Azure feature should you use?

Answers


Explanation (click to expand)

You should create an Azure Automation runbook. Azure Automation is a management solution that allows you to publish PowerShell or Python scripts in Azure and optionally schedule Azure to run them automatically. In this case, best practice is to write a PowerShell workflow script that automates VM startup and shutdown, and then bind the script to two Azure Automation schedules: one to describe shutdown time, and the other to describe startup time.
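A minimal shutdown runbook could be sketched as the following PowerShell workflow, assuming an Automation account with a configured Run As connection; the resource group and connection names are hypothetical, and a companion startup runbook would use Start-AzureRmVM instead:

```powershell
workflow Stop-DevVms {
    # Authenticate with the Automation account's Run As service principal
    # (assumed to be configured as "AzureRunAsConnection").
    $conn = Get-AutomationConnection -Name "AzureRunAsConnection"
    Add-AzureRmAccount -ServicePrincipal -TenantId $conn.TenantId `
        -ApplicationId $conn.ApplicationId `
        -CertificateThumbprint $conn.CertificateThumbprint

    # Stop every VM in the dev resource group in parallel.
    $vms = Get-AzureRmVM -ResourceGroupName "RG-Dev"
    foreach -parallel ($vm in $vms) {
        Stop-AzureRmVM -ResourceGroupName "RG-Dev" -Name $vm.Name -Force
    }
}
```

You would then bind this runbook to an Azure Automation schedule that fires at 7:00 P.M. daily, with the startup runbook bound to a 7:00 A.M. schedule.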

You should not use Azure Automation Desired State Configuration (DSC). DSC is a PowerShell feature that prevents configuration drift on your Azure and/or on-premises servers. For example, you could deploy a DSC configuration that prevents server services from stopping.

You should not use Azure Auto-Shutdown. This feature, part of Azure DevTest Labs, allows you to schedule Azure VMs to shut down at the same time every day or night. However, this feature does not provide for automated VM startup.

You should not use Azure change tracking. Change tracking is an IT service management (ITSM) feature that is part of Azure Automation service and records all configuration changes to your Azure VM resources.

References (click to expand)


Home / Microsoft / AZ-100 / Question 56

Question 56

Your media production company recently moved all its infrastructure into Azure.

Every 14 days you run a batch to render several thousand video clips into various media formats for customers. At the moment the batch job is run on a single H-series virtual machine (VM).

You need to design a scalable compute solution. The solution must meet the following technical and business requirements:

* Must use VM instance sizes smaller than H series

* Must support automatic scale out and scale in based on CPU metrics

* Must minimize deployment time

* Must minimize administrative overhead

What should you do?

Answers


Explanation (click to expand)

You should deploy a virtual machine scale set (VMSS). Scale sets represent the only way to horizontally scale Azure VMs automatically. A scale set is a collection of two or more identically configured Windows Server or Linux VMs that provide full, centralized control over their lifecycle. Scale sets support up to 1,000 instances when you use VM images in the Azure Marketplace.

You cannot configure an auto-scaling rule on the existing VM. Scale sets are the only way to horizontally autoscale Azure VMs. By contrast, Azure App Service apps can be configured individually with auto-scaling rules based on time, date, or CPU metric.

You should not create an Azure Data Factory pipeline. Azure Data Factory is a cloud-based data orchestration engine similar in function to SQL Server Integration Services (SSIS). Therefore, Data Factory is not appropriate for this scenario.

You should not author an ARM template that creates additional VMs. While you can indeed use ARM templates to automate the deployment and removal of VMs, doing so violates the scenario constraints of minimized setup time and management overhead.

References (click to expand)


Home / Microsoft / AZ-100 / Question 57

Question 57

You have a Microsoft Azure subscription named Sub1.

You deploy a Windows Server 2016 virtual machine (VM) named VM1 to Sub1.

You need to change the availability set assignment for VM1.

What should you do?

Answers


Explanation (click to expand)

You should redeploy VM1 from a recovery point. In Azure, you can assign a VM to an availability set only during initial deployment. Therefore, to reassign the VM to another availability set in this case, you would need to perform the following actions:

1. Take a backup of the current VM.

2. Delete the current VM.

3. Deploy a new VM based on the most recent restore point to the correct availability set.

You should not move VM1 to a different availability zone because availability zones are mutually exclusive from availability sets.

You should not assign VM1 to the new availability set because, as previously discussed, this is not a supported action in the Azure service fabric.

You should not migrate VM1 to another Azure region because by definition members of the same availability set must reside in the same region.

References (click to expand)


Home / Microsoft / AZ-100 / Question 58

Question 58

You have an Azure resource group named RG1. RG1 contains four virtual machines (VMs) and their associated resources.

You need to generate resource usage reports by using interactive queries.

What should you use?

Answers


Explanation (click to expand)

You should use the Log Search feature of Azure Log Analytics to run interactive queries and to build reports based on VM diagnostics data resident in your Azure Log Analytics workspace. Log Search uses Kusto Query Language (KQL), a new query language based on Structured Query Language (SQL), Splunk Processing Language (SPL), and PowerShell.

You should not use Azure Monitor because Monitor does not support interactive queries itself. Instead, Monitor allows you to:

* Enable diagnostics logging

* Plot resource metrics

* Configure alerts

The Log Search functionality in Azure Monitor is actually a shortcut to Log Search in your Azure Log Analytics workspace.
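Such an interactive query can also be run from PowerShell. The sketch below assumes the AzureRM.OperationalInsights module; the workspace name, resource group, and KQL query are illustrative:

```powershell
# Look up the workspace, then run a KQL query against it. CustomerId is the
# workspace GUID that the query endpoint expects.
$ws = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName "RG-Mon" `
    -Name "law-contoso"
$kql = 'Perf | where CounterName == "% Processor Time" | summarize avg(CounterValue) by Computer'
Invoke-AzureRmOperationalInsightsQuery -WorkspaceId $ws.CustomerId -Query $kql
```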

You should not use Azure alerts. Alerts in Azure Monitor can be based on resource metrics (for example, CPU utilization of a VM) or Activity Log alerts (for instance, whenever a VM is powered off or restarted).

You should not use Azure Service Bus. Azure Service Bus is an enterprise-class messaging platform that supports microservice application architectures. It is not related to resource monitoring.

References (click to expand)

Similar questions


Home / Microsoft / AZ-100 / Question 59

Question 59

You have a Microsoft Azure subscription that has four virtual machines (VMs) located in the East US region.

You configure the four VMs identically to act as web servers.

You need to ensure that traffic is distributed equally across the four web servers. You also need to protect the web servers against the most common web application security risks. Your solution must minimize expense.

What should you do?

Answers


Explanation (click to expand)

You should deploy Azure Application Gateway. Application Gateway is a software load balancer that is specialized for web workloads. In addition to providing load balancing services, Application Gateway also includes a web application firewall (WAF) that protects back-end pool web servers against the most common web application vulnerabilities.

You should not deploy a virtual machine scale set because this solution requires the deployment of additional VMs. Moreover, a scale set offers no native protection against web application vulnerabilities.

You should not deploy a Traffic Manager profile because the scenario states that the web servers reside in the same geographic area. Traffic Manager is a Domain Name System (DNS)-based load balancer that works across multiple Azure regions.

You should not deploy an Azure Content Delivery Network (CDN) profile. A CDN makes user access to static web resources faster by geo-distributing those resources. CDN is unrelated to this scenario.

References (click to expand)


Home / Microsoft / AZ-100 / Question 60

Question 60

You have a Linux virtual machine (VM) named VM1 that runs in Azure. VM1 has the following properties:

* Size: Standard_D4s_v3

* Number of virtual CPUs: 2

* Storage type: Premium

* Number of data disks: 6

* Public IP address: Standard SKU

You attempt to resize the VM to the Standard_D2s_v3 size, but the resize operation fails.

Which VM property is the most likely cause of the failure?

Answers


Explanation (click to expand)

In this case, the VM resize failure is caused by the VM's current number of data disks. The Standard_D4s_v3 instance size supports up to 8 data disks, but the Standard_D2s_v3 instance size supports only up to 4 data disks. Therefore, you will be unable to make the VM size reduction unless you detach the extra data disks from the VM.

The number of virtual central processing units (vCPUs) is not a problem because Standard_D4s_v3 supports 4 vCPUs and Standard_D2s_v3 supports 2 vCPUs.

The storage type is not a problem because both instance sizes support premium disk storage.

The public IP address resource is not a problem because this resource is associated at the network interface level and not the VM level.
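
Before attempting a resize, you can verify the target size's limits with the Get-AzureRmVMSize cmdlet. A minimal sketch (the region name is an example):

```powershell
# List name, vCPU count, and maximum data disks for D-series v3 sizes
# in a region ("East US" is an example; substitute your own location)
Get-AzureRmVMSize -Location "East US" |
    Where-Object { $_.Name -like "Standard_D*s_v3" } |
    Select-Object Name, NumberOfCores, MaxDataDiskCount
```

The MaxDataDiskCount column shows immediately whether the VM's current number of attached disks fits the target size.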

References (click to expand)


Home / Microsoft / AZ-100 / Question 61

Question 61

You use Azure VM Backup to back up all Windows Server and Linux VMs in Azure to a Recovery Services vault.

One of your colleagues informs you that he accidentally deleted corp-archive-vm1. You inspect Azure Monitor and verify that the server has been backed up every night for the past two months even though it has been powered off the entire time.

You need to restore the VM to its original location as quickly as possible.

What two actions should you perform? Each correct answer presents part of the solution.

Answers


Explanation (click to expand)

You should select the most recent crash-consistent restore point. A crash-consistent restore point is an Azure VM backup that is not guaranteed to boot and may have lost in-flight data. This is the only restore point type possible when an Azure VM is backed up while it is powered off.

You should then restore corp-archive-vm1 by creating a new VM. Azure VM Backup allows you to restore only the VM's disks, the entire VM as a new resource, or individual files from the VM's disks. In this situation you need to put the deleted VM back online as quickly as possible, so letting Azure VM Backup restore a new VM by using the restore point makes the most sense.

You should not select the most recent application-consistent restore point. This type of restore point is not available because the VM had been backed up while powered off. An application-consistent restore point is available when a Windows Server VM in Azure is backed up and the Volume Shadow Copy Service (VSS) writer can guarantee that the restored VM will boot with no data loss or data corruption.

You should not restore the corp-archive-vm1 disks and ARM template and redeploy the VM using Azure PowerShell. Doing so violates the time constraint in the scenario.

References (click to expand)


Home / Microsoft / AZ-100 / Question 62

Question 62

You manage an Azure Windows Server virtual machine (VM) that hosts several SQL Server databases.

You need to configure backup and retention policies for the VM. The backup policy must include transaction log backups.

What should you do?

Answers


Explanation (click to expand)

You should configure a SQL Server in Azure VM backup policy from the Recovery Services Azure portal blade. The Azure Recovery Services vault has three default policy templates:

* Azure Virtual Machine

* Azure File Share

* SQL Server in Azure VM

Because you need to back up both the SQL Server databases as well as transaction logs, you should create a SQL Server in Azure VM backup policy. These policies also enable you to specify backup retention durations at the daily, weekly, monthly, and yearly scopes.

You should not configure point-in-time and long-term retention policies from the SQL Servers Azure portal blade. These backup and retention policies are available for the Azure SQL Database platform-as-a-service (PaaS) offering, and not for Azure virtual machines hosting SQL Server databases.

You should not configure a continuous delivery deployment group from the Virtual Machine Azure portal blade. This feature is unrelated to VM backup and recovery, and allows you to integrate a VM in a Visual Studio Team Services (VSTS) continuous integration/continuous deployment (CI/CD) workflow.

You should not configure a point-in-time snapshot from the Disks Azure portal blade. The snapshot functionality in Azure does not have formal policy associated with it, nor does it back up VM configuration.

References (click to expand)


Home / Microsoft / AZ-100 / Question 63

Question 63

You deploy Azure Recovery Services in your Azure Subscription. You are making a backup of all the virtual machines (VMs) in this subscription.

Some of the VMs in the subscription were deployed using custom images. You also have encrypted VMs.

Due to your company's disaster recovery plan, you need to be able to recover VMs.

Choose all that apply:

Answers


Explanation (click to expand)

You cannot use the replace existing option with encrypted VMs. If you need to recover an encrypted VM, you need to use the restore disks option and then create a new VM from the restored disks.

When you restore a VM, you can customize the VM configuration using PowerShell. When you restore a VM using the create virtual machine option, the new VM is created using a Quick Create option provided by the portal. If you need to change this default configuration, you can do it by using PowerShell for the restoration process. You can also perform a restore from backup disks and create a new VM from these disks.

You can also restore VMs that have multiple NICs. You can back up and restore VMs that have special network configurations, such as VMs with multiple NICs, VMs with multiple reserved IP addresses, or load-balanced VMs. You need to perform some additional steps when you restore these types of VMs.

Restoring VMs created from custom images using the replace existing option is unsupported. You cannot use the replace existing restore type with encrypted VMs, VMs created from custom images, generalized VMs, or VMs configured with unmanaged disks. If you want to restore these types of VMs, you need to use the create virtual machine or restore backed-up disks options.

References (click to expand)


Home / Microsoft / AZ-100 / Question 64

Question 64

Your company has a custom line-of-business (LOB) application that uses several Azure resources. All resources assigned to the LOB application are in the same resource group. After the first deployment of the LOB application, the company adds more features to the application. You also add more resources to the resource group in different additional deployments.

You need to create a template to redeploy the resources needed for the LOB application.

What should you do?

Answers


Explanation (click to expand)

You should use the Export-AzureRmResourceGroup cmdlet. This cmdlet gets all resources in a resource group and saves them as a template, like a snapshot of the resource group's current configuration. Many values are hard-coded directly in the resulting template.
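
A minimal sketch of the export (the resource group name and output path are examples):

```powershell
# Capture the current state of the resource group as an ARM template
# and save it to a local JSON file
Export-AzureRmResourceGroup -ResourceGroupName "LOB-RG" -Path "C:\templates\lob-app.json"
```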

You should not use the Save-AzureRmResourceGroupDeploymentTemplate cmdlet. This cmdlet allows you to create a template from a deployment that is in the deployment history of a resource group. Since you have made several modifications to the resource group by adding more resources in additional deployments, the deployments in the deployment history do not contain the whole group of resources needed for the LOB application.

You should not use the Get-AzureRmResourceGroupDeployment cmdlet. This cmdlet returns the deployment history for a resource group.

You should not use the Get-AzureRmResourceGroupDeploymentOperation cmdlet. This cmdlet returns all the operations performed during a deployment. This is useful if you need to troubleshoot a failed deployment.

References (click to expand)


Home / Microsoft / AZ-100 / Question 65

Question 65

You have an ARM template for creating a Windows virtual machine. You got this template from an existing resource group with a single virtual machine, using the automation script option.

You want to reuse this template for other deployments. You need all the resources in the resource group to be in the same location.

What should you do?

Answers


Explanation (click to expand)

You should edit the template file and update each location parameter with the value [resourceGroup().location]. The resourceGroup() function gets the resource group object that will be used for deploying the template. This way, all resources in the template will use the same location as the resource group. You need to ensure that all resources are supported in the location that you are using for the resource group.
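
In the template, each resource's location property then inherits from the resource group. A fragment illustrating the pattern (the storage account resource is only an example):

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "name": "[parameters('storageAccountName')]",
  "apiVersion": "2018-07-01",
  "location": "[resourceGroup().location]",
  "sku": { "name": "Standard_LRS" },
  "kind": "StorageV2"
}
```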

You should not edit the parameters file and add a new parameter named location of type string with the default value of [resourceGroup().location]. This is the first step for centralizing the location value in the template, but you also need to update the location parameter in the template file with the value [parameters('location')].

You should not use the Azure portal to create a resource group in the desired location and then use the New-AzureRmResourceGroupDeployment cmdlet with the newly created resource group. If the resource group is deployed in a location different from the one configured in the template file, the resources will be deployed in different locations. You need to modify the location property in the template file to the value [resourceGroup().location] to inherit the location from the parent resource group.

You should not use the New-AzureRmResourceGroup cmdlet with the Location parameter to create a resource group in the desired location and then use the New-AzureRmResourceGroupDeployment cmdlet with the newly created resource group. If the resource group is deployed in a location different from the one configured in the template file, the resources will be deployed in different locations. You need to modify the location property in the template file to the value [resourceGroup().location] to inherit the location from the parent resource group.

References (click to expand)


Home / Microsoft / AZ-100 / Question 66

Question 66

Your company is planning to deploy a new application in its Azure subscription. The application consists of several Linux virtual machines (VMs).

You are asked to deploy the needed VMs for this new application. The VMs will run version 18.04-LTS of Ubuntu server. You decide to create an ARM template for the deployment.

You need to ensure that the VM image can automatically update after the initial deployment. You also need to use VM images from the marketplace.

Which two ARM parameters should you configure? Each correct answer presents part of the solution.

Answers


Explanation (click to expand)

You should configure the sku and offer parameters. The sku parameter is used for setting the major release version of the operating system distribution. You use the offer parameter for setting the name of the images created by the publisher. For this situation, you need to set sku to 18.04-LTS and offer to UbuntuServer.
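
In the ARM template, these values appear in the imageReference section of the VM's storageProfile. A sketch for this scenario:

```json
"storageProfile": {
  "imageReference": {
    "publisher": "Canonical",
    "offer": "UbuntuServer",
    "sku": "18.04-LTS",
    "version": "latest"
  }
}
```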

You should not use the version parameter. You typically use this parameter to set the version of the image for the selected sku. You can specify a version number in the format Major.Minor.Build or use the keyword latest. If you set this parameter to latest, you will always get the latest available version at deployment time, but the image will not be updated when a new version becomes available.

You should not use the vmSize parameter. This parameter is used to set the amount of hardware resources assigned to the VM. It is not related to the operating system used in the VM.

You should not use the osType parameter. You use this parameter to specify the operating system installed on the osDisk when you use a user-image or specialized virtual hard disk (VHD) to deploy the VM.

References (click to expand)


Home / Microsoft / AZ-100 / Question 67

Question 67

You have a resource group named APP-RG that consists of several resources.

You are asked to add a storage account to the resource group. You decide to deploy the new storage account by using an ARM template and the New-AzureRmResourceGroupDeployment cmdlet. This template does not contain any linked or nested templates.

After the deployment finishes successfully, you realize that all the resources in the resource group have been replaced by the new storage account.

Why did this happen?

Answers


Explanation (click to expand)

The resources in the resource group have been replaced by the new storage account because you used the -mode complete parameter with the New-AzureRmResourceGroupDeployment cmdlet. This cmdlet has two deployment modes, incremental and complete. When you use the complete mode, all resources in the resource group that are not included in the template are deleted.

This did not happen because you did not use the -mode parameter with the New-AzureRmResourceGroupDeployment cmdlet. When you omit the -mode parameter, the default incremental deployment mode is used. In this mode, any resource that is not present in the template is kept in the resource group. If a resource in the resource group is present in the template and any of its property values differ from those in the template, the resource in Azure is updated. You should use this mode when deploying the template.
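
A sketch of an incremental deployment (the template path is an example):

```powershell
# Incremental mode (the default) leaves resources that are absent from
# the template untouched; Complete mode would delete them
New-AzureRmResourceGroupDeployment -ResourceGroupName "APP-RG" `
    -TemplateFile "C:\templates\storage.json" `
    -Mode Incremental
```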

Since the template that you are using does not contain any linked or nested templates, the mode parameter should not be present in the template with either value. This parameter is part of the deployment resource type and is typically used with nested or linked templates. The deployment modes, complete and incremental, behave the same way as in the New-AzureRmResourceGroupDeployment cmdlet.

References (click to expand)


Home / Microsoft / AZ-100 / Question 68

Question 68

You deploy a line of business (LOB) application. All resources that are part of the LOB application are deployed in a resource group named APP-RG.

The resources that are part of the LOB application were added in different phases.

You need to export the current configuration of the resources in APP-RG to an ARM template. You will later use this template for deploying the LOB application infrastructure in different environments for testing or development purposes.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Answers


Explanation (click to expand)

You do not need to export the ARM template from the latest deployment. In this scenario, the LOB application was deployed in several phases. The latest deployment will export only the latest resources added to the application. If you want to export the ARM template with all needed resources for the LOB application, you need to export the ARM template from the resource group.

Each deployment contains only the resources that have been added in that deployment. When you export an ARM template from a deployment, the template only contains the resources created during that deployment.

The parameters file contains the values used during the deployment. The parameters file is a JSON file that stores all the parameters used in the ARM template. You can use this file to reuse the template in different deployments, just changing the values of the parameters file. If you use this file in templates created from resource groups, you need to make significant edits to the template before you can effectively use the parameters file.

The template contains the needed scripts for deploying it. When you download an ARM template from a deployment or a resource group, the downloaded package contains the ARM template, the parameters file, an Azure CLI script, a PowerShell script, a Ruby script, and a .NET class for deploying the template.

References (click to expand)


Home / Microsoft / AZ-100 / Question 69

Question 69

You need to deploy several virtual machines (VMs) in your Azure Subscription. All VMs will be deployed in the resource group RG01 based on an ARM template that is stored in GitHub.

You need to automate this operation.

Which two commands can you use? Each correct answer presents a complete solution.

Answers


Explanation (click to expand)

You could use the az group deployment create Azure CLI command. This command creates a new deployment using the template provided in the --template-uri parameter. In this case, you need to use the GitHub URL where you stored the ARM template that you want to use to deploy the new VMs.

You could also use the New-AzureRmResourceGroupDeployment cmdlet. This cmdlet creates a new deployment using the template provided in the -TemplateUri parameter.

You should not use az vm create or New-AzureRmVM. You use these commands to create a new VM from custom or marketplace images, but not from ARM templates.
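
A sketch of the PowerShell form (the raw GitHub URL is an example):

```powershell
# Deploy the template stored in GitHub into the RG01 resource group
New-AzureRmResourceGroupDeployment -ResourceGroupName "RG01" `
    -TemplateUri "https://raw.githubusercontent.com/contoso/templates/master/vm.json"
```

The Azure CLI equivalent is az group deployment create --resource-group RG01 --template-uri followed by the same URL.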

References (click to expand)


Home / Microsoft / AZ-100 / Question 70

Question 70

Your company deploys a line-of-business (LOB) application. This application is installed on three separate virtual machines (VMs).

You receive some performance alerts on one of the VMs. After some troubleshooting, you identify a deficiency in the IO of the storage system.

You need to add an additional new empty data disk to the existing VM. You decide to use an unmanaged disk.

Which PowerShell cmdlet should you use?

Answers


Explanation (click to expand)

You should use the Add-AzureRmVMDataDisk cmdlet to add a new data disk to your existing VM. This cmdlet takes as a parameter the VM to which you want to add the new virtual disk. You can also configure the size, location, caching, and type of virtual disk that you will add. If you use the ManagedDiskId parameter, you can add a managed disk to the VM. If you omit this parameter, you will use unmanaged disks instead.

You should not use the New-AzureRmDiskConfig cmdlet. You use this cmdlet for creating an object that represents the disk configuration of the VM.

You should not use the New-AzureRmDisk cmdlet. This cmdlet creates a new managed virtual disk but does not attach it to the VM. After using the cmdlet, you need to add the virtual disk to the VM using the Add-AzureRmVMDataDisk cmdlet with the CreateOption Attach parameter. You decided to use an unmanaged virtual disk, so you cannot use this cmdlet.

You should not use the Add-AzureRmVhd cmdlet. This cmdlet uploads a virtual hard disk (VHD) file from an on-premises computer to Azure. This cmdlet is not useful for adding a new virtual disk to your VM.

You should not use the New-AzureRmVMDataDisk cmdlet. This cmdlet creates a managed virtual data disk for a VM, but it does not add the virtual disk to the VM.
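
A sketch of attaching a new empty unmanaged data disk (the resource group, VM name, storage account, and sizes are examples):

```powershell
# Get the VM object, attach an empty unmanaged data disk, and persist the change
$vm = Get-AzureRmVM -ResourceGroupName "APP-RG" -Name "VM01"
Add-AzureRmVMDataDisk -VM $vm -Name "datadisk1" `
    -VhdUri "https://mystorageacct.blob.core.windows.net/vhds/datadisk1.vhd" `
    -LUN 0 -Caching ReadOnly -DiskSizeInGB 128 -CreateOption Empty
Update-AzureRmVM -ResourceGroupName "APP-RG" -VM $vm
```

Note that Add-AzureRmVMDataDisk only modifies the in-memory VM object; Update-AzureRmVM applies the change in Azure.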

References (click to expand)


Home / Microsoft / AZ-100 / Question 71

Question 71

You are configuring a Network Security Group (NSG). The default NSG rules are already in place.

You need to configure the NSG to support only the following types of traffic into the subnet from the Internet:

* Remote Desktop Management

* Secured HTTPS traffic

* Unsecured HTTP traffic

Which three ports should you configure in the NSG configuration? Each correct answer presents part of the solution.

Answers


Explanation (click to expand)

You should configure port 3389 for Remote Desktop Protocol (RDP) connections, port 443 for Secured HTTP (HTTPS) connections, and port 80 for unsecured HTTP connections.

You should not configure the NSG to support ports 21 or 53. Port 21 is for File Transfer Protocol (FTP) and port 53 is for Domain Name System (DNS).

The default NSG rules are already set so there is no requirement to configure anything beyond these rules.

References (click to expand)


Home / Microsoft / AZ-100 / Question 72

Question 72

You design a virtual network (VNet) topology with the following characteristics:

* web subnet: 3 web front-end virtual machines (VMs)

* app subnet: 3 application server VMs

* data subnet: 3 database server VMs

The client requires that inter-subnet network traffic be strictly controlled with network security groups (NSGs).

You need to design a solution that minimizes NSG rule creation and maintenance.

What should you do?

Answers


Explanation (click to expand)

You should define ASGs that align to each application tier. ASGs allow you to target inbound or outbound NSG security rules at specific workloads. For instance, you could create an ASG that includes the three web servers, a second ASG for the three application servers, and a third ASG that includes the three database servers. You then reference the ASGs in your NSG rules. This simplifies network administration in Azure and makes rule maintenance more straightforward. For example, if you add a fourth web server to the VNet subnet, all you have to do is add that new server to your ASG definition.
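
A hedged sketch of referencing ASGs in an NSG rule (all names, the priority, and the port are examples):

```powershell
# Allow the web tier ASG to reach the app tier ASG on an example port
$webAsg = Get-AzureRmApplicationSecurityGroup -ResourceGroupName "RG01" -Name "asg-web"
$appAsg = Get-AzureRmApplicationSecurityGroup -ResourceGroupName "RG01" -Name "asg-app"
New-AzureRmNetworkSecurityRuleConfig -Name "AllowWebToApp" -Access Allow `
    -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceApplicationSecurityGroup $webAsg -SourcePortRange "*" `
    -DestinationApplicationSecurityGroup $appAsg -DestinationPortRange 8080
```

Because the rule targets the ASGs rather than IP addresses, adding a VM to a tier requires no rule changes.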

You should not enable the built-in rules in each NSG because none of these rules meets the requirements. The three Azure-provided default NSG rules do the following:

* Allow all inbound and outbound VNet traffic.

* Allow an Azure load balancer to send health probes through the NSG.

* Deny any traffic for which a prior rule does not match.

You should not employ the VirtualNetwork NSG service tag in each NSG. Service tags do make it easier to write NSG security rules and thereby control traffic. However, for this use case the better fit is ASGs that map to the client's three infrastructure-as-a-service (IaaS) workload tiers.

You should not bind a route table to each subnet. Route tables are used in conjunction with network virtual appliances (NVAs) to customize inter-subnet routing. However, the tables on their own increase complexity and represent an incomplete configuration.

References (click to expand)


Home / Microsoft / AZ-100 / Question 73

Question 73

You host a line-of-business (LOB) web application in a virtual network (VNet) in Azure. A site-to-site virtual private network (S2S VPN) connection links your on-premises environment with the Azure VNet.

You plan to use a network security group (NSG) to restrict inbound traffic into the VNet to the following IPv4 address ranges:

* 192.168.2.0/24

* 192.168.4.0/24

* 192.168.8.0/24

Your solutions must meet the following technical requirements:

* Limit rule scope only to the three IPv4 address ranges.

* Minimize the number of NSG rules.

* Minimize future administrative maintenance effort.

What should you do?

Answers


Explanation (click to expand)

You should pass the three IPv4 address ranges into the NSG rule as a comma-separated list. NSGs in Azure allow you to specify individual IP addresses or address ranges either individually or as a comma-separated list. This reduces the number of NSG rules you would otherwise have to create to meet your use case.

You should not pass the IPv4 address range 192.168.0.0/22 into the NSG rule. Doing so would include other IPv4 network addresses besides the three included in the scenario requirements. Route summarization, also called supernetting, refers to identifying multiple contiguous IPv4 network addresses under a single, larger network address.

You should not define an ASG that includes the three IPv4 address ranges. In fact, ASGs are bound to the IP address configurations of virtual machines (VMs) running in Azure, not on-premises network address ranges.

You should not define an NSG rule that includes the VirtualNetwork service tag. Service tags (and ASGs) represent keyword identifiers that make it easier to reference multiple hosts and/or networks in NSG rules. In this case, you need to write a rule that allows inbound connections from on-premises IPv4 address ranges, not the VNet itself.

References (click to expand)


Home / Microsoft / AZ-100 / Question 74

Question 74

You deploy a virtual network (VNet) named VNET01. You deploy several virtual machines (VMs) connected to VNET01.

You configure a new service on VM01, which is one of the VMs connected to VNET01.

You need to allow inbound traffic to TCP port 992. You decide to create a network security group named NSG01 and attach it to the primary NIC of VM01.

Which PowerShell cmdlet should you use?

Answers


Explanation (click to expand)

You should use the Set-AzureRmNetworkInterface cmdlet. This cmdlet modifies the NIC configured for a VM. To make this association, you should use a script similar to the following:

$nic = Get-AzureRmNetworkInterface -ResourceGroupName "RG01" -Name "primary NIC of VM01"

$nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName "RG01" -Name "NSG01"

$nic.NetworkSecurityGroup = $nsg

$nic | Set-AzureRmNetworkInterface

You should not use the Set-AzureRmVirtualNetworkSubnetConfig cmdlet. This cmdlet is used to associate an NSG to a VNet.

You should not use the Set-AzureRmNetworkSecurityGroup cmdlet. You use this cmdlet when you need to save the changes that you made to an object that represents an NSG, for example, when you add an additional security rule.

You should not use the Set-AzureRmNetworkSecurityRuleConfig cmdlet. You use this cmdlet when you need to save the changes that you made to an object that represents a security rule. For example, you would use it when you want to change the access for a security rule from Allow to Deny.

References (click to expand)


Home / Microsoft / AZ-100 / Question 75

Question 75

You deploy a virtual network (VNet) named VNET01. VNET01 also has multiple subnets configured.

You have several virtual machines (VMs) connected to VNET01 subnets. You configure three network security groups (NSGs) named NSG01, NSG02 and NSG03. NSG01 is attached to VNET01. NSG02 and NSG03 are attached to different VMs.

Users experience some connectivity issues when they connect to the services hosted on the VMs.

You need to troubleshoot these connectivity issues. You need to identify the security rules that affect each VM.

Which two commands should you use in your script? Each correct answer presents a complete solution.

Answers


Explanation (click to expand)

You need to list the effective security rules that affect the traffic flow of the VMs. You can review the effective security rules by using the az network nic list-effective-nsg Azure CLI command or the Get-AzureRmEffectiveNetworkSecurityGroup PowerShell cmdlet. These are the same effective security rules that the Azure portal shows under the Support + troubleshooting section of each VM's NIC.

You should not use the Get-AzureRmNetworkSecurityGroup cmdlet. This cmdlet gets an NSG object. You would use this object to make changes to the NSG.

You should not use the Get-AzureRmNetworkProfile cmdlet. You should use this cmdlet to retrieve a top-level network profile.

You should not use the az network nic show-effective-route-table command. This command is useful for getting the effective routing rules applied to a network interface, which also affect the communication flow.

You should not use the az network nic list command. You should use this command to list all the NICs in a resource group.
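
A minimal sketch of the PowerShell form (the NIC and resource group names are examples):

```powershell
# List the effective security rules applied to a VM's NIC
Get-AzureRmEffectiveNetworkSecurityGroup -NetworkInterfaceName "vm01-nic" -ResourceGroupName "RG01"
```

The Azure CLI equivalent is az network nic list-effective-nsg --name vm01-nic --resource-group RG01. The VM must be running for either command to return results.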

References (click to expand)


Home / Microsoft / AZ-100 / Question 76

Question 76

You are asked to deploy a virtual machine (VM) in your Azure subscription. This VM must be configured with a static IP address for connectivity to some legacy applications.

You need to configure the VM to support a static IP address.

What are two ways to achieve your goal? Each correct answer presents a complete solution.

Answers


Explanation (click to expand)

When configuring a static IP address for an Azure VM, there are three supported options. One option is to use the New-AzureRmNetworkInterface cmdlet with the -PrivateIpAddress parameter. The second is to set the IP address in the Azure portal after the VM has been created. A third option is to use the Azure CLI and the az network nic create command.

You should not use the Add-AzureRmVMNetworkInterface cmdlet. This cmdlet is used to add a network interface to an existing VM but does not have options for configuring static IP addresses.

You should not use the Set-AzureRmNetworkInterface cmdlet. This cmdlet can be used to set a static IP address, but it must be used in conjunction with other cmdlets to complete the action.
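
A sketch of the first option (all names, the location, and the address are examples; the address must fall within the subnet's range):

```powershell
# Create a NIC whose private IPv4 address is statically assigned
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "RG01" -Name "VNET01"
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "default"
New-AzureRmNetworkInterface -ResourceGroupName "RG01" -Name "vm01-nic" `
    -Location "East US" -SubnetId $subnet.Id -PrivateIpAddress "10.0.0.10"
```

Supplying -PrivateIpAddress sets the allocation method to Static; omitting it creates a dynamically assigned address.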

References (click to expand)

Similar questions


Home / Microsoft / AZ-100 / Question 77

Question 77

You are asked to configure Azure virtual machine (VM) connectivity between two virtual networks (VNets) in the same Azure Resource Group.

The solution must support an application that requires connectivity using IPv6 and may not fall back to IPv4 for compliance reasons within the application being hosted. The application must also support IPv6 clients on the public Internet.

You need to implement these requirements.

What three actions should you perform? Each correct answer presents part of the solution.

Answers


Explanation (click to expand)

Within Azure, IPv6 is supported but with certain restrictions, and it requires careful configuration and planning. In this case, you should add an IPv6 endpoint to each VM in the application because IPv6 addresses are not added by default and must be explicitly defined.

You should also deploy two load balancers. In Azure networking, direct IPv6-to-IPv6 VM communication is not supported, so traffic must first pass through a load balancer, which communicates with another load balancer to complete the connectivity.

You also need to add a public IPv6 address to one of the load balancers to support connectivity from IPv6-based machines on the Internet.

You should not add an NSG to the subnets hosting the VMs to block IPv4 connectivity. Cross-VNet connectivity is not possible without the use of a load balancer. Because the load balancer will route the traffic in this case, you do not need to consider IPv4 blocking here.

You should not add one Azure Load Balancer to the resource group because this will not support connectivity between two disparate VNets over IPv6.

References (click to expand)


Home / Microsoft / AZ-100 / Question 78

Question 78

You need to configure public IP addressing for four infrastructure virtual machines (VMs) that reside on an Azure virtual network (VNet).

Your solution must meet the following technical and business requirements:

* Minimize the VMs' attack surface

* Minimize administrative/maintenance complexity

* Minimize cost

What should you do?

Answers


Explanation (click to expand)

You should assign a public IP address to a public load balancer and use NAT to reach the VMs. This configuration accomplishes all the scenario goals and adds some additional ones:

* Minimizes the number of Azure public IP addresses required

* Obfuscates any management ports on the VMs

* Load balances traffic across the VMs if they are identically configured

You should not assign a public IP address to an Azure VPN Gateway and use a public load balancer to reach the VMs. Doing so requires far more infrastructure, cost, and management overhead than what is required to satisfy the requirements.

You should not assign a public IP address to each VM vNIC and use JIT VM Access to reach the VMs. JIT VM Access is a paid Azure Security Center feature that protects VM management ports. However, assigning a public IP address to the VMs potentially exposes other ports to the public Internet.

You should not assign a public IP address to each VM and use NSGs to reach the VMs. Deploying public IP addresses needlessly exposes the VMs to the Internet. Moreover, NSG configuration grows complex quickly when you bind them at the vNIC level.

References (click to expand)


Home / Microsoft / AZ-100 / Question 79

Question 79

You need to assign a static private IPv4 address for a Windows Server virtual machine (VM) named corp-vm1 running in a virtual network (VNet) named corp-vnet.

What should you do?

Answers


Explanation (click to expand)

You should modify the IP configuration of the virtual network interface associated with corp-vm1. TCP/IP configuration for VMs takes place from the virtual network interface card (vNIC) level, not the VM level. A vNIC can have one or more IP configurations that define static or dynamic public and private IPv4 addresses.

You cannot edit the address range of corp-vnet. Instead, you can add a second range, and then delete the original range. However, this can be accomplished only if you have first removed any subnets that use the first address range.

You should not connect to corp-vm1 by using RDP and edit the VM's virtual network connection properties. All VM networking must be configured from outside the virtual machine's operating system environment. If you change the networking settings from within the VM, the VM will likely lose Internet connectivity and you will be required to reset the VM's configuration.

You should not connect to corp-vm1 by using WinRM and run the Set-NetIPAddress PowerShell cmdlet. It is crucial that you configure all TCP/IP properties from outside the VM at the Azure Resource Manager level.

References (click to expand)



Question 80

One of your colleagues deployed a new virtual network (VNet) named corp-vnet that has the following properties:

* Address range: 172.16.0.0/16

* Front-end subnet: 172.16.2.0/24

* Mid-tier subnet: 172.16.3.0/24

* Back-end subnet: 172.16.4.0/24

To avoid a conflict with your on-premises IPv4 address space, you need to change the corp-vnet address space to 192.168.0.0/16 and redefine the subnet IDs immediately, before your colleague attempts to migrate virtual machines (VMs) to the new VNet.

What should you do?

Answers


Explanation (click to expand)

You should remove and redeploy corp-vnet. Azure VNet address ranges cannot be modified. Furthermore, deleting an address range requires first removing any associated subnets. Therefore, in this case it makes the most sense simply to remove the entire VNet and redeploy it.
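A minimal Azure CLI sketch of the redeployment, assuming a resource group named corp-rg (the subnet names come from the scenario, re-based onto the new address space; this requires a live Azure subscription):

```shell
# Delete the VNet (its subnets are removed with it)...
az network vnet delete --resource-group corp-rg --name corp-vnet

# ...then redeploy it with the new, non-conflicting address space.
az network vnet create --resource-group corp-rg --name corp-vnet \
  --address-prefixes 192.168.0.0/16 \
  --subnet-name front-end --subnet-prefixes 192.168.2.0/24

az network vnet subnet create --resource-group corp-rg \
  --vnet-name corp-vnet --name mid-tier --address-prefixes 192.168.3.0/24

az network vnet subnet create --resource-group corp-rg \
  --vnet-name corp-vnet --name back-end --address-prefixes 192.168.4.0/24
```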

You should not add the 192.168.0.0/16 address space to corp-vnet. Azure VNets do indeed support more than one non-overlapping IPv4 address space. However, doing so does nothing to resolve the problem outlined in the scenario.

You should not delete the three subnet resources from corp-vnet. Performing this action is a necessary first step to delete the address range, but as previously discussed, it is most time-efficient simply to delete the entire VNet and redeploy it by using the Azure portal or another management tool.

You should not edit the corp-vnet address range to 192.168.0.0/16. This operation is not supported in Azure Resource Manager.

References (click to expand)



Question 81

Your company's Microsoft Azure infrastructure team asks you for help in designing a traffic control solution for their deployment.

The deployment consists of a single virtual network (VNet) that has the following topology:

* edge subnet: Linux-based network virtual appliance (NVA) running enterprise firewall software with IP forwarding enabled

* data1 subnet: 4 Windows Server virtual machines (VMs)

* data2 subnet: 4 Ubuntu Linux VMs

You need to recommend a solution to the infrastructure team so that all outbound Azure VM traffic must pass through the NVA on the edge subnet.

What two actions should you perform? Each correct answer presents part of the solution.

Answers


Explanation (click to expand)

You should create a route table with a next-hop IP address. The route table resource allows you to override the Azure-provided system routes that automatically route traffic into, within, and out of an Azure VNet. A route table contains a "next hop" default route that Azure uses when VMs within a subnet attempt to reach resources beyond that subnet. The available next hop types are:

* Virtual appliance

* Internet

* Virtual network

* Virtual network gateway

You should also bind the route table resource to each subnet. In this scenario, you would create a route table that defines a next hop to your NVA, and then bind the single route table to every subnet in the VNet.
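The two actions can be sketched with the Azure CLI as follows. The resource group, VNet name, route table name, and NVA private IP address are assumed values; only the subnet names come from the scenario:

```shell
# Create a route table with a default route whose next hop is the NVA.
az network route-table create --resource-group corp-rg --name nva-rt

az network route-table route create --resource-group corp-rg \
  --route-table-name nva-rt --name default-via-nva \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.0.4

# Bind the route table to the data subnets so their outbound
# traffic is forced through the NVA.
az network vnet subnet update --resource-group corp-rg \
  --vnet-name corp-vnet --name data1 --route-table nva-rt

az network vnet subnet update --resource-group corp-rg \
  --vnet-name corp-vnet --name data2 --route-table nva-rt
```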

You should not create an NSG with an outbound rule. NSGs are software firewall resources used to selectively allow or block inbound and/or outbound traffic. You cannot define next hop routing with NSGs.

You should not bind the resource to each VM vNIC. Route tables can be bound only to the subnet resource. By contrast, NSGs can be bound to subnet and/or vNIC resources.

You should not deploy two internal load balancers between the three subnets. Load balancers are used to distribute user requests equitably among two or more back-end VMs and are not used to enforce next hop routing logic.

References (click to expand)



Question 82

A virtual machine (VM) named VM01 is deployed in a resource group named RG01. This VM is connected to a virtual network (VNet) named VNET01.

You plan to connect VM01 to an additional VNet named VNET02.

You need to create an additional NIC on VM01 and connect it to VNET02.

Which two Azure CLI commands should you use? Each correct answer presents part of the solution.

Answers


Explanation (click to expand)

You should use the az network nic create command. Using this command, you can create the additional NIC that you want to add to VM01. You need to use the subnet and vnet-name parameter to connect the NIC to the correct VNet. You can optionally configure a static IP address by using the private-ip-address parameter.

You should then use the az vm nic add command. This command will attach the new NIC to VM01. You need to provide the name of the recently created NIC to attach it to VM01.
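A minimal sketch of the two commands, assuming a subnet named default exists in VNET02. Note that a VM must be stopped (deallocated) before a NIC can be added to it:

```shell
# Create the new NIC attached to a subnet in VNET02.
# The NIC name and subnet name are assumed values.
az network nic create --resource-group RG01 --name vm01-nic2 \
  --vnet-name VNET02 --subnet default

# Deallocate the VM, attach the NIC, and start the VM again.
az vm deallocate --resource-group RG01 --name VM01
az vm nic add --resource-group RG01 --vm-name VM01 --nics vm01-nic2
az vm start --resource-group RG01 --name VM01
```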

You should not use the az vm nic set command. This command is useful for setting a NIC as the primary NIC of the VM.

You should not use the az network nic update command. You can use this command to change the settings of existing NICs.

References (click to expand)



Question 83

You are deploying a group of new virtual machines (VMs) in your Azure subscription. These new VMs are part of the front-end layer of a new application that your company is publishing.

You plan to configure an Azure Load Balancer for these new VMs. You decide to configure a Standard Load Balancer.

You need to configure the public IP address that you will assign to the load balancer.

Choose all that apply:

Answers


Explanation (click to expand)

You can only use a standard SKU public IP with Standard Load Balancers. Standard Load Balancers have been designed with security in mind. This means that you need to manually authorize any inbound connection. The Standard SKU public IP address is the only SKU that has this configuration by default.

Standard SKU public IP addresses do not allow inbound communication by default. You need to manually create and assign a network security group (NSG) that allows inbound communication with the standard SKU public IP address. If you need to allow all inbound traffic by default, you need to use a basic SKU public IP address.

You can only use the static allocation method with standard SKU public IP addresses. When you configure a public IP address, there are two allocation methods: static or dynamic. The static allocation method reserves and assigns a public IP address when the public IP resource object is created in Azure.

You cannot specify the IP address of a public IP resource. Even if you use the static allocation method for a public IP, you cannot manually specify the IP address assigned to your public IP resource object. This address is picked from a pool of public addresses and assigned to your resource.
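A hedged Azure CLI sketch that creates such an address and then shows the value Azure picked (the resource group and resource names are assumed values):

```shell
# Standard SKU public IPs must use static allocation; the actual
# address is chosen by Azure when the resource is created.
az network public-ip create --resource-group fe-rg \
  --name lb-pip --sku Standard --allocation-method Static

# Display the address that Azure assigned from its pool.
az network public-ip show --resource-group fe-rg --name lb-pip \
  --query ipAddress --output tsv
```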

References (click to expand)



Question 84

You are asked to connect a virtual network (VNet) to a private DNS zone to support new application namespaces in the new private zone. The VNet already has virtual machines (VMs) assigned to it and has existing private DNS zones assigned.

You need to complete this task.

What should you do first?

Answers


Explanation (click to expand)

To complete this task, you should simply add the new private DNS zone to the existing VNet. When you add a new private DNS zone to a VNet that already has private zones assigned, no task other than adding the zone is required.

You do not need to remove the existing VMs from the VNet. This would have been a requirement if the VNet did not already have private DNS zones linked to it.

Setting up a new VNet, assigning the private DNS zone to this VNet, and moving the existing VMs to it would satisfy the connectivity to the new private DNS zone, but the existing linked private zones would be lost.

References (click to expand)



Question 85

You deployed two virtual networks (VNets) that have the following properties:

* dev-vnet-west (West US region)

* prod-vnet-east (East US region)

You configure global VNet peering to link the dev-vnet-west and prod-vnet-east VNets.

You need to ensure that virtual machines (VMs) in either VNet can resolve fully qualified domain names (FQDNs) of any other VM in Azure.

What should you do?

Answers


Explanation (click to expand)

You should create a private zone in Azure DNS. Peered VNets are unable to support host name resolution between themselves. Therefore, you must use an external DNS solution. Azure DNS allows you to create private, non-publicly routable DNS zones that are bound to one or more VNets. The VMs in either VNet would therefore register their host records with the private Azure DNS zone, and the service handles name resolution as usual.
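An Azure CLI sketch of this configuration. The resource group name and the zone name corp.internal are assumed values; enabling auto-registration lets the VMs in each linked VNet register their own host records:

```shell
# Create the private zone and link both peered VNets to it.
az network private-dns zone create --resource-group net-rg \
  --name corp.internal

az network private-dns link vnet create --resource-group net-rg \
  --zone-name corp.internal --name link-west \
  --virtual-network dev-vnet-west --registration-enabled true

az network private-dns link vnet create --resource-group net-rg \
  --zone-name corp.internal --name link-east \
  --virtual-network prod-vnet-east --registration-enabled true
```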

You should not use Azure-provided DNS in each VNet. Azure-provided DNS can resolve VM hostnames and FQDNs only within a single VNet. Moreover, the FQDNs point to the Microsoft-owned cloudapp.net DNS domain, which is likely not ideal for your use case.

You should not deploy DNS servers in each VNet and add their private IP addresses to the DNS servers list. This configuration is incomplete. You would also need to configure each DNS server to forward queries to the DNS server in the peered VNet.

You should not add service endpoints to each VNet. Service endpoints allow you to tie particular Azure services, such as storage accounts and Azure SQL databases, to a VNet.

References (click to expand)



Question 86

A client asks you to assist in moving a public website and Domain Name System (DNS) domain from the current host into Azure.

You help the client migrate the website to an Azure App Service web application. You also create a zone in Azure DNS for the client's company.com domain.

You now need to configure DNS so that user requests to company.com resolve to the Azure App Service app.

What should you do next?

Answers


Explanation (click to expand)

You should delegate the company.com zone to Azure DNS at the client's registrar. Azure DNS is not the registrar, so this step is necessary to ensure that DNS lookups for company.com are redirected to Azure DNS, which will serve as the domain's source of authority from now on.
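Once the zone exists in Azure DNS, you can list the name servers that must be entered as the delegation (NS) records at the registrar. The resource group name below is an assumed value:

```shell
# Retrieve the Azure DNS name servers assigned to the zone; these
# are the values to configure at the client's registrar.
az network dns zone show --resource-group dns-rg --name company.com \
  --query nameServers --output tsv
```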

You should not create a host (A) record for company.com in Azure DNS until you have delegated the zone to Azure DNS at the original registrar. This new resource record will ensure that calls to company.com (the so-called "naked" domain) resolve to the correct Azure App Service web application.

You should not create an alias (CNAME) record for company.com in Azure DNS for two reasons. First, as previously described, you need to complete the zone delegation before you populate the Azure DNS zone. Second, a CNAME record is used to map an additional host name to an existing A record. In this scenario, you need to handle only the company.com root domain name.

You should not configure Azure DNS as a secondary name server because the client's goal is to manage DNS from within Azure and Azure DNS does not support zone transfers.

References (click to expand)



Question 87

You have an Azure resource group named RG1. RG1 contains two virtual networks (VNets) with the following attributes:

* VNet1: East US region; 4 Windows Server virtual machines (VMs)

* VNet2: West US region; 8 Linux VMs

You need to configure host name resolution for all VMs within RG1. Your solution must meet the following technical requirements:

* VMs within each VNet should be able to resolve each other's fully qualified domain names (FQDNs).

* VMs in VNet1 should be able to resolve the host names of VMs in VNet2.

* VMs in VNet2 should be able to resolve the host names of VMs in VNet1.

What should you do?

Answers


Explanation (click to expand)

You should create a private zone in Azure DNS. Within a virtual network, Azure-provided name resolution enables all VMs within a VNet to resolve each other's host names with no additional configuration required. However, Azure-provided name resolution does not work between VNets because VNets are, by design, isolated network communications boundaries.

In this case, you define a private zone in Azure DNS and link both VNets to that zone so the VMs can resolve each other's host names to their corresponding private IP addresses.

You should not define a peering between VNet1 and VNet2. As previously stated, Azure-provided name resolution does not function across VNet boundaries, even with a peering.

You should not deploy a VNet-to-VNet virtual private network (VPN) connection. Although this solution would allow cross-VNet name resolution, it fails to meet the scenario requirement for minimized cost and complexity.

References (click to expand)



Question 88

You configure the companycs.com zone in Azure DNS. You have an A record set named app that points to an App Service that hosts a web application.

You need to make this application available by using the webapp.companycs.com domain name. This new domain name needs to point to the public IP address of the App Service.

You need to ensure that the DNS record for this new domain name is updated or deleted automatically in case the app.companycs.com DNS record is modified or deleted.

Which type of record set should you create?

Answers


Explanation (click to expand)

You should create an A alias record set. An A alias record set is a special type of record set that allows you to create an alternative name for a record set in your domain zone or for resources in your subscription. This is different from a CNAME record type because the alias record set will be updated or deleted in case the target record set is modified or deleted. You can only create an A alias record set that points to another A record set or an Azure resource.
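An Azure CLI sketch of creating an alias A record set. The --target-resource form shown here points the alias at an Azure resource such as a public IP; the resource group and the resource ID are placeholders. Pointing an alias at another record set in the same zone, as this scenario requires, can also be configured from the Azure portal:

```shell
# Create an alias A record set named webapp in the zone.
# The subscription ID and target resource path are placeholders.
az network dns record-set a create --resource-group dns-rg \
  --zone-name companycs.com --name webapp \
  --target-resource /subscriptions/<sub-id>/resourceGroups/...
```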

You should not use a CNAME alias record set. The custom domain name for your web application is represented by an A record set. A CNAME alias record set can only point to another CNAME record set. Moreover, the value returned by a CNAME alias record set is a domain name. You are required to create a DNS record that returns an IPv4 address. This means that you need an A alias record set.

You should not use an A record set. This record set type will not be automatically updated or deleted if the app.companycs.com record is modified or deleted.

You should not use a CNAME record set. This record set type will not be automatically updated or modified if the app.companycs.com record is modified or deleted. You are also required to create a DNS record that returns an IPv4 address. This means that you need an A alias record set.

References (click to expand)



Question 89

Your company acquires another business that also uses Azure to deploy virtual machines (VMs) to run business applications for their users. You are tasked with ensuring that your existing applications hosted in Azure can connect securely to the newly acquired company's applications and data hosted in their existing Azure subscription.

You need to configure this environment.

What should you do?

Answers


Explanation (click to expand)

You should enable a Site-to-Site VPN connection between virtual networks (VNets) in the two subscriptions. A Site-to-Site VPN supports connectivity between VNets in different subscriptions.

You should not use a Point-to-Point VPN connection. A Point-to-Point VPN connection supports connectivity from one endpoint to another, but it does not allow multiple servers or services behind a single endpoint to share the same VPN connection.

You should not enable VNet Peering. Virtual network peering allows resources from two different network segments to connect as if they were on the same segment. This is not supported across different subscriptions.

You should not create a virtual network gateway in both subscriptions. Performing this activity will set up the prerequisites for configuring connectivity, but the gateways themselves do not support connectivity without additional configuration work.

References (click to expand)



Question 90

Your company acquires another business. The acquired business has Azure resources located in a sovereign Azure cloud.

You are asked to configure network connectivity between your company's Azure network and the Azure network of the acquired company.

You need to implement this connectivity.

What should you do?

Answers


Explanation (click to expand)

To make a network connection from a public Azure subscription to a sovereign cloud Azure subscription, you must use a Site-to-Site VPN. It is the only option that supports connectivity across the sovereign boundary and provides full virtual network (VNet) connectivity.

You should not enable a VNet-to-VNet VPN gateway or VNet Peering. These options are only supported within the same subscription.

You should not configure a Point-to-Point VPN connection. While this is supported across subscriptions, this option only supports single application or VM connectivity, not full network connectivity.

References (click to expand)



Question 91

You have several Windows Server and Ubuntu Linux virtual machines (VMs) distributed across two virtual networks (VNets):

* prod-vnet-west (West US region)

* prod-vnet-east (East US region)

You need to allow VMs in either VNet to connect and to share resources by using only the Azure backbone network. Your solution must minimize cost, complexity, and deployment time.

What should you do?

Answers


Explanation (click to expand)

You should configure peering between prod-vnet-west and prod-vnet-east. Peering enables VMs located on two different Azure VNets to be grouped logically together and thereby connect and share resources. Traditional VNet peering involves two VNets located in the same region. However, global VNet peering, generally available in summer 2018, supports VNets distributed across any Azure public region.
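A minimal Azure CLI sketch of the peering, which must be created in both directions. The resource group name is an assumed value; if the VNets live in different resource groups, --remote-vnet takes the full resource ID instead of the name:

```shell
# Create the peering links in both directions; --allow-vnet-access
# permits traffic to flow between the peered VNets.
az network vnet peering create --resource-group net-rg \
  --name west-to-east --vnet-name prod-vnet-west \
  --remote-vnet prod-vnet-east --allow-vnet-access

az network vnet peering create --resource-group net-rg \
  --name east-to-west --vnet-name prod-vnet-east \
  --remote-vnet prod-vnet-west --allow-vnet-access
```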

You should not deploy a VNet-to-VNet VPN. First, global VNet peering means that you are no longer required to use a VPN gateway to link VNets located in different Azure regions. Second, the scenario requires that you minimize cost and complexity.

You should not create a private zone in Azure DNS. This action would be necessary for resources in peered VNets to resolve each other's DNS host names. However, the scenario makes no requirement for private host name resolution.

You should not add a service endpoint to each VNet. Service endpoints allow you to limit access to certain Azure resources, such as storage accounts and Azure SQL databases, to resources located on a single VNet. Thus, this feature cannot be used to link two VNets as the scenario mandates.

References (click to expand)



Question 92

You use an Azure Resource Manager (ARM) template to deploy a virtual network (VNet) that contains two Windows Server virtual machines (VMs).

You need to verify connectivity between the newly deployed VMs. Your tests include the following requirements:

* Determine whether line-of-business (LOB) traffic is allowed between the VMs.

* Isolate any network security group(s) that may block valid inter-VM network traffic.

* Minimize cost, time, and troubleshooting complexity.

What should you do first?

Answers


Explanation (click to expand)

To run all desired tests efficiently, you should first enable Network Watcher in the target Azure region. Network Watcher includes two tools that are directly relevant to your troubleshooting scenario:

* IP flow verify: Tests point-to-point connectivity between Azure VMs and identifies any network security group (NSG) rules at play

* Security group view: Displays the effective NSG rules applied to a VNet subnet and/or a VM's VNet interface
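The first steps can be sketched with the Azure CLI. The region, resource group, VM name, ports, and private IP addresses below are all assumed values:

```shell
# Enable Network Watcher in the target region.
az network watcher configure --locations eastus --enabled true \
  --resource-group NetworkWatcherRG

# Use IP flow verify to test whether outbound TCP traffic from one
# VM to the other is allowed, and which NSG rule applies.
az network watcher test-ip-flow --resource-group app-rg --vm vm1 \
  --direction Outbound --protocol TCP \
  --local 10.0.0.4:60000 --remote 10.0.0.5:443
```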

You should not deploy the Network Performance Monitor (NPM) management solution. First, this solution fails to meet the scenario requirements of minimal administrative overhead, complexity, and cost. Second, NPM monitors the health of VNets and subnets. It does not have the point-to-point connectivity check tools offered by Network Watcher.

You should not run the Test-NetConnection PowerShell cmdlet from your administrative workstation. While this cmdlet is useful to test connectivity between Azure VMs, it provides no insights into NSG rules that may prevent a connection.

You should not configure an Azure Automation runbook to perform a packet capture on the target VNet. Network Watcher does integrate with Azure Automation, but this solution involves far more complexity than simply using the native Network Watcher tools.

References (click to expand)



Question 93

Your company has an Azure subscription with an Azure Active Directory (Azure AD) tenant. Your company wants to deploy a system that allows users to have a unified experience across all their Windows devices. The security policies of your company require that all user and application data must be encrypted before moving to the cloud and also be encrypted at rest when stored in the cloud.

All computers in your company run different versions of Windows 7, Windows 8.1, and Windows 10. Your company has an Active Directory Domain Services (AD DS) domain on the local infrastructure. All the computers in the company are joined to the AD DS domain.

You need to deploy Enterprise State Roaming. Your solution needs to require the lowest possible costs.

Which two prerequisites do you need to meet before configuring Enterprise State Roaming? Each correct answer presents part of the solution.

Answers


Explanation (click to expand)

You should purchase Azure AD Premium P1 licenses. You can enable Enterprise State Roaming with any Azure AD Premium license or Enterprise Mobility + Security (EMS) license. In this case, the Premium P1 license has the lower cost.

You should also configure Azure AD Connect. Since all computers in the company are joined to an AD DS domain, you need to deploy Azure AD Connect. This connector allows you to synchronize objects in your on-premises AD DS domain with an Azure AD tenant.

You should not purchase an Azure AD Basic license. This license level does not allow you to enable the Enterprise State Roaming feature.

You should not deploy AD FS. This is not a requirement for deploying the Enterprise State Roaming feature in hybrid scenarios. Azure AD Connect is the required component for hybrid synchronization scenarios.

You should not update all Windows 7 and Windows 8.1 computers to Windows 10. All of these versions are supported for Enterprise State Roaming.

References (click to expand)



Question 94

Your company has an Office 365 tenant for communications and collaboration. The company also has an Azure subscription with an Azure Active Directory (Azure AD) Premium tenant, as well as an on-premises Active Directory Domain Services (AD DS) domain.

The security policies of your company allow access to cloud applications from employee-owned devices. The security policies require that access to any Office 365 application from a mobile device be limited to users who have enrolled and registered their devices in the corporate Azure AD tenant. The policy applies only to iOS or Android devices.

You need to create a conditional access policy to implement the security policy for Microsoft Office 365 Exchange Online to meet the requirements.

Which four conditions should you configure? Each correct answer presents part of the solution.

Answers


Explanation (click to expand)

You should configure the following conditions:

* Users and groups

* Cloud apps

* Device platform

* Client apps

The Users and groups condition must always be present when you configure a conditional access policy. With this condition you control the user or group of users that the policy will affect.

You need to configure the Cloud apps condition to affect only Microsoft Office 365 Exchange Online. You could also add other Microsoft Office 365 web app applications or third-party applications that are connected to Azure AD or use the Azure AD Application Proxy.

You need to apply the Device platform condition so the policy affects only Android and iOS devices. You can use this condition to control the device platforms that will be affected by the policy. Other device platforms include Windows Phone, Windows, and macOS.

You should use the Client apps condition to control which client applications can access the application. For this specific case, you should select mobile apps and desktop clients.

You should not use a Sign-in risk condition. This condition evaluates the likelihood that the sign-in attempt is being performed by a non-legitimate user. The calculation of this likelihood value is performed during the sign-in process.

You should not use a Device state condition. This condition is used to control access based on whether the device has been marked as compliant or whether it is a device joined to hybrid Azure AD.

You should not use a Location condition. You use this condition when you want to restrict access to your cloud apps from specific regions or countries.

You should not use Require device to be marked as compliant. This is a grant option, not a condition. This option will allow access to the cloud app only to managed devices.

You should not use Require approved client app. This is a grant option, not a condition. This option grants access only to those applications that have been approved by Microsoft.

References (click to expand)



Question 95

Your company has an Azure subscription with an Azure Active Directory (Azure AD) tenant. Your company uses this Azure AD tenant for managing access to the resources deployed in Azure.

Your company has a security policy that states that all users must have only the required privileges to do their job. This policy also requires that all privileges be reviewed every month and any incorrect permission assignments must be corrected.

You decide to use Azure AD access review. Access review has not been used before.

You need to configure Azure AD access review. Your solution must require the least administrative effort.

What two actions should you perform? Each correct answer represents part of the solution.

Answers


Explanation (click to expand)

You should use the Default Program. A program is a way of organizing access reviews and controls. The Default Program is always present in access review. Since this is the first usage of access review, it is OK to use this default program. If you require additional programs for other compliance issues or business goals, you can create new programs.

You should configure a control with a monthly frequency with 14 days for the duration setting. Configuring the frequency to monthly will run the access review every month. The duration setting is the amount of time that each review is open to the reviewers to provide their input.

You should not create a new program. This requires more administrative effort and is not required since this is the first usage of the access review.

You should not configure a quarterly or yearly frequency. You need the access review to be performed every month. The duration setting does not set the frequency of the access review, but the time frame in which a reviewer can make modifications to the access review. For a monthly frequency, the maximum duration setting allowed is 27 days, to avoid overlapping with the next access review.

References (click to expand)



Question 96

Your company's local environment consists of a single Active Directory Domain Services (AD DS) domain.

You plan to offer your users single sign-on (SSO) access to Azure-hosted software-as-a-service (SaaS) applications that use Azure Active Directory (Azure AD) authentication. The tenant's current domain name is companycom.onmicrosoft.com.

You need to configure Azure AD to use company.com, the organization's owned public domain name.

What should you do?

Answers


Explanation (click to expand)

You should add a Domain Name System (DNS) verification record at the domain registrar. This step is required to verify to Microsoft that you own the public DNS domain name in question. You perform the validation by creating either a text (TXT) or mail exchanger (MX) record in your DNS zone file at the registrar's website, using Microsoft-provided values. You can delete the verification record after Azure validates the domain for use with Azure AD.

You should not remove the companycom.onmicrosoft.com domain name from the Azure AD tenant. In fact, you cannot remove this domain name because Azure uses it to identify your directory uniquely across the entire Microsoft Azure global ecosystem.

You should not add a company.com user principal name (UPN) suffix to the AD DS domain. If you use a non-routable DNS domain in AD DS, then you may indeed be required to perform this action. However, the scenario does not specify what AD DS domain name is currently defined.

You should not run Azure AD Connect from a domain member server and specify the custom installation option. Configuring the proper public and private DNS domain names is one of the prerequisite steps that needs to be completed before you run the Azure AD Connect wizard for the first time.

References (click to expand)



Question 97

You have a single Active Directory Domain Services (AD DS) domain operating at the Windows Server 2016 domain functional level. Account synchronization is configured between AD DS and your corporate Azure Active Directory (Azure AD) tenant. All user workstations run Windows 10 Enterprise Edition.

The support desk informs you that they regularly receive support requests from users who changed their Azure AD password and are no longer able to log onto the local AD DS domain.

You need to configure the environment to allow users to change their password either locally or in the cloud, and have the passwords remain in sync.

What should you do?

Answers


Explanation (click to expand)

You should upgrade Azure AD to a premium pricing tier. You need either the Premium P1 or Premium P2 stock-keeping unit (SKU) to enable both self-service password reset as well as password writeback from Azure AD to AD DS. Azure AD Basic Edition supports the former, but not the latter, feature. You upgrade your Azure AD edition by logging into the Office 365 Admin Center (portal.office.com), and clicking Purchase services from the navigation menu.

You should not enable Azure AD conditional access. This feature enables you to write policy that constrains which users, and from which locations, can authenticate to Azure AD-connected applications. Azure AD conditional access is not related to password reset and/or writeback.

You should not configure Azure AD Join for all Windows 10 workstations. Windows 10 Enterprise Edition computers can be joined to Azure AD to support Microsoft Intune-based device management. However, you still need to upgrade your Azure AD edition to meet the requirements.

You should not deploy Active Directory Federation Services (AD FS) in the local AD DS domain. AD FS facilitates token-based authentication across different identity providers and is unrelated to the issues of password reset and writeback.

References (click to expand)



Question 98

Your company has an Azure Active Directory (Azure AD) tenant federated with its on-premises Active Directory Domain Services (AD DS) domain. This domain is named companycs.com.

Your company recently purchased another company named CompanyBD. CompanyBD has its own AD DS domain named companybd.net. This domain is not federated with an Azure AD tenant.

You need to integrate the companybd.net domain with your Azure AD tenant. You decide to federate this new domain.

You attempt to federate the companybd.net domain with Azure AD by using the following cmdlet:

Convert-MsolDomainToFederated -DomainName companybd.net

You get the following error:

Convert-MsolDomaintoFederated: The federation service identifier specified in the Active Directory Federation Services 2.0 server is already in use. Please correct this value in the AD FS 2.0 Management console and run the command again.

What is the most likely reason for getting this error?

Answers


Explanation (click to expand)

You are getting this error because the value of IssuerUri is adfs.companycs.com. This is the default value for this parameter when you configure federation between the top-level domain and Azure AD. You got this value because you did not use the -SupportMultipleDomain parameter the first time you ran the Convert-MsolDomainToFederated cmdlet. You need to change the value of IssuerUri to companycs.com to be able to add an additional top-level domain. To do this, you should delete the Microsoft Office 365 Identity Platform entry under Relying Party Trusts in the AD FS Management console and run the Update-MsolFederatedDomain -DomainName companycs.com -SupportMultipleDomain command. This recreates the trust relationship with the correct value for the IssuerUri parameter.
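The remediation sequence described above can be sketched in PowerShell. This is a minimal sketch, assuming the MSOnline module is installed and the AD FS server name from the scenario; run it only after deleting the Microsoft Office 365 Identity Platform relying party trust in the AD FS Management console:

```powershell
# Connect to Azure AD with the MSOnline module and point it at the AD FS server
Connect-MsolService
Set-MsolADFSContext -Computer adfs.companycs.com

# Recreate the relying party trust with multi-domain support so that
# IssuerUri is stamped per domain instead of using the shared default
Update-MsolFederatedDomain -DomainName companycs.com -SupportMultipleDomain

# The additional top-level domain can now be federated
Convert-MsolDomainToFederated -DomainName companybd.net -SupportMultipleDomain
```

Both cmdlets must be run with -SupportMultipleDomain; mixing single-domain and multi-domain trusts against the same AD FS farm is what produces the original error.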

References (click to expand)


Home / Microsoft / AZ-100 / Question 99

Question 99

You are the administrator of the Azure Active Directory (Azure AD) tenant and the on-premises Active Directory Domain Services (AD DS) domain in your company. Your company uses Office 365 as well as other third-party cloud services. Your company uses Windows 8.1 and Windows 10 domain client computers.

Your company wants to allow all employees to use their own devices to access the company's resources, using a Bring Your Own Device (BYOD) approach.

You need to ensure that your company's assets are still protected while allowing the employees to use their own devices. You also need to keep your current device management capabilities. You plan to deploy Azure AD Join. You should ensure that your solution allows Single Sign-On (SSO).

Which two tools should you deploy? Each correct answer presents part of the solution.

Answers


Explanation (click to expand)

You should deploy Active Directory Federation Services (AD FS). You need to implement SSO, but you still need to allow your users to access third-party cloud services. Deploying AD FS allows you to manage all authentication in your on-premises AD DS domain while establishing trust relationships with cloud services to authenticate your users.

Also, you should deploy System Center Configuration Manager (SCCM), which allows you to manage your on-premises workstations. You should also deploy a Mobile Device Management (MDM) solution such as Microsoft Intune. This way, you can manage Azure AD-joined devices with the MDM solution and your on-premises workstations with SCCM.

You should not deploy Azure AD Connect. In this scenario, you need to implement SSO so users can access third-party cloud services. You can only achieve this by using AD FS, not Azure AD Connect.

You should not upgrade all domain client computers to Windows 10. Although having Windows 10 installed on all your workstations is a best practice, it is not a requirement in an Azure AD Join hybrid deployment. You can use down-level Windows devices such as Windows 7 and 8.1 or Windows Server 2008 R2 through 2012 R2.

References (click to expand)


Home / Microsoft / AZ-100 / Question 100

Question 100

Your company is evaluating a hybrid identity management strategy for authenticating users accessing applications hosted in Azure.

You have the following requirements:

* Users must be able to log in to the Azure-hosted applications using the same username and password that they use on-premises.

* Minimal additional infrastructure is needed to support the sign-on mechanism.

* User accounts revoked on-premises must be instantly revoked in Azure.

* A cloud-based solution should be in place in the event that disaster recovery is invoked. This cloud-based solution will not have the same restriction on instant user revocation.

You need to implement the identity management strategy.

Which two identity management solutions should you choose? Each correct answer presents part of the solution.

Answers


Explanation (click to expand)

You should use Azure AD Pass-through Authentication to support instant account revocation, falling back to a cloud identity provider in the case of disaster recovery. When Azure AD Pass-through Authentication is not available, you can have it fall back to Azure AD Connect with password hash synchronization to support sign-in.

You do not require AD FS in this scenario. Although it would support instant account revocation, it does not have a disaster recovery option for falling back to another provider.

Using cloud authentication would not meet the requirements. Cloud identities are not connected to on-premises accounts and cannot be revoked on-premises. Cloud authentication also offers no fallback capability in the event of a disaster.

References (click to expand)


Home / Microsoft / AZ-100 / Question 101

Question 101

You implement Azure Active Directory (Azure AD) Connect to synchronize your on-premises Active Directory objects to Azure AD. You discover that the Domain Users group is not available for assigning permissions to applications in Azure.

You need to resolve the issue using the least administrative effort.

What should you do?

Answers


Explanation (click to expand)

The Domain Users group has a property called IsCriticalSystemObject, and any user or group with this property set does not sync to Azure AD by design. Therefore, you should create a new group in Active Directory and add all the users from the domain to that group. After that group is synchronized, it can be used to grant permissions to applications in Azure AD.
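A minimal sketch of the workaround, using the on-premises ActiveDirectory module; the group name and OU path are hypothetical, and the OU is assumed to be in Azure AD Connect's synchronization scope:

```powershell
Import-Module ActiveDirectory

# Create a syncable replacement group in an OU that is synchronized to Azure AD
New-ADGroup -Name 'AllDomainUsers-Sync' -GroupScope Global `
    -Path 'OU=SyncedGroups,DC=contoso,DC=com'

# Add all domain users to the new group (Domain Users itself will not sync)
$users = Get-ADUser -Filter *
Add-ADGroupMember -Identity 'AllDomainUsers-Sync' -Members $users
```

After the next synchronization cycle, the new group appears in Azure AD and can be used for application permission assignments.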

You should not move the Domain Users group to an Organizational Unit that is configured for synchronization. Doing this will not synchronize the account because the well-known groups do not synchronize by default.

You should not modify the IsCriticalSystemObject property of the group. This property is set by design to prevent replication of well-known objects to Azure AD. Such well-known objects are the default Active Directory groups and accounts such as the Domain Administrator account.

You should not set a value on the RepsTo property in Active Directory. This property stores the server names that the directory will replicate to for the object. It has no bearing on replication to Azure AD.

References (click to expand)


Home / Microsoft / AZ-100 / Question 102

Question 102

You configure federated authentication on your Azure subscription for multiple applications. As a part of that work, you enable Home Realm Discovery and set the policy as shown in the exhibit.

The majority of your users can successfully access the application, but several users report that they are unable to sign in.

You need to resolve the problem.

What are two ways to resolve the problem? Each correct answer presents a complete solution.

Answers


Explanation (click to expand)

You should either add the users to the federated.example.edu domain or disable Home Realm Discovery to allow users to log in to any domain.

Home Realm Discovery redirects users to a specific federated login endpoint to accelerate the Azure login process. When multiple domains are in play, users may be blocked from logging in if the configuration is not correct. In this example, users cannot log in to Azure, but the policy indicates that Home Realm Discovery is enabled with a preferred domain of federated.example.edu. Because a preferred domain is specified, every user of the application must be able to log in to that domain, and clearly they cannot at this point. One possible solution is to add these users to the federated.example.edu domain to enable them to log in. A second solution is to disable Home Realm Discovery to allow users to log in to any domain.

You should not change the AllowCloudPasswordValidation property to false. This does not enable a user from a non-preferred domain to log in when a preferred domain is set. The property only allows a user with a synchronized password hash to log in directly to an Azure AD endpoint.

You should not change the PreferredDomain property to the domain of the users who cannot log in. This would fix the problem for those users, but it would subsequently block access for the users who can currently log in.

References (click to expand)


Home / Microsoft / AZ-100 / Question 103

Question 103

Your company's local environment consists of a single Active Directory Domain Services (AD DS) domain. The company purchases a Microsoft Office 365 E5 subscription, and you plan to configure directory synchronization between AD DS and Azure Active Directory (Azure AD) to support single sign-on (SSO) for your users.

You need to ensure that improperly formatted domain user names will not cause synchronization errors.

What should you do?

Answers


Explanation (click to expand)

You should download and run the free Microsoft Directory Synchronization Error Remediation (IdFix) tool on a domain member server or workstation prior to configuring directory synchronization between AD DS and Azure AD. IdFix can isolate and even remediate common errors reported by Azure AD Connect, including improperly formatted domain user names.

You should not run Azure AD Connect in custom mode. This operation actually configures directory synchronization or identity federation. It does not proactively identify potential errors and offer to remediate them for you.

You should not run the Synchronization Service Manager. This tool and the Synchronization Rules Editor are included when you install Azure AD Connect on the domain controller or member server that will host the directory synchronization service. Synchronization Service Manager is used to customize the synchronization schedule. It can only be run after directory synchronization is enabled.

You should not run the Synchronization Rules Editor. The tool can be run only post-deployment of directory synchronization. Also, the tool is used to customize the user and group attributes synchronized between on-premises and cloud environments, not to proactively address synchronization errors.

References (click to expand)


Home / Microsoft / AZ-100 / Question 104

Question 104

One of your colleagues used Azure AD Connect to synchronize all Active Directory domain user and group accounts to your Azure Active Directory (Azure AD) tenant. As a result, authorized domain users have single sign-on (SSO) access to internally developed software-as-a-service (SaaS) apps that rely on Azure AD authentication.

You need to reconfigure directory synchronization to exclude domain service accounts and user identities that should not have access to the SaaS application.

What should you do?

Answers


Explanation (click to expand)

You should re-run Azure AD Connect in order to perform organizational unit (OU) filtering and thereby customize the Active Directory identities that are replicated to Azure AD. You do not need to uninstall and reinstall Azure AD Connect to make changes to your identity synchronization topology. You just need to re-run the Azure AD Connect wizard, which offers the following options:

* Viewing current configuration

* Customizing synchronization properties

* Refreshing the directory schema

* Configuring staging mode

* Changing user sign-in

In this case, you would choose Customizing synchronization properties to configure OU filtering.

You should not run the Synchronization Rules Editor. This tool allows you to view and potentially remap Active Directory schema attributes with Azure AD properties.

You should not configure conditional access in Azure AD. Conditional access is an Azure AD security feature whereby you can define policy to restrict which Azure AD users are authorized to access cloud-based or on-premises applications. In this scenario, you need to restrict which Active Directory accounts are synchronized to Azure AD.

You should not stop the synchronization service. Doing so does not address the issue of changing which Active Directory accounts are synchronized to Azure AD.

References (click to expand)


Home / Microsoft / AZ-100 / Question 105

Question 105

Your company recently purchased an Office 365 subscription. Your company has an on-premises Active Directory Domain Services (AD DS) domain. You configure smartcard authentication support for some specific users.

You want to ensure that users can access all Office 365 applications without typing their password. You also want to ensure that they use the same password for the AD DS company domain and Office 365.

You need to deploy a solution that meets all the company's requirements.

Which solution should you deploy?

Answers


Explanation (click to expand)

You should deploy AD FS. You need your users to be able to use all Office 365 applications without typing their password every time, which means you need to implement a Single Sign-On (SSO) solution. You have also configured smartcard authentication support. AD FS is the only solution that enables SSO and also allows smartcard authentication with Office 365 applications for your users.

You should not deploy Active Directory Federation Services (AD FS) and Seamless SSO. Seamless SSO is an Azure AD Connect feature that is not supported by AD FS.

You should not deploy Azure AD Connect with password hash synchronization or pass-through authentication and Seamless SSO. None of these options allows users with a smartcard to authenticate to Office 365 applications.

You should not deploy Azure AD Connect with password hash synchronization or pass-through authentication. These options do not provide SSO on their own. You would need to enable Seamless SSO on Azure AD Connect to support SSO. These options do not meet the requirement of authenticating users with smartcards either.

References (click to expand)


Home / Microsoft / AZ-100 / Question 106

Question 106

You are asked to connect your new Office 365 subscription with your on-premises Active Directory Domain Services (AD DS) domain. You configure Azure AD Connect and enable Seamless Single Sign-On (SSO).

You need to configure Group Policy Object (GPO) support for SSO.

Which two policies or settings should you configure? Each correct answer presents part of the solution.

Answers


Explanation (click to expand)

You should configure the Site to Zone Assignment List setting. You need to configure this setting to establish the URL to which Kerberos tickets are forwarded when the user tries to sign on to Office 365 applications. You need to configure https://autologon.microsoftazuread-sso.com as an Intranet zone, because Kerberos tickets are not sent to cloud endpoints.

You should also configure the Allow updates to status bar via script policy for the Intranet Zone. When the user tries to access an Office 365 application, Seamless SSO uses JavaScript scripts to run all requests in the background. These JavaScript scripts also need to update the status bar of the user's browser. You need to configure this setting for the Intranet Zone because you need your clients to send Kerberos tickets.
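On a domain-joined client, the Site to Zone Assignment List setting ultimately writes a registry value. The GPO is the supported deployment method; the direct registry write below is only an illustrative sketch of the equivalent per-machine change:

```powershell
# Map the Seamless SSO endpoint to the Intranet zone (zone data '1' = Intranet)
$zoneMap = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMapKey'
New-Item -Path $zoneMap -Force | Out-Null
Set-ItemProperty -Path $zoneMap `
    -Name 'https://autologon.microsoftazuread-sso.com' `
    -Value '1' -Type String
```

With the endpoint in the Intranet zone, the browser is willing to forward Kerberos tickets to it, which is what Seamless SSO relies on.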

You should not configure the Internet Zone Template or Intranet Zone Template policies. These policies are useful to configure the security level for each zone for the domain users.

You should not configure the Allow updates to status bar via script policy for the Internet Zone. This policy would allow JavaScript scripts downloaded from the Internet Zone to update the status bar. In this scenario, the Seamless SSO URL is treated as an Intranet Zone URL.

You should not turn on the Notification bar policy for intranet content. This policy has no effect on the Seamless SSO feature.

References (click to expand)


Home / Microsoft / AZ-100 / Question 107

Question 107

You configure Azure AD Connect to synchronize your on-premises Active Directory Domain Services (AD DS) domain with your Office 365 subscription. You enable the password hash synchronization feature. Then, you sync all user accounts that are assigned to an employee. You also configure group-based filtering.

A user indicates that she cannot log in to Office 365 applications. However, she is able to log in successfully through her company's workstations.

You need to troubleshoot the password synchronization process.

After some investigation, you realize that this user has been moved to another job position in the company.

What is the most likely cause of the login problem?

Answers


Explanation (click to expand)

The most likely cause of the problem is that the user object has been moved to another security group. You only sync user accounts that are assigned to an employee. You manage these users by using AD DS security groups. If you accidentally move a user out of the security group that you have configured to synchronize to Azure AD, that user is removed from the Azure AD tenant and will not be able to log in. You need to move the affected user account back to the correct security group for Azure AD Connect to synchronize it again with the Azure AD tenant.

Having the User must change password at next logon setting selected on the user object would not make the user unable to log in. This setting would only prevent the password from being synchronized with the Azure AD tenant, because temporary passwords are not synced. In this scenario, the user was able to log in to her company's workstation and was not asked to change her password.

The user object being disabled would not make the user unable to log in. Azure AD Connect will not synchronize disabled objects. In this scenario, the user was able to log in to her computer, which means that the user object was not disabled.

Configuring the cloudFiltered attribute would not make the user unable to log in. In this scenario, you are using group-based filtering. Attribute filtering is a more granular technique for filtering the objects that are added to the metaverse and then synced with the Azure AD tenant. If the cloudFiltered attribute is present on an object, that object will not be synced with Azure AD.

References (click to expand)


Home / Microsoft / AZ-100 / Question 108

Question 108

You are asked to create a new set of Azure Active Directory (Azure AD) security groups that represent the entire hierarchy of a manager's team. Each group is to include the people managed directly by the manager but not the people managed by the manager's own team. For example, if Bob manages Tom and Tom manages Fred, Bob's group must include Tom but not Fred. The groups should also update dynamically as people change managers over time.

You need to implement the request using the least amount of administrative effort.

What should you do?

Answers


Explanation (click to expand)

You should create new groups using the Direct Reports rule. This rule creates a dynamic group including members who have the same ManagerID attribute.
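A hedged sketch of creating such a group with the AzureAD Preview module's New-AzureADMSGroup cmdlet; the display name, mail nickname, and the manager's object ID GUID are placeholders:

```powershell
# Dynamic group using the built-in Direct Reports rule.
# Replace the GUID placeholder with the manager's actual Azure AD object ID.
New-AzureADMSGroup -DisplayName "Bob's direct reports" `
    -MailEnabled $false -MailNickname 'bobsreports' -SecurityEnabled $true `
    -GroupTypes 'DynamicMembership' `
    -MembershipRule 'Direct Reports for "00000000-0000-0000-0000-000000000000"' `
    -MembershipRuleProcessingState 'On'
```

Azure AD evaluates the rule continuously, so membership tracks manager changes without further administrative effort.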

You should not create multiple Azure AD groups and manually add members that share the same ManagerID attribute. This will initially be successful, but it does not satisfy the auto-updating requirement as managers change.

You should not create Azure AD groups for each manager and use a custom script to update the groups on attribute changes. This would solve the requirement of keeping the group up to date on changes but would require more administrative effort to implement.

You should not construct dynamic groups in Azure AD based on the ManagerID attribute. This attribute is only available with the Direct Reports rule.

References (click to expand)


Home / Microsoft / AZ-100 / Question 109

Question 109

Your company has an Azure subscription configured with an Azure Active Directory (Azure AD) tenant. This tenant is used for managing user information. Your company has also a line-of-business (LOB) application that uses the Azure AD tenant for getting information from users.

You are asked to update the mobile attribute of all tenant users.

You need to perform this task using the least administrative effort.

What should you do?

Answers


Explanation (click to expand)

You should use the Set-AzureADUser cmdlet. This cmdlet allows you to modify the properties of a user in Azure AD. You can use this cmdlet in a script to perform a bulk modification across your tenant.
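A minimal sketch of such a bulk update; the phone number shown is hypothetical (in a real script the value would come from your HR data):

```powershell
Connect-AzureAD

# Update the mobile attribute for every user in the tenant
$newMobile = '+1 555 0100'  # placeholder value
Get-AzureADUser -All $true | ForEach-Object {
    Set-AzureADUser -ObjectId $_.ObjectId -Mobile $newMobile
}
```

Piping Get-AzureADUser -All $true into a loop keeps the update to a few lines, which is why this approach requires the least administrative effort.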

You should not use the Invoke-RestMethod cmdlet to call the Graph API. You use Invoke-RestMethod for making HTTP requests. You could use this cmdlet for calling the users' endpoint in the Graph API and then make a bulk modification, but this would require much more effort.

You should not use the Set-MsolUser cmdlet. This cmdlet is used for modifying Office 365 user properties. Although this cmdlet will modify the properties on the Azure AD tenant associated with the Office 365 subscription, your company does not have an Office 365 subscription, so you should not use this cmdlet.

You should not use the Set-ADUser cmdlet. You use this cmdlet for modifying the properties of an Active Directory Domain Services (AD DS) user. Since your Azure AD tenant is not connected with an AD DS domain, this cmdlet would not work for your environment.

References (click to expand)


Home / Microsoft / AZ-100 / Question 110

Question 110

Due to a recent corporate reorganization, team members in the Accounting department are now part of the Finance department.

You need to change the Department property for 36 Azure AD user accounts. Your solution must minimize administrative effort and make future bulk updates easier to perform.

What should you do?

Answers


Explanation (click to expand)

You should write a PowerShell script using the AzureAD module. Your script should call Connect-AzureAD to authenticate as an administrator, enumerate a list of appropriate group members, and then use a looping construct to update the Department property for the group members. For instance, your script may look like the following:

Connect-AzureAD

$group = Get-AzureADGroup -SearchString 'Accounting'

$users = Get-AzureADGroupMember -ObjectId $group.ObjectId

foreach ($u in $users) {
    Set-AzureADUser -ObjectId $u.ObjectId -Department 'Finance'
}

Going beyond this scenario's requirements, you will likely need to rename the Accounting group to Finance as well for consistency's sake.

You should not write a PowerShell workflow using the MSOnline module and a CSV file containing the relevant usernames. First, MSOnline is a legacy Azure Active Directory module and Microsoft strongly encourages customers to use the AzureAD module instead. Second, importing a CSV file is an excellent way to bulk-add new objects into AzureAD but is not appropriate for this bulk update scenario.

You should not write a DSC script and deploy it using Azure Automation. DSC is used to prevent configuration drift in target systems, not to perform bulk updates to a directory service.

You should not write a PowerShell Azure Function using the AzureRM.Profile module. The AzureRM.Profile module contains cmdlets concerning authentication and user context, not Azure Active Directory itself.

References (click to expand)


Home / Microsoft / AZ-100 / Question 111

Question 111

You hire another administrator who will be responsible for managing all infrastructure-as-a-service (IaaS) deployments in your Azure subscription.

You create a new Azure Active Directory (Azure AD) user account for the new hire. You need to configure the new user account to meet the following requirements:

* Read/write access to all Azure IaaS deployments

* Read-only access to Azure AD

* No access to Azure subscription metadata

Your solution must also minimize your access maintenance in the future.

What should you do?

Answers


Explanation (click to expand)

You should assign the user the Contributor role at the resource group level. Least privilege security means granting users only the scope and degree of access they require, and no more. The Contributor built-in role-based access control (RBAC) role grants the new employee full read/write access at that scope, but does not grant the user any privileges either at the subscription or Azure AD levels.
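With the AzureRM module of that era, the assignment could be sketched as follows; the sign-in name and resource group name are hypothetical:

```powershell
# Grant the new administrator Contributor rights scoped to one resource group.
# Repeat per resource group that holds IaaS deployments.
New-AzureRmRoleAssignment -SignInName 'newhire@companycs.com' `
    -RoleDefinitionName 'Contributor' `
    -ResourceGroupName 'IaaS-RG'
```

Because the assignment is scoped to the resource group, it grants nothing at the subscription or Azure AD level, matching the least-privilege requirement.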

You should not assign the user the Global administrator directory role. This role assignment would give the new employee full access to Azure AD, which violates the scenario's technical requirements. Also, Azure AD directory roles govern what a user can do within Azure AD and have no bearing on other Azure service permissions.

You should not assign the user the Owner role at the resource level. The Owner role is the most privileged built-in role, and its assignment must be strictly controlled. Furthermore, making multiple role assignments at the resource level makes for more difficult and complex administration in the future.

You should not assign the user the Virtual Machine Operator role at the subscription level. This built-in role grants users only the ability to monitor and restart virtual machines (VMs), which is far less privilege than the new hire requires to perform full management of the company's Azure IaaS deployments.

References (click to expand)


Home / Microsoft / AZ-100 / Question 112

Question 112

Your company has an Azure Active Directory (Azure AD) tenant named companycs.com for managing all users that need to access the resources deployed in its Azure subscription.

You need to grant access to an external consultant to some of the resources deployed in your subscription. This external consultant will use her own email address as her username. The company of the external consultant does not use Office 365 or any other Azure AD tenant.

Which PowerShell cmdlet should you use?

Answers


Explanation (click to expand)

You should use the New-AzureADMSInvitation cmdlet. You use this cmdlet to create guest users in your Azure AD tenant. In this scenario, you need to create a guest user because the external consultant will use her own email address as the username to log in to your tenant. You need to create guest users to grant access to users that do not exist in your Azure AD tenant, regardless of whether they use Office 365 or have another Azure AD tenant configured.
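A hedged sketch of inviting the consultant; the email address and redirect URL are placeholders:

```powershell
# Invite an external user as a guest; she redeems the invitation
# with her own email identity rather than a companycs.com account
New-AzureADMSInvitation `
    -InvitedUserEmailAddress 'consultant@example.com' `
    -InvitedUserDisplayName 'External Consultant' `
    -InviteRedirectUrl 'https://portal.azure.com' `
    -SendInvitationMessage $true
```

Once the invitation is redeemed, the resulting guest account can be granted role assignments on the specific resources the consultant needs.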

You should not use the New-AzureADUser cmdlet. This cmdlet is used to create a new user in your Azure AD tenant. In this scenario, using this cmdlet will create a new user that will use companycs.com as part of their username.

You should not use the New-ADUser cmdlet. You typically use this cmdlet to create new users in your on-premises AD DS domain. If your on-premises AD DS domain and your Azure AD tenant were synced, any new user would appear on your Azure AD tenant. However, they would be internal users, not guests.

You should not use the New-MsolUser cmdlet. You use this cmdlet to create a new user in an Office 365 subscription.

References (click to expand)