Category: Cloud

Forcefully Revoke Azure AD User Session Access – Immediately

Sometimes it is critical to revoke a user’s Azure AD session, for whatever reason. You can always delete the user from Azure AD, but if the user is connected via PowerShell, their token may not expire for several more minutes, or even hours, depending on the token TTL settings… So what can you do? You can forcefully revoke a user’s token session with the PowerShell cmdlet “Revoke-AzureADUserAllRefreshToken”. Due to Microsoft’s ever-changing Azure modules, I have tested this solution within the Azure Cloud Shell, not on a local machine with PowerShell ISE and the AZ or RM modules.

First, we need to identify which user will have their access revoked. Based on the Revoke cmdlet, you will need to specify the “ObjectId” parameter, and the user’s ObjectId can be found within the Azure AD blade, as seen below:

For additional information, you can view the user’s role assignments by executing the following cmdlet: “Get-AzRoleAssignment -ObjectId <>”.

Once we have identified the user and their ObjectId, we first need to connect to Azure AD by running the following cmdlet: “Connect-AzureAD -TenantId <>”. In my experience, you need to specify the TenantId. Once you have connected and verified your device, you can run the Revoke cmdlet: “Revoke-AzureADUserAllRefreshToken -ObjectId <>”. The Revoke cmdlet will not provide any details if the operation was successful, however it will throw an error if something did not go right. Yes, very helpful, right? 🙂
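Putting the whole sequence together, here is a minimal sketch; the tenant ID and UPN below are placeholders, so substitute your own values:

# Placeholder tenant ID and user -- replace with your own values
Connect-AzureAD -TenantId "00000000-0000-0000-0000-000000000000"

# Resolve the user's ObjectId (you can also copy it from the Azure AD blade)
$user = Get-AzureADUser -ObjectId "jdoe@contoso.com"

# Optional: review the user's role assignments first (requires an Az context, e.g. Cloud Shell)
Get-AzRoleAssignment -ObjectId $user.ObjectId

# Revoke all refresh tokens; no output means it worked, an error means it did not
Revoke-AzureADUserAllRefreshToken -ObjectId $user.ObjectId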

By running this Revoke cmdlet, the user has now lost all access to their Azure AD account, and any active sessions, whether via the Azure Portal UI or PowerShell, are immediately revoked. 🙂

Deploy an Azure Cloud Witness for your Failover Cluster Quorum for Windows Server 2016 & 2019 with PowerShell

For the longest time, when deploying a cluster with Windows Server, you only had two options:

  1. Using a dedicated disk for the quorum, or
  2. Configuring an SMB file-share as the quorum witness

With Server 2016 and 2019, there is now a third option, Cloud Witness. The Cloud Witness leverages Azure Blob storage to provide that additional cluster/quorum vote.

Before showing you how this is done, one should understand what the purpose of a witness/quorum is with respect to a failover cluster.

When one or more members of a cluster stop reporting to the other cluster members, there is a vote. The vote ensures there is no split decision and that the cluster has a true owner. For example, in a two-node cluster, if each node believes it is the owner, this causes a “split-brain”: neither node will ever agree on which of them is the owner. This is where a quorum witness comes in; it determines the owner by providing the third vote, i.e. a majority. This ensures the cluster has a true owner, because one side holds the majority of votes. Each member gets a vote, plus the witness.

Why this matters: in the event there is no quorum, a node can be evicted from the cluster and, as a result, will suspend all application services to prevent data corruption caused by more than one system writing data without the cluster services coordinating writes and access. Depending on policies, VMs running on the ejected cluster member will either suspend operations or be migrated to other nodes before the member is ejected.
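To see how the votes are actually distributed on an existing cluster, a quick check with the FailoverClusters PowerShell module (run on one of the nodes) looks roughly like this:

# Each node normally carries one vote (NodeWeight); the witness supplies the tie-breaking vote
Get-ClusterNode | Select-Object Name, State, NodeWeight, DynamicWeight

# Shows which resource (disk, file share or cloud witness) currently acts as the quorum witness
Get-ClusterQuorum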

Below is a step-by-step guide on how to configure the Azure Blob storage as the Cloud Witness.

Assumptions:

  • The Azure Blob storage account has already been created,
  • The cluster with at least 2 nodes already exists.

Launch the PowerShell console as Administrator, and execute the following cmdlet:

Set-ClusterQuorum -CloudWitness -AccountName "storage_account_name" -AccessKey "primary_access_key"
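If you would rather not copy the access key out of the portal by hand, it can also be pulled with the Az.Storage cmdlets; a small sketch, where the resource group name is a placeholder:

# "rg-witness" is a placeholder -- use the resource group that holds your storage account
$key = (Get-AzStorageAccountKey -ResourceGroupName "rg-witness" -Name "storage_account_name")[0].Value
Set-ClusterQuorum -CloudWitness -AccountName "storage_account_name" -AccessKey $key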

Now, if we go back to the Failover Cluster Manager console, we can see we have successfully configured the cluster with a Cloud Witness.

In conclusion, deploying a Cloud Witness for a Failover Cluster is very simple, and in the case of a power outage in one datacenter, maintenance on a node, etc., the entire cluster and its members (nodes) are all given an equal opportunity. Not only is it recommended for 2-node clusters (where a witness is effectively a requirement), but for any number of nodes, having a quorum witness is key to ensuring high availability. As mentioned, there are the traditional options, such as using a dedicated disk or an SMB file share as the cluster witness. However, with Azure Blob storage and its high uptime, we can always ensure the quorum witness is online and available.

Deploy an Azure Cloud Witness for your Failover Cluster Quorum for Windows Server 2016 & 2019

For the longest time, when deploying a cluster with Windows Server, you only had two options:

  1. Using a dedicated disk for the quorum, or
  2. Configuring an SMB file-share as the quorum witness

With Server 2016 and 2019, there is now a third option, Cloud Witness. The Cloud Witness leverages Azure Blob storage to provide that additional cluster/quorum vote.

Before showing you how this is done, one should understand what the purpose of a witness/quorum is with respect to a failover cluster.

When one or more members of a cluster stop reporting to the other cluster members, there is a vote. The vote ensures there is no split decision and that the cluster has a true owner. For example, in a two-node cluster, if each node believes it is the owner, this causes a “split-brain”: neither node will ever agree on which of them is the owner. This is where a quorum witness comes in; it determines the owner by providing the third vote, i.e. a majority. This ensures the cluster has a true owner, because one side holds the majority of votes. Each member gets a vote, plus the witness.

Why this matters: in the event there is no quorum, a node can be evicted from the cluster and, as a result, will suspend all application services to prevent data corruption caused by more than one system writing data without the cluster services coordinating writes and access. Depending on policies, VMs running on the ejected cluster member will either suspend operations or be migrated to other nodes before the member is ejected.

Below is a step-by-step guide on how to configure the Azure Blob storage as the Cloud Witness.

Assumptions:

  • The Azure Blob storage account has already been created,
  • The cluster with at least 2 nodes already exists.

Launch Failover Cluster Manager from Server Manager, connect to the cluster, and do the following: right-click the cluster object and select More Actions > Configure Cluster Quorum Settings…

Next, select the Advanced quorum configuration option.

Ensure all the nodes are selected, as seen below.

Next, select Configure a Cloud Witness:

Now we need to get our Azure Blob storage account name and its primary access key. Both can be retrieved from the Azure portal.

Now validate the settings and complete the configuration.

Now, if we go back to the Failover Cluster Manager console, we can see we have successfully configured the cluster with a Cloud Witness.
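If you prefer to verify from PowerShell instead of the console, a quick check along these lines should do (FailoverClusters module; the resource type name below is what I would expect, so confirm it on your own cluster):

# List the quorum configuration and the witness resource
Get-ClusterQuorum
Get-ClusterResource | Where-Object { $_.ResourceType -like "Cloud Witness" }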

In conclusion, deploying a Cloud Witness for a Failover Cluster is very simple, and in the case of a power outage in one datacenter, maintenance on a node, etc., the entire cluster and its members (nodes) are all given an equal opportunity. Not only is it recommended for 2-node clusters (where a witness is effectively a requirement), but for any number of nodes, having a quorum witness is key to ensuring high availability. As mentioned, there are the traditional options, such as using a dedicated disk or an SMB file share as the cluster witness. However, with Azure Blob storage and its high uptime, we can always ensure the quorum witness is online and available.

Step-by-Step – Deploying Azure Site Recovery (ASR) OVF Template (VMware On-Premises)

In the following tutorial, I will go through a step-by-step walk-through of deploying the Azure Site Recovery (ASR) VMware OVF template. This OVF template is a critical step, as it bridges the connection between your on-premises datacenter and the Azure Site Recovery vault. Obviously there are a handful of prerequisites, as we need to prepare our VMware environment and, in addition, our Azure workspace. I have created similar posts for Hyper-V and Azure-to-Azure (A2A) ASR migrations; please visit the following link for the detailed setup of the Azure Recovery Services vault HERE.

Let’s begin… The first step is to download the VMware OVF template, which can be found at the Microsoft Download Center.

Next, we need to deploy the OVF template within vCenter.

Note, this template will consume about 1.5 TB of space. This is a result of Microsoft consolidating the Configuration Server and Process Server into one workload.

Once the template is deployed, start the appliance and let’s begin registering our vCenter with the ASR vault.

Note, the licence provided with the OVF template is an evaluation licence valid for 180 days. As the customer, you need to activate Windows with a procured licence.

Now we need to provide the server with some local administrative credentials.

Once you have given it credentials, the server will automatically log in. The ASR wizard should launch on its own; if not, you can launch it manually from the icon on the desktop.

Once the ASR wizard starts, we will complete the setup for this server, followed by registering the server with ASR.

Give the server a name, e.g. VMwareASR01.

Next, we need to validate that the server can reach the Internet (i.e. Azure) and communicate as needed. If you are using a proxy, now is the time to set that up.

One thing to note: any proxy settings configured within Internet Explorer should be removed.
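For a quick sanity check from PowerShell before the wizard runs its own validation, something like the following can be used; the endpoints here are illustrative only, as the wizard checks the full list of required ASR URLs:

# Illustrative endpoints only -- the ASR wizard validates the complete set of required URLs
Test-NetConnection -ComputerName "portal.azure.com" -Port 443
Test-NetConnection -ComputerName "login.microsoftonline.com" -Port 443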

Once an Internet connection has been established, we can then sign into the Azure Portal.

Now we need to sign into Azure with some credentials, ideally with a privileged/Global Administrator account.

Once you have logged into Azure successfully, you will need to reboot the server.

Once the server is back online, the next step is to configure the Configuration Server. 🙂 In this step we will register this server/vCenter appliance with our Recovery Services vault. Let’s begin!

The server will auto-launch the ASR wizard; if not, launch it from the desktop icon.

Now that we have established an Internet connection, we can configure our Network Interface Card(s) (NICs). Note, you can add as many NICs as needed; however, this needs to be done at the vSphere level. Once the server has been configured, you cannot add and/or remove NICs, so make sure you have it configured exactly as you need it. In my case, we only need one, so we will configure that NIC here.

Next, we need to sign in to our Azure account and select the corresponding subscription, resource group, and recovery services vault. All of these should be available, and should have been created well before we began configuring this server, as per the prerequisites…

Next, the wizard will download, install, and configure MySQL on the server, along with the vSphere PowerCLI tools.

Gotcha here: the appliance did not provide the vSphere PowerCLI tools, so we had to download and install them manually.

Once we downloaded VMware’s vSphere PowerCLI toolset, we were able to continue. As mentioned, this was not provided, although it should have been. Had we continued anyway, the wizard would have thrown an error at the end of validation.
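For reference, if you hit the same gap, PowerCLI can be installed from the PowerShell Gallery; a minimal sketch, assuming the appliance can reach the Gallery (directly or through your proxy):

# Install VMware PowerCLI from the PowerShell Gallery for all users on the appliance
Install-Module -Name VMware.PowerCLI -Scope AllUsers -Force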

Next, we need to provide the credentials and details for our vCenter server(s).

Please read the prerequisites regarding the permissions needed to allow our ASR Configuration Server to communicate with the vCenter server(s).

Next, we need to provide Windows and Linux credentials used to deploy and install the ASR Mobility Service on all machines that will be replicated to Azure.

For this exercise, we are not replicating Linux machines to Azure; however, if we were, then, similar to the Windows Mobility Service, we would need to provide credentials that have elevated access to each of the Linux machines.

Once we have provided all the information above, we should be able to validate the settings and register our server with Azure and the Recovery Services vault. Give this a few minutes; it took about 5 minutes to establish the communication/trust.

Once the registration of the server is complete and the ASR appliance is officially configured with our Azure Recovery Services vault, we should be able to see the vCenter/Configuration Server within the vault.

If we click on the server, we can get some additional information, such as the server’s health, configuration, heartbeat, and so on…

We can also now click on the Process Server and get some additional information as well.

Now we are able to select the VMs we want to begin replicating to Azure and start testing failovers, either real, or simulated.

I hope this was helpful! Thanks, and until next time…

Log Analytics (OMS) AD Assessment – “No Data Found”

So, you deployed the OMS/Log Analytics AD (Active Directory) Assessment solution and let it sit for a few hours, or maybe even a few days. Yet the AD Assessment tile still shows “No Data Found”…

Well, that is frustrating! Below is the series of steps I took to get this working, and ultimately the actual solution that got this OMS/Log Analytics solution pack working.

First things first, I did the basics: checked that the Microsoft Monitoring Agent was deployed and installed correctly, and checked that its service was running.
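A quick way to do that from PowerShell (the MMA runs under the service name HealthService):

# Confirm the Microsoft Monitoring Agent service is present and running
Get-Service -Name HealthService | Select-Object Name, DisplayName, Status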

Confirmed the AD Assessment prerequisites were all satisfied:

  • The Active Directory Health Check solution requires .NET Framework 4.5.2 or above installed on each computer that has the Microsoft Monitoring Agent (MMA) installed. The MMA agent is used by System Center 2016 – Operations Manager, Operations Manager 2012 R2, and the Log Analytics service.
  • The solution supports domain controllers running Windows Server 2008 and 2008 R2, Windows Server 2012 and 2012 R2, and Windows Server 2016.
  • A Log Analytics workspace is required to add the Active Directory Health Check solution from the Azure Marketplace in the Azure portal. There is no further configuration required.

After all that, I decided to execute the following query within Log Analytics and got the following results:

Operation | where Solution == "ADAssessment" | sort by OperationStatus asc

Okay, so I ensured .NET 4.0 was fully installed. For good measure, I enabled all of the .NET 4.6 sub-features and, for kicks, installed .NET 3.5 as well. Yet… still nothing!

Next, I decided to take a look at the registry…

Navigate to the following registry key: “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Management Groups\<YOUR Management Group Name>\Solutions\ADAssessment”

There, I decided to delete the “LastExecuted” value, and then rebooted the server…
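For reference, the equivalent from an elevated PowerShell prompt; the management group name is a placeholder you need to fill in yourself:

# Template path -- substitute your own management group name before running
$mg = "<YOUR Management Group Name>"
Remove-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Management Groups\$mg\Solutions\ADAssessment" -Name "LastExecuted"
Restart-Computer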

After a few minutes, I went back to the OMS/Log Analytics portal, and there it is!!!!

I ran the same query again, and verified the AD Assessment solution was working as expected:

Operation | where Solution == "ADAssessment" | sort by OperationStatus asc

Great! Now, if I click within the tile, I get the following AD Health Checks.

I hope this helped! Cheers! For more information on the OMS Active Directory Assessment Solution, please visit: https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-ad-assessment


Azure Update Management – Part II

A little while ago, I blogged about OMS’ (Operations Management Suite) Update Management solution. As great as this solution was, there were some limitations at the time, such as no ability to exclude specific patches, no co-management with SCCM (Configuration Manager), and a few others.

Since that post, there have been some great improvements to Update Management, so let’s go over some of the key updates, and do a quick setup walk-through:

  1. Both Windows (2008R2+) and (most) Linux Operating Systems are supported
  2. Can patch any machine in any cloud, Azure, AWS, Google, etc.
  3. Can patch any machine on-premises
  4. Ability to Exclude patches

One of the biggest improvements I want to highlight is the ability to EXCLUDE patches; perhaps in time there will also be an INCLUDE-only option. 😉

First, we need to get into our Azure VM properties and scroll down to Update Management.

  • If the machine already belongs to a Log Analytics workspace but does not have an Automation Account linked, link or create the Automation Account now.
  • If you do not have a Log Analytics workspace and/or an Automation Account, you have the ability to create them at run-time now.

In this scenario, I kept it as clean as possible, so the Log Analytics workspace needs to be created, likewise the Automation Account, and Update Management needs to be linked to the workspace.

Once enabled, it takes a few minutes to complete the solution deployment…

After Update Management has been enabled and has run its discovery on the VM, insights will be populated, such as its compliance state.

Now we know this machine is not compliant, as it is missing a security update and, in addition, 3 other updates. Next, we will schedule a patching deployment, so let’s do that now.

Now we can create a deployment schedule with some base settings: name, time, etc. One thing to note, we can now EXCLUDE specific patches! This is a great feature; say we are patching an application server and a specific version of .NET will break our application because the application Dev team has not tested it against the latest .NET Framework.

In this demo, I am going to EXCLUDE patch KB890830.

Next, we need to create a schedule. This can be an ad-hoc (one-time) schedule or a recurring schedule.
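For those who prefer scripting over the portal, the same kind of deployment can also be created with the Az.Automation module. The sketch below uses placeholder resource group, Automation Account and VM names, and the parameter names are worth double-checking against the current Az.Automation documentation:

# Rough sketch with placeholder names -- verify against the current Az.Automation docs
$vm = Get-AzVM -ResourceGroupName "rg-demo" -Name "vm-demo"

# A one-time schedule flagged for use by an update configuration
$schedule = New-AzAutomationSchedule -ResourceGroupName "rg-demo" -AutomationAccountName "aa-demo" `
    -Name "PatchRunDemo" -StartTime (Get-Date).AddHours(2) -OneTime -ForUpdateConfiguration

# Windows update deployment that excludes KB890830, mirroring the portal example above
New-AzAutomationSoftwareUpdateConfiguration -ResourceGroupName "rg-demo" -AutomationAccountName "aa-demo" `
    -Schedule $schedule -Windows -AzureVMResourceId $vm.Id `
    -IncludedUpdateClassification Security, Critical -ExcludedKbNumber "890830" -Duration (New-TimeSpan -Hours 2)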

Once you hit Create, we can see the deployment schedule under Scheduled Update Deployments.

You can also click on the deployment to see its properties, including which patches have been excluded.

After the deployment has initiated, you can take a look at its progress.

If we go into the Update Deployments (yes, I got impatient, deleted the first one, and re-created it…) and click on the deployment we created, we can see the details.

As you can see, patch KB890830 was not applied! Awesome.

If we now go back to the Update Management module, we can see the VM is compliant.


Azure Virtual Network (VNet) Peering

In this blog post, I will go over,

  • What Azure VNet (Virtual Network) Peering is,
  • When to use VNet Peering,
  • How to implement VNet Peering.

What is Azure Virtual Network (VNet) Peering?

Azure VNet (Virtual Network) Peering enables resources within two separate virtual networks to communicate with one another. Leveraging Microsoft’s backbone infrastructure, the two peered virtual networks communicate over their own isolated network.

Below we have two virtual networks (VNet01 and VNet02) with different IP address spaces. By implementing VNet Peering, the two networks will be able to communicate with one another as if all resources were in one network. A few notes: VNet Peering is not transitive, i.e. if VNet01 and VNet02 are peered, and VNet02 and VNet03 are peered, VNet01 and VNet03 still cannot communicate with one another. Also, inbound and outbound traffic across the VNet peer is charged at $0.01 per GB, and prices are a bit higher for Global VNet Peering. Get the official numbers here: https://azure.microsoft.com/en-us/pricing/details/virtual-network/.

When to use Azure Virtual Network Peering?

As mentioned above, you want to enable Azure VNet Peering when you have two virtual networks with resources (VMs) in both that need to communicate with one another. For example, let’s say you have exhausted the 4,000 VM limit within a VNet…

Some of the benefits of VNet Peering are:

Before you go ahead and implement, there are a few requirements:

Finally, how to implement it!

In this example, both of my virtual networks (VNets) are in the same region, Canada Central.

Select VNet01, and select Peering:


Give the Peering a name, “VNet01Peering”, and select the other VNet, VNet02.


Give it a few seconds, and it should now be connected to VNet02:

Next, we need to apply the same concepts to VNet02, so let’s do that now.
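If you would rather script both directions instead of clicking through the portal twice, here is a hedged sketch with the Az.Network cmdlets (the resource group name is a placeholder; the VNet names match this demo):

# "rg-network" is a placeholder -- use the resource group(s) that hold your VNets
$vnet1 = Get-AzVirtualNetwork -Name "VNet01" -ResourceGroupName "rg-network"
$vnet2 = Get-AzVirtualNetwork -Name "VNet02" -ResourceGroupName "rg-network"

# Peering must be created in both directions
Add-AzVirtualNetworkPeering -Name "VNet01Peering" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Add-AzVirtualNetworkPeering -Name "VNet02Peering" -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id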



Now, if we go to the VMs within each of the virtual networks and try to ping a VM in the other VNet, it should work! In the images below, you can see one failed ping; that was a response from before VNet Peering was implemented.

VM01 in VNet01 pinging VM02 in VNet02; 10.10.10.4 -> 192.168.1.4 (10.10.10.0/24 -> 192.168.1.0/24).

And conversely, the other way around…

VM02 in VNet02 pinging VM01 in VNet01; 192.168.1.4 -> 10.10.10.4 (192.168.1.0/24 -> 10.10.10.0/24).