
Azure Virtual WAN – Now supports 3rd Party Network Virtual Appliances (NVA)

Following up on a previous post about Azure Virtual WAN, one of the key items I called out was the lack of integration for 3rd party Network Virtual Appliances (NVAs) and how you are instead encouraged to use the Azure Firewall service. Well, as of today, July 21, 2020, you can now deploy NVAs within the vWAN. This is great news, as many customers prefer 3rd party firewalls, e.g. Palo Alto, Check Point, etc.

Looking forward to testing this out.

For more information, read up on the following post HERE.

 

Azure Virtual WAN (Hub-Spoke v2.0)

Over the past few years I have been encouraging customers to adopt Microsoft Azure’s Hub-Spoke (hub and spoke) network topology and architecture. Not too long ago (Ignite 2019), Microsoft made some significant improvements to that architecture and its best practices by introducing “Virtual WAN“. I like to think of this new service and evolution as Hub-Spoke architecture version 2.0.

What is Azure Virtual WAN?

Azure Virtual WAN, or vWAN, is a networking service that brings networking, routing and security functionality together under a single pane of glass. Azure vWAN is essentially a software-defined (SD-WAN) solution and, similar to service endpoints and private links, it leverages the Microsoft network backbone to build a highly available, high-speed global transit network. Its capabilities include branch connectivity (Site-to-Site VPN, Point-to-Site VPN, ExpressRoute), intra-cloud/transitive networking, Azure Firewall, and encrypted private connectivity.

Azure vWAN SKU Type

Microsoft has kept the options simple: Azure vWAN comes in two SKUs, Basic or Standard. Once you deploy Azure vWAN, you can get started with one of the use cases mentioned above and add functionality as your network evolves.

With Basic, you can only use vWAN for Site-to-Site VPN connections. With the Standard SKU, you can leverage all of the functionality mentioned above, i.e. Site-to-Site VPN, Point-to-Site VPN, ExpressRoute, and inter-hub and VNet-to-VNet transit via the vHub.
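
As a quick illustration, here is a minimal PowerShell sketch (Az.Network module, placeholder names and address space) of deploying a Standard vWAN and a virtual hub; swap the SKU to "Basic" if Site-to-Site VPN is all you need:

# assumes an authenticated session (Connect-AzAccount); placeholder values below
$rg = "rg-network"
$location = "canadacentral"

# create the Virtual WAN resource; use -VirtualWANType "Basic" for the Basic SKU
$vwan = New-AzVirtualWan -ResourceGroupName $rg -Name "vwan-demo" -Location $location -VirtualWANType "Standard"

# create a virtual hub inside the vWAN with its own address space
New-AzVirtualHub -ResourceGroupName $rg -Name "vhub-cc" -Location $location -VirtualWan $vwan -AddressPrefix "10.100.0.0/24"

VNet connections, VPN gateways and ExpressRoute gateways can then be attached to the hub as your requirements grow.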

Architecture (simplified) Overview

The image below depicts a high-level but fairly comprehensive view of how Azure Virtual WAN and its capabilities can be integrated for various scenarios. With the “any-to-any” connectivity model, the global transit network connects your branch offices, remote users, datacenters and Azure VNets to one another. In the model below, a spoke can be an Azure VNet, a branch office, a remote user, or even the Internet. This architecture enables logical one-hop transit connectivity between the networking endpoints.

Source: https://docs.microsoft.com/en-us/azure/virtual-wan/media/virtual-wan-global-transit-network-architecture/figure1.png

Conclusion

The Azure vWAN architecture is a hub and spoke architecture with scalability, resilience and performance built in for branches (VPN/SD-WAN devices), users (Azure VPN/OpenVPN/IKEv2), ExpressRoute circuits, and Azure virtual networks (VNets). There are some excellent benefits to using Azure Virtual WAN within your architecture, as it simplifies the overall network topology and provides plenty of opportunities to integrate your on-premises datacenters, branch offices and remote users into a global transit network architecture. There are some limitations I can already see today, most notably not being able to leverage 3rd party network virtual appliances (NVAs) such as Palo Alto, Check Point, etc., however I am sure that is already on the roadmap.


Azure Service Endpoints versus Azure Private Links

Recently a lot of folks have been asking about Azure Service Endpoints and Azure Private Links — what’s the difference? when to use which? and why?

For starters, let’s review what a Service Endpoint is and what a Private Link is, followed by which solution is better to use, and why.

Azure Service Endpoints

With any Azure Virtual Network (VNet) you can leverage a ‘service endpoint’ that provides a secure, direct connection to an Azure service over Microsoft’s backbone network infrastructure. Service endpoints allow resources in the VNet to communicate with the Azure service from their private IP addresses, without requiring a public IP on the VNet. Service Endpoints work by enabling the endpoint on your VNet or subnet(s); once enabled, you configure which PaaS resource(s) accept traffic from those subnet(s)/VNets. There is no need for IP filtering and/or NAT translation; all you need to do is tell the PaaS resource(s) which VNet/subnet to allow traffic from. When Service Endpoints are enabled, the PaaS resource sees traffic coming from your VNet’s private IP, not a public IP.
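
As a hedged sketch of what this looks like in PowerShell (Az modules, placeholder resource names), enabling the Microsoft.Storage service endpoint on a subnet and then restricting a storage account to that subnet might look like this:

# placeholder names -- adjust to your environment
$rg = "rg-demo"

# enable the Microsoft.Storage service endpoint on an existing subnet
$vnet = Get-AzVirtualNetwork -ResourceGroupName $rg -Name "vnet-demo"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "snet-app" -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Storage" | Out-Null
$vnet = $vnet | Set-AzVirtualNetwork

# allow only that subnet on the storage account, then deny everything else by default
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "snet-app"
Add-AzStorageAccountNetworkRule -ResourceGroupName $rg -Name "stdemo001" -VirtualNetworkResourceId $subnet.Id | Out-Null
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName $rg -Name "stdemo001" -DefaultAction Deny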

Azure Private Link

Azure Private Link allows you to access Azure (PaaS) services, like Key Vault, Storage, Log Analytics, etc., over a private endpoint within your Azure VNet. The communication between the private endpoint and your VNet continues to travel over Microsoft’s backbone network, and your service is no longer exposed to the Internet. One drawback with Private Link is that, to resolve the PaaS resource by its usual name, you need DNS that resolves the private link zone for that resource. There is integration with Azure Private DNS to set this up for you, but this can be problematic if you already run your own DNS service or do not want to use Azure Private DNS with your VNet. Once enabled, you have granted access to a specific PaaS resource within your VNet, meaning you can control the egress to that PaaS resource. Unlike Service Endpoints, Private Link also allows access from your on-premises infrastructure to Azure resources over an ExpressRoute circuit or Site-to-Site VPN tunnel, or via peered VNets.
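
For illustration, a minimal sketch with the Az.Network and Az.PrivateDns modules (placeholder names throughout) of creating a private endpoint for a storage account’s blob service, plus the private DNS zone that keeps name resolution working, might look like this:

# placeholder names -- adjust to your environment
$rg = "rg-demo"
$vnet = Get-AzVirtualNetwork -ResourceGroupName $rg -Name "vnet-demo"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "snet-app"
$storage = Get-AzStorageAccount -ResourceGroupName $rg -Name "stdemo001"

# private endpoint for the blob sub-resource of the storage account
$connection = New-AzPrivateLinkServiceConnection -Name "plsc-stdemo001" -PrivateLinkServiceId $storage.Id -GroupId "blob"
New-AzPrivateEndpoint -ResourceGroupName $rg -Name "pe-stdemo001-blob" -Location "canadacentral" -Subnet $subnet -PrivateLinkServiceConnection $connection

# private DNS zone so the storage account name resolves to the private IP from inside the VNet
$zone = New-AzPrivateDnsZone -ResourceGroupName $rg -Name "privatelink.blob.core.windows.net"
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName $rg -ZoneName $zone.Name -Name "link-vnet-demo" -VirtualNetworkId $vnet.Id

Registering the endpoint’s A record (or a DNS zone group) in that zone is still required and is omitted here for brevity.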

What’s the difference, and when to use?
  • The biggest difference between Private Link and Service Endpoints is public IPs. With Private Link, no public IP is involved and traffic never traverses the Internet, whereas with Service Endpoints the PaaS resource still listens on its public endpoint and you are simply limiting which VNets/subnets may reach it.
  • A second key difference with Private Link is that, once enabled, you have granted access to a specific PaaS resource within your VNet, meaning you can control the egress to that PaaS resource.
  • Service Endpoints are much simpler to implement and significantly reduce the complexity of your VNet/Architecture design.
  • Private Link will always ensure traffic stays within your VNet.
  • Another key difference between Private Link and Service Endpoints is cost. There is no additional cost to implement Service Endpoints, as the cost is already included in the VNet itself, whereas Private Link costs can grow quickly depending on the total ingress and egress traffic and the runtime of the link. For example, within Azure Canada Central, a Private Link that is available for 730 hours in a given month and carries 100TB each of ingress and egress can run over $2,000 monthly (a rough cost sketch follows this list).
    • This is something to factor when designing or implementing either solution, as Private Links will quickly add to your monthly spend.
  • Another consideration is availability, meaning Service Endpoints and Private Link are not available for every service. For example, as of writing this post there is no Service Endpoint for Azure Log Analytics, however there is a Private Link solution for Log Analytics. Both features are available, but not for all resources/services. For the complete lists you can visit the links below, Service Endpoints: HERE ; Private Link: HERE.
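
As a rough back-of-the-envelope sketch of where that number comes from, assuming illustrative rates of about $0.01 per endpoint-hour and $0.01 per GB processed in each direction (actual pricing varies by region and over time, so always check the Azure pricing page):

# illustrative, assumed rates -- not official pricing
$hourlyRate = 0.01        # per private endpoint, per hour
$perGbRate  = 0.01        # per GB processed, charged for inbound and outbound separately
$hours      = 730         # endpoint runtime in a month
$ingressGB  = 100 * 1024  # 100 TB of ingress
$egressGB   = 100 * 1024  # 100 TB of egress

$estimate = ($hours * $hourlyRate) + (($ingressGB + $egressGB) * $perGbRate)
Write-Host ("Estimated monthly Private Link cost: `${0:N2}" -f $estimate)  # roughly $2,055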

Ultimately, if you are weighing Private Link versus Service Endpoints, you are probably concerned with security, and on that front Private Link is superior to Service Endpoints. The list of services supporting Private Link will continue to grow just as it did for Service Endpoints, but based on my observation, Private Link already has a much deeper portfolio of Azure service integrations.

Azure Security Center – Continuous Export via Azure Policy

Earlier this week, I highlighted how you can use Azure Security Center (ASC) and its Continuous Export feature to send Security Alerts and Recommendations to Event Hubs (and/or Log Analytics) — you can find that post HERE. Today I want to show you how you can follow governance best practices and leverage Azure Policy to ensure ASC is configured to forward to Event Hubs and/or Log Analytics.

As a quick intro, Azure Security Center (ASC) is a holistic solution provided by Microsoft to assess not only your Azure resources but also your on-premises infrastructure. ASC is a security management solution that improves your overall security posture across your Azure environment and on-premises infrastructure. I work with a lot of customers who require an “agnostic” SIEM solution. ASC generates detailed security recommendations and alerts that can be viewed through the ASC portal, however customers often have a requirement to send this telemetry to a third-party SIEM such as QRadar, Splunk, etc. In short, your Azure resources can send their security events directly to Event Hubs (via diagnostic agents), or ASC can be configured to do it for you (the easier approach).

To get these policies, go HERE to the Azure GitHub repo. In the next post, I will walk you through the setup and all the necessary parameters required to get this policy up and ‘governing’.
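
In the meantime, as a rough sketch of what assigning such a policy can look like with the Az.Resources module (the display-name filter below is an assumption for illustration, and the policy parameters are intentionally left out since they are covered in the next post):

# placeholder subscription scope -- adjust as needed
$scope = "/subscriptions/00000000-0000-0000-0000-000000000000"

# locate the continuous export policy definition (built-in, or imported from the GitHub repo)
$definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -like "*export to Event Hub*" }

# assign it at subscription scope; deployIfNotExists policies need a managed identity to remediate
New-AzPolicyAssignment -Name "asc-continuous-export" -Scope $scope -PolicyDefinition $definition -Location "canadacentral" -AssignIdentity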

Azure Security Center – Continuous Export

Azure Security Center (ASC) is a great holistic solution provided by Microsoft to assess not only your Azure resources but also your on-premises infrastructure. ASC is a security management solution that improves your overall security posture across your Azure environment and on-premises infrastructure. I work with a lot of customers who require an “agnostic” SIEM solution so they don’t have all of their eggs in one basket (so to speak) with a single vendor. Azure Sentinel is a great solution, but it still lacks maturity in comparison to products like IBM’s QRadar, Splunk and some others.

ASC also generates detailed security recommendations and alerts that can be viewed through the ASC portal. However, when customers have a requirement to send this telemetry to a third-party SIEM, Azure Event Hubs is a great middleman solution.

In short, your Azure resources can send their security events directly to Event Hubs (via diagnostic agents), or ASC can be configured to do it for you (the easier approach). Choosing the latter, we can configure ASC to continuously export the data it collects and forward it to Event Hubs, which in turn allows the third-party SIEM to ingest the data from Event Hubs.

Once you have enabled ASC and enrolled your resources (assuming you have already configured Event Hubs and a third-party SIEM), you can then set up Continuous Export within the ASC console as shown below.

Setting up ASC Continuous Export is pretty straightforward, provided you have already configured Event Hubs and your SIEM to ingest from it. Within ASC, select Continuous Export, choose the target to send the data to (either Event Hubs or Log Analytics/Sentinel), select the type of alerts and recommendations (All, Low, Medium, High), and specify the subscription where Event Hubs lives, the Event Hub namespace, name, and policy name. Hit Save, and that is it!

That is it, pretty simple. Definitely a much easier solution than deploying the Linux and Windows Azure Diagnostics agents (LAD/WAD) — another post for another day 🙂

Automate and Deploy Microsoft Defender Advanced Threat Protection (MDATP) via PowerShell

A few days ago, I needed to on-board Azure Windows Server VMs with Microsoft Defender Advanced Threat Protection, or MDATP for short. Sometimes Azure Security Center (ASC) has issues on-boarding VMs and deploying the MDATP agent. As a result, I wrote the following PowerShell script, which downloads the MDATP.cmd file from my Azure Blob container and installs it locally on the VM. The script lets you automate this for many VMs, scoped to a Resource Group.

Now there are a few assumptions here…

  1. Download the MDATP.cmd file from the Defender Security Center portal
  2. Remove the requirement for user consent for the MDATP execution
  3. Upload the modified MDATP file to an Azure Blob container
  4. Generate a SAS URI for the MDATP file

There are many examples on the Internet on how to do step #4. Maybe in time I will do another post.

Removing the requirement for user interaction/consent before the MDATP agent executes can be done by removing or commenting out the following lines of code. Open the MDATP.cmd file in Notepad and add a “:” before each line of code from lines 9 through 19, except line 14. It should look something like this.

Now, update and run the following PowerShell code. You can validate the on-boarding either in the Defender Security Center portal or by running the MDATPClientAnalyzer on the VM.

#update resource group as needed
$resourcegroup = "YOUR_RESOURCE_GROUP"
#get only Windows Server VM names
$vms = Get-AzVM -ResourceGroupName $resourcegroup | Where-Object {$_.StorageProfile.OsDisk.OsType -eq "Windows"} | Select-Object -ExpandProperty Name
foreach ($vm in $vms)
{
    #friendly start message to indicate which server has started
    Write-Host "Server $vm has started..."
    #create folder, do not display error if folder already exists
    New-Item -Path "C:\" -Name "MDATP" -ItemType "directory" -ErrorAction SilentlyContinue
    #download the MDATP.cmd file from the Storage Account with the SAS URI, then execute it elevated
    Invoke-WebRequest -Uri "YOUR_URI_SAS" -OutFile "C:\MDATP\WindowsDefenderATPLocalOnboardingScript.cmd"
    Start-Process -FilePath "C:\MDATP\WindowsDefenderATPLocalOnboardingScript.cmd" -Verb RunAs -Wait
    #sleep for 5 seconds
    Start-Sleep -Seconds 5
    #restart the server
    Restart-Computer -ComputerName $vm -Force
    #friendly finished message to indicate which server has completed and will now reboot
    Write-Host "Server $vm has completed, reboot initiated..."
}

Forcefully Revoke Azure AD User Session Access – Immediately

Sometimes it is critical to revoke a user’s Azure AD session, for whatever reason that may be. You can always delete the user from Azure AD, however if the user is connected via PowerShell, the user’s token may not expire for a few more minutes, or maybe hours, depending on the token TTL settings… So what can you do? You can forcefully revoke a user’s token session by using the following PowerShell cmdlet, “Revoke-AzureADUserAllRefreshToken“. Due to Microsoft’s ever-changing Azure modules, I have tested this solution within the Azure Cloud Shell, and not on a local machine with PowerShell ISE and the Az or RM modules.

First we need to identify which user will have their access revoked. Based on the Revoke cmdlet, you will need to specify the “ObjectId” parameter, and the user’s ObjectId can be found within the Azure AD blade as seen below:

For additional information, you can view the user’s role assignments by executing the following cmdlet, “Get-AzRoleAssignment -ObjectId <>“.

Once we have identified the user and their ObjectId, we first need to connect to Azure AD by running the following cmdlet, “Connect-AzureAD -TenantId <>“. In my experience you need to specify the TenantId. Once you have connected and verified your device, you can run the Revoke cmdlet as seen below: “Revoke-AzureADUserAllRefreshToken -ObjectId <>“. The Revoke cmdlet will not provide any details if the operation was successful, however it will throw an error if something did not go right — yes, very helpful, right? 🙂
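
Putting it all together, a minimal sketch of the whole sequence (the tenant ID and user principal name are placeholders; run from Cloud Shell, where both the AzureAD and Az modules are available):

# connect to Azure AD; specifying the tenant ID avoids any ambiguity
Connect-AzureAD -TenantId "00000000-0000-0000-0000-000000000000"

# look up the user to get their ObjectId (a UPN also works with -ObjectId here)
$user = Get-AzureADUser -ObjectId "jdoe@contoso.com"

# optional: review the user's role assignments before revoking
Get-AzRoleAssignment -ObjectId $user.ObjectId

# revoke all refresh tokens, invalidating the user's existing sessions
Revoke-AzureADUserAllRefreshToken -ObjectId $user.ObjectId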

By running this Revoke cmdlet, the user has now lost all access to their Azure AD account, and any active sessions, whether via the Azure Portal UI or PowerShell, are immediately revoked. 🙂

Deploy an Azure Cloud Witness for your Failover Cluster Quorum for Windows Server 2016 & 2019 with PowerShell

For the longest time, when deploying a cluster with Windows Server, you only had two options:

  1. Using a dedicated disk for the quorum, or
  2. Configuring an SMB file-share as the quorum witness

With Server 2016 and 2019, there is now a third option, Cloud Witness. The Cloud Witness leverages Azure Blob storage to provide that additional cluster/quorum vote.

Before showing you how this is done, one should understand what the purpose of a witness/quorum is with respect to a failover cluster.

When one or more members of a cluster stop reporting to the other cluster members, there is a vote. The vote ensures there is no split vote and that the cluster has a true owner. For example, in a two-node cluster, if each node believes it is the owner, this causes a “split-brain”; in short, neither node will ever agree on who the owner is. This is where a quorum witness comes in, determining the owner by providing the third vote, i.e. the majority. This ensures the cluster has a true owner by majority of votes: each member gets a vote, plus the witness.

Why this matters: in the event there is no quorum, a node can be evicted from the cluster and, as a result, will suspend all application services to prevent data corruption caused by more than one system writing data without the cluster services coordinating data writes and access. Depending on policies, VMs running on the ejected cluster member will either suspend operations or be migrated to other nodes before the member is ejected.

Below is a step-by-step guide on how to configure the Azure Blob storage as the Cloud Witness.

Assumptions:

  • The Azure Blob storage account has already been created,
  • The cluster with at least 2 nodes already exists.

Launch the PowerShell console as Administrator, and execute the following cmdlet:

Set-ClusterQuorum -CloudWitness -AccountName "storage_account_name" -AccessKey "primary_access_key"

Now if we go back to the Failover Cluster Manager console, we can see we have successfully configured the cluster with a Cloud Witness.
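
You can also confirm the change from the same elevated PowerShell session (a quick sketch, assuming the FailoverClusters module is loaded):

# show the current quorum configuration; QuorumResource should now be the Cloud Witness
Get-ClusterQuorum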

In conclusion, deploying a Cloud Witness for a Failover Cluster is very simple, and in the case of a power outage in one datacenter, maintenance on a node, etc., the cluster and its members (nodes) are all accounted for with an equal vote. Not only is it recommended (practically a requirement) for 2-node clusters, but for any number of nodes, having a quorum witness is key to ensuring high availability. As mentioned, there are the traditional options such as using a dedicated disk or a file share (SMB) as the cluster witness, however with the high availability of Azure Blob storage we can always ensure the quorum witness is online and available.

Deploy an Azure Cloud Witness for your Failover Cluster Quorum for Windows Server 2016 & 2019

For the longest time, when deploying a cluster with Windows Server, you only had two options:

  1. Using a dedicated disk for the quorum, or
  2. Configuring an SMB file-share as the quorum witness

With Server 2016 and 2019, there is now a third option, Cloud Witness. The Cloud Witness leverages Azure Blob storage to provide that additional cluster/quorum vote.

Before showing you how this is done, one should understand what the purpose of a witness/quorum is with respect to a failover cluster.

When one or more members of a cluster stop reporting to the other cluster members, there is a vote. The vote ensures there is no split vote and that the cluster has a true owner. For example, in a two-node cluster, if each node believes it is the owner, this causes a “split-brain”; in short, neither node will ever agree on who the owner is. This is where a quorum witness comes in, determining the owner by providing the third vote, i.e. the majority. This ensures the cluster has a true owner by majority of votes: each member gets a vote, plus the witness.

Why this matters: in the event there is no quorum, a node can be evicted from the cluster and, as a result, will suspend all application services to prevent data corruption caused by more than one system writing data without the cluster services coordinating data writes and access. Depending on policies, VMs running on the ejected cluster member will either suspend operations or be migrated to other nodes before the member is ejected.

Below is a step-by-step guide on how to configure the Azure Blob storage as the Cloud Witness.

Assumptions:

  • The Azure Blob storage account has already been created,
  • The cluster with at least 2 nodes already exists.

Launch Failover Cluster Manager within Windows Server, connect to the cluster, and do the following: right-click the cluster object and select More Actions > Configure Cluster Quorum Settings…

Next, select the Advanced quorum configuration option.

Ensure we have all the nodes selected, as seen below.

Next, select Configure a cloud witness:

Now we need our Azure Blob storage account name and its primary access key. These can be retrieved from the Azure portal.
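
If you prefer not to copy the key from the portal, a quick sketch of retrieving it with the Az.Storage module (placeholder resource names) would be:

# list the storage account keys; use the Value of key1 in the wizard
Get-AzStorageAccountKey -ResourceGroupName "rg-demo" -Name "stclusterwitness" | Format-Table KeyName, Value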

Now validate the settings and complete the configuration.

Now if we go back to the Failover Cluster Manager console, we can see we have successfully configured the cluster with a Cloud Witness.

In conclusion, deploying a Cloud Witness for a Failover Cluster is very simple, and in the case of a power outage in one datacenter, maintenance on a node, etc., the cluster and its members (nodes) are all accounted for with an equal vote. Not only is it recommended (practically a requirement) for 2-node clusters, but for any number of nodes, having a quorum witness is key to ensuring high availability. As mentioned, there are the traditional options such as using a dedicated disk or a file share (SMB) as the cluster witness, however with the high availability of Azure Blob storage we can always ensure the quorum witness is online and available.