Category: Cloud

Azure Update Management – Part II

A little while ago, I blogged on OMS’ (Operations Management Suite) Update Management solution. As great as this solution was, there were some limitations at the time, such as not being able to exclude specific patches, no co-management with SCCM (Configuration Manager), and a few others.

Since that post, there have been some great improvements to Update Management, so let’s go over some of the key updates, and do a quick setup walk-through:

  1. Both Windows (2008 R2 and later) and (most) Linux operating systems are supported
  2. Can patch machines in any cloud: Azure, AWS, Google, etc.
  3. Can patch machines on-premises
  4. Ability to exclude patches

One of the biggest improvements I want to highlight is the ability to EXCLUDE patches; perhaps in time there will also be an INCLUDE-only option. 😉

First, we need to get into our Azure VM properties and scroll down to Update management.

  • If the machine already belongs to a Log Analytics workspace but does not have an Automation Account, link or create the Automation Account now
  • If you do not have a Log Analytics workspace and/or an Automation Account, you can create them on the spot

In this scenario, I kept things as clean as possible, so both the Log Analytics workspace and the Automation Account need to be created, and Update Management needs to be linked to the workspace.
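For reference, the same pieces can be stood up with the AzureRM PowerShell module. This is just a rough sketch; the resource group, names and location below are placeholders, and I still linked the Automation Account to the workspace through the portal as shown above.

# Sketch only: create the workspace and Automation Account used by Update Management
Login-AzureRmAccount

$rg  = "RG-Patching"        # placeholder resource group
$loc = "CanadaCentral"

# Log Analytics workspace
New-AzureRmOperationalInsightsWorkspace -ResourceGroupName $rg -Name "LA-Patching01" -Location $loc -Sku "PerGB2018"

# Automation Account
New-AzureRmAutomationAccount -ResourceGroupName $rg -Name "AA-Patching01" -Location $loc

# Enable the Updates solution in the workspace
Set-AzureRmOperationalInsightsIntelligencePack -ResourceGroupName $rg -WorkspaceName "LA-Patching01" -IntelligencePackName "Updates" -Enabled $true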

Once enabled, it takes a few minutes to complete the solution deployment.

After Update Management has been enabled, and it has run its discovery on the VM, insights will be populated, like its compliance state.

Now we know this machine is not compliant, as it is missing a security update, in addition to three other updates. Next, we will schedule a patch deployment, so let’s do that now.

Now we can create a deployment schedule with some base settings: name, time, etc. One thing to note is that we can now EXCLUDE specific patches! This is a great feature; let’s say we are patching an application server, and a specific version of .NET will break our application because the application dev team has not tested it against the latest .NET Framework.

In this demo, I am going to EXCLUDE patch KB890830.

Next, we need to create a schedule. This can be an ad-hoc schedule, or a recurring schedule.
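For a rough idea of what the same deployment looks like in PowerShell, the AzureRM.Automation module exposes the schedule, the VM scope, and the KB exclusion. This is only a sketch with placeholder names (subscription ID, resource group, Automation Account, VM), not the exact deployment created in the portal.

# Sketch only: a weekly update deployment that excludes KB890830
$rg = "RG-Patching"
$aa = "AA-Patching01"

$schedule = New-AzureRmAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $aa `
    -Name "WeeklyPatching" -StartTime (Get-Date).AddDays(1) -DaysOfWeek Saturday -WeekInterval 1 `
    -ForUpdateConfiguration

New-AzureRmAutomationSoftwareUpdateConfiguration -ResourceGroupName $rg -AutomationAccountName $aa `
    -Schedule $schedule -Windows `
    -AzureVMResourceId "/subscriptions/<subId>/resourceGroups/RG-Patching/providers/Microsoft.Compute/virtualMachines/VM01" `
    -IncludedUpdateClassification Security,Critical,Updates `
    -ExcludedKbNumber "890830" `
    -Duration (New-TimeSpan -Hours 2)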

Once you hit Create, you can see the deployment schedule under Scheduled Update Deployments.

You can also click on the deployment to see its properties, and which patches have been excluded.

After the deployment has initiated, you can take a look at its progress.

If we go into the Update Deployment (yes, I got impatient, deleted the first one, and re-created it…) and click on the deployment we created, we can see the details.

As you can see, patch KB890830 was not applied! Awesome.

If we now go back to the Update Management module, we can see the VM is compliant.

 


Azure Virtual Network (VNet) Peering

In this blog post, I will go over:

  • What is Azure VNet (Virtual Network) Peering
  • When to use VNet Peering
  • How to implement VNet Peering

What is Azure Virtual Network (VNet) Peering?

Azure VNet (Virtual Network) Peering enables resources within two separate virtual networks to communicate with one another. Leveraging Microsoft’s backbone infrastructure, the two peered virtual networks communicate over their own isolated network.

Below we have two virtual networks (VNet01 and VNet02) that have different IP address spaces. By implementing VNet Peering, the two networks will be able to communicate with one another as if all resources were in one network. A few notes: VNet Peering is not transitive, i.e. if VNet01 and VNet02 are peered, and VNet02 and VNet03 are peered, VNet01 and VNet03 still cannot communicate with one another. Also, inbound and outbound traffic across the peering is billed at $0.01 per GB, and prices are a bit higher for Global VNet Peering. Get the official numbers here: https://azure.microsoft.com/en-us/pricing/details/virtual-network/.

When to use Azure Virtual Network Peering?

As mentioned above, you want to enable Azure VNet Peering when you have resources (VMs) in two virtual networks that need to communicate with one another. For example, let’s say you have exhausted the 4,000 VM limit within a VNet…

Some of the benefits of VNet Peering are:

Before you go ahead and implement, there are a few requirements:

Finally, how to implement it!

In this example, both of my virtual networks (VNets) are in the same region, Canada Central.

Select VNet01, and select Peering:

 

Give the Peering a name, “VNet01Peering” and select the other VNet, VNet02.

 

Give it a few seconds, and it should now be connected to VNet02:

Next, we need to apply the same configuration to VNet02, so let’s do that now.
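For those who prefer PowerShell, both portal steps boil down to two calls of Add-AzureRmVirtualNetworkPeering, one in each direction. A quick sketch (the resource group name is a placeholder; the VNet and peering names are the ones used in this demo):

# Sketch only: peer VNet01 and VNet02 in both directions
$vnet1 = Get-AzureRmVirtualNetwork -ResourceGroupName "RG-Networking" -Name "VNet01"
$vnet2 = Get-AzureRmVirtualNetwork -ResourceGroupName "RG-Networking" -Name "VNet02"

# VNet01 -> VNet02
Add-AzureRmVirtualNetworkPeering -Name "VNet01Peering" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id

# VNet02 -> VNet01
Add-AzureRmVirtualNetworkPeering -Name "VNet02Peering" -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id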

 

 

Now if we go to the VMs within each of the virtual networks and try to ping a VM in the other VNet, it should work! In the images below, you can see a failed ping; that was from a previous attempt, before VNet Peering was implemented.

VM01 in VNet01 pinging VM02 in VNet02: 10.10.10.4 (10.10.10.0/24) -> 192.168.1.4 (192.168.1.0/24).

And conversely, the other way around…

VM02 in VNet02 pinging VM01 in VNet01: 192.168.1.4 (192.168.1.0/24) -> 10.10.10.4 (10.10.10.0/24).

 

Wait, Installing Windows Server CALs on an Azure VM isn’t your last step…

Recently I was presented with a problem where the client needed to increase the number of Terminal Services (RDP) sessions from the default 2 to 5. The server was a virtual machine (VM) hosted in Azure, running Windows Server 2016. So, simple solution, right? Just install the Terminal Services (Remote Desktop Services) roles, purchase and install the 5 CALs, and walk away.

Well, after I installed Terminal Services, configured the Remote Desktop roles, and installed and activated the 5 CALs, User3 was still unable to log in without kicking User1 or User2 off the machine.

Turns out, the end users were given the RDP file from the Azure portal, which was fine, however that file contained the administrative switch set to true. With this property enabled, User3 would never be able to log in without kicking one of the other users off. So, what to do?

 

Opening the RDP file and modifying the administrative switch from 1 to 0 was the trick! I gave the users the updated RDP file, and all was good. Users 3, 4 and 5 were now able to log on to the server.

If you’re curious, below is an example of the RDP file contents (open it within Notepad). When you download the RDP file from the Azure portal, it will contain the following info: the public IP of the server, prompt for credentials, administrative session, etc. You will need to change the administrative switch from 1 to 0, and save the file. Of course, you still need to install Terminal Services, purchase the CALs, install them, and so on.

 

full address:s:512.802.768.266:3389
prompt for credentials:i:1
administrative session:i:1
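If you have a handful of these RDP files to fix, a quick one-liner does the trick. This assumes the downloaded file is called vm.rdp (adjust the name for your environment):

# Flip the administrative session flag from 1 to 0 in the downloaded RDP file
(Get-Content .\vm.rdp) -replace 'administrative session:i:1', 'administrative session:i:0' | Set-Content .\vm.rdp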

 

FYI, Group Policy has nothing to do with this, so that was eventually removed as part of the solution. (https://support.microsoft.com/en-us/help/2833839/guidelines-for-installing-the-remote-desktop-session-host-role-service)

Configuring RSA Authentication Agent for ADFS 3.0 + Office 365

Security and multi-factor authentication (MFA) are some of the big buzzwords this year (2017), and when deploying Office 365, MFA is almost a no-brainer. In the following post, I will demonstrate how to configure the RSA Authentication Agent for ADFS 3.0. Some configuration was done prior to the agent deployment, i.e. TCP/UDP ports, RSA auto-registration, the sdconf.rec export, etc. For the full documentation, please see the footnotes from RSA and Microsoft for ADFS 3.0 implementation requirements and guidelines.

Let’s get started. Please note, the following is for a Windows Server 2012 R2 (ADFS 3.0) and RSA Authentication Agent 1.0.2.

You will need the “sdconf.rec” file from your RSA administrator(s).

 

Next, within the ~\RSA\RSA Authentication Agent\AD FS Adapter\ folder, copy the “ADFSRegistrationSample.ps1” script to the “SampleRegistrationScripts” folder. This is a known bug in RSA Authentication Agent 1.0.2; the file should be in that folder by default, but it is not.

Execute the PowerShell script as Local Administrator…
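Roughly, the copy and the registration look like this from an elevated PowerShell prompt. The $rsaRoot value is a placeholder for wherever the agent is installed on your server:

# Placeholder path: point $rsaRoot at your RSA Authentication Agent install folder
$rsaRoot = "<install folder>\RSA\RSA Authentication Agent\AD FS Adapter"

# Work around the known 1.0.2 packaging bug: copy the sample script into the expected folder
Copy-Item (Join-Path $rsaRoot "ADFSRegistrationSample.ps1") (Join-Path $rsaRoot "SampleRegistrationScripts")

# Register the RSA adapter with AD FS (run as Local Administrator)
Set-Location (Join-Path $rsaRoot "SampleRegistrationScripts")
.\ADFSRegistrationSample.ps1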

Now you should be able to see the RSA configurations within the AD FS management console.

If we go into Authentication Policies > Per Relying Party Trust, we can now edit the MFA settings for Office 365.

For this demo, we will enable both Extranet and Intranet.

Enable RSA SecurID Authentication. Now, if everything was configured correctly, users signing in to the Office 365 portal will be prompted for an RSA token once they supply valid Office 365/AD credentials!
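If you prefer to script the per-relying-party piece, the GUI change above roughly maps to setting an additional authentication rule on the Office 365 relying party trust. A sketch only; your relying party display name may differ:

# Sketch only: request an additional authentication factor for the Office 365 relying party
Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" `
    -AdditionalAuthenticationRules '=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", Value = "http://schemas.microsoft.com/claims/multipleauthn");'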

 

 

 

Connect Batch of Azure VMs to Log Analytics (OMS) via PowerShell

So, you have a bunch of virtual machines (VMs) in Azure, didn’t use an ARM template, and now need to connect the VMs to Log Analytics (OMS). Earlier this month, I demonstrated how this can be done with the ARM portal; here’s that blog post. Of course, that has to be done individually and can be very tedious if you have tens or hundreds of machines to update… All I can think of is PowerShell!

Here is a script, based on one Microsoft already provides for a single VM, that I tweaked to traverse your entire resource group and add ALL VMs within the RG to Log Analytics.

Here is the link to Microsoft TechNet for that script. Please test it out and let me know, and if it helped you out, please give it a 5-star rating.

Microsoft TechNet PowerShell Gallery
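If you just want the gist of the approach, here is a minimal sketch (not the gallery script itself): it loops through every VM in a resource group and pushes the Microsoft Monitoring Agent extension, pointed at your workspace. The resource group and workspace names below are placeholders, and it assumes Windows VMs.

# Sketch only: attach the MMA extension to every VM in a resource group
$rg        = "RG-Demo"
$workspace = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName $rg -Name "LA-Demo01"
$keys      = Get-AzureRmOperationalInsightsWorkspaceSharedKeys -ResourceGroupName $rg -Name "LA-Demo01"

$publicSettings    = @{ "workspaceId"  = $workspace.CustomerId.ToString() }
$protectedSettings = @{ "workspaceKey" = $keys.PrimarySharedKey }

foreach ($vm in Get-AzureRmVM -ResourceGroupName $rg) {
    Set-AzureRmVMExtension -ResourceGroupName $rg -VMName $vm.Name -Location $vm.Location `
        -Name "MicrosoftMonitoringAgent" -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
        -ExtensionType "MicrosoftMonitoringAgent" -TypeHandlerVersion "1.0" `
        -Settings $publicSettings -ProtectedSettings $protectedSettings
}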

If all went well, your before and after should look similar to this. I had two test VMs in my Resource Group.

Before:

After:


Connect Azure VMs to Log Analytics (OMS) via ARM Portal

Let’s say you have a bunch of machines in Azure and want them communicating with Azure Log Analytics (aka OMS). Well, I am pretty sure the last thing you want to do is deploy the Microsoft Monitoring Agent to each machine manually…

Well, now you can connect a VM to Log Analytics (OMS) with just a few clicks.

Go into the ARM (Azure Resource Manager) portal, and navigate to your “Log Analytics” blade, select your OMS workspace name, and within the Workspace Data Sources, select Virtual Machines.

Here you should have your machines that currently live within Azure. As you can see, there is one machine that is not connected to the OMS workspace. Let’s connect it now.

Select the VM in question, and you will now be presented with the following:

Make sure the VM is online/running, and select Connect. The VM must be online in order for the extension to be pushed to it.

Give it a few moments, and there we go! No manual agent deployment.

 

We can also verify in OMS that our new machine is chatting with Log Analytics. (Go into the Agent Health solution/tile.)

ADFS Monitoring with Azure, OMS, SCOM 2016

ADFS (Active Directory Federation Services) has really taken flight since the inception of Office 365 and Azure Active Directory. Getting your on-premises environment configured with online identity services such as Azure, and having SSO (Single Sign-On) capabilities, makes ADFS fundamental. Implementing ADFS is one thing, but what about monitoring your ADFS environment?

The following post is intended to illustrate the differences in ADFS monitoring by comparing the following tools: Azure AD Connect Health, OMS (Operations Management Suite), and SCOM 2016 (System Center Operations Manager).

SCOM (Operations Manager) 2016

The first step is to deploy SCOM agents to your ADFS servers and install the ADFS Management Pack. Once that is complete and discovery has run, we should start seeing data within the ADFS view(s).

Within the ADFS view, we can see some useful information such as token requests. This data is presented hourly, so we can see the number of tokens being requested per hour over the given date range.

Another good view is Failed Password Attempts. We can see how many bad password attempts were made over the selected date range, but information such as which user, and when, would be useful.

This information is all good, however without doing some custom management pack work, it is impossible to get any additional data, i.e. which users are requesting tokens, which users are entering bad passwords, and which users are connecting to which site/service offered by ADFS.

OMS (Operations Management Suite)

OMS does a nice job with dashboards, but unlike SCOM, we not only need to know which Event IDs to capture, we also need to build out our dashboards. This is not ideal, as it requires some custom work and some investigation into ADFS-related Event IDs.

The query below, “EventID=4648 OR EventID=4624 | measure count() by TargetAccount”, shows us which target account/Active Directory user has requested the most ADFS tokens over the last hour. Please note, this query is based on the OMS Log Analytics query language version 1.

Since OMS does require a lot of ADFS knowledge, i.e. Event IDs, I decided not to proceed any further with building additional queries and dashboards.

Azure AD Connect Health

Lastly, Azure AD Connect Health is probably the simplest and least technical configuration.

As a prerequisite, I enabled all event types on the ADFS logs.
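For reference, that prerequisite roughly amounts to the following on each ADFS server. This is only a sketch; the exact steps depend on your ADFS version, so follow the official Connect Health documentation:

# Sketch only: enable auditing for the Application Generated subcategory...
auditpol.exe /set /subcategory:"Application Generated" /failure:enable /success:enable

# ...and turn success/failure audits on in ADFS itself
Set-AdfsProperties -LogLevel Errors,Warnings,Information,SuccessAudits,FailureAudits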

After running the AD Connect Health agent on the ADFS server(s) and launching the Azure Resource Manager portal, we get some dashboards. Right off the bat, we can see some excellent information. Let’s take a deeper look.

If we click on the Total Requests widget, it shows us similar data to what we see in SCOM 2016, with some exceptions. Not only can we see the number of tokens being requested, we can also see which ADFS server within the farm is issuing the tokens. Since this is a highly available, load-balanced configuration, it is comforting to know ADFS is distributing tokens as designed.

Secondly, we can also see which services within ADFS are generating the most hits. This is great for seeing which sites are the busiest. It is something that is lacking in SCOM and OMS, and that I was unable to produce even after some custom MP work.

 

 

If we go into the Bad Password Attempts widget, we can see not only the number of bad password attempts, but also which user, at what time, and from which source IP the attempt was generated. Very cool!

Overall, AD Connect Health does an excellent job, providing rich data and expanding on what SCOM already does.

Verdict

After comparing SCOM 2016, OMS and Azure AD Connect Health, the clear winner is Azure AD Connect Health. Not only is the configuration straightforward, it provides more than enough information to monitor the ADFS environment. Azure AD Connect Health provides rich and very clear dashboards with almost no effort other than some log configuration on the ADFS server(s). The data is comparable to what SCOM presents, however much richer and more detailed. OMS and SCOM are still good tools, however they require more technical knowledge, and building the dashboards can be laborious.