Month: March 2018

How to Increase ASR (Azure Site Recovery) Replication and Failback Default Settings

Now that you have deployed ASR (Azure Site Recovery) for Hyper-V and have started replication, you may notice the replication process takes forever, with several VMs still sitting in the queue. That is right: by default, ASR replicates only 4 (four) VMs at a time. This value can be increased to a maximum of 32, but where do you change this setting?

To increase the number of replication threads from 4 up to 32 (or anything in between), launch the Registry Editor (regedit) on the Hyper-V host and navigate to: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Replication

From there, you will need to create the value if it does not already exist (I have never seen it by default in any of my deployments…). Create a DWORD value named “UploadThreadsPerVM” and set it to whatever you see fit. Again, the maximum is 32.

Likewise, you can increase the default number of threads (4) used for data transfer during failback; this value controls how many VMs will fail back from ASR in parallel. The path is the same, the value is named “DownloadThreadsPerVM”, and again it can be set to a maximum of 32.
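If you would rather script the change than edit the registry by hand, below is a minimal sketch using Python's built-in winreg module; run it on the Hyper-V host in an elevated (Administrator) session. The thread counts used here (16) are just example values, not a recommendation.

```python
# Minimal sketch: set the ASR per-VM upload/download thread counts via the registry.
# Run on the Hyper-V host, as Administrator. The value of 16 is only an example.
import winreg

ASR_KEY_PATH = r"SOFTWARE\Microsoft\Windows Azure Backup\Replication"

def set_asr_threads(value_name: str, threads: int) -> None:
    """Create or update a DWORD value under the ASR Replication key (1-32)."""
    if not 1 <= threads <= 32:
        raise ValueError("ASR supports between 1 and 32 threads per VM")
    # CreateKeyEx opens the key if it already exists, or creates it if it does not.
    # KEY_WOW64_64KEY avoids being redirected to Wow6432Node by a 32-bit Python.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, ASR_KEY_PATH, 0,
                            winreg.KEY_SET_VALUE | winreg.KEY_WOW64_64KEY) as key:
        winreg.SetValueEx(key, value_name, 0, winreg.REG_DWORD, threads)

if __name__ == "__main__":
    set_asr_threads("UploadThreadsPerVM", 16)    # replication (default 4, max 32)
    set_asr_threads("DownloadThreadsPerVM", 16)  # failback    (default 4, max 32)
```

The same change can of course be made directly in regedit, with reg.exe, or with PowerShell.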

After that is completed, your Hyper-V host’s registry entries should look something like this. Please note, this change is fully supported by the ASR/Microsoft team. Do note, however, that it can saturate your network due to the increase in uploads to Azure. You can also adjust the schedule and limits for the bandwidth throttling settings; see Step 10 of my previous post.

 

For additional information, please visit https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-plan-capacity-vmware#control-network-bandwidth.


Azure Update Management – Part II

A little while ago, I blogged on OMS’ (Operations Management Suite) Update Management solution. As great as that solution was, it had some limitations at the time, such as no ability to exclude specific patches, no co-management with SCCM (Configuration Manager), and a few others.

Since that post, there have been some great improvements to Update Management, so let’s go over some of the key updates, and do a quick setup walk-through:

  1. Both Windows (2008R2+) and (most) Linux Operating Systems are supported
  2. Can patch machines in any cloud (Azure, AWS, Google, etc.)
  3. Can patch any machine on-premises
  4. Ability to Exclude patches

One of the biggest improvements I want to highlight is the ability to EXCLUDE patches; perhaps in time there will also be an option to INCLUDE only specific patches. 😉

First, we need to get into our Azure VM’s properties and scroll down to Update Management.

  • If the machine already belongs to a Log Analytics workspace but is not linked to an Automation Account, link (or create) the Automation Account now.
  • If you do not have a Log Analytics workspace and/or an Automation Account, you can create them here at run-time.

In this scenario, I kept it as clean as possible, so both the Log Analytics workspace and the Automation Account need to be created, and the Update Management solution needs to be linked to the workspace.

Once enabled, it takes a few minutes for the solution deployment to complete…

After Update Management has been enabled and has run its discovery on the VM, insights such as its compliance state will be populated.

Now we know this machine is not compliant, as it is missing a security update, plus 3 other updates. Next, we will schedule a patching deployment, so let’s do that now.
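As a side note, the compliance data behind this view lands in the Log Analytics workspace, so you can also query it outside the portal. Below is a rough sketch (my own addition, not part of the original walkthrough) that queries the Update table over the Log Analytics REST API; the workspace ID and token are hypothetical placeholders, and you should verify the table schema against the current documentation.

```python
# Rough sketch: list machines with updates still needed, straight from Log Analytics.
# Assumes Update Management writes to the "Update" table and that you already have a
# valid AAD bearer token for https://api.loganalytics.io (both are assumptions here).
import requests

WORKSPACE_ID = "<log-analytics-workspace-guid>"  # hypothetical placeholder
TOKEN = "<aad-access-token>"                     # hypothetical placeholder

QUERY = """
Update
| where UpdateState == "Needed" and Optional == false
| summarize MissingUpdates = dcount(Title) by Computer, Classification
"""

resp = requests.post(
    f"https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"query": QUERY},
    timeout=30,
)
resp.raise_for_status()

# The response is a set of tables, each with column metadata and rows.
for table in resp.json()["tables"]:
    cols = [c["name"] for c in table["columns"]]
    for row in table["rows"]:
        print(dict(zip(cols, row)))
```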

Now we can create a deployment schedule with some base settings: name, time, etc. One thing to note: we can now EXCLUDE specific patches! This is a great feature. Say we are patching an application server and a specific .NET update would break our application because the Dev team has not yet tested it against the latest .NET Framework; we can simply exclude that patch.

In this demo, I am going to EXCLUDE patch KB890830.

Next, we need to create a schedule. This can be an ad-hoc schedule, or a recurring schedule.

Once you hit Create, you can see the deployment listed under Scheduled Update Deployments.
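If you ever need to create the same kind of deployment programmatically (exclusions, schedule and all), here is a rough sketch against the Microsoft.Automation softwareUpdateConfigurations REST API. This is not how the post does it; the api-version, names, IDs and token below are assumptions/placeholders, so verify the request schema against the official API reference before relying on it.

```python
# Rough sketch: create a one-time update deployment that excludes KB890830.
# All IDs, names and the bearer token are hypothetical placeholders.
import requests

SUB, RG, ACCOUNT = "<subscription-id>", "<resource-group>", "<automation-account>"
TOKEN = "<aad-access-token>"
VM_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
         "/providers/Microsoft.Compute/virtualMachines/VM01")

url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       f"/providers/Microsoft.Automation/automationAccounts/{ACCOUNT}"
       "/softwareUpdateConfigurations/DemoDeployment"
       "?api-version=2017-05-15-preview")  # assumed preview api-version from this era

body = {
    "properties": {
        "updateConfiguration": {
            "operatingSystem": "Windows",
            "windows": {
                "includedUpdateClassifications": "Critical,Security,UpdateRollup",
                "excludedKbNumbers": ["890830"],   # the patch excluded in this demo
                "rebootSetting": "IfRequired",
            },
            "azureVirtualMachines": [VM_ID],
            "duration": "PT2H",                    # maintenance window
        },
        "scheduleInfo": {                          # ad-hoc (one-time) schedule
            "frequency": "OneTime",
            "startTime": "2018-03-20T02:00:00-04:00",
            "timeZone": "America/Toronto",
        },
    },
}

resp = requests.put(url, json=body,
                    headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
resp.raise_for_status()
print(resp.json()["properties"]["provisioningState"])
```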

You can also click on the deployment to see its properties, including which patches have been excluded.

After the deployment has initiated, you can take a look at its progress.

If we go into Update Deployments (yes, I got impatient, deleted the first one, and re-created it…) and click on the deployment we created, we can see the details.

As you can see, patch KB890830 was not applied! Awesome.

If we now go back to the Update Management module, we can see the VM is compliant.

 

Azure Virtual Network (VNet) Peering

In this blog post, I will go over:

  • What is Azure VNet (Virtual Network) Peering,
  • When to use VNet Peering,
  • How to implement VNet Peering.

What is Azure Virtual Network (VNet) Peering?

Azure VNet (Virtual Network) Peering enables resources in two separate virtual networks to communicate with one another. The peered virtual networks communicate over Microsoft’s backbone infrastructure, on their own isolated, private network.

Below we have two Virtual Networks (VNet01 and VNet02) with different IP address spaces. By implementing VNet Peering, the two networks will be able to communicate with one another as if all resources were in one network. A few notes: VNet Peering is not transitive, i.e. if VNet01 and VNet02 are peered, and VNet02 and VNet03 are peered, VNet01 and VNet03 still cannot communicate with one another through VNet02. Also, inbound and outbound traffic across the peering is charged at $0.01 per GB (prices are a bit higher for Global VNet Peering); get the official numbers here: https://azure.microsoft.com/en-us/pricing/details/virtual-network/.

When to use Azure Virtual Network Peering?

As mentioned above, you want to enable Azure VNet Peering when you have two virtual networks with resources (VMs) in each that need to communicate with one another. For example, let’s say you have exhausted the 4,000 VM limit within a VNet…

Some of the benefits of VNet Peering are:

Before you go ahead and implement, there are a few requirements:

Finally, how to implement it!

In this example, both of my virtual networks (VNets) are in the same region, Canada Central.

Select VNet01, and select Peering:

 

Give the Peering a name, “VNet01Peering” and select the other VNet, VNet02.

 

Give it a few seconds, and it should now be connected to VNet02:

Next, we need to apply the same configuration to VNet02, so let’s do that now.
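As a side note, if you would rather script the peering than click through the portal, here is a rough sketch using the Azure SDK for Python (azure-mgmt-network). The resource group, VNet IDs and the second peering name are placeholders/assumptions, and depending on your SDK version the operation may be create_or_update rather than begin_create_or_update.

```python
# Rough sketch: create both directions of a VNet peering with the Azure SDK for Python.
# Names, IDs and the resource group are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

def vnet_id(rg: str, vnet: str) -> str:
    """Build the full resource ID of a virtual network."""
    return (f"/subscriptions/{subscription_id}/resourceGroups/{rg}"
            f"/providers/Microsoft.Network/virtualNetworks/{vnet}")

def peer(rg: str, local_vnet: str, peering_name: str, remote_vnet: str) -> None:
    """Create one direction of the peering (it must be set up from both sides)."""
    poller = client.virtual_network_peerings.begin_create_or_update(
        rg, local_vnet, peering_name,
        {
            "remote_virtual_network": {"id": vnet_id(rg, remote_vnet)},
            "allow_virtual_network_access": True,
        },
    )
    poller.result()  # block until the peering is provisioned

# Mirror the portal steps above: peer VNet01 -> VNet02, then VNet02 -> VNet01.
peer("rg-demo", "VNet01", "VNet01Peering", "VNet02")
peer("rg-demo", "VNet02", "VNet02Peering", "VNet01")
```

Just like in the portal, the peering only shows as Connected once both directions have been created.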

 

 

Now if we go to the VMs within each virtual network and try to ping a VM in the other VNet, it should work! In the images below, you can see one ping failed; that was from an attempt made before VNet Peering was implemented.

VM01 in VNet01 pinging VM02 in VNet02: 10.10.10.4 (10.10.10.0/24) -> 192.168.1.4 (192.168.1.0/24).

And conversely, the other way around…

VM02 in VNet02 pinging VM01 in VNet01: 192.168.1.4 (192.168.1.0/24) -> 10.10.10.4 (10.10.10.0/24).