
Deploying and Configuring Storage Spaces Direct (S2D)

This blog post will focus on deploying Storage Spaces Direct (S2D) on Windows Server 2016 (the steps for Server 2019 should be very similar, if not identical…) in a ROBO (Remote Office/Branch Office) configuration with Dell S2D Ready Nodes (S2DRN), leveraging RDMA (Remote Direct Memory Access). That is a mouthful, so let's first cover what Storage Spaces Direct actually is.

What is Storage Spaces Direct? Microsoft introduced Storage Spaces Direct (S2D) with the release of Windows Server 2016. S2D lets you take industry-standard servers, leverage the internal local drives within the nodes, and create highly available, highly scalable software-defined storage. Using a hyper-converged or converged architecture, you can deploy quickly and scale storage easily, while implementing features such as storage tiers and caching, all while taking advantage of RDMA networking.

What is RDMA? Remote Direct Memory Access, or RDMA for short, is an enterprise networking technology that allows data to be exchanged directly between the memory of two systems, without consuming CPU cycles or involving the Operating System kernel. RDMA gives your applications high IOPS at very low latency, using either RoCE (RDMA over Converged Ethernet) or iWARP (Internet Wide Area RDMA Protocol).

Note: the steps below focus on a single node of a 2-node cluster. All of the steps also need to be executed on the second node.


Network Connectivity

Before we begin implementing, deploying and configuring, we need to plan out the network connectivity design and understand what it will look like. Below is a high-level diagram that illustrates the network connectivity for host management and VM traffic, and for the RDMA (storage) traffic.


Network Configuration

Next we should map out our IP configuration. For this 2-node deployment we need the following network adapters and IPs.

| Traffic Class | Purpose | Minimum IPs Required | VLAN ID | Tagged/Untagged | IP Address Space | VLAN IP Address |
| --- | --- | --- | --- | --- | --- | --- |
| Out of Band (iDRAC) | Remote Management | 2 | | Untagged | /29 | |
| Management (Host) | Management of Cluster and Cluster Nodes | 3 | | Tagged/Untagged | /29 | |
| Storage 01 | SMB Traffic | 2 | | Tagged/Untagged | /29 | |
| Storage 02 | SMB Traffic | 2 | | Tagged/Untagged | /29 | |

Now that we have defined our networking configuration, we can move forward with booting the nodes, and making some necessary changes to the BIOS.


BIOS Configuration

Launch the node and log into the BIOS (usually F2 at the Dell prompt)… Next, go to Device Settings and let's configure the RDMA/QLogic adapters.

Your configuration should look similar to this. In my instance, I am leveraging iWARP and not RoCE. By default, the adapters will allow for both modes, but we want to force iWARP only.

  • Virtualization Mode: Disabled
  • DCBX (Data Center Bridging): Disabled
  • Link Speed: SmartAN
  • NIC + RDMA Mode: Enabled
  • RDMA Operation Mode: iWARP
  • Virtual LAN ID: 1 (the default)

Remember, this needs to be done on both RDMA adapters! Once the settings have been applied and saved, go ahead and reboot the node. Remember to do the second node too!


Install & Update Operating System

Next, we need to install the Operating System. As a best practice, once the OS is installed, update it and update all network drivers.


Validate & Rename Network Adapters

It is also a good idea to rename the network adapters. Before we do that, let's confirm the adapters are present and look right.

Get-NetAdapter
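
Once they look right, the rename itself can be scripted. A minimal sketch, assuming the default 'Ethernet'-style names on a fresh install and the friendly names used throughout this post (adjust to your own port layout; the QLogic ports may already carry slot/port names):

# Hypothetical mappings; match each physical port to the name you want to use later
Rename-NetAdapter -Name 'Ethernet' -NewName 'NIC1' -Verbose
Rename-NetAdapter -Name 'Ethernet 2' -NewName 'NIC2' -Verbose
Rename-NetAdapter -Name 'Ethernet 3' -NewName 'SLOT 3 PORT 1' -Verbose
Rename-NetAdapter -Name 'Ethernet 4' -NewName 'SLOT 3 PORT 2' -Verbose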


Install Windows Features & Roles

Once the OS has been installed and patched, we need to install the necessary roles and features, i.e. Hyper-V, Failover Clustering, etc.

Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools -Verbose -Restart

Configure Host Network

Now we need to configure the host management network. In this step we will create a SET (Switch Embedded Teaming) switch. This switch not only teams the two host network adapters, it is also the virtual switch that the guest VMs will use via Hyper-V.

New-VMSwitch -Name S2DSwitch -AllowManagementOS 0 -NetAdapterName 'NIC1','NIC2' -MinimumBandwidthMode Weight -Verbose

Within this code, note that NIC1 and NIC2 are the host management adapters, which were renamed to make life easier.

Now we need to create and configure the host management adapter. We will do this by executing the following cmdlet. Please note, in my environment, the Host Management network is untagged.

Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName S2DSwitch -Passthru | Set-VMNetworkAdapterVlan -Untagged -Verbose

Once we execute this command, and run the Get-NetAdapter cmdlet, we can now see we have an additional network adapter.

In the event you need to tag your Management adapters, you can use the following cmdlets as a reference.

Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 1' -DisplayName 'VLAN ID' -DisplayValue 103 -Verbose
Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 2' -DisplayName 'VLAN ID' -DisplayValue 104 -Verbose

Great, now we can add the nodes to the domain, and set the Management network adapters with static IPs.
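
If you want to script that step as well, here is a minimal sketch; the IP addressing, DNS server and domain name are placeholders for your own environment:

# Static IP on the Management vNIC created above (placeholder /29 addressing)
New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress 192.168.10.11 -PrefixLength 29 -DefaultGateway 192.168.10.9
Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' -ServerAddresses 192.168.10.5
# Join the node to the domain and reboot (placeholder domain name)
Add-Computer -DomainName 'domain.local' -Credential (Get-Credential) -Restart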


Create the Cluster, Configure Witness, Enable Storage Spaces Direct

Now that our nodes are domain joined and static IPs have been applied to the host management network, we can begin creating the cluster.

In the code below, I am going to create the cluster; add the two nodes to the cluster; provision the Quorum witness (file witness) and enable Storage Spaces Direct on the cluster.

$cluster="Cluster_Name"
New-Cluster -name $cluster -Node "node01", "node02" -StaticAddress "IP Address" -NoStorage -Verbose
#assign cluster quorum
Set-ClusterQuorum -Cluster $cluster -FileShareWitness "\\server\filewitness\UNCPatch"
#enable storage spaces direct
Enable-ClusterS2D -Verbose

Once we have executed the commands above, if we launch Failover Cluster Manager we can see the newly created cluster with the two nodes and Storage Spaces Direct enabled.
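
The same state can be confirmed from PowerShell on one of the nodes; a quick sketch using the built-in cluster and storage cmdlets:

# Both nodes should be Up, and S2D should report as enabled
Get-ClusterNode
Get-ClusterStorageSpacesDirect
# Review the pool S2D created and the physical disks it claimed
Get-StoragePool | Where-Object IsPrimordial -eq $false
Get-PhysicalDisk | Sort-Object FriendlyName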


If we go into the Pools view, we can also see our software-defined Storage Pool. We can now create volumes off of this pool.

If we go into the Enclosures, we can now also see all the disks available within the nodes and all disks that are members of the Storage Pool.

Great, now we need to do some configuration on the RDMA adapters… Also note that in this scenario I have used a file share witness for the cluster. I would highly recommend considering Azure Cloud Witness instead: the egress traffic is next to zero, and you can connect several clusters to the same storage account. For more information, see the following blog post(s): HERE.
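
For reference, moving from the file share witness to a cloud witness later is a single cmdlet. A sketch, assuming a general-purpose Azure storage account; the account name and access key below are placeholders:

# Replace the existing witness with an Azure Cloud Witness
Set-ClusterQuorum -Cluster $cluster -CloudWitness -AccountName 'mystorageaccount' -AccessKey '<storage account access key>'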


Change RDMA mode to iWARP on QLogic Adapters

Again, remember which RDMA adapter is which. As mentioned previously, I renamed all of the network adapters to keep things simple and easy to remember.

Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 1' -DisplayName 'RDMA Mode' -DisplayValue 'iWarp'
Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 2' -DisplayName 'RDMA Mode' -DisplayValue 'iWarp'

Now we can leverage the QLogic adapters with RDMA via iWARP for our Storage traffic.
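
To confirm RDMA is actually active for the SMB/storage traffic, a quick check; the adapter names below are the renamed storage ports from earlier:

# RDMA should show as enabled on both storage adapters
Get-NetAdapterRdma -Name 'SLOT 3 PORT 1','SLOT 3 PORT 2'
# SMB should report the storage interfaces as RDMA capable
Get-SmbClientNetworkInterface | Where-Object RdmaCapable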


Create Cluster Shared Volumes (CSV)

Now that our cluster is created, the nodes have been added and RDMA is configured, we can create a CSV that the VMs will use as their data store. We will do this with the following cmdlet.

New-Volume -StoragePoolFriendlyName "Storage Pool" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -size 2TB

Now, I elected to keep the CSV small with a 2TB volume; however, I did have another 3TB to work with.
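
If you prefer to verify from PowerShell rather than the Failover Cluster Manager console, a quick sketch:

# The new volume should show up as a virtual disk and as a Cluster Shared Volume
Get-VirtualDisk -FriendlyName 'Volume01'
Get-ClusterSharedVolume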


Update Live Migration

We are almost there. We now need to update the Live Migration network to ensure it uses the RDMA networks and not the Management network. We will do this via the Failover Cluster Manager console.

It is also a good idea to rename the networks. As you can see, I have renamed my storage networks to Storage1 and Storage2, and the host management network to Management.

Go to the Failover Cluster Manager console >> right-click Networks >> select Live Migration Settings >> deselect the Management network.


You may have also noticed that I have configured the networks and their cluster use. The storage networks will only be available to the cluster, while the Management network will be available to both the cluster and clients (guest VMs).
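
If you would rather script the renames and the cluster-use settings than click through the console, here is a hedged sketch; the 'Cluster Network N' names are whatever your cluster auto-generated, so check Get-ClusterNetwork first:

# Rename the auto-generated cluster networks (verify the current names with Get-ClusterNetwork)
(Get-ClusterNetwork -Name 'Cluster Network 1').Name = 'Management'
(Get-ClusterNetwork -Name 'Cluster Network 2').Name = 'Storage1'
(Get-ClusterNetwork -Name 'Cluster Network 3').Name = 'Storage2'
# Role 3 = cluster and client, Role 1 = cluster only
(Get-ClusterNetwork -Name 'Management').Role = 3
(Get-ClusterNetwork -Name 'Storage1').Role = 1
(Get-ClusterNetwork -Name 'Storage2').Role = 1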


Next steps

We have now successfully created a Storage Spaces Direct cluster leveraging RDMA networking with the iWARP protocol. We have also created a SET switch that our VMs can use as their network adapter, and a Storage Pool with a volume dedicated to our VM disks via a Cluster Shared Volume.

The next step is to create a VM and start leveraging Storage Spaces Direct!


How to Increase ASR (Azure Site Recovery) Replication and Failback Default Settings

Now that you have deployed ASR (Azure Site Recovery) for Hyper-V and have started replication, you may notice the replication process just might take forever, as there are several VMs still queued. That is right: by default, ASR will replicate 4 (four) VMs at a given time. This value can be increased (to a maximum of 32), but where do you change this setting?

In order to increase the number of replication threads from 4 up to 32, or anywhere in between, you first need to launch the Registry Editor and navigate to: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Replication

From there, you will need to create the value if it does not exist (I have never seen it there by default in any of my deployments…). Create the DWORD value "UploadThreadsPerVM" and set it to whatever you see fit. Again, the maximum is 32.

Likewise, you can increase the default number of threads (4) used for data transfer during failback. This value represents the maximum number of VMs that will fail back from ASR at a time. The path is the same, and the value is "DownloadThreadsPerVM", which again can be set to a maximum of 32.
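
If you prefer to script the change on each Hyper-V host, here is a minimal sketch; the value of 32 used below is simply the documented maximum, so pick whatever suits your bandwidth:

$path = 'HKLM:\SOFTWARE\Microsoft\Windows Azure Backup\Replication'
# Create the key if it does not exist yet, then set both DWORD values
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name 'UploadThreadsPerVM' -PropertyType DWord -Value 32 -Force | Out-Null
New-ItemProperty -Path $path -Name 'DownloadThreadsPerVM' -PropertyType DWord -Value 32 -Force | Out-Null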

After that is completed, your Hyper-V registry keys will look something like this. Please note, this change is fully supported by the ASR/Microsoft team. However, do note that it can saturate your network due to the increase in uploads to Azure. You can also change the schedule for the bandwidth throttling settings; see Step 10 of that previous post here.

 

For additional information on this, please visit, https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-plan-capacity-vmware#control-network-bandwidth.

System Center Virtual Machine Manager (SCVMM) 2016 – Error 2912 – Unknown error (0x80041008)

Problem: Cannot deploy a logical switch (vSwitch) to a Windows Server 2016 node.

Environment: 2x 10GbE network cards – IBM Flex chassis (not that it matters…)

Error:

An internal error has occurred trying to contact the ‘hypervserver01.domain.com’ server: : .

WinRM: URL: [http://hypervserver01.domain.com:5985], Verb: [INVOKE], Method: [GetFinalResult], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/scvmm/AsyncTask?ID=1001]

Unknown error (0x80041008)

Recommended Action
Check that WS-Management service is installed and running on server ‘hypervserver01.domain.com’. For more information use the command “winrm helpmsg hresult”. If ‘hypervserver01.domain.com’ is a host/library/update server or a PXE server role then ensure that VMM agent is installed and running. Refer to http://support.microsoft.com/kb/2742275 for more details.

Solution: In my case, I tried the following. Ultimately, it came down to the last item (enabling the second physical network port). A quick WinRM sanity check follows the list below.

  • Disable Windows Firewalls on both SCVMM and the Hyper-V 2016 server
  • Change the default WinRM port to 5985
winrm set winrm/config/Listener?Address=*+Transport=HTTP '@{Port="5985"}'

  • Enable the secondary physical port
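
Before (or while) working through that list, it is worth confirming that WinRM itself responds from the SCVMM server. A quick sketch, using the host name from the error above:

# From the SCVMM server, check that the Hyper-V host answers on WinRM (default HTTP port 5985)
Test-WSMan -ComputerName 'hypervserver01.domain.com'
Test-NetConnection -ComputerName 'hypervserver01.domain.com' -Port 5985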

Hyper-V 2016 Linux Ubuntu PXE Network Boot Error

If you're like me, you want to run Linux on your Hyper-V 2016 host; in my case I am attempting to run Ubuntu 16.04.1. Booting from an ISO, I kept getting the same error over and over: "PXE Network Boot using IPv4 ( ESC to cancel ) Performing DHCP Negotiation….". After realizing it wasn't the ISO media, it wasn't the size of the VHDX, it wasn't the memory/vCPU or vNIC configuration, and it wasn't even due to it being a Generation 1 or Generation 2 VM… it was the Secure Boot setting.


Solution

  1. Stop the VM
  2. Go to its Settings
  3. Within Hardware > select Security > uncheck "Enable Secure Boot" > start your machine back up! (A PowerShell equivalent follows below.)
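
If you prefer PowerShell over the UI, the same change can be scripted. A minimal sketch, with the VM name as a placeholder; Secure Boot settings exist on Generation 2 VMs only:

# Hypothetical VM name; stop the VM, turn off Secure Boot, start it back up
Stop-VM -Name 'Ubuntu01'
Set-VMFirmware -VMName 'Ubuntu01' -EnableSecureBoot Off
Start-VM -Name 'Ubuntu01'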


Yay!

How to upload Custom Images to Microsoft Azure using PowerShell

In this post, I am going to show how to upload a custom image from Windows Hyper-V (2016) to the Azure cloud, using a combination of the Hyper-V UI and Azure Resource Manager (ARM) PowerShell. The custom image is a Windows Server 2008 R2 SP1 VM.

Okay, let’s get started.

Prepare On-Premises Virtual Machine Image

First, we need an image to work with. As mentioned, I am using a Windows Server 2008 R2 SP1 (yes, 2008 — needed it for a customer). The VM is Generation 1, which is not only a requirement for Windows 2008, but also a requirement for Azure, as it currently does not support Generation 2 VMs. See HERE to read more on preparing a Windows VHD.

Next, we need to install the Hyper-V role on the VM. Since this is a nested VM, we will first need to enable nested virtualization on the Hyper-V 2016 box. See a previous post on how to go about this HERE. Once that is complete, go ahead and install the Hyper-V role.
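
For reference, exposing the virtualization extensions to the guest is a single cmdlet. A minimal sketch, run on the physical Hyper-V 2016 host while the guest is powered off (the VM name is a placeholder):

# Hypothetical VM name; the VM must be off when this runs
Set-VMProcessor -VMName 'W2K8R2-Image' -ExposeVirtualizationExtensions $true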

Next, we need to SysPrep the VM. From an administrative command prompt, navigate to %windir%\system32\sysprep and execute sysprep.exe. Here, we will be using OOBE with "Generalize" enabled, and have SysPrep shut down the VM once it completes.
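
If you prefer to run it non-interactively, the same options can be passed as switches. A minimal sketch, from an elevated PowerShell prompt, matching the OOBE/Generalize/Shutdown selections above:

# Generalize the image, present OOBE on next boot, and shut down when finished
& "$env:windir\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown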

Once the VM is SysPrepped, we need to compact the VHDX (remember, this is Hyper-V 2016) and then convert the VHDX to a VHD. This is due to a current limitation of Azure, which only supports Gen1 VMs and VHDs.

Go into Hyper-V and, within the VM properties, edit the virtual hard disk, then compact it. Go ahead and do that.
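
If you would rather skip the Edit Virtual Hard Disk wizard, the compact step can also be scripted. A sketch, with a placeholder path; the disk must not be attached to a running VM:

# Compact the dynamically expanding VHDX before converting it
Optimize-VHD -Path 'D:\VMs\W2K8R2-Image.vhdx' -Mode Full -Verbose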

Great, now we need to convert the VHDx to a VHD. Time for PowerShell!

Convert-VHD -Path "<source VHDX path>" -DestinationPath "<destination VHD path>" -VHDType Fixed -Verbose


Let this run (I let it go overnight.. it was getting late =) )

Great, now we are ready to move on to Azure and more PowerShell.

Build Azure Container and Upload Image to Azure

First, we need to download and install the latest AzureRM module locally on the Hyper-V box (if you have already done this, jump down a few lines…)

Install-Module AzureRM -Force

Next, since there was a recent update to the AzureRm module, I now need to update the module path location.

$env:PSModulePath = $env:PSModulePath + ";C:\Program Files\WindowsPowerShell\Modules"

Next, we will need to import the AzureRm module.

Import-Module AzureRM -Force

Next, we'll need to log in to our Azure account and specify the subscription we want to work with. In my case, there are multiple Azure subscriptions tied to my email.

Login-AzureRmAccount
Get-AzureRmSubscription
#select the subscription you will be working with -- if you only have one, you can skip this line
Select-AzureRmSubscription -SubscriptionId "<ID>"

Next, we will create a resource group and a storage account, and bind the account to the group.

New-AzureRmResourceGroup -Name "ResourceGroupName" -Location "Canada East"
New-AzureRmStorageAccount -ResourceGroupName "ResourceGroupName" -Name "StorageAccountName" -Location "Canada East" -SkuName "Standard_LRS" -Kind "Storage"

If you want to change the storage type to, let's say, geo-redundant, here are the other storage SKUs:

Valid values for -SkuName are:

  • Standard_LRS – Locally redundant storage.
  • Standard_ZRS – Zone redundant storage.
  • Standard_GRS – Geo redundant storage.
  • Standard_RAGRS – Read access geo redundant storage.
  • Premium_LRS – Premium locally redundant storage.

Now, we need to create a container and grab the URL needed to upload our image. I did this through the Azure Resource Manager (ARM) portal since I couldn't figure out the PowerShell cmdlet (Get-AzureStorageBlob); if you can get this to work, please let me know!

You can get the URL from the Web UI when you go into the Storage Account >> Blobs >> Container (in my case, I called it “VHD”) >> Properties.
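
If you do want to stay in PowerShell, here is a hedged sketch using the storage data-plane cmdlets that install alongside AzureRM. The storage account, container and blob names are placeholders, and the key-retrieval syntax can vary between AzureRM versions:

$rgName = "ResourceGroupName"
$saName = "storageaccountname"
# Build a storage context from an account key, create the container, and compose the destination URL
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $rgName -Name $saName)[0].Value
$ctx = New-AzureStorageContext -StorageAccountName $saName -StorageAccountKey $key
New-AzureStorageContainer -Name 'vhd' -Context $ctx -Permission Off
$AzureVHDURL = (Get-AzureStorageContainer -Name 'vhd' -Context $ctx).CloudBlobContainer.Uri.AbsoluteUri + '/W2K8R2-Image.vhd'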

Now we are ready to upload our image/VHD to Azure! For me this took about 2 hours, uploading an 80GB file at 9-10 MB/s.

$rgName = "ResourceGroupName"
$AzureVHDURL = "URL"
$LocalVHDPath = "LocalPathtoVHD"
Add-AzureRmVhd -ResourceGroupName $rgName -Destination $AzureVHDURL -LocalFilePath $LocalVHDPath

Great, now we just need to register the VHD disk to the Gallery, and we can begin creating machines based off our image that is now in the cloud! — Another post! 🙂

Step-by-Step: Setup and Configure Azure Site Recovery (ASR) with Windows Server 2016 Hyper-V using ARM

Not too long ago, Microsoft announced support for Windows Server 2016 with Azure Site Recovery (ASR). Microsoft's announcement can be found HERE.

With that said, I decided to set up ASR with my Hyper-V 2016 environment. Rather than a typical blog post (screenshots, etc.), I decided to create a step-by-step video that demonstrates how to set up ASR with Windows Server 2016 and Hyper-V. That video can be found HERE on Channel 9.

In addition, this post is part of a series of blog posts on Azure Site Recovery (ASR).