
Deploying and Configuring Storage Spaces Direct (S2D)

This blog post will focus on deploying Storage Spaces Direct (S2D) on Windows Server 2016 (the steps with Server 2019 should be very similar, if not identical…) in a ROBO (Remote Office/Branch Office) configuration with Dell Ready Nodes (S2DRN), leveraging RDMA (Remote Direct Memory Access). Now that is a mouthful, so let's first focus on what Storage Spaces Direct is.

What is Storage Spaces Direct? Microsoft introduced Storage Spaces Direct (S2D) with the release of Server 2016. S2D lets you take industry-standard servers and leverage the internal local drives within the nodes to create highly available, highly scalable software-defined storage. Using a hyper-converged or converged architecture, you can deploy quickly and scale storage easily, while implementing features such as storage tiers and caching, all while taking advantage of RDMA networking.

What is RDMA? Remote Direct Memory Access, or RDMA for short, is an enterprise networking technology that allows data to be exchanged directly through memory, without consuming the CPU or the Operating System kernel. RDMA gives your applications high IOPS at very low latency, leveraging either RoCE (RDMA over Converged Ethernet) or iWARP (Internet Wide Area RDMA Protocol).

Note: the steps below are shown on a single node of a 2-node cluster. All of the steps need to be executed on the second node as well.


Network Connectivity

Before we begin implementing, deploying and configuring, we need to plan out the network connectivity design and understand what it will look like. Below is a high-level diagram that illustrates the network connectivity for the host management and VM traffic, and for the RDMA (Storage) traffic.


Network Configuration

Next we should map out our IP configuration. With this 2-node deployment, we know we need the following network adapters and IPs.

Traffic Class | Purpose | Minimum IPs required | VLAN ID | Tagged/Untagged | IP Address Space | VLAN IP Address
Out of Band (iDRAC) | Remote Management | 2 | | Untagged | /29 |
Management (Host) | Management of Cluster and Cluster Nodes | 3 | | Tagged/Untagged | /29 |
Storage 01 | SMB Traffic | 2 | | Tagged/Untagged | /29 |
Storage 02 | SMB Traffic | 2 | | Tagged/Untagged | /29 |

(The VLAN IDs and IP addresses are intentionally left blank above; fill these in for your environment.)

Now that we have defined our networking configuration, we can move forward with booting the nodes, and making some necessary changes to the BIOS.


BIOS Configuration

Boot the node and enter the BIOS (usually F2 at the Dell prompt)… Next, go to Device Settings and let's configure the RDMA/QLogic adapters.

Your configuration should look similar to this. In my instance, I am leveraging iWARP and not RoCE. By default, the adapters will allow both modes, but we want to force iWARP only.

  • Virtualization Mode: Disabled
  • DCBX (Data Center Bridging eXchange): Disabled
  • Link Speed: SmartAN
  • NIC + RDMA Mode: Enabled
  • RDMA Operation Mode: iWARP
  • Virtual LAN ID: 1 (the default)

Remember, this needs to be done on both RDMA adapters! Once the settings have been applied and saved, go ahead and reboot the node. Remember to do the second node too!


Install & Update Operating System

Next, we need to install the Operating System. As a best practice, once the OS is installed, apply all OS updates and update all network drivers.


Validate & Rename Network Adapters

It is also a good idea to rename the network adapters. Before we do that, let's confirm the adapters are present and look right.

Get-NetAdapter
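If an adapter needs renaming, it is a one-liner. A minimal sketch, assuming default interface names like 'Ethernet' and 'Ethernet 2' (match these against your own Get-NetAdapter output):

#rename the host management adapters to match the names used later in this post
Rename-NetAdapter -Name 'Ethernet' -NewName 'NIC1' -Verbose
Rename-NetAdapter -Name 'Ethernet 2' -NewName 'NIC2' -Verbose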


Install Windows Features & Roles

Once the OS has been installed and patched, we need to install the necessary roles and features, i.e. Hyper-V, Failover Clustering, etc.

Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools -Verbose -Restart

Configure Host Network

Now we need to configure the host management network. In this step we will create a SET (Switch Embedded Teaming) switch. This single switch not only teams the two host network adapters, but also provides the virtual switch that the guest VMs will use via Hyper-V.

New-VMSwitch -Name S2DSwitch -AllowManagementOS 0 -NetAdapterName 'NIC1','NIC2' -MinimumBandwidthMode Weight -Verbose

Within this code, note that NIC1 and NIC2 are the host management adapters, renamed earlier to make life easier.

Now we need to create and configure the host management adapter. We will do this by executing the following cmdlet. Please note, in my environment, the Host Management network is untagged.

Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName S2DSwitch -Passthru | Set-VMNetworkAdapterVlan -Untagged -Verbose

Once we execute this command and run the Get-NetAdapter cmdlet, we can see that we now have an additional network adapter.

In the event you need to tag your Management adapters, you can use the following cmdlets as a reference.

Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 1' -DisplayName 'VLAN ID' -DisplayValue 103 -Verbose
Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 2' -DisplayName 'VLAN ID' -DisplayValue 104 -Verbose

Great, now we can add the nodes to the domain, and set the Management network adapters with static IPs.
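For reference, a hedged sketch of that step in PowerShell; the adapter alias, IP addresses, and domain name below are placeholders for illustration, not values from this deployment:

#static IP on the SET management adapter created above (example /29 addressing)
New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress 172.16.10.11 -PrefixLength 29 -DefaultGateway 172.16.10.9
Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' -ServerAddresses 172.16.10.5
#join the domain and reboot
Add-Computer -DomainName 'corp.local' -Restart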


Create the Cluster, Configure Witness, Enable Storage Spaces Direct

Now that our nodes are domain joined, and static IPs have been applied to the host management network, we can begin creating the cluster.

In the code below, I am going to create the cluster; add the two nodes to the cluster; provision the Quorum witness (file witness) and enable Storage Spaces Direct on the cluster.

$cluster="Cluster_Name"
New-Cluster -name $cluster -Node "node01", "node02" -StaticAddress "IP Address" -NoStorage -Verbose
#assign cluster quorum
Set-ClusterQuorum -Cluster $cluster -FileShareWitness "\\server\filewitness\UNCPatch"
#enable storage spaces direct
Enable-ClusterS2D -Verbose

Once we have executed the commands above, if we launch Failover Cluster Manager, we can see the newly created cluster with its two nodes and Storage Spaces Direct enabled.


If we go into the Pool, we can also see our Software Defined Storage Pool. We can now create volumes from this pool.

If we go into the Enclosures, we can also see all of the disks available within the nodes and which disks are members of the Storage Pool.

Great, now we need to do some configuration on the RDMA adapters… Also note, in this scenario I have leveraged a file share witness for the cluster. I would highly recommend considering or using Azure Cloud Witness instead: the egress traffic is next to zero, and you can connect several clusters to the same storage account. For more information, see the following blog post(s): HERE.
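If you do go the Cloud Witness route, it is a single cmdlet against the same cluster; a hedged sketch, with the storage account name and key as placeholders:

Set-ClusterQuorum -Cluster $cluster -CloudWitness -AccountName '<StorageAccountName>' -AccessKey '<StorageAccountAccessKey>'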


Change RDMA mode to iWARP on QLogic Adapters

Again, remember which RDMA adapter is which. As mentioned previously, I renamed all of the network adapters to keep things simple and easy to remember.

Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 1' -DisplayName 'RDMA Mode' -DisplayValue 'iWarp'
Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 2' -DisplayName 'RDMA Mode' -DisplayValue 'iWarp'

Now we can leverage the QLogic adapters with RDMA via iWARP for our Storage traffic.
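To sanity-check that RDMA is actually up on those adapters, a couple of read-only cmdlets are worth running; nothing here changes configuration:

#confirm RDMA is enabled at the NIC level
Get-NetAdapterRdma
#confirm SMB sees the interfaces as RDMA-capable
Get-SmbClientNetworkInterface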


Create Cluster Shared Volumes (CSV)

Now that our cluster is created, the nodes have been added, and RDMA is configured, we can create a CSV that will be leveraged by the VMs as their data store. We will create the CSV with the following cmdlet.

New-Volume -StoragePoolFriendlyName "Storage Pool" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -size 2TB

I elected to keep the CSV small with a 2TB volume; however, I did have another 3TB to work with.


Update Live Migration

We are almost there. We now need to update the Live Migration network to ensure it uses the RDMA networks and not the Management network. We will do this via the Failover Cluster Manager console.

It is also a good idea to rename the networks. As you can see, I have renamed my storage networks to Storage1 and Storage2, and the host management network to Management.

Go to the Failover Manager Console >> Right Click Networks >> Select Live Migration Settings >> deselect the Management network.


You may have also noticed that I have configured the networks and their cluster use: the Storage networks are available to the cluster only, while the Management network is available to both the cluster and clients (guest VMs). A scripted equivalent is sketched below.
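If you prefer to script these settings rather than click through the console, here is a hedged sketch, assuming the network names match the renames above:

#network roles: 1 = cluster only, 3 = cluster and client
(Get-ClusterNetwork -Name 'Storage1').Role = 1
(Get-ClusterNetwork -Name 'Storage2').Role = 1
(Get-ClusterNetwork -Name 'Management').Role = 3
#exclude the Management network from Live Migration
Get-ClusterResourceType -Name 'Virtual Machine' | Set-ClusterParameter -Name MigrationExcludeNetworks -Value (Get-ClusterNetwork -Name 'Management').Id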


Next steps

We have now successfully created a Storage Spaces Direct cluster, leveraging RDMA networking with the iWARP protocol. We also created a SET switch that can be leveraged by our VMs as their network adapter, and a Storage Pool with a volume dedicated to our VM disks via a Cluster Shared Volume.

The next step is to create a VM and start leveraging Storage Spaces Direct!

How to Increase ASR (Azure Site Recovery) Replication and Failback Default Settings

Now that you have deployed ASR (Azure Site Recovery) for Hyper-V and have started up replication, you may notice the replication process seems to take forever, with several VMs still queued. That is right: by default, ASR will replicate 4 (four) VMs at a given time. This value can be increased (to a maximum of 32), but where do you change this setting?

In order to increase the number of replication threads from 4 up to 32 (or anywhere in between), you first need to launch the Registry Editor and navigate to: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Replication

From there, you will need to create the key if it does not exist (I have never seen it by default in any of my deployments…). Create the key "UploadThreadsPerVM" and set its value to whatever you see fit. Again, the maximum is 32.

Likewise, you can increase the default number (4) of threads used for data transfer during failback. This value represents the maximum number of VMs that will fail back from ASR. The path is the same, and the Registry Key is "DownloadThreadsPerVM"; again, it can be set to a maximum of 32.
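If you would rather script it than click through regedit, a hedged sketch of the same change (16 is just an example value; anything up to 32):

$path = 'HKLM:\SOFTWARE\Microsoft\Windows Azure Backup\Replication'
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
#both values are DWORDs; maximum supported value is 32
New-ItemProperty -Path $path -Name 'UploadThreadsPerVM' -Value 16 -PropertyType DWord -Force
New-ItemProperty -Path $path -Name 'DownloadThreadsPerVM' -Value 16 -PropertyType DWord -Force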

After that is completed, your Hyper-V Registry Keys would look something like this. Please note, this change is fully supported by the ASR/Microsoft team. Do note, however, that this change can saturate your network due to the increase in uploads to Azure. You can also change the schedule of the bandwidth throttle settings; see Step 10 of that previous post here.


For additional information on this, please visit, https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-plan-capacity-vmware#control-network-bandwidth.

System Center Virtual Machine Manager (SCVMM) 2016 – Error 2912 – Unknown error (0x80041008)

Problem: Cannot deploy a logical switch (vSwitch) to a Windows Server 2016 node.

Environment: 2 x 10GB network cards – IBM Flex chassis (not that it matters…)

Error:

An internal error has occurred trying to contact the ‘hypervserver01.domain.com’ server: : .

WinRM: URL: [http://hypervserver01.domain.com:5985], Verb: [INVOKE], Method: [GetFinalResult], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/scvmm/AsyncTask?ID=1001]

Unknown error (0x80041008)

Recommended Action
Check that WS-Management service is installed and running on server ‘hypervserver01.domain.com’. For more information use the command “winrm helpmsg hresult”. If ‘hypervserver01.domain.com’ is a host/library/update server or a PXE server role then ensure that VMM agent is installed and running. Refer to http://support.microsoft.com/kb/2742275 for more details.

Solution: In my case, I tried the following. Ultimately, it came down to the last item (enabling the secondary physical network port).

  • Disable Windows Firewalls on both SCVMM and the Hyper-V 2016 server
  • Set the WinRM listener back to the default port, 5985
winrm set winrm/config/Listener?Address=*+Transport=HTTP '@{Port="5985"}'

  • Enable the secondary physical port
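After each change, it is worth verifying WinRM connectivity from the SCVMM server before retrying the job; a quick hedged check, using the host name from the error above:

Test-WSMan -ComputerName 'hypervserver01.domain.com'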

Hyper-V 2016 Linux Ubuntu PXE Network Boot Error

If you're like me, you want to run Linux on your Hyper-V 2016 host; in my case I was attempting to run Ubuntu 16.04.1. Booting from an ISO, I kept getting the same error over and over: "PXE Network Boot using IPv4 ( ESC to cancel ) Performing DHCP Negotiation…". It wasn't the ISO media. It wasn't the size of the VHDX. It wasn't the memory/vCPU or vNIC configuration. It wasn't even due to it being a Generation 1 or Generation 2 VM… It was the Secure Boot setting.


Solution

  1. Stop the VM
  2. Go to its Settings
  3. Within Hardware > Select Security > uncheck "Enable Secure Boot" > Start your machine back up!
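For a Generation 2 VM, the same toggle is available from PowerShell; a hedged one-liner, with the VM name as a placeholder:

Set-VMFirmware -VMName 'Ubuntu1604' -EnableSecureBoot Off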


Yay!

How to upload Custom Images to Microsoft Azure using PowerShell

In this post, I am going to show how to upload a custom image from Windows Hyper-V (2016) to the Azure cloud, using a combination of the Hyper-V UI and Azure Resource Manager (ARM) PowerShell. I will be working with Hyper-V 2016 and a custom image of Windows Server 2008 R2 SP1.

Okay, let’s get started.

Prepare On-Premises Virtual Machine Image

First, we need an image to work with. As mentioned, I am using a Windows Server 2008 R2 SP1 (yes, 2008 — needed it for a customer). The VM is Generation 1, which is not only a requirement for Windows 2008, but also a requirement for Azure, as it currently does not support Generation 2 VMs. See HERE to read more on preparing a Windows VHD.

Next, we need to install the Hyper-V role on the VM. Since this is a nested VM, we will first need to enable nested virtualization on the Hyper-V 2016 box. See a previous post on how to go about this HERE. Once that is complete, go ahead and install the Hyper-V role.

Next, we need to SysPrep our VM. From an administrative command prompt, navigate to %windir%\system32\sysprep and execute "sysprep.exe". Here we will select OOBE, enable "Generalize", and choose the "Shutdown" option so the VM powers off once SysPrep completes.

Once the VM is SysPrep'ed, we need to compact the VHDX (remember, Hyper-V 2016 here) and also convert the VHDX to a VHD. This is due to a current limitation of Azure, as it only supports Gen1 VMs and VHDs.

Go into Hyper-V and, within the VM properties, edit the virtual hard disk and compact it. Go ahead and do that…
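As an aside, the compact step can also be scripted; a hedged sketch with a placeholder path (the VM must be off, and Full mode wants the VHDX mounted read-only):

Mount-VHD -Path '<path to VHDX>' -ReadOnly
Optimize-VHD -Path '<path to VHDX>' -Mode Full
Dismount-VHD -Path '<path to VHDX>'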

Great, now we need to convert the VHDx to a VHD. Time for PowerShell!

Convert-VHD -Path "<source VHDX path>" -DestinationPath "<destination VHD path>" -VHDType Fixed -Verbose


Let this run (I let it go overnight… it was getting late =) )

Great, now we are ready to move on to Azure and more PowerShell.

Build Azure Container and Upload Image to Azure

First, we need to download and install the latest AzureRM module locally on the Hyper-V box (if you have already done this, jump down a few lines…).

Install-Module AzureRM -Force

Next, since there was a recent update to the AzureRm module, I now need to update the module path location.

$env:PSModulePath = $env:PSModulePath + ";C:\Program Files\WindowsPowerShell\Modules"

Next, we will need to import the AzureRm module.

Import-Module AzureRM -Force

Next, we'll need to log in to our Azure account and specify the subscription we want to work with. In my case, there are multiple Azure subscriptions tied to my email.

Login-AzureRmAccount
Get-AzureRmSubscription
#select the subscription you will be working with -- if you only have one, you can skip this line
Select-AzureRmSubscription -SubscriptionId "<ID>"

Next, we will create a resource group and a storage account, and bind the storage account to the resource group.

New-AzureRmResourceGroup -Name "ResourceGroupName" -Location "Canada East"
New-AzureRmStorageAccount -ResourceGroupName "ResourceGroupName" -Name "StorageAccountName" -Location "Canada East" -SkuName "Standard_LRS" -Kind "Storage"

If you want to change the storage type to, say, geo-redundant, here are the other storage types:

Valid values for -SkuName are:

  • Standard_LRS – Locally redundant storage.
  • Standard_ZRS – Zone redundant storage.
  • Standard_GRS – Geo redundant storage.
  • Standard_RAGRS – Read access geo redundant storage.
  • Premium_LRS – Premium locally redundant storage.

Now we need to create a Container and grab the URL needed to upload our image. I did this through the Azure Resource Manager (ARM) portal, since I couldn't figure out the PowerShell cmdlet (Get-AzureStorageBlob); if you can get this to work, please let me know!

You can get the URL from the Web UI when you go into the Storage Account >> Blobs >> Container (in my case, I called it “VHD”) >> Properties.
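For what it is worth, here is a hedged sketch of how the container creation and URL retrieval might look in PowerShell; the container name 'vhd' matches what I used above, and the account names are placeholders:

$sa = Get-AzureRmStorageAccount -ResourceGroupName "ResourceGroupName" -Name "StorageAccountName"
New-AzureStorageContainer -Name 'vhd' -Context $sa.Context -Permission Off
#the container URL to upload against
(Get-AzureStorageContainer -Name 'vhd' -Context $sa.Context).CloudBlobContainer.Uri.AbsoluteUri

Note that the -Destination passed to Add-AzureRmVhd below should be the full blob URL, i.e. the container URL plus the VHD file name.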

Now we are ready to upload our image/VHD to Azure! For me this took about 2 hours, uploading an 80GB file at 9-10MB/s.

$rgName = "ResourceGroupName"
$AzureVHDURL = "URL"
$LocalVHDPath = "LocalPathtoVHD"
Add-AzureRmVhd -ResourceGroupName $rgName -Destination $AzureVHDURL -LocalFilePath $LocalVHDPath

Great, now we just need to register the VHD disk to the Gallery, and we can begin creating machines based off our image that is now in the cloud! — Another post! 🙂

Step-by-Step: Setup and Configure Azure Site Recovery (ASR) with Windows Server 2016 Hyper-V using ARM

Not too long ago, Microsoft announced the support of Windows 2016 and Azure Site Recovery (ASR). Microsoft’s announcement can be found HERE.

With that said, I decided to set up ASR with my Hyper-V 2016 environment. Rather than the typical blog post (screenshots, etc.), I decided to create a step-by-step video that demonstrates how to set up ASR with Windows Server 2016 and Hyper-V. That video can be found HERE at Channel 9.

In addition, this post is part of a series of blog posts on Azure Site Recovery (ASR).

Creating a Converged Network Fabric with SCVMM 2012R2

This blog post should have been published quite some time ago; however, given the numerous revisions and the level of detail in the post, you'll understand why it took a while.

In this post I will demonstrate creating a converged network fabric in SCVMM 2012 R2. This converged network will consist of logical networks, QoS, NIC (vNIC) teaming, and virtual network adapters.

Step 1, Understand your infrastructure

To begin, my environment is using a Cisco UCS (B200 M4) back end, with Cisco Nexus 9K switches and, of course, Hyper-V (Windows 2012 R2) as its hypervisor. The UCS profile used here has been provisioned with 7 vNICs and a dedicated VLAN for each vNIC to isolate the traffic between the networks. The 7 vNICs serve the following roles (see below). All vNICs are on a 10Gb interface.

  1. iSCSI-A (traffic to the SAN controller 1)
  2. iSCSI-B (traffic to the SAN controller 2)
  3. CSV-Heartbeat
  4. Live Migration
  5. Management
  6. Server-A (VM Production traffic)
  7. Server-B (VM Production traffic)

We will team the Server-A and Server-B vNICs, but we will get into that later.

Step 2, we need to understand what all these vNICs are intended for. The logical networks below illustrate the purpose of each network.

  1. SAN/Storage (1) (iSCSI-A) – This network is for accessing storage via iSCSI on SAN controller 1. In this environment, we will have two VLANs for redundancy, thus two iSCSI networks.
  2. SAN/Storage (2) (iSCSI-B) – See above. This network is for accessing storage via iSCSI on SAN controller 2.
  3. Live Migration – This network handles communication between the hypervisors to transfer VM memory, states, etc.
  4. CSV/Heartbeat – This network is used by the cluster to communicate the healthy (online) state of the environment.
  5. Management – This network is used to manage the Hyper-V hosts/hypervisors. SCVMM will use this network to communicate with the Hyper-V nodes.
  6. VM Traffic (Server-A + Server-B) – This network is intended for VM communication and VMs only. This will be not only a redundant network, but a teamed network to allow additional I/O throughput. As mentioned, all vNICs are on a 10Gb interface, so teaming these two vNICs/networks allows I/O of up to 20Gb/s.

Please refer to the Microsoft article for further details, HERE.

Step 3, SCVMM – Create Logical Network(s)

Within SCVMM, you will now need to create your logical networks within the Fabric pane. As mentioned, I am using VLANs to isolate my traffic. I am also planning for 15 VM network environments, each with its own dedicated VLAN, VLAN 101 through 116, i.e. 10.47.101-116.x. Likewise, there are dedicated VLANs for iSCSI, Live Migration, etc.


Here you need to specify the IP subnet and VLAN ID, and apply it to your Host(s) group.


Step 4, SCVMM – Create IP Pool(s)

Once you create all of your logical networks, you can create IP Pools. IP Pools allow you to manage your logical network and ensure no duplicate IPs are consumed. You can also reserve IPs for VIPs, etc. In the screenshot below, within my "Production" VM network traffic, the IP range starts at 10.47.101.100/24 and ends at 10.47.101.252. This allows 153 IPs to be used. If the pool is ever close to exhaustion, this configuration can be changed to increase the scope. But for now, I know 153 IPs is more than enough.

By right-clicking on the Logical Network you just created, select “Create IP Pool“.


You will need to bind the IP Pool to the Logical Network.


Choose "Use an existing network site" and ensure the right network site and IP subnet are populated.


Here, I am defining a range of IPs for my pool. Although I know 153 IPs are more than enough, and I will never need all 254, I am comfortable with the range starting at 100.


As you can see here, I have also specified the gateway and provided 2 DNS servers for the IP Pool. When a new VM is created, all of its IP properties will be pulled from here and applied once the VM has been built. A scripted equivalent of these steps is sketched below.
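For reference, a hedged PowerShell sketch of the same logical network and IP pool creation, run from the VMM PowerShell console; the names, VLAN, subnet, gateway and DNS addresses are illustrative assumptions, not a definitive script:

$ln = New-SCLogicalNetwork -Name "Production"
$subnetVlan = New-SCSubnetVLan -Subnet "10.47.101.0/24" -VLanID 101
$lnd = New-SCLogicalNetworkDefinition -Name "Production_0" -LogicalNetwork $ln -SubnetVLan $subnetVlan -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts")
#IP pool over the same subnet; the gateway and DNS IPs here are assumptions
$gw = New-SCDefaultGateway -IPAddress "10.47.101.1" -Automatic
New-SCStaticIPAddressPool -Name "Production-Pool" -LogicalNetworkDefinition $lnd -Subnet "10.47.101.0/24" -IPAddressRangeStart "10.47.101.100" -IPAddressRangeEnd "10.47.101.252" -DefaultGateway $gw -DNSServer "10.47.101.10","10.47.101.11"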


At the end of all this, your Logical Network Fabric could look something like this, with your Logical Networks and IP Pools per network.


Step 5, SCVMM – Create VM Networks + IP Pools

Within the VMs and Services pane, we now need to create VM networks. These will be associated with the Logical Networks we just created; during the creation process, we specify the Logical Network bound to each VM network. Here I created IP Pools again. I find this IP Pool process a bit odd/redundant, as I now have IP Pools in both the Logical Network and the VM Network.


Step 6, SCVMM – Creating Uplink Port Profile

Now we need to create the Uplink Port Profile for our VM production traffic. Unfortunately, SCVMM 2012 R2 UR8 does not come with a default uplink port profile, so we must create one. Microsoft best practice indicates using Dynamic load balancing and Switch Independent teaming for the Hyper-V workload.


Now we need to bind all the networks we previously created to the Uplink Port Profile. This is how VMM tells the hypervisors how they are connected and mapped to the network fabric: iSCSI traffic, Live Migration, VM production, CSV-Heartbeat, etc.


Step 7, SCVMM – Create Logical Switch

Now we will create the logical switch, also known as a vSwitch. The logical switch is the last piece of the fabric puzzle; it will contain the Uplink Port Profile along with the virtual port profiles (if we choose to manage QoS via SCVMM).

Within Logical Switches in the Fabric, we will create a new logical switch. In my scenario, I have not made use of SR-IOV (Single Root I/O Virtualization).


We will use the default Microsoft Windows Filtering Platform for our vSwitch extension.


Here we will specify the uplink port profile(s) associated with the logical switch. We will set the uplink mode to Team and add our Production uplink/network sites.


We will need to specify the port classifications for each virtual port of the logical switch. Here you can see we are using three classes: high, medium and low bandwidth.

Step 8, SCVMM – Assign Logical Switch to Hypervisor

Finally, we need to assign the logical switch to our hypervisor(s). Navigate to each host group within the Fabric workspace and, within each hypervisor's properties, navigate to Virtual Switches. Select "New Virtual Switch". Here we specify which uplink port profile (in our case, only one) to use on the physical adapters. Since my two vNICs will be teamed, I have two (2) adapters bound to the same uplink port profile.


Now you are ready to start building machines, making use of your network fabric, and maximizing the power of System Center Virtual Machine Manager 2012 R2.


If you have any questions or need some guidance, please drop me a line.


Cheers!

Exporting and Importing VMs in Hyper-V 2012R2

Let's say you have a virtual machine on one Hyper-V server and need to migrate it to another Hyper-V server, whatever the reason: end of life on the existing server, a different data center, etc. Of course, this is one of the many good reasons why a clustered Hyper-V environment is the way to go, but this post is not about that. So, let's get to it.


  • First, shut down your VM and determine a destination to store the exported VM. Simply shut down the VM within the Hyper-V console, then right-click it and select Export. Once you define the destination, you can track the export's progress. Depending on your storage, how big the VM is, the Hyper-V server specs, etc., this could take a few minutes…


  • Next, copy the VM data (you just exported) to the new Hyper-V server or some storage location. Again, based on your environment, network, server, etc., this could take a few minutes.


  • Next, on your (new) Hyper-V server, launch the Hyper-V console, and select Import. Browse to the location where the VM being imported resides.


  • When selecting the Import Type, I chose the third option: "Copy the virtual machine (create a new unique ID)".


  • Now you can set the location of the VM's properties, or leave them defaulted to your Hyper-V server's settings.


  • Depending on your VM/Hyper-V server, you may have had some fancier properties, like a virtual switch. In my case I did, and the new Hyper-V server did not have the same virtual switch, or at least not one with the same name. You can either create the network switch your VM requires, or select "Not Connected" and finish this task later.


  • Now you can go ahead and finish the import process, and allow the machine to be officially imported onto your new Hyper-V hypervisor. Again, based on your environment, this may take a few moments, so go get another coffee, and enjoy! (A scripted version of this whole workflow is sketched below.)
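For completeness, a hedged PowerShell sketch of the same export/import flow; the VM name and paths are placeholders:

#on the source host
Stop-VM -Name 'VM01'
Export-VM -Name 'VM01' -Path 'D:\Exports'
#copy D:\Exports\VM01 to the new host, then on the new host:
$config = Get-ChildItem 'E:\Imports\VM01\Virtual Machines\*.xml' | Select-Object -First 1
Import-VM -Path $config.FullName -Copy -GenerateNewId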


Hyper-V Network Virtual Switches

So you've spun up a Windows 2012 R2 machine with Hyper-V installed and ready to go. However, now you're stuck and not sure which type of network virtual switch (vSwitch) applies to your environment(s)…

In Windows 2012 R2, Hyper-V's network virtual switch runs at Layer 2 (the Data Link layer). If you are unfamiliar with this, or either term, I suggest good old Wikipedia. 🙂 The switch maintains a MAC address table containing the MAC addresses of all the virtual machines (VMs) connected to it, and determines where to direct/redirect packets based on those MAC addresses. It should be noted that, in Hyper-V, you can have an unlimited number of VMs connected to a vSwitch.

In Hyper-V you have three types of network virtual switches: External, Internal and Private. All have similar functions but are distinctly different.

  1. External vSwitch allows communication between the VMs running on the Hyper-V host, the Hyper-V parent partition, and the external physical network, including VMs on remote host servers. The External vSwitch does require a network adapter on the host (one that is not mapped to any other Hyper-V External vSwitch). You can also tag a VLAN ID.
  2. Internal vSwitch allows communication between all VMs that are connected to the vSwitch, as well as with the Hyper-V parent partition. You can also tag a VLAN ID.
  3. Private vSwitch allows communication between all VMs that are connected to the vSwitch, and that is it. (Note: no communication between the VMs and the Hyper-V parent partition, and no VLAN ID tagging can occur on this vSwitch.)

Without the use of SCVMM (System Center Virtual Machine Manager), I have found there are two ways to go about creating a vSwitch: via the Hyper-V GUI, or via PowerShell.

Let’s start with the GUI:

Launch the Hyper-V console and right-click the hypervisor's Virtual Switch Manager. Selecting New virtual network switch, you can specify your properties here: name your vSwitch, associate it with the correct vNIC, tag the appropriate VLAN ID, etc.

[Screenshot: vSwitch on the Hyper-V host]

You can now specify which vSwitch your guest VM should use. Within the VM's properties, you will have the option to choose the virtual switch (you will need to create a network adapter if not already done). Once selected, you can specify your VLAN ID here. (I am finding you cannot specify the VLAN within the Management vSwitch; it must be done on the client VM's end.) *Again, this is without the use of SCVMM… yet*

[Screenshot: vSwitch selection in the guest VM's settings]


The same process can be automated via PowerShell. If you're like me and need to provision a few dozen Hyper-V hosts, creating vSwitches via the GUI is rather tedious. This can be automated with PowerShell (and SCVMM). Please see the code below:

First you will need to get a list of all the network adapters your Hyper-V host has to offer. Hopefully you have named them; if you have not, I highly suggest doing so. Consider it best practice, and it will keep your sanity.

[Screenshot: getting adapter names via PowerShell]

Once you have the list of vNICs and their names, you can go ahead and start creating vSwitches, as sketched below.

[Screenshots: creating a vSwitch via PowerShell, and its output]
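Since the original screenshots do not reproduce here, a hedged sketch of the equivalent commands; the adapter and switch names are placeholders:

#list the host's adapters so you know which name to bind to
Get-NetAdapter | Sort-Object Name
#External vSwitch: bound to a physical adapter (this single line is all that is needed)
New-VMSwitch -Name 'External-vSwitch' -NetAdapterName 'NIC1' -AllowManagementOS $true
#Internal and Private vSwitches need no physical adapter
New-VMSwitch -Name 'Internal-vSwitch' -SwitchType Internal
New-VMSwitch -Name 'Private-vSwitch' -SwitchType Private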

If the code worked (note that only the single New-VMSwitch line is needed to create the External vSwitch), your Hyper-V host should show the vSwitch, or something similar:

[Screenshot: the resulting vSwitch on the Hyper-V host]

