Category: Hyper-V

Deploying and Configuring Storage Spaces Direct (S2D)

This blog post will focus on deploying Storage Spaces Direct (S2D) with Windows Server 2016 (the steps with Server 2019 should be very similar, if not identical) in a ROBO (Remote Office/Branch Office) configuration with Dell Ready Nodes (S2DRN), leveraging RDMA (Remote Direct Memory Access). That is a mouthful, so let's first look at what Storage Spaces Direct actually is.

What is Storage Spaces Direct? Microsoft introduced Storage Spaces Direct (S2D) with the release of Windows Server 2016. S2D allows you to take industry-standard servers, leverage the internal local drives within the nodes, and create highly available, highly scalable software-defined storage. Using a hyper-converged or converged architecture, you can quickly deploy and scale storage while implementing features such as storage tiers and caching, all while taking advantage of RDMA networking.

What is RDMA? Remote Direct Memory Access, or RDMA for short, is an enterprise networking technology that allows data to be exchanged directly between the memory of two systems, without involving the CPU or the operating system kernel. RDMA gives your applications high IOPS at very low latency, leveraging either RoCE (RDMA over Converged Ethernet) or iWARP (Internet Wide Area RDMA Protocol).

Note: the steps below are shown on a single node of a 2-node cluster. Unless noted otherwise, every step also needs to be executed on the second node.


Network Connectivity

Before we begin implementing, deploying and configuring, we need to plan out the network connectivity design. Below is a high-level diagram that illustrates the network connectivity for the host management and VM traffic, and for the RDMA (storage) traffic.


Network Configuration

Next, we should map out our IP configuration. For this 2-node deployment, we need the following network adapters and IP addresses.

Traffic Class       | Purpose                                 | Minimum IPs Required | VLAN ID | Tagged/Untagged | IP Address Space | VLAN IP Address
Out of Band (iDRAC) | Remote Management                       | 2                    |         | Untagged        | /29              |
Management (Host)   | Management of Cluster and Cluster Nodes | 3                    |         | Tagged/Untagged | /29              |
Storage 01          | SMB Traffic                             | 2                    |         | Tagged/Untagged | /29              |
Storage 02          | SMB Traffic                             | 2                    |         | Tagged/Untagged | /29              |

Now that we have defined our networking configuration, we can move forward with booting the nodes, and making some necessary changes to the BIOS.


BIOS Configuration

Launch the node and enter the BIOS (usually F2 at the Dell prompt). Next, go to Device Settings and let's configure the RDMA/QLogic adapters.

Your configuration should look similar to this. In my instance, I am leveraging iWARP and not RoCE. By default, the adapters will allow for both modes, but we want to force iWARP only.

  • Virtualization Mode: Disabled
  • DCBX (Data Center Bridging): Disabled
  • Link Speed: SmartAN
  • NIC + RDMA Mode: Enabled
  • RDMA Operation Mode: iWARP
  • Virtual LAN ID: 1 (the default)

Remember, this needs to be done on both RDMA adapters! Once the settings have been applied and saved, go ahead and reboot the node. Remember to do the same on the second node.


Install & Update Operating System

Next, we need to install the operating system. As a best practice, once the OS is installed, apply all OS updates and update all network drivers.


Validate & Rename Network Adapters

It is also a good idea to rename the network adapters. Before we do that, let's confirm the adapters are present and look right.

Get-NetAdapter
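If the default names need cleaning up, Rename-NetAdapter will do it. Here is a minimal sketch, assuming the two host-facing ports currently show up as "Ethernet" and "Ethernet 2" (your names from Get-NetAdapter will likely differ); the storage/RDMA ports can be renamed the same way.

# rename the two host management ports to match the names used later (NIC1/NIC2)
Rename-NetAdapter -Name 'Ethernet' -NewName 'NIC1' -PassThru
Rename-NetAdapter -Name 'Ethernet 2' -NewName 'NIC2' -PassThru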


Install Windows Features & Roles

Once the OS has been installed and patched, we need to install the necessary roles and features, i.e. Hyper-V, Failover Clustering, etc.

Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools -Verbose -Restart

Configure Host Network

Now we need to configure the host management network. In this step we will create a SET (Switch Embedded Teaming) switch. This virtual switch teams the two host network adapters and, at the same time, is the switch the guest VMs will connect to via Hyper-V.

New-VMSwitch -Name S2DSwitch -AllowManagementOS 0 -NetAdapterName 'NIC1','NIC2' -MinimumBandwidthMode Weight -Verbose

Within this code, note that NIC1 and NIC2 are the host management adapters, which were renamed earlier to make life easier.

Now we need to create and configure the host management adapter. We will do this by executing the following cmdlet. Please note, in my environment, the Host Management network is untagged.

Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName S2DSwitch -Passthru | Set-VMNetworkAdapterVlan -Untagged -Verbose

Once we execute this command, and run the Get-NetAdapter cmdlet, we can now see we have an additional network adapter.

In the event you need to tag your Management adapters, you can use the cmdlets below as a reference.

Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 1' -DisplayName 'VLAN ID' -DisplayValue 103 -Verbose
Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 2' -DisplayName 'VLAN ID' -DisplayValue 104 -Verbose

Great, now we can add the nodes to the domain, and set the Management network adapters with static IPs.
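For reference, here is a minimal sketch of that step in PowerShell; the domain name, interface alias and /29 addressing below are placeholders for my lab, so adjust them to your environment.

# assign a static IP and DNS to the Management vNIC created above
New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress 192.168.10.11 -PrefixLength 29 -DefaultGateway 192.168.10.9
Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' -ServerAddresses 192.168.10.5
# join the node to the domain and restart
Add-Computer -DomainName 'domain.local' -Credential (Get-Credential) -Restart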


Create the Cluster, Configure Witness, Enable Storage Spaces Direct

Now that our nodes are domain joined and static IPs have been applied to the host management network, we can begin creating the cluster.

In the code below, I am going to create the cluster, add the two nodes to it, provision the quorum witness (file share witness), and enable Storage Spaces Direct on the cluster.

$cluster = "Cluster_Name"
New-Cluster -Name $cluster -Node "node01", "node02" -StaticAddress "IP Address" -NoStorage -Verbose
# configure the cluster quorum with a file share witness
Set-ClusterQuorum -Cluster $cluster -FileShareWitness "\\server\filewitness\UNCPatch"
# enable Storage Spaces Direct
Enable-ClusterS2D -Verbose

Once we have executed the commands above and launch Failover Cluster Manager, we can see the newly created cluster with its two nodes and Storage Spaces Direct enabled.


If we go into the Pools view, we can also see our software-defined storage pool. We can now create volumes from this pool.

If we go into Enclosures, we can also see all the disks available within the nodes and all the disks that are members of the storage pool.

Great, now we need to do some configuration on the RDMA adapters. Also note that in this scenario I have used a file share witness for the cluster. I would highly recommend considering (or using) an Azure Cloud Witness instead: the egress traffic is next to nothing, and you can connect several clusters to the same storage account. For more information, see the following blog post(s): HERE.
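If you do go the Cloud Witness route, the switch is a one-liner. A sketch, assuming you already have an Azure storage account (the account name and access key below are placeholders):

# point the cluster quorum at an Azure Cloud Witness instead of the file share witness
Set-ClusterQuorum -Cluster $cluster -CloudWitness -AccountName '<StorageAccountName>' -AccessKey '<StorageAccountAccessKey>'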


Change RDMA mode to iWARP on QLogic Adapters

Again, remember which RDMA adapter is which. As mentioned previously, I renamed all of the network adapters to keep things simple and easy to remember.

Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 1' -DisplayName 'RDMA Mode' -DisplayValue 'iWarp'
Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 2' -DisplayName 'RDMA Mode' -DisplayValue 'iWarp'

Now we can leverage the QLogic adapters with RDMA via iWARP for our Storage traffic.
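To confirm the change took, a quick read-only sanity check:

# verify RDMA is enabled on the adapters and that SMB sees them as RDMA-capable
Get-NetAdapterRdma | Format-Table Name, Enabled
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable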


Create Cluster Shared Volumes (CSV)

Now that our cluster is created, the nodes have been added, and RDMA is configured, we can create a CSV that the VMs will use as their data store. We will do this with the following cmdlet.

New-Volume -StoragePoolFriendlyName "Storage Pool" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -size 2TB

I elected to keep the CSV small with a 2TB volume; however, I did have another 3TB to work with.


Update Live Migration

We are almost there; we now need to update the Live Migration network settings. This will ensure we make use of the RDMA (storage) networks and not the Management network. We will do this via the Failover Cluster Manager console.

It is also a good idea to rename the cluster networks. As you can see, I have renamed my storage networks to Storage1 and Storage2, and the host management network to Management.

Go to the Failover Cluster Manager console >> right-click Networks >> select Live Migration Settings >> deselect the Management network.


You may have also noticed that I have configured the networks and their cluster use: the storage networks are available only to the cluster, while the Management network is available to both the cluster and clients (guest VMs).
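If you would rather script those two settings than click through Failover Cluster Manager, here is a sketch, assuming the cluster networks carry the names shown above (Management, Storage1, Storage2):

# exclude the Management network from live migration
Get-ClusterResourceType -Name 'Virtual Machine' | Set-ClusterParameter -Name MigrationExcludeNetworks -Value (Get-ClusterNetwork -Name 'Management').Id
# set the cluster network roles: 1 = cluster only, 3 = cluster and client
(Get-ClusterNetwork -Name 'Storage1').Role = 1
(Get-ClusterNetwork -Name 'Storage2').Role = 1
(Get-ClusterNetwork -Name 'Management').Role = 3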


Next steps

We have now successfully created a Storage Spaces Direct cluster that leverages RDMA networking with the iWARP protocol. We also created a SET switch that our VMs can use as their network adapter, and a storage pool with a volume dedicated to our VM disks as a Cluster Shared Volume.

The next step is to create a VM that leverages Storage Spaces Direct!

How to Increase ASR (Azure Site Recovery) Replication and Failback Default Settings

Now that you have deployed ASR (Azure Site Recovery) for Hyper-V and have started replication, you may notice the replication process seems to take forever, with several VMs still queued. That is right: by default, ASR will replicate 4 (four) VMs at a given time. This value can be increased (to a maximum of 32), but where do you change this setting?

In order to increase the number of replication threads from 4 to 32, or anywhere in between, you first need to launch the Registry Editor and navigate to: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Replication

From there, you will need to create the value if it does not already exist (I have never seen it by default in any of my deployments). Create the DWORD value "UploadThreadsPerVM" and set it to whatever you see fit. Again, the maximum is 32.

Likewise, you can increase the default (4) number of threads used for data transfer during failback. The path is the same, and the DWORD value is "DownloadThreadsPerVM"; again, it can be set to a maximum of 32.
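If you prefer to set both values with PowerShell rather than the Registry Editor, here is a minimal sketch (the value of 16 is only an example; anything up to the documented maximum of 32 works):

# create or overwrite the two DWORD values under the ASR replication key
$path = 'HKLM:\SOFTWARE\Microsoft\Windows Azure Backup\Replication'
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name 'UploadThreadsPerVM' -PropertyType DWord -Value 16 -Force
New-ItemProperty -Path $path -Name 'DownloadThreadsPerVM' -PropertyType DWord -Value 16 -Force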

After that is completed, your Hyper-V registry keys will look something like this. Please note, this change is fully supported by the ASR/Microsoft team. However, do note that it can saturate your network due to the increase in uploads to Azure. You can also change the schedule for the bandwidth throttling settings; see Step 10 of the previous post here.

 

For additional information on this, please visit https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-plan-capacity-vmware#control-network-bandwidth.

System Center Virtual Machine Manager (SCVMM) 2016 – Error 2912 – Unknown error (0x80041008)

Problem: Cannot deploy a logical switch (vSwitch) to a Windows Server 2016 node.

Environment: 2 x 10Gb network cards – IBM Flex chassis (not that it matters…)

Error:

An internal error has occurred trying to contact the ‘hypervserver01.domain.com’ server: : .

WinRM: URL: [http://hypervserver01.domain.com:5985], Verb: [INVOKE], Method: [GetFinalResult], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/scvmm/AsyncTask?ID=1001]

Unknown error (0x80041008)

Recommended Action
Check that WS-Management service is installed and running on server ‘hypervserver01.domain.com’. For more information use the command “winrm helpmsg hresult”. If ‘hypervserver01.domain.com’ is a host/library/update server or a PXE server role then ensure that VMM agent is installed and running. Refer to http://support.microsoft.com/kb/2742275 for more details.

Solution: In my case, I tried the following. Ultimately, it came down to the last item (enabling the secondary physical network port).

  • Disable Windows Firewalls on both SCVMM and the Hyper-V 2016 server
  • Change the WinRM listener back to the default port (5985)
winrm set winrm/config/Listener?Address=*+Transport=HTTP '@{Port="5985"}'

  • Enable the secondary physical port
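For the last item, here is a rough PowerShell equivalent, plus a quick WinRM connectivity check; the adapter name below is a placeholder, so check Get-NetAdapter for the port that is actually disabled.

# bring up the second 10Gb port that the logical switch deployment expects
Enable-NetAdapter -Name 'Ethernet 2' -Confirm:$false
# confirm WinRM on the Hyper-V host answers from the SCVMM server
Test-WSMan -ComputerName 'hypervserver01.domain.com'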

Hyper-V 2016 Linux Ubuntu PXE Network Boot Error

If you're like me, you want to run Linux on your Hyper-V 2016 host; in my case I am attempting to run Ubuntu 16.04.1. Booting from an ISO, I kept getting the same error over and over: "PXE Network Boot using IPv4 ( ESC to cancel ) Performing DHCP Negotiation…". It wasn't the ISO media. It wasn't the size of the VHDX. It wasn't the memory/vCPU or vNIC configuration. It wasn't even whether it was a Generation 1 or Generation 2 VM… It was the Secure Boot setting.


Solution

  1. Stop the VM
  2. Go to its Settings
  3. Within Hardware, select Security > uncheck "Enable Secure Boot" > start your machine back up! (Or use PowerShell, as sketched below.)
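If you prefer PowerShell over the UI, the same change is a one-liner. A sketch, assuming a Generation 2 VM named 'Ubuntu01' (adjust the name to your VM):

# turn off Secure Boot for the (powered-off) Generation 2 VM
Set-VMFirmware -VMName 'Ubuntu01' -EnableSecureBoot Off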


Yay!

How to upload Custom Images to Microsoft Azure using PowerShell

In this post, I am going to show how to upload a custom image used in Windows Hyper-V (2016) to the Azure cloud. I will be using a combination of the Hyper-V UI and PowerShell with Azure Resource Manager (ARM), working on Hyper-V 2016 with a custom image of Windows Server 2008 R2 SP1.

Okay, let’s get started.

Prepare On-Premises Virtual Machine Image

First, we need an image to work with. As mentioned, I am using a Windows Server 2008 R2 SP1 (yes, 2008 — needed it for a customer). The VM is Generation 1, which is not only a requirement for Windows 2008, but also a requirement for Azure, as it currently does not support Generation 2 VMs. See HERE to read more on preparing a Windows VHD.

Next, we need to install the Hyper-V role on the VM. Since this is a nested VM, we will first need to enable nested virtualization on the Hyper-V 2016 box. See a previous post on how to go about this HERE. Once that is complete, go ahead and install the Hyper-V role.

Next, we need to Sysprep our VM. From an administrative command prompt, navigate to %windir%\system32\sysprep and then execute "sysprep.exe". Here, we will be using OOBE, enabling "Generalize", and selecting "Shutdown" for when Sysprep completes.
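For reference, the equivalent one-liner from an elevated PowerShell prompt (the switches match the OOBE/Generalize/Shutdown options selected in the UI):

# generalize the image for reuse and shut the VM down when finished
& "$env:windir\System32\Sysprep\sysprep.exe" /oobe /generalize /shutdown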

Once the VM is Sysprepped, we need to compact the VHDX (remember, Hyper-V 2016 here) and also convert the VHDX to a VHD. This is due to a current limitation of Azure, which only supports Gen 1 VMs and VHD files.

Go into Hyper-V, and within the VM properties, edit the virtual hard disk. Then compact the virtual hard disk. Go ahead and do that.
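If you would rather script the compaction than click through the Edit Virtual Hard Disk wizard, here is a sketch; the path is a placeholder, and the disk must not be attached to a running VM:

# compact the dynamically expanding VHDX before converting it
Optimize-VHD -Path 'D:\VMs\Win2008R2.vhdx' -Mode Full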

Great, now we need to convert the VHDx to a VHD. Time for PowerShell!

Convert-VHD -Path "<source VHDX path>" -DestinationPath "<destination VHD path>" -VHDType Fixed -Verbose


Let this run (I let it go overnight… it was getting late =) )

Great, now we are ready to move on to Azure and more PowerShell.

Build Azure Container and Upload Image to Azure

First, we need to download and install the latest AzureRM module locally on the Hyper-V box (if you have already done this, jump down a few lines…).

Install-Module AzureRM -Force

Next, since there was a recent update to the AzureRm module, I now need to update the module path location.

$env:PSModulePath = $env:PSModulePath + ";C:\Program Files\WindowsPowerShell\Modules"

Next, we will need to import the AzureRm module.

Import-Module AzureRM -Force

Next, we'll need to log in to our Azure account and specify the subscription we want to work with. In my case, there are multiple Azure subscriptions tied to my email.

Login-AzureRmAccount
Get-AzureRmSubscription
#select the subscription you will be working with -- if you only have one, you can skip this line
Select-AzureRmSubscription -SubscriptionId "<ID>"

Next, we will create a resource group and a storage account, and bind the account to the group.

New-AzureRmResourceGroup -Name "ResourceGroupName" -Location "Canada East"
New-AzureRmStorageAccount -ResourceGroupName "ResourceGroupName" -Name "StorageAccountName" -Location "Canada East" -SkuName "Standard_LRS" -Kind "Storage"

If you want to change the storage type to, let's say, geo-redundant, here are the other types of storage:

Valid values for -SkuName are:

  • Standard_LRS – Locally redundant storage.
  • Standard_ZRS – Zone redundant storage.
  • Standard_GRS – Geo redundant storage.
  • Standard_RAGRS – Read access geo redundant storage.
  • Premium_LRS – Premium locally redundant storage.

Now, we need to create a Container and grab the URL needed to upload our image. I did this through the Azure Resource Manager (ARM) Portal since I couldn’t figure out the PowerShell cmdlet (Get-AzureStorageBlob) — if you can get this to work, please let me know!

You can get the URL from the Web UI when you go into the Storage Account >> Blobs >> Container (in my case, I called it “VHD”) >> Properties.
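For what it's worth, here is a rough PowerShell sketch of the same step using the AzureRM/Azure.Storage cmdlets; the resource group, account and container names are placeholders, and I have not battle-tested this, so treat it as a starting point rather than the definitive way.

# grab the storage account context, create a 'vhd' container and build the destination URL
$ctx = (Get-AzureRmStorageAccount -ResourceGroupName 'ResourceGroupName' -Name 'storageaccountname').Context
New-AzureStorageContainer -Name 'vhd' -Context $ctx -Permission Off
$AzureVHDURL = $ctx.BlobEndPoint + 'vhd/Win2008R2.vhd'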

Now we are ready to upload our image/VHD to Azure! For me this took about 2 hours, uploading an 80GB file at 9-10 MB/s.

$rgName = "ResourceGroupName"
$AzureVHDURL = "URL"
$LocalVHDPath = "LocalPathtoVHD"
Add-AzureRmVhd -ResourceGroupName $rgName -Destination $AzureVHDURL -LocalFilePath $LocalVHDPath

Great, now we just need to register the VHD disk to the Gallery, and we can begin creating machines based off our image that is now in the cloud! — Another post! šŸ™‚

Step-by-Step – Installing System Center Virtual Machine Manager (SCVMM) 2016

I finally got some time to install and play around with SCVMM (System Center Virtual Machine Manager) 2016 this weekend. Along with the installation and configuration, I figured I would capture the steps. Below are the steps I took to get a PoC (Proof of Concept) of SCVMM installed.

For this installation, I will be installing SCVMM 2016 on Windows Server 2016 (with UI), on a virtual machine within a Hyper-V (2016) environment. There is no fancy storage here, so I will omit that for this configuration/blog post.

As prerequisites, you will need some service accounts:

  • SCVMM Service Account
  • SCVMM Administrator Account
  • SCVMM Administrator Group
  • SQL Service Account

You can use PowerShell to quickly create the accounts, see here:

#create scvmm service accounts
New-ADUser -Name "SCVMM_SA" -GivenName SCVMM -Surname SA -SamAccountName scvmm_sa -UserPrincipalName scvmm_sa@ravilocal.com -AccountPassword (ConvertTo-SecureString "Passw0rd" -AsPlainText -Force) -PassThru | Enable-ADAccount
New-ADUser -Name "SCVMM_ADMIN" -GivenName SCVMM -Surname ADMIN -SamAccountName scvmm_admin -UserPrincipalName scvmm_admin@ravilocal.com -AccountPassword (ConvertTo-SecureString "Passw0rd" -AsPlainText -Force) -PassThru | Enable-ADAccount

#create scvmm admins security group, add scvmm_sa and scvmm_admin to the group
New-ADGroup SCVMM_ADMINS -GroupScope Global -GroupCategory Security
Add-ADGroupMember SCVMM_ADMINS -Members SCVMM_SA
Add-ADGroupMember SCVMM_ADMINS -Members SCVMM_ADMIN

#create sql sa account
New-ADUser -Name "SQL_SA" -GivenName SQL -Surname SA -SamAccountName sql_sa -UserPrincipalName sql_sa@ravilocal.com -AccountPassword (ConvertTo-SecureString "Passw0rd" -AsPlainText -Force) -PassThru | Enable-ADAccount

Once you have done this, add the SCVMM accounts to the local Administrators group on the SCVMM server.
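A quick way to do that is with the PowerShell 5.1 local accounts cmdlets; the NetBIOS domain name below is a placeholder for my lab domain (ravilocal.com):

# add the SCVMM service and admin accounts to the local Administrators group
Add-LocalGroupMember -Group 'Administrators' -Member 'RAVILOCAL\scvmm_sa', 'RAVILOCAL\scvmm_admin'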

Next, you will need to prep your server with the Windows Assessment and Deployment Kit (ADK) and the SQL Server Command Line Utilities.

Download the Windows ADK for Windows 10.

You will need to install the Deployment Tools and Windows Preinstallation Environment (Windows PE) features.


Then I downloaded the SQL Server Command Line Utilities 11 along with ODBC Driver 11 for SQL Server. Both of these downloads can be found below.

Once complete, I then installed a new SQL instance on my SQL Server 2016 SP1 machine and called it "SCVMM16".

After that, I rebooted my SCVMM server and was ready to start the SCVMM 2016 install.

Execute Setup.exe as the Local Administrator.


Connect to a SQL instance. If you need to know the SCVMM SQL requirements, go HERE.


Since this is a PoC, and not being prepped for a Production environment, I can go ahead and skip the Distributed Key Management, although this is required and recommended if you’re deploying in a HA/Production environment.


Double check the default ports are open for the install, or update the ports as needed to correspond to your environment.


Since this a fresh install, and I did not setup an external SAN storage, I will keep this as default, and configure later.


Double check and confirm the summary details before proceeding — no going back after this….


Once you’re ready, go ahead and hit Install. For me, the install took about 15 minutes.. Good time for a walk and fresh air. šŸ™‚


Sweet!! Now we are ready to roll.

Next steps (I will do that next and blog soon…)

  • Configure SCVMM 2016
    • Deploy the SCVMM agent to our Hyper-V host(s)
    • Configure the Library Share/PXE
    • Configure the Fabric/Network/etc.,
  • Install Update Rollup 2 (UR2)

Until then, happy SCVMM’ing!

Step-by-Step: Setup and Configure Azure Site Recovery (ASR) with Windows Server 2016 Hyper-V using ARM

Not too long ago, Microsoft announced support for Windows Server 2016 with Azure Site Recovery (ASR). Microsoft's announcement can be found HERE.

With that said, I decided to set up ASR with my Hyper-V 2016 environment. Rather than the typical blog post (screenshots, etc.), I decided to create a step-by-step video that demonstrates how to set up ASR with Windows Server 2016 and Hyper-V. That video can be found HERE at Channel 9.

In addition, this post is part of a series of blog posts on Azure Site Recovery (ASR).

Step-by-Step: Setup and Configure Azure Site Recovery (ASR) for On-Premises Hyper-V Host with Azure Resource Manager (ARM)

This post is part of a series of blog posts on Azure Site Recovery (ASR).

Here is a step-by-step walk-through on how to go about setting up and configuring ASR (Azure Site Recovery) and backing up your on-premises virtual machines (VMs) with Azure Resource Manager (ARM).

First things first: Azure's Recovery Services vault is a unified vault/resource that allows you to manage your backup and disaster recovery needs within Azure. For example, if you are hosting your VMs on-premises, you can create a link between your on-premises site and Azure to allow your VMs to be backed up into Azure. This is regardless of your hypervisor; it can be either ESX or Hyper-V, and either will work. However, for the purposes of this blog post, I will be setting up ASR for a Hyper-V 2012 R2 host.



Configuring Azure

Step 1: Create a Recovery Services Vault

Within Azure Resource Manager (ARM), select New, then within the Marketplace select Monitoring + management, and then select Backup and Site Recovery (OMS) within the featured apps. Of course, if this is no longer present, just search for it within the Marketplace.


Next we will now need to create our vault.

Give it a meaningful name, and either create a new Resource Group or use an existing one. I opted for an existing one, as I will (in another post) next set up a Site-to-Site ASR.


Give this a few seconds, maybe minutes to do its thing…

Great, now our Vault is up and ready to go!


Step 2: Choose your Protection Goal(s)

Click Settings > Site Recovery (under Getting Started) > Step 1: Prepare Infrastructure > Protection Goal > and specify the following > click OK:

  • Replicating to: Azure
  • Machines Virtualized: Yes, with Hyper-V
  • Using SCVMM (Virtual Machine Manager): No


Step 3: Setup the Source Environment

Next, we will need to prepare our source and give our Hyper-V site a name. "Ravi-OnPrem" makes sense here, but give it something meaningful to you.


Now we need to download the ASR Provider installer, along with the vault registration key.


Step 4: Install and Configure the ASR Provider on Hyper-V Host


This Hyper-V host is not behind any Proxy…


If we go back to Azure, we can now see our Hyper-V host populated.


Step 5: Create a Replication Policy

Within our Vault properties > Settings > Manage: Site Recovery Infrastructure > For Hyper-V Sites: Replication Policies > +Replication Policies


Step 6: Associate Hyper-V Site(s)

Next we will need to Associate our Hyper-V site:


Great! Now we can continue on with Step 3 (Target Environment) of Step 1 (Preparing Infrastructure).

Step 7: Create a Storage Account + Virtual Network


Within the Replication settings, we have a few options here. I left mine as the default, (GRS) geo-redundant.

Next, we need to create a Target Virtual Network:


Now we can go ahead and setup the replication settings:

Step 8: Setup Replication Settings


Since we created the Replication Policy beforehand, this auto-filled. Next we need to do some capacity planning. Since this is simply a walk-through example, I elected to skip this, but for a real production environment, I would highly recommend doing it.

Here is a link to Microsoft’s Capacity Planner for Hyper-V Replica.


Hit OK, and now we are ready to move on to Step 2 (Replicate Application).


This should all have populated, since we created our storage account and virtual network just earlier… If not, add them.

Now that Azure has connected with our Hyper-V host, we can see the VMs on our Hyper-V host. Here we need to select which machines we want to include within ASR. For simplicity and variety, I am going to select a domain controller and a Linux machine.


Now we need to configure the VMs properties:


Once we are good, we can go ahead and apply the Replication Policy to our VMs.


Once satisfied, go ahead and hit “Enable Replication“.


Lastly, Step 3, we now need to complete creating our Recovery Plan:

Step 9: Create Recovery Plan


Great! All done? Before we say all done, let's go back to our Hyper-V host and configure the network/bandwidth throttling.

Step 10: Network/Throttle Bandwidth

My Hyper-V host is not equipped with a GUI, as I am using Windows 2012 R2 minimal server interface, so navigate to "C:\Program Files\Microsoft Azure Recovery Services Agent\bin\" to launch the Microsoft Azure Backup agent. Launch "wabadmin".


In the Actions pane, select “Change Properties” >> Select the Throttling tab.


Change these settings to suit your needs. I wanted to increase my non-work hours throttle to 4MB, but it looks like 1MB is the max.

Great! Since we already hit Enable Replication, this process should have already started… Let's go back to Azure:

If we take a look at the Vault > Settings > Protected Items > Replicated Items:


Once these VMs are 100% synchronized, the next step will be to simulate a failover, both Test and Planned.
