Month: March 2019

Step-by-Step – Installing System Center Operations Manager (SCOM) 2019 on Windows Server 2019 with SQL 2017

In this post I will be installing System Center Operations Manager (SCOM) 2019 RTM, build number 10.19.10050.

Here is some background information. As this post concentrates on the installation of SCOM 2019, I am going to omit the setup and configuration of the Domain Controller and of the Windows Server 2019 machine used for the SCOM Management Server. Also note that I am using a PaaS instance of SQL 2017 (hosted on Azure); the entire environment lives on Azure in an IaaS/PaaS configuration.

Service Accounts and Local Administrator:

Domain Account           Description                     Local Admin on…
domain\SCOM_AA           SCOM Action Account             SCOM
domain\SCOM_DA           SCOM Data Access/SDK Account    SCOM
domain\SCOM_SQL_READ     SCOM SQL Reader                 n/a
domain\SCOM_SQL_WRITE    SCOM SQL Writer                 n/a
domain\SCOM_Admins       SCOM Administrators Group       SCOM
domain\SQL_SA            SQL Service Account             n/a

Now, if you’re lazy like me, or tired of doing this setup for every environment, I have scripted the creation of these accounts. You can find that script on the Microsoft TechNet Gallery, linked here.
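For reference, a minimal sketch of what such a script can look like is below. It assumes the RSAT ActiveDirectory module is available; the OU path and the prompted password are hypothetical placeholders, and the account names come from the table above. Adjust everything for your environment.

Import-Module ActiveDirectory

# Hypothetical OU and a prompted password for the service accounts
$ou       = "OU=Service Accounts,DC=domain,DC=local"
$password = Read-Host -AsSecureString -Prompt "Service account password"

# Service accounts from the table above
'SCOM_AA','SCOM_DA','SCOM_SQL_READ','SCOM_SQL_WRITE','SQL_SA' | ForEach-Object {
    New-ADUser -Name $_ -SamAccountName $_ -Path $ou -AccountPassword $password -PasswordNeverExpires $true -Enabled $true
}

# SCOM administrators group
New-ADGroup -Name 'SCOM_Admins' -GroupScope Global -Path $ou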


Let’s Begin:

Since I am hosting SQL on a dedicated server, I will install SSRS (SCOM Reporting) on that server.

Well, that’s nothing new… Prerequisites. Since this is a clean, vanilla Windows Server 2019 machine, we will need to install all of the necessary Web Console components, along with the Report Viewer controls (and probably the SQL CLR Types as well).

  • For the Report Viewer Prerequisites, go HERE.
  • Here is the PowerShell command I ran to install the necessary IIS features/roles:
Import-Module ServerManager
Add-WindowsFeature Web-Server, Web-WebServer, Web-Common-Http, Web-Default-Doc, Web-Dir-Browsing, Web-Http-Errors, Web-Static-Content, Web-Health, Web-Http-Logging, Web-Log-Libraries, Web-Request-Monitor, Web-Performance, Web-Stat-Compression, Web-Security, Web-Filtering, Web-Windows-Auth, Web-App-Dev, Web-Net-Ext45, Web-Asp-Net45, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Mgmt-Tools, Web-Mgmt-Console, Web-Mgmt-Compat, Web-Metabase, NET-Framework-45-Features, NET-Framework-45-Core, NET-Framework-45-ASPNET, NET-WCF-Services45, NET-WCF-HTTP-Activation45, NET-WCF-TCP-PortSharing45, WAS, WAS-Process-Model, WAS-Config-APIs -restart

 

Once the server is back online, you will need to register ASP.Net.


You will need to apply the following using Command Prompt (as Administrator). Yes, this is a screenshot from a previous post… I forgot to capture the screenshot when running it this time. A PowerShell equivalent is sketched after the list.

  1. cd %WINDIR%\Microsoft.NET\Framework64\v4.0.30319
  2. aspnet_regiis.exe -r
  3. IISRESET
  4. Reboot your server…
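If you prefer to run this from an elevated PowerShell session instead of Command Prompt, a rough equivalent of the steps above is:

# Rough PowerShell equivalent of the four steps above (run elevated)
Set-Location "$env:WINDIR\Microsoft.NET\Framework64\v4.0.30319"
.\aspnet_regiis.exe -r
iisreset
Restart-Computer -Force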

Once the server is back online, let’s try that Prerequisites check again….

Great! Now all of the prerequisites have been met!

Provide a meaningful Management Group Name (there’s no going back after this…)

SQL Server will be the server where your SCOM SQL instance(s) were installed. Remember to either disable the Windows Firewall or open SQL TCP port 1433.
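If you would rather not disable the firewall, a rule along these lines (run on the SQL server; the display name is arbitrary) opens the default SQL port:

# Allow inbound SQL Server traffic on the default TCP port 1433
New-NetFirewallRule -DisplayName "SQL Server (TCP 1433)" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow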

 

I recommend always keeping this off, and manually updating your SCOM infrastructure.

One quick review. Looks good. Hit Install, and get some fresh air!

A few minutes later….

Sweet! All good. I hope this helps. If you have any questions or issues, please drop me a line.

Happy 2019 SCOM’ing!


Data Deduplication in Windows Server 2019

When Windows Server 2016 was released, Data Deduplication was not available for the ReFS file system, only for NTFS. With Windows Server 2019, Data Deduplication is now available for both NTFS and ReFS.

Data Deduplication is a great technology that allows you to reduce your storage footprint by removing duplicated data blocks and replacing them with metadata.

In the scenario below, I will show you how to enable Data Deduplication and how to track the savings rate it achieves.

Install-WindowsFeature FS-Data-Deduplication

This cmdlet installs the feature. In most scenarios, i.e. Storage Spaces Direct and Hyper-V, this will make the most sense. Note that this cmdlet needs to be executed on all nodes.
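If you want to push the feature to every node in one pass, something like the following works, assuming PowerShell remoting is enabled and using placeholder node names:

# Install the deduplication feature on all nodes remotely (node names are placeholders)
$nodes = 'node01','node02'
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Install-WindowsFeature -Name FS-Data-Deduplication
}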

Get-Command *Dedup*

Now that we have Data Deduplication installed, we can see all of the cmdlets available.

Enable-DedupVolume -Volume "E:","F:" -UsageType HyperV

Finally, once we enable Data Deduplication on the volumes, we can track the savings rate. This can be done via PowerShell or Windows Admin Center (WAC). Note that in this Hyper-V/S2D scenario, deduplication is enabled on Cluster Shared Volumes (CSVs).

Get-DedupVolume
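As a quick sketch of what tracking the savings can look like in PowerShell:

# Show capacity, saved space and savings rate per deduplicated volume
Get-DedupVolume | Select-Object Volume, Capacity, SavedSpace, SavingsRate

# Get-DedupStatus adds optimization details such as file counts and the last run time
Get-DedupStatus | Select-Object Volume, OptimizedFilesCount, InPolicyFilesCount, LastOptimizationTime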

I hope this helps, and that you can now start reducing the storage footprint within your environment with Data Deduplication.

Deploying and Configuring Storage Spaces Direct (S2D)

This blog post will focus on deploying Storage Spaces Direct (S2D) with Windows Server 2016 (the steps with Server 2019 should be very similar, if not identical) in a RoBo (Remote Office Branch Office) configuration with Dell Ready Nodes (S2DRN) leveraging RDMA (Remote Direct Memory Access). Now that is a mouthful, so let’s focus on what Storage Spaces Direct is first.

What is Storage Spaces Direct? With the release of Server 2016, Microsoft introduced Storage Spaces Direct (S2D). S2D allows you to take industry-standard servers and leverage the internal local drives within the nodes to create highly available, highly scalable software-defined storage. Using a hyper-converged or converged architecture, you are able to deploy quickly and scale storage, while implementing features such as storage tiers and caching, all while taking advantage of RDMA networking.

What is RDMA? Remote Direct Memory Access, or RDMA for short, is an enterprise networking technology that allows you to exchange data through memory, without consuming the CPU or the Operating System kernel. RDMA allows your applications to achieve high IOPS at very low latency, while leveraging either RoCE (RDMA over Converged Ethernet) or iWARP (Internet Wide Area RDMA Protocol).

Note: the steps below are shown on a single node of a 2-node cluster. All of the steps below also need to be executed on the second node.


Network Connectivity

Before we begin implementing, deploying and configuring, we need to plan out the network connectivity design. Below is a high-level diagram that illustrates the network connectivity for the host management and VM traffic, and for the RDMA (storage) traffic.


Network Configuration

Next we should map out our IP configuration. With this 2-node deployment we know we need the following network adapters and the following IPs.

Traffic Class         Purpose                                    Minimum IPs Required   VLAN ID   Tagged/Untagged    IP Address Space   VLAN IP Address
Out of Band (iDRAC)   Remote Management                          2                                Untagged           /29
Management (Host)     Management of Cluster and Cluster Nodes    3                                Tagged/Untagged    /29
Storage 01            SMB Traffic                                2                                Tagged/Untagged    /29
Storage 02            SMB Traffic                                2                                Tagged/Untagged    /29

Now that we have defined our networking configuration, we can move forward with booting the nodes, and making some necessary changes to the BIOS.


BIOS Configuration

Launch the node and enter the BIOS (usually F2 at the Dell prompt). Next, go to Device Settings and let’s configure the RDMA/QLogic adapters.

Your configuration should look similar to this. In my instance, I am leveraging iWARP and not RoCE. By default, the adapters will allow for both modes, but we want to force iWARP only.

  • Virtualization Mode: Disabled
  • DCBX (Data Center Bridging): Disabled
  • Link Speed: SmartAN
  • NIC + RDMA Mode: Enabled
  • RDMA Operation Mode: iWARP
  • Virtual LAN ID: 1 (which is the default)

Remember, this needs to be done on both RDMA adapters! Once the settings have been applied and saved, go ahead and reboot the node. Remember to do the second node too!


Install & Update Operating System

Next, we need to install the Operating System. As a best practice, once the OS is installed, update the OS and all network drivers.


Validate & Rename Network Adapters

It is also a good idea to rename the network adapters. Before we do that, let’s confirm the adapters are present and look right.

Get-NetAdapter
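The rename itself is a one-liner per adapter; a sketch is below, where the current names ('Ethernet', 'Ethernet 2', …) will vary with your hardware and the new names match what is used later in this post:

# Rename the physical adapters to friendlier names (current names are examples)
Rename-NetAdapter -Name 'Ethernet'   -NewName 'NIC1'
Rename-NetAdapter -Name 'Ethernet 2' -NewName 'NIC2'
Rename-NetAdapter -Name 'Ethernet 3' -NewName 'SLOT 3 PORT 1'
Rename-NetAdapter -Name 'Ethernet 4' -NewName 'SLOT 3 PORT 2'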


Install Windows Features & Roles

Once the OS has been installed and patched, we need to install the necessary roles and features, i.e. Hyper-V, Failover Clustering, etc.

Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools -Verbose -Restart

Configure Host Network

Now we need to configure the host management network. In this step we will create a SET (Switch Embedded Teaming) switch. This switch will not only team the two host network adapters, but will also be leveraged by the guest VMs via Hyper-V.

New-VMSwitch -Name S2DSwitch -AllowManagementOS 0 -NetAdapterName 'NIC1','NIC2' -MinimumBandwidthMode Weight -Verbose

Within this code, note that NIC1 and NIC2 are the host management adapters that were renamed earlier to make life easier.
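To confirm the switch was created as a SET team across both adapters, a quick check looks like this:

# Verify the SET switch and its team members
Get-VMSwitch -Name S2DSwitch
Get-VMSwitchTeam -Name S2DSwitch | Format-List Name, NetAdapterInterfaceDescription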

Now we need to create and configure the host management adapter. We will do this by executing the following cmdlet. Please note, in my environment, the Host Management network is untagged.

Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName S2DSwitch -Passthru | Set-VMNetworkAdapterVlan -Untagged -Verbose

Once we execute this command, and run the Get-NetAdapter cmdlet, we can now see we have an additional network adapter.

In the event you need to tag your management adapters, you can use the following cmdlets as a reference.

Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 1' -DisplayName 'VLAN ID' -DisplayValue 103 -Verbose
Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 2' -DisplayName 'VLAN ID' -DisplayValue 104 -Verbose

Great, now we can add the nodes to the domain, and set the Management network adapters with static IPs.
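For reference, the static IP assignment and domain join can also be scripted; the sketch below uses placeholder addresses, a /29 prefix to match the table earlier, and a hypothetical domain name:

# Assign a static IP and DNS to the Management vNIC (values are placeholders)
New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress 192.168.10.11 -PrefixLength 29 -DefaultGateway 192.168.10.9
Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' -ServerAddresses 192.168.10.10

# Join the node to the domain and reboot
Add-Computer -DomainName 'domain.local' -Credential (Get-Credential) -Restart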


Create the Cluster, Configure Witness, Enable Storage Spaces Direct

Now that our nodes are domain joined, and static IPs have been applied to the host management network, we can begin creating the cluster.

In the code below, I am going to create the cluster; add the two nodes to the cluster; provision the Quorum witness (file witness) and enable Storage Spaces Direct on the cluster.

$cluster = "Cluster_Name"
# Create the cluster with both nodes and a static cluster IP, without adding storage
New-Cluster -Name $cluster -Node "node01", "node02" -StaticAddress "IP Address" -NoStorage -Verbose
# Assign the cluster quorum (file share witness)
Set-ClusterQuorum -Cluster $cluster -FileShareWitness "\\server\filewitness\UNCPath"
# Enable Storage Spaces Direct
Enable-ClusterS2D -Verbose

Once we have executed the commands above, if we launch Failover Manager, we can now see the created Cluster, with the 2 nodes, and Storage Spaces Direct enabled.


If we go into the Pool, we can also now see our Software-Defined Storage Pool. We can now create volumes from this pool.

If we go into the Enclosures, we can now also see all the disks available within the nodes and all disks that are members of the Storage Pool.
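The same information is available from PowerShell, for example:

# View the S2D storage pool and the physical disks behind it
Get-StoragePool | Where-Object IsPrimordial -eq $false
Get-PhysicalDisk | Sort-Object Size | Format-Table FriendlyName, MediaType, Size, HealthStatus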

Great, now we need to do some configuration on the RDMA adapters… Also note that, in this scenario, I have leveraged a file share witness for the cluster. I would highly recommend considering or using an Azure Cloud Witness instead. The egress traffic is next to zero, and you can connect several clusters to the same storage account. For more information, see the following blog post(s): HERE.


Change RDMA mode to iWARP on QLogic Adapters

Again, remember which RDMA adapter is which. As mentioned previously, I renamed all of the network adapters to keep things simple and easy to remember.

Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 1' -DisplayName 'RDMA Mode' -DisplayValue 'iWarp'
Set-NetAdapterAdvancedProperty -Name 'SLOT 3 PORT 2' -DisplayName 'RDMA Mode' -DisplayValue 'iWarp'

Now we can leverage the QLogic adapters with RDMA via iWARP for our Storage traffic.
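To confirm RDMA is enabled on the storage adapters and that SMB sees them as RDMA-capable, you can run:

# Confirm RDMA is enabled on the storage adapters
Get-NetAdapterRdma -Name 'SLOT 3 PORT 1','SLOT 3 PORT 2'

# Confirm SMB sees the interfaces as RDMA-capable
Get-SmbClientNetworkInterface | Where-Object RdmaCapable -eq $true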


Create Cluster Shared Volumes (CSV)

Now that our cluster is created, the nodes have been added, and RDMA is configured, we can create a CSV that will be leveraged by the VMs as their data store. We will do this with the following cmdlet.

New-Volume -StoragePoolFriendlyName "Storage Pool" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -Size 2TB

Now, I elected to keep the CSV small with a 2TB volume; however, I did have another 3TB to work with.


Update Live Migration

We are almost there. We now need to update the Live Migration network settings to ensure we make use of the RDMA networks and not the Management network. We will do this via the Failover Manager console.

It is also a good idea to rename the cluster networks. As you can see, I have renamed my storage networks to Storage1 and Storage2, and the host management network to Management.

Go to the Failover Manager Console >> Right Click Networks >> Select Live Migration Settings >> deselect the Management network.
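The same change can be made from PowerShell by excluding the Management network from live migration; a sketch, assuming the cluster network is named Management as above:

# Exclude the Management cluster network from live migration
Get-ClusterResourceType -Name 'Virtual Machine' | Set-ClusterParameter -Name MigrationExcludeNetworks -Value ([string]::Join(';', (Get-ClusterNetwork | Where-Object Name -eq 'Management').Id))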


You may have also noticed that I have configured the networks and their cluster use: the storage networks are available only to the cluster, and the Management network is available to both the cluster and clients (guest VMs).
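If you prefer to set the cluster network roles from PowerShell, here is a sketch using the network names from this post:

# Cluster network roles: 0 = none, 1 = cluster only, 3 = cluster and client
(Get-ClusterNetwork -Name 'Management').Role = 3
(Get-ClusterNetwork -Name 'Storage1').Role = 1
(Get-ClusterNetwork -Name 'Storage2').Role = 1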


Next steps

We have now successfully created a Storage Spaces Direct cluster leveraging RDMA networking with the iWARP protocol. We also created a SET switch that can be leveraged by our VMs as their network adapter, and a Storage Pool with a volume dedicated to our VM disks via a Cluster Shared Volume.

The next step is to create a VM and start leveraging Storage Spaces Direct!

System Center Operations Manager (SCOM) 2019 – Requirements for Windows Server 2019 via PowerShell

The following PowerShell code is to install all the necessary IIS components for System Center Operations Manager (SCOM) 2019 Web Console on Windows Server 2019.

Import-Module ServerManager
Add-WindowsFeature Web-Server, Web-WebServer, Web-Common-Http, Web-Default-Doc, Web-Dir-Browsing, Web-Http-Errors, Web-Static-Content, Web-Health, Web-Http-Logging, Web-Log-Libraries, Web-Request-Monitor, Web-Performance, Web-Stat-Compression, Web-Security, Web-Filtering, Web-Windows-Auth, Web-App-Dev, Web-Net-Ext45, Web-Asp-Net45, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Mgmt-Tools, Web-Mgmt-Console, Web-Mgmt-Compat, Web-Metabase, NET-Framework-45-Features, NET-Framework-45-Core, NET-Framework-45-ASPNET, NET-WCF-Services45, NET-WCF-HTTP-Activation45, NET-WCF-TCP-PortSharing45, WAS, WAS-Process-Model, WAS-Config-APIs, web-asp-net -restart

You can also find this in Microsoft’s TechNet Gallery, HERE.

What’s new in System Center Operations Manager (SCOM) 2019?

When it comes to monitoring your on-premises datacenter, System Center Operations Manager (SCOM) is still the tool of choice. The System Center stack has been the Microsoft go-to toolset for datacenter management for decades. System Center 2019 is expected to be made Generally Available (GA) in the next few weeks, as Q1 comes to an end.

It is also worth mentioning, SCOM and the entire System Center 2019 stack will be following the Long-Term Servicing Channel (LTSC) model.

Some of the key features that will be highlighted with the release of SCOM 2019 are below.

  • Improved Azure Management Pack (faster and easier to manage)
  • Improved HTML5 dashboards – the new web console has no dependencies on Silverlight and is fully HTML5
  • Azure Service Map Integration
  • Enhanced notifications and alert management – Rich HTML notifications are now default
  • Customize and Preview HTML notification content
  • Email notification improvements, such as the ability to add operators (‘OR’ and ‘EXCLUDE’) to the criteria builder
  • Backend support for SQL Server 2017, with Silverlight dependencies removed
  • Enhanced Agentless alerting during failover scenarios
  • Enhanced certificate validation for Web Application monitoring
  • Application Performance Monitoring (APM) support for CSM (Client Side Monitoring) for Edge and Chrome
  • Support for OpenSSL 1.1.0 for Linux platforms
  • Kerberos support for Linux agent
  • Linux log file monitoring (any custom data source, e.g. Docker/Kubernetes container monitoring)

Deploy an Azure Cloud Witness for your Failover Cluster Quorum for Windows Server 2016 & 2019 with PowerShell

For the longest time, when deploying a cluster with Windows Server, you only had two options:

  1. Using a dedicated disk for the quorum, or
  2. Configuring an SMB file-share as the quorum witness

With Server 2016 and 2019, there is now a third option: a Cloud Witness. The Cloud Witness leverages Azure Blob storage to provide that additional cluster/quorum vote.

Before showing you how this is done, one should understand what the purpose of a witness/quorum is with respect to a failover cluster.

When one or more members of a cluster stop reporting to the other cluster members, there is a vote. The vote ensures that there is no split decision and that the cluster has a true owner. For example, in a two-node cluster, if each node believes it is the owner, this causes a “split-brain”: neither node will ever agree on which one is the owner. This is where a quorum witness comes in, providing the third vote, i.e. a majority, which ensures the cluster has a true owner. Each member gets a vote, plus one for the witness; with two nodes and a witness there are three votes in total, so whichever partition holds two of the three votes owns the cluster.

Why does this matter? In the event there is no quorum, a node can be evicted from the cluster and, as a result, will suspend all application services to prevent data corruption caused by more than one system writing data without the cluster service coordinating writes and access. Depending on policies, VMs running on the ejected cluster member will either suspend operations or be migrated to other nodes before the member is ejected.

Below is a step-by-step guide on how to configure the Azure Blob storage as the Cloud Witness.

Assumptions:

  • The Azure Blob storage account has already been created,
  • The cluster with at least 2 nodes already exists.

Launch the PowerShell console as Administrator, and execute the following cmdlet:

Set-ClusterQuorum -CloudWitness -AccountName "storage_account_name" -AccessKey "primary_access_key"
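To double-check from PowerShell before opening the console, something like this works (the witness resource is created with the name 'Cloud Witness' by default):

# Confirm the cluster quorum is now using the Cloud Witness resource
Get-ClusterQuorum | Format-List Cluster, QuorumResource
Get-ClusterResource -Name 'Cloud Witness'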

Now, if we go back to the Failover Manager console, we can see that we have successfully configured the cluster with a Cloud Witness.

In conclusion, deploying a Cloud Witness for a Failover Cluster is very simple, and in the case of a power outage in one datacenter, maintenance on a node, etc., the cluster and its members (nodes) still have a tie-breaking vote available. Not only is it recommended (and effectively a requirement) for 2-node clusters; for any number of nodes, having a quorum witness is key to ensuring high availability. As mentioned, there are the traditional options, such as using a dedicated disk or a file share (SMB) as the cluster witness. However, with Azure Blob storage and its up to sixteen nines of durability, we can be confident the quorum witness is online and available.

Deploy an Azure Cloud Witness for your Failover Cluster Quorum for Windows Server 2016 & 2019

For the longest time, when deploying a cluster with Windows Server, you only had two options:

  1. Using a dedicated disk for the quorum, or
  2. Configuring an SMB file-share as the quorum witness

With Server 2016 and 2019, there is now a third option: a Cloud Witness. The Cloud Witness leverages Azure Blob storage to provide that additional cluster/quorum vote.

Before showing you how this is done, one should understand what the purpose of a witness/quorum is with respect to a failover cluster.

When one or more members of a cluster stop reporting to the other cluster members, there is a vote. The vote ensures that there is no split decision and that the cluster has a true owner. For example, in a two-node cluster, if each node believes it is the owner, this causes a “split-brain”: neither node will ever agree on which one is the owner. This is where a quorum witness comes in, providing the third vote, i.e. a majority, which ensures the cluster has a true owner. Each member gets a vote, plus one for the witness; with two nodes and a witness there are three votes in total, so whichever partition holds two of the three votes owns the cluster.

Why does this matter? In the event there is no quorum, a node can be evicted from the cluster and, as a result, will suspend all application services to prevent data corruption caused by more than one system writing data without the cluster service coordinating writes and access. Depending on policies, VMs running on the ejected cluster member will either suspend operations or be migrated to other nodes before the member is ejected.

Below is a step-by-step guide on how to configure the Azure Blob storage as the Cloud Witness.

Assumptions:

  • The Azure Blob storage account has already been created,
  • The cluster with at least 2 nodes already exists.

Launch the Failover Manager from within Windows Server Manager, connect to the cluster, and do the following: right-click the cluster object and select More Actions > Configure Cluster Quorum Settings…

Next, select the Advanced quorum configuration option.

Ensure we have all the nodes selected, as seen below.

Next, select Configure a cloud witness:

Now we need to get our Azure Blob storage account name, and its primary account key. This can be retrieved from the Azure portal.

Now validate the settings and complete the configuration.

Now, if we go back to the Failover Manager console, we can see that we have successfully configured the cluster with a Cloud Witness.

In conclusion, deploying a Cloud Witness for a Failover Cluster is very simple, and in the case of a power outage in one datacenter, maintenance on a node, etc., the cluster and its members (nodes) still have a tie-breaking vote available. Not only is it recommended (and effectively a requirement) for 2-node clusters; for any number of nodes, having a quorum witness is key to ensuring high availability. As mentioned, there are the traditional options, such as using a dedicated disk or a file share (SMB) as the cluster witness. However, with Azure Blob storage and its up to sixteen nines of durability, we can be confident the quorum witness is online and available.