An excellent blog post on how to deploy Azure Stack TP3 is out. For details, please visit Greg Schulz's blog post HERE.
Recently I came across an environment where Exchange was being migrated to Office 365. As you may know, DirSync is no longer supported for Exchange/O365 migrations and Microsoft recommends you now use Azure AD Connect.
With that said, in a recent PoC environment using Azure AD Connect, the utility was never uninstalled from the domain controller it was running on, and that VM was deleted shortly after. As a result, the O365 admins are now being reminded daily that their AD sync has failed to connect.
As of today, there is no way to disable Azure AD Connect via the Azure Resource Manager (ARM) portal, but it can be done with some PowerShell. If you take a look at the ARM portal, there is currently no option to disable directory synchronization.
First, you will need to install the Azure Active Directory Module for Windows PowerShell; the download for that can be found HERE. This will provide the PowerShell cmdlets needed to run the code below. No, AzureADPreview V2 will not work (yet…).
Once installed, launch the PowerShell console; we will need to connect to Azure AD and set directory synchronization to false. Below are the commands you will need to get this done. Note, you will need an Azure global admin account with the *@*.onmicrosoft.com domain to successfully sign into Azure AD via PowerShell.
#specify credentials for azure ad connect
$MsolCred = Get-Credential

#connect to azure ad
Connect-MsolService -Credential $MsolCred
#disable AD Connect / Dir Sync
Set-MsolDirSyncEnabled -EnableDirSync $false
#confirm AD Connect / Dir Sync disabled
(Get-MsolCompanyInformation).DirectorySynchronizationEnabled
If you choose to re-enable the AD Connect, just change the flag to TRUE.
Set-MsolDirSyncEnabled -EnableDirSync $true
Once complete, we can verify in ARM that directory synchronization has been disabled.
For more on Azure AD PowerShell cmdlets, visit the following page, HERE.
In this post, I am going to show how to upload a custom image from Windows Hyper-V (2016) to the Azure cloud, using a combination of the Hyper-V UI and PowerShell against Azure Resource Manager (ARM). I will be working with Hyper-V 2016 and a custom image of Windows Server 2008 R2 SP1.
Okay, let’s get started.
Prepare On-Premises Virtual Machine Image
First, we need an image to work with. As mentioned, I am using a Windows Server 2008 R2 SP1 (yes, 2008 — needed it for a customer). The VM is Generation 1, which is not only a requirement for Windows 2008, but also a requirement for Azure, as it currently does not support Generation 2 VMs. See HERE to read more on preparing a Windows VHD.
Next, we need to install the Hyper-V role on the VM. Since this is a nested VM, we will first need to enable nested virtualization on the Hyper-V 2016 host. See a previous post on how to go about this HERE. Once that is complete, go ahead and install the Hyper-V role.
Next, we need to SysPrep our VM. From an administrative command prompt, navigate to %windir%\system32\sysprep and then execute "sysprep.exe". Here, we will be using OOBE and enabling "Generalize", and also shutting down the VM once SysPrep completes.
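For reference, the same options can be run non-interactively using the standard Sysprep switches for OOBE, generalize, and shutdown (a sketch; run from an elevated prompt):

```powershell
# Generalize the image for OOBE and shut the VM down when finished
Set-Location "$env:windir\System32\Sysprep"
.\sysprep.exe /oobe /generalize /shutdown
```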
Once the VM is SysPrep'ed, we need to compact the VHDX (remember, Hyper-V 2016 here) and also convert the VHDX to a VHD. This is due to a current limitation of Azure, as it only supports Gen1 VMs and VHDs.
Go into Hyper-V and, within the VM properties, edit the virtual hard disk; we will need to compact it. Go ahead and do that.
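If you would rather skip the wizard, the compact step can also be done with the Hyper-V PowerShell module; a sketch (the path is a placeholder, and the disk must be detached or mounted read-only):

```powershell
# Compact the dynamic VHDX to reclaim unused space
Optimize-VHD -Path "<source VHDX path>" -Mode Full -Verbose
```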
Great, now we need to convert the VHDx to a VHD. Time for PowerShell!
Convert-VHD -Path "<source VHDX path>" -DestinationPath "<destination VHD path>" -VHDType Fixed -Verbose
Let this run (I let it go overnight… it was getting late =) )
Great, now we are ready to move on to Azure and more PowerShell.
Build Azure Container and Upload Image to Azure
First, we need to download and install the latest AzureRM module locally on the Hyper-V box (if you have already done this, jump down a few lines…)
Install-Module AzureRM -Force
Next, since there was a recent update to the AzureRm module, I now need to update the module path location.
$env:PSModulePath = $env:PSModulePath + ";C:\Program Files\WindowsPowerShell\Modules"
Next, we will need to import the AzureRm module.
Import-Module AzureRM -Force
Next, we'll need to log into our Azure account and specify the subscription we want to work with. In my case, there are multiple Azure subscriptions tied to my email.
Login-AzureRmAccount
Get-AzureRmSubscription

#select the subscription you will be working with -- if you only have one, you can skip this line
Select-AzureRmSubscription -SubscriptionId "<ID>"
Next, we will create a resource group and a storage account, and bind the account to the group.
New-AzureRmResourceGroup -Name "ResourceGroupName" -Location "Canada East"
New-AzureRmStorageAccount -ResourceGroupName "ResourceGroupName" -Name "StorageAccountName" -Location "Canada East" -SkuName "Standard_LRS" -Kind "Storage"
If you want to change the storage type to, let's say, geo-redundant, here are the other types of storage. Valid values for -SkuName are:
- Standard_LRS – Locally redundant storage.
- Standard_ZRS – Zone redundant storage.
- Standard_GRS – Geo redundant storage.
- Standard_RAGRS – Read access geo redundant storage.
- Premium_LRS – Premium locally redundant storage.
Now, we need to create a container and grab the URL needed to upload our image. I did this through the Azure Resource Manager (ARM) portal, since I couldn't figure out the PowerShell cmdlet (Get-AzureStorageBlob). If you can get it to work, please let me know!
You can get the URL from the Web UI when you go into the Storage Account >> Blobs >> Container (in my case, I called it “VHD”) >> Properties.
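For what it's worth, here is a hedged sketch of doing the same with the Azure.Storage cmdlets rather than the portal. The account and container names are placeholders, and the key property layout varies between AzureRM versions (older releases expose .Key1 instead of [0].Value):

```powershell
# Build a storage context from the account key
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName "ResourceGroupName" -Name "storageaccountname")[0].Value
$ctx = New-AzureStorageContext -StorageAccountName "storageaccountname" -StorageAccountKey $key

# Create the container and compose the destination blob URL for the upload
New-AzureStorageContainer -Name "vhd" -Permission Off -Context $ctx
$AzureVHDURL = $ctx.BlobEndPoint + "vhd/WS2008R2.vhd"
```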
Now we are ready to upload our image/VHD to Azure! For me this took about 2 hours, uploading an 80GB file at 9-10MB/s.
$rgName = "ResourceGroupName"
$AzureVHDURL = "URL"
$LocalVHDPath = "LocalPathtoVHD"

Add-AzureRmVhd -ResourceGroupName $rgName -Destination $AzureVHDURL -LocalFilePath $LocalVHDPath
Great, now we just need to register the VHD as an image, and we can begin creating machines based off our image that is now in the cloud! That will be another post! 🙂
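As a teaser for that post, a minimal sketch of registering the uploaded VHD as a managed image with the AzureRM.Compute cmdlets (requires a recent AzureRM; the image name and location are assumptions):

```powershell
# Point an image config at the generalized VHD we just uploaded
$imageConfig = New-AzureRmImageConfig -Location "Canada East"
$imageConfig = Set-AzureRmImageOsDisk -Image $imageConfig -OsType Windows -OsState Generalized -BlobUri $AzureVHDURL

# Register the image; new VMs can then be created from it
New-AzureRmImage -ImageName "WS2008R2-Custom" -ResourceGroupName $rgName -Image $imageConfig
```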
Not too long ago, the OMS team introduced the Update Management solution. This solution has definitely made the patch management process a lot easier; at the same time, however, it has raised some questions, such as:
- What’s the future of SCCM (Configuration Manager) with OMS now deploying patches?
- Can this be used concurrently/dual-homed with SCCM for on-premises environments?
- Is OMS essentially System Center in the cloud?
- Is OMS the future?
For the most part, SCCM will still be required for on-premises environments when it comes to application deployments, computer/server images, granularity with patch/hotfix selection, etc. Microsoft has explicitly stated that SCCM-managed machines cannot be tagged to OMS for patching, so at the current time OMS and SCCM cannot/will not work together, whereas OMS and SCOM work hand-in-hand (for now?).
It also seems OMS is slowly becoming System Center in the cloud. It has absorbed the monitoring capabilities from SCOM, and now the patch management process from WSUS and SCCM.
So, is OMS the future? In my opinion, no, it is not the future: it is very much the present! I think OMS/Azure will soon welcome the demise of System Center.
Getting started with OMS Update Management is very easy. For starters, you will need the following:
- OMS Workspace
- Update Management Solution added to OMS
- Automation account (create in Azure first)
- Machines to manage
Once we have taken care of these steps, the rest is pretty easy.
Clicking on the Update Management tile, here is an overview.
Update Management Overview
As we can see, I have a few machines that need some updates. I haven't introduced a Linux machine to OMS yet (I had to create a new workspace recently… maybe I will get to that in another blog post; until then…)
If we move over to the right, and click on Manage Update Deployments, we can then see our current configuration and schedules, and create new ones.
Create an Update Deployment
Creating a deployment schedule is very, very easy!
Name your deployment, and select some computers to manage in this group.
Notice the big red box: we cannot couple machines with OMS Update Management and System Center Configuration Manager!
Next, create a schedule, for example deploy patches on the Last Sunday of the month, and run the schedule for 5 hours (300 minutes). If a machine is unable to complete the patching cycle within the 300 minutes, it will resume with patches at the next scheduled deployment.
We can also choose to run this schedule one time, or recur weekly. Once happy with the schedule and computers, hit Save.
Once saved, we can now see our Scheduled Update Deployment.
Post Update Deployment
Once the Update Deployment has executed, let’s take a quick look at the updates that were applied.
If we go back to OMS and select the results, we can see what patches were applied, what failed, etc.
We can also expand further, by seeing the KB for the patch, and the exact patch that was applied, or failed, or missed, etc.
If we go back to the Update Deployments, we can see our deployment ran as planned, and patched the 4 servers in the group (one of the five machines was offline).
Now we should have a better/happier Update Management Overview:
Here’s the title from OMS (All Solutions) view:
So how exactly did this all happen?
How Does the Update Management Solution Work?
The short answer is, “I don’t know.” 🙂
I say this because the solution clearly adds two Runbooks, yet neither of them is accessible or even visible. If you dig a bit into the Job details, you can see some information, but overall the guts of this solution are pretty well hidden.
Go into your Azure portal (Azure Resource Manager, ARM) and find the Automation Account you created for this solution to work. I called mine "OMSUpdateMgmt".
As you can see, there are 0 (Zero) Runbooks… and we now have 7 Hybrid Worker Groups.
If we click on the Jobs tile, we can see 4 jobs completed (I had 5 schedules/deployments; I cancelled 1).
If we click on this tile, we can get some more information:
Ah ha!! There are the Runbooks: "Patch-MicrosoftOMSComputer" and "Patch-MicrosoftOMSComputers". If we select one of the jobs, we can get some more detail.
We can now see the Runbook executed against my SQL server.
If we click on Input, we can see the Runbook, “Patch-MicrosoftOMSComputer” has 6 Input parameters:
If we go back to the OMSUpdateMgmt Automation Account overview, and select Hybrid Worker Groups, we can get some more insight. 😉
Nice, the Automation Account found all the machines in my workspace, and made them each a worker.
Pros:
- Very easy to create collections/groups (similar approach to SCCM/WSUS)
- Simple setup for multiple patch cycles
- Able to patch Nano machines as well
- Can leverage OMS Alerting for missed patches
- Implementation could not be easier
- Works for both Windows Server and Client OS
Cons:
- Cannot approve/decline specific patches/fixes like in SCCM/WSUS
- Cannot use SCCM in parallel
- Computers must be Windows Server 2012 or higher
- No roll-back of patches
- No inventory of patches/hotfixes/drivers to be deployed
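Tying into the alerting point above, in the legacy OMS log search a query along these lines surfaces machines still missing updates (the field names are my recollection of the Update solution schema at the time, so treat them as assumptions):

```
Type=Update UpdateState=Needed
```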
The Update Management solution for OMS is great! It makes the entire process a lot easier and cleaner (IMO), especially since I am not an SCCM expert. You can set up collections just like we could in WSUS and SCCM. However, there are some negative takeaways (see above). Of course, like most OMS solutions, updates are pushed out regularly, so I am sure the Update Management solution will get some tweaks here and there as time goes on.
For more information on the OMS Update Management Solution, please visit the following: https://docs.microsoft.com/en-us/azure/operations-management-suite/oms-solution-update-management
Earlier in 2016, Microsoft increased the number of Canadian data centers to two: Canada East and Canada Central. With most of my customers being within Canada, naturally they want their Azure Backup data stored within the Canadian data centers/regions, which makes sense for many (legal) reasons. The only problem is, Azure Backup is still limited to specific locations (see chart below).
Fellow Canadian and MVP Stéphane Lapointe was able to get this working with some PowerShell magic; please visit his blog for more details on his workaround. The PowerShell code below binds Azure Backup services to the Canadian regions/data centers, specifically Canada Central (note, this is still in Preview), until Microsoft officially makes all Monitoring/ASR services (along with others) generally available. This will allow you to create new Azure Backup services bound to Canada Central. For more information on this announcement and the code details, please visit Microsoft's announcement.
Also worth noting, this will only allow you to use the Canada Central region for new setups/configurations. It will not change existing setups to Canada Central.
Execute the following code on your machine (Run As Administrator…)
Import-Module AzureRM -Force

#azure account login
$username = ""
$password = Read-Host -Prompt "Password" -AsSecureString
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
Login-AzureRmAccount -Credential $cred

$SubscriptionName = 'Visual Studio Enterprise'

#update recovery services to Canada Central from whatever region it may be (US East, US Central, etc.)
$ErrorActionPreference = 'Stop'
Get-AzureRmSubscription -SubscriptionName $SubscriptionName | Select-AzureRmSubscription
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.RecoveryServices
Register-AzureRmProviderFeature -FeatureName RecoveryServicesCanada -ProviderNamespace Microsoft.RecoveryServices
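Feature registration is asynchronous; you can poll its state with the AzureRM.Resources cmdlets:

```powershell
# RegistrationState should eventually read "Registered"
Get-AzureRmProviderFeature -ProviderNamespace Microsoft.RecoveryServices -FeatureName RecoveryServicesCanada
```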
After about 5 minutes, I re-ran the feature query, and the Recovery Services were registered to Canada! Sweet, eh? 🙂
Now you can create new Azure Backup services bound to the Canada Central region:
If you're like me, you have probably banged your head against the wall a few times with the Login-AzureRmAccount cmdlet. I reached out to the Azure development team: not only is this a known issue, but there is currently no solution at this time… Hmm.
Here is a bit of the background story, followed with the problem and solution to the issue.
I was using PowerShell to script an automated login to Azure and to start (and shut down) virtual machines (yes, OMS Automation could help solve this, but in this scenario my customer is currently not on board with OMS). At any rate, the script is designed to capture some data on an on-premises server; if a threshold is breached, it begins starting resources in Azure, and likewise shuts those same resources down when the threshold falls back.
Running the following code, I kept getting a null entry for SubscriptionId and SubscriptionName, even though the user I created is a co-administrator with access to all the necessary resources. Assuming the login did work and that data isn't needed, when I try to start my Azure VM I get an Azure subscription error. So, let me check the subscription details. Well, there we go, I get the following response: "WARNING: Unable to acquire token for tenant 'Common'"… So what gives?
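For context, the relevant portion of the script looked roughly like this (a reconstruction, not the original code):

```powershell
# Non-interactive login with a stored credential for the test user
$cred = Get-Credential
Login-AzureRmAccount -Credential $cred

# SubscriptionId and SubscriptionName came back null for this account
Get-AzureRmSubscription
```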
I check and confirm the test-user is in-fact an administrator in ARM (Azure Resource Manager):
Turns out, the user account not only needs to be created and added to the resources within Azure Resource Manager (ARM), but also needs to be assigned as an administrator within the Azure Classic portal.
Once the test user was added to the Classic portal administrators and set as a co-administrator, I could see the SubscriptionId and SubscriptionName info populate, and Get-AzureRmSubscription returned proper details. Yay! (I still get that tenant 'Common' warning, however…)
Now I can go ahead with my script!
I hope this helps you as much as it helped me.
In a past series of blog posts focusing on Azure Site Recovery (ASR), we set up and configured ASR for various deployments:
- ASR for Hyper-V 2016 On-Premises
- ASR for Hyper-V 2012R2 On-Premises
- ASR for VMs being hosted within Azure
- ASR for VMs being hosted On-Premises
- ASR for an ESXi hosted On-Premises, coming soon…
In this post, we can now track the charges accrued by our VMs and ASR. Azure’s Billing (currently in Preview) breaks down the costs per resource group (RG), and components within that RG.
For starters, you get the following notification pop-up in the upper right corner of your Azure portal:
If you go into your Billing via Marketplace, you can get a complete breakdown of the costs you are piling up by using various services such as ASR.
You can also drill down by viewing the Burn rate, which breaks down the costs per service/resource.