Tag: PowerShell

How To Disable Azure AD Connect via PowerShell

Recently I came across an environment where Exchange was being migrated to Office 365. As you may know, DirSync is no longer supported for Exchange/O365 migrations and Microsoft recommends you now use Azure AD Connect.

With that said, in a recent PoC environment using Azure AD Connect, the Azure AD Connect utility was never uninstalled from the domain controller it was running on, and the VM was deleted shortly after. As a result, the O365 admins are now reminded daily that their AD sync has failed to connect.

As of today, there is no option in the Azure Resource Manager (ARM) portal to disable directory synchronization, but it can be done with some PowerShell.

First, you will need to install the Azure Active Directory PowerShell module; the download for that can be found HERE. This will provide the PowerShell cmdlets needed to run the code below. No, AzureADPreview V2 will not work (yet…).

Once installed, launch the PowerShell console; we will connect to Azure AD and set the directory sync flag to false. Below are the commands you will need to get this done. Note: you will need an Azure global admin account on the *@*.onmicrosoft.com domain to successfully sign into Azure AD via PowerShell.

#specify credentials for azure ad connect
$MsolCred = Get-Credential
#connect to azure ad
Connect-MsolService -Credential $MsolCred
#disable AD Connect / Dir Sync
Set-MsolDirSyncEnabled -EnableDirSync $false
#confirm AD Connect / Dir Sync disabled
(Get-MsolCompanyInformation).DirectorySynchronizationEnabled

If you choose to re-enable AD Connect later, just change the flag to $true.

Set-MsolDirSyncEnabled -EnableDirSync $true

Once complete, we can verify in ARM that directory sync has been disabled.

For more on Azure AD PowerShell cmdlets, visit the following page, HERE.

Transfer Active Directory FSMO Roles via PowerShell

Sometimes a domain controller (DC) just needs to be decommissioned, whether for an upgrade or because a corrupted VM forced the roles to be seized. Whatever the reason, moving the FSMO (Flexible Single Master Operation) roles can be done via the UI, but if you want to speed things up and do it with PowerShell, here is how.

In my scenario, I am decommissioning my Hyper-V server which at the time was acting as the primary DC. Now that it is being decomm’ed I need to transfer the FSMO roles to another DC. The destination DC is “DC01” in this case.

Move-ADDirectoryServerOperationMasterRole -Identity "DESTINATION DC" -OperationMasterRole 0,1,2,3,4

You have the option here of specifying a numerical value or the role name itself. See below for the number associated with each role. You can input each role name or, as I did, just the number(s); a sketch using the role names follows the list.

  • PDCEmulator or 0
  • RIDMaster or 1
  • InfrastructureMaster or 2
  • SchemaMaster or 3
  • DomainNamingMaster or 4
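
For example, the same transfer using the role names instead of the numbers (and "DC01" as the destination, as in my scenario) would look something like this:

#transfer all five FSMO roles by name; append -Confirm:$false if you don't want to be prompted for each role
Move-ADDirectoryServerOperationMasterRole -Identity "DC01" -OperationMasterRole PDCEmulator,RIDMaster,InfrastructureMaster,SchemaMaster,DomainNamingMaster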

To verify the FSMO roles have been transferred, run the netdom query fsmo command.

netdom query fsmo
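
If you would rather stay in PowerShell for the verification as well, the ActiveDirectory module exposes the same information (just an alternative check, not required):

#domain-wide roles
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster
#forest-wide roles
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster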

How to upload Custom Images to Microsoft Azure using PowerShell

In this post, I am going to show how to upload a custom image from Windows Hyper-V (2016) to the Azure cloud, using a combination of the Hyper-V UI and PowerShell against Azure Resource Manager (ARM). The custom image is Windows Server 2008 R2 SP1.

Okay, let’s get started.

Prepare On-Premises Virtual Machine Image

First, we need an image to work with. As mentioned, I am using a Windows Server 2008 R2 SP1 image (yes, 2008; a customer needed it). The VM is Generation 1, which is not only a requirement for Windows 2008 but also a requirement for Azure, as it currently does not support Generation 2 VMs. See HERE to read more on preparing a Windows VHD.

Next, we need to install the Hyper-V role on the VM. Since this is a nested VM, we will first need to enable nested virtualization on the Hyper-V 2016 box; see a previous post on how to go about this HERE. Once that is complete, go ahead and install the Hyper-V role.

Next, we need to SysPrep our VM. From an administrative command prompt, navigate to %windir%\system32\sysprep and execute “sysprep.exe”. Here, we will select OOBE, enable “Generalize”, and choose “Shutdown” so the VM powers off once SysPrep completes.
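
If you would rather skip the GUI, my understanding is that the same OOBE/Generalize/Shutdown selections can be passed as switches in one line:

#run sysprep with the same options selected above: OOBE, Generalize, Shutdown
& "$env:windir\System32\Sysprep\sysprep.exe" /oobe /generalize /shutdown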

Once the VM is SysPrep’ed, we need to compact the VHDX (remember, Hyper-V 2016 here) and also convert the VHDX to a VHD. This is due to a current limitation of Azure, which only supports Gen1 VMs and VHDs.

Go into Hyper-V and, within the VM properties, edit the virtual hard disk and choose to compact it. Go ahead and do that.
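
If you prefer to do the compaction in PowerShell too, Optimize-VHD should do the same job (a sketch; the path is a placeholder and the disk must be detached from the VM first):

#compact the dynamically expanding disk before conversion (placeholder path)
Optimize-VHD -Path "<source VHDX path>" -Mode Full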

Great, now we need to convert the VHDx to a VHD. Time for PowerShell!

Convert-VHD -Path "<source VHDX path>" -DestinationPath "<destination VHD path>" -VHDType Fixed -Verbose


Let this run (I let it go overnight.. it was getting late =) )

Great, now we are ready to move on to Azure and more PowerShell.

Build Azure Container and Upload Image to Azure

First, we need to download and install the latest AzureRM module locally on the Hyper-V box (if you have already done this, jump down a few lines…).

Install-Module AzureRM -Force

Next, since there was a recent update to the AzureRm module, I now need to update the module path location.

$env:PSModulePath = $env:PSModulePath + ";C:\Program Files\WindowsPowerShell\Modules"

Next, we will need to import the AzureRm module.

Import-Module AzureRM -Force

Next, we’ll need to log in to our Azure account and specify the subscription we want to work with. In my case, there are multiple Azure subscriptions tied to my email.

Login-AzureRmAccount
Get-AzureRmSubscription
#select the subscription you will be working with -- if you only have one, you can skip this line
Select-AzureRmSubscription -SubscriptionId "<ID>"

Next, we will create a resource group and a storage account, and bind the account to the group.

New-AzureRmResourceGroup -Name "ResourceGroupName" -Location "Canada East"
New-AzureRmStorageAccount -ResourceGroupName "ResourceGroupName" -Name "StorageAccountName" -Location "Canada East" -SkuName "Standard_LRS" -Kind "Storage"

If you want to change the storage type to, let’s say, geo-redundant, here are the other storage types:

Valid values for -SkuName are:

  • Standard_LRS – Locally redundant storage.
  • Standard_ZRS – Zone redundant storage.
  • Standard_GRS – Geo redundant storage.
  • Standard_RAGRS – Read access geo redundant storage.
  • Premium_LRS – Premium locally redundant storage.

Now, we need to create a Container and grab the URL needed to upload our image. I did this through the Azure Resource Manager (ARM) Portal since I couldn’t figure out the PowerShell cmdlet (Get-AzureStorageBlob) — if you can get this to work, please let me know!
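
That said, here is roughly how I believe the container and destination URL could be built in PowerShell; treat this as an untested sketch with placeholder names:

#grab a key for the storage account created earlier (the shape of the output varies between AzureRM versions)
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName "ResourceGroupName" -Name "StorageAccountName")[0].Value
#build a storage context and create the container
$ctx = New-AzureStorageContext -StorageAccountName "StorageAccountName" -StorageAccountKey $key
New-AzureStorageContainer -Name "vhd" -Permission Off -Context $ctx
#destination URL for the upload (blob name is a placeholder)
$AzureVHDURL = "https://StorageAccountName.blob.core.windows.net/vhd/MyImage.vhd"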

You can get the URL from the Web UI when you go into the Storage Account >> Blobs >> Container (in my case, I called it “VHD”) >> Properties.

Now we are ready to upload our image/VHD to Azure! For me this took about 2 hours, uploading an 80GB file at 9-10 MB/s.

$rgName = "ResourceGroupName"
$AzureVHDURL = "URL"
$LocalVHDPath = "LocalPathtoVHD"
Add-AzureRmVhd -ResourceGroupName $rgName -Destination $AzureVHDURL -LocalFilePath $LocalVHDPath
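
If the upload is crawling, Add-AzureRmVhd also accepts a -NumberOfUploaderThreads parameter that may speed things up (I have not benchmarked this myself; the value below is just an example):

#same upload, with more parallel uploader threads
Add-AzureRmVhd -ResourceGroupName $rgName -Destination $AzureVHDURL -LocalFilePath $LocalVHDPath -NumberOfUploaderThreads 8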

Great, now we just need to register the VHD disk to the Gallery, and we can begin creating machines based off our image that is now in the cloud! — Another post! 🙂

How to Enable Nested Virtualization on Hyper-V Windows Server 2016

I figured this post may be useful if you’re like me and testing out Azure Stack. If you are unfamiliar with Azure Stack, in short, it allows organizations to run Azure (cloud) services in their own environment/datacenter. Here is a LINK for more information on Azure Stack. Azure Stack is currently at the TP2 (Technical Preview 2) stage, and this is the version I will be deploying and testing.

Anyways..

Before getting started with Azure Stack, your physical Windows Server 2016 box must have Nested Virtualization enabled.

First things first, the VM will need to have the following (a PowerShell sketch for the first three settings follows the list):

  • Dynamic Memory disabled and a minimum of 96GB of memory

  • VM will need to have at least 1 vCPU. I gave it 16 as per system/hardware recommendations.

  • MAC address spoofing must be enabled.

  • Lastly, Virtualization Extensions need to be enabled/set to $true.
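
For reference, the first three settings can also be applied from PowerShell on the Hyper-V host; a sketch using a placeholder VM name and the values from my setup:

#disable dynamic memory and assign 96GB of startup memory (VM must be powered off)
Set-VMMemory -VMName "VMName" -DynamicMemoryEnabled $false -StartupBytes 96GB
#give the VM 16 vCPUs
Set-VMProcessor -VMName "VMName" -Count 16
#enable MAC address spoofing on the VM's network adapter
Set-VMNetworkAdapter -VMName "VMName" -MacAddressSpoofing On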

With the following PowerShell code, we can get the current value and then change it. By default, this setting is disabled.

Get-VMProcessor -VMName VMName | FL *
Set-VMProcessor -VMName VMName -ExposeVirtualizationExtensions $true

Re-run the first command to confirm the change.

Now we are ready to move forward with the Azure Stack install!

Installing SCOM 2016 License Key

Launch the PowerShell console (Run as Administrator):

Import-Module OperationsManager
Set-SCOMLicense -ProductId "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
Start-Sleep -s 10
Restart-Service healthservice, omsdk, cshost

Don’t forget, in order for the Product Key to be applied, you will need to restart the SCOM services listed below (or just run the code above, which waits 10 seconds after the key is applied and then restarts them). A quick verification snippet follows the list.

  • Microsoft Monitoring Agent (healthservice)
  • System Center Data Access Service (OMSDK)
  • System Center Management Configuration (cshost)
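
To double-check that the key took, you can query the management group once the services are back up; as I understand it, SkuForLicense should read "Retail" instead of "Eval":

#confirm the license after the services restart
Get-SCOMManagementGroup | Format-Table SkuForLicense, Version, TimeOfExpiration -AutoSize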

 

Cheers!

How to enable Azure Backup to Canada (Central)

Earlier in 2016, Microsoft increased the number of Canadian data centers to two: Canada East and Canada Central. With most of my customers being within Canada, naturally they want their Azure Backup data stored within the Canadian data centers/regions, which makes sense for many (legal) reasons. The only problem is, Azure Backup is still limited to specific locations.

Fellow Canadian and MVP Stéphane Lapointe was able to get this working with some PowerShell magic; please visit his blog for more details on his workaround. The PowerShell code below is a workaround to get Azure Backup services bound to the Canadian regions/data centers, specifically Canada Central (note, this is still in a Preview state), until Microsoft officially makes all Monitoring/ASR services (along with others) generally available there. This will allow you to create new Azure Backup services and bind them to Canada Central. For more information on this announcement and the code details, please visit Microsoft’s announcement.

Also worth noting: this will only allow you to use the Canada Central region for new setups/configurations. It will not move existing setups to Canada Central.

Execute the following code on your machine (Run As Administrator…)

Import-Module AzureRM -Force 

#azure account login stuff
$username = ""
#prompt for the account password as a secure string
$password = Read-Host -Prompt "Password" -AsSecureString
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
Login-AzureRmAccount -Credential $cred
$SubscriptionName = 'Visual Studio Enterprise'

#update recovery services to Canada Central from whatever region it may be (US East, US Central, etc.)
$ErrorActionPreference = 'Stop'
Get-AzureRmSubscription -SubscriptionName $SubscriptionName | Select-AzureRmSubscription
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.RecoveryServices
Register-AzureRmProviderFeature -FeatureName RecoveryServicesCanada -ProviderNamespace Microsoft.RecoveryServices
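
Registration takes a few minutes to complete; to check on its status, something along these lines should work:

#check whether the RecoveryServicesCanada feature has finished registering
Get-AzureRmProviderFeature -ProviderNamespace Microsoft.RecoveryServices -FeatureName RecoveryServicesCanada
#and confirm the provider itself is registered
Get-AzureRmResourceProvider -ProviderNamespace Microsoft.RecoveryServices | Select-Object ProviderNamespace, RegistrationState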

After about 5 minutes, I re-ran the query, and the Recovery Services were registered to Canada! Sweet..eh? 🙂

Now you can create new Azure Backup services bound to the Canada Central region:

Issues with Azure Active Directory and Login-AzureRmAccount

If you’re like me, you have probably banged your head against the wall a few times with the Login-AzureRmAccount cmdlet… I reached out to the Azure development team, and not only is this a known issue, but there is currently no fix at this time…. Hmm.

Here is a bit of the background story, followed by the problem and the solution.

Background:

I am using PowerShell to script an automatic login to Azure and to start (and shut down) virtual machines (yes, OMS Automation could help solve this, but in this scenario my customer is currently not on board with OMS). At any rate, the script captures some data on an on-premises server; if a threshold is breached, it starts resources in Azure, and if the value falls back below the threshold, it shuts those same resources down.
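
Stripped down to the essentials, the pattern looks something like the sketch below (placeholder resource names; this is not the customer's actual script):

#log in with a stored credential (in practice this would come from a secure store, not an interactive prompt)
$cred = Get-Credential
Login-AzureRmAccount -Credential $cred
#start the VM when the on-premises threshold is breached...
Start-AzureRmVM -ResourceGroupName "MyResourceGroup" -Name "MyAzureVM"
#...and shut it down (deallocate) once things settle back down
Stop-AzureRmVM -ResourceGroupName "MyResourceGroup" -Name "MyAzureVM" -Force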

Problem:

Running the login code, I kept getting a null entry for SubscriptionId and SubscriptionName, even though the user I created is a co-administrator and has access to all the necessary resources. Assuming the login did work and that data isn’t needed, when I try to start my Azure VM I get an Azure subscription error. So, let me check the subscription details. Well, there we go, I get the following response: “WARNING: Unable to acquire token for tenant ‘Common’”….. So what gives?

I checked and confirmed the test user is in fact an administrator in ARM (Azure Resource Manager).

Solution:

It turns out the user account not only needs to be created and given access to the resources in Azure Resource Manager (ARM), but also needs to be assigned as an administrator within the Azure Classic Portal.


Once the test user was added as a co-administrator under the Classic Portal administrators, the SubscriptionId and SubscriptionName info populated, and Get-AzureRmSubscription returned the proper details. Yay! (I still get that tenant ‘Common’ warning, however…)

Now I can go ahead with my script!

I hope this helps you as much as it helped me.