Tag: Log Analytics

Azure Security Center – Continuous Export via Azure Policy

Earlier this week, I highlighted how you can use Azure Security Center (ASC) and its Continuous Export feature to send Security Alerts and Recommendations to Event Hubs (and/or Log Analytics); you can find that post HERE. Today I want to show you how you can follow Governance best practices and leverage Azure Policy to ensure ASC is configured to forward to Event Hubs and/or Log Analytics.

As a quick intro, Azure Security Center (ASC) is a holistic security management solution from Microsoft that not only assesses your Azure resources but can also be extended to your on-premises infrastructure, improving your overall security posture across both. I work with a lot of customers who require an “agnostic” SIEM solution. ASC generates detailed security recommendations and alerts that can be viewed through the ASC portal, but when customers need to send that telemetry to a third-party SIEM such as QRadar or Splunk, your Azure resources can either send their security events directly to Event Hubs (via diagnostic agents) or, the easier approach, be configured through ASC.

To get these policies, go HERE to the Azure GitHub repo. In the next post, I will walk you through the setup and all the parameters required to get this policy up and ‘governing’.
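In the meantime, here is a minimal sketch of what assigning one of those policies with PowerShell could look like. The display-name filter and the parameter names in the hashtable are assumptions on my part (check the actual policy definition in the GitHub repo for the real ones); the takeaway is the New-AzPolicyAssignment pattern with a system-assigned identity, which DeployIfNotExists policies need:

# Sketch only: assign an ASC continuous-export policy at subscription scope.
# Requires the Az.Resources module and a prior Connect-AzAccount.
$subscriptionId = (Get-AzContext).Subscription.Id
$scope          = "/subscriptions/$subscriptionId"

# Find the policy definition. The display-name filter is an assumption;
# match it against the definition you pulled from the GitHub repo.
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -like '*export to Event Hub*Security Center*' }

# Parameter names below are placeholders; use the ones declared in the policy definition.
$policyParams = @{
    resourceGroupName = 'asc-export-rg'
    exportedDataTypes = @('Security alerts', 'Security recommendations')
}

# DeployIfNotExists policies need a managed identity so remediation tasks
# can deploy the continuous-export configuration on non-compliant subscriptions.
New-AzPolicyAssignment -Name 'asc-continuous-export' `
    -Scope $scope `
    -PolicyDefinition $definition `
    -PolicyParameterObject $policyParams `
    -Location 'eastus' `
    -IdentityType 'SystemAssigned'   # older Az.Resources versions use -AssignIdentity instead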

Azure Security Center – Continuous Export

Azure Security Center (ASC) is a great holistic solution provided by Microsoft that not only assesses your Azure resources but can also be extended to your on-premises infrastructure. It is a security management solution that improves your overall security posture within your Azure environment and on-premises infrastructure. I work with a lot of customers who require an “agnostic” SIEM solution, so they don’t have all of their eggs in one basket (so to speak) with a single vendor. Azure Sentinel is a great solution, but it still lacks maturity in comparison to products like IBM’s QRadar, Splunk, and some others.

ASC also generates detailed security recommendations and alerts that can be viewed through the ASC portal. However, when customers have a requirement to send this telemetry to a third-party SIEM, Azure Event Hubs is a great middleman solution.

In short, your Azure resources can send their security events directly to Event Hubs (via diagnostic agents), or, the easier approach, they can be configured with ASC. Choosing the latter, we can also configure ASC to continuously export the data it collects and forward it to Event Hubs, which in turn allows the third-party SIEM to ingest the data from Event Hubs.

Once you have enabled ASC and enrolled your resources (assuming you have already configured Event Hubs and a third-party SIEM), you can then set up Continuous Export within the ASC console as shown below.

Setting up ASC Continuous Export is pretty straightforward, provided you have already configured Event Hubs and your SIEM to ingest from Event Hubs. Within ASC, select Continuous Export and choose where to send the data: either Event Hubs or Log Analytics (Sentinel). Select the types of alerts and recommendations to export (All, Low, Medium, High), specify the Subscription where Event Hubs lives, the Event Hub Namespace, Name, and Policy Name, hit Save, and that is it!

That’s it, pretty simple. Definitely a much easier solution than deploying the Linux and Windows diagnostic agents (LAD/WAD); that’s another post for another day 🙂
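One quick tip before moving on: if you chose Log Analytics as the export target, you can confirm data is actually flowing with a quick query from PowerShell. This is a minimal sketch assuming the Az.OperationalInsights module and the standard SecurityAlert table; swap in your own workspace ID:

# Sketch: confirm ASC alerts are arriving in the Log Analytics workspace.
# Assumes Connect-AzAccount has been run and Az.OperationalInsights is installed.
$workspaceId = '<YOUR WORKSPACE (CUSTOMER) ID>'

$kql = @'
SecurityAlert
| where TimeGenerated > ago(24h)
| summarize count() by AlertSeverity
'@

# Recommendations exported to the workspace land in a similar table (SecurityRecommendation).
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql
$result.Results | Format-Table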

Log Analytics (OMS) AD Assessment – “No Data Found”

So, you deployed the OMS/Log Analytics AD (Active Directory) Assessment solution and let it sit for a few hours, or maybe even a few days now… yet the AD Assessment tile still shows “No Data Found”…

Well, that is frustrating! Below are the steps I took, and ultimately the actual fix that got this OMS/Log Analytics solution pack working.

First things first, I did the basics: checked that the Microsoft Monitoring Agent was deployed and installed correctly, and that its service was running.

Confirmed the AD Assessment prerequisites were all satisfied (a quick PowerShell spot-check follows the list):

  • The Active Directory Health Check solution requires .NET Framework 4.5.2 or above on each computer that has the Microsoft Monitoring Agent (MMA) installed. The MMA is used by System Center 2016 – Operations Manager, Operations Manager 2012 R2, and the Log Analytics service.
  • The solution supports domain controllers running Windows Server 2008 and 2008 R2, Windows Server 2012 and 2012 R2, and Windows Server 2016.
  • A Log Analytics workspace, so the Active Directory Health Check solution can be added from the Azure Marketplace in the Azure portal. No further configuration is required.
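For reference, here is the quick PowerShell spot-check I mentioned, assuming it is run on the domain controller itself: it checks the HealthService (MMA) service and reads the .NET 4.x Release value (379893 or higher corresponds to 4.5.2):

# Is the Microsoft Monitoring Agent service present and running?
Get-Service -Name HealthService | Select-Object Status, StartType, DisplayName

# Which .NET Framework 4.x release is installed? 379893+ means 4.5.2 or later.
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release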

After all that, I executed the following query within Log Analytics and got the following results:

Operation | where Solution == "ADAssessment" | sort by OperationStatus asc

Okay, so I ensured .NET 4.0 was fully installed. For good measure, I enabled all of the .NET 4.6 sub-features, and for kicks, installed .NET 3.5 as well. Yet… still nothing!

Next, I decided to take a look at the registry…

Navigate to the following registry key: “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Management Groups\<YOUR Management Group Name>\Solutions\ADAssessment”

I deleted the “LastExecuted” value under that key, and then rebooted the server…
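If you would rather script that fix, a minimal sketch is below; the management group name is a placeholder, and restarting the HealthService instead of a full reboot is an untested assumption on my part:

# Sketch: force the AD Assessment to re-run by clearing its LastExecuted marker.
$mgmtGroup = '<YOUR Management Group Name>'
$key = "HKLM:\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Management Groups\$mgmtGroup\Solutions\ADAssessment"

# Remove the LastExecuted value (ignore the error if it is already gone).
Remove-ItemProperty -Path $key -Name 'LastExecuted' -ErrorAction SilentlyContinue

# A service restart may be enough instead of rebooting -- an assumption, not something I verified.
Restart-Service -Name HealthService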

After a few minutes, I went back to the OMS/Log Analytics portal, and there it is!!!!

I ran the same query again, and verified the AD Assessment solution was working as expected:

Operation | where Solution == "ADAssessment" | sort by OperationStatus asc

Great! Now, if I click within the tile, I get the following AD Health Checks.

I hope this helped! Cheers! For more information on the OMS Active Directory Assessment Solution, please visit: https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-ad-assessment

 

Connect Batch of Azure VMs to Log Analytics (OMS) via PowerShell

So, you have a bunch of Virtual Machines (VMs) in Azure, you didn’t use an ARM template, and now you need to connect the VMs to Log Analytics (OMS). Earlier this month, I demonstrated how this can be done via the ARM portal (here’s that blog post). Of course, that has to be done individually, which can be very tedious if you have tens or hundreds of machines to work through… All I can think of is PowerShell!

Here is a script based on one Microsoft has already provided for a single VM; I have tweaked it to traverse your entire resource group and add ALL VMs within the RG to Log Analytics.
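Here is a minimal sketch of that idea, assuming the Az modules and a Windows VM fleet; the resource group and workspace names are placeholders, and the full Gallery script linked below is the one to actually use:

# Sketch: attach every VM in a resource group to a Log Analytics (OMS) workspace.
$rgName        = 'MyResourceGroup'
$workspaceRg   = 'MyOmsResourceGroup'
$workspaceName = 'MyOmsWorkspace'

$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName $workspaceRg -Name $workspaceName
$keys      = Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $workspaceRg -Name $workspaceName

foreach ($vm in Get-AzVM -ResourceGroupName $rgName) {
    # MicrosoftMonitoringAgent is the Windows extension; Linux VMs would use OmsAgentForLinux instead.
    Set-AzVMExtension -ResourceGroupName $rgName `
        -VMName $vm.Name `
        -Name 'MicrosoftMonitoringAgent' `
        -Publisher 'Microsoft.EnterpriseCloud.Monitoring' `
        -ExtensionType 'MicrosoftMonitoringAgent' `
        -TypeHandlerVersion '1.0' `
        -Location $vm.Location `
        -Settings @{ workspaceId = $workspace.CustomerId.ToString() } `
        -ProtectedSettings @{ workspaceKey = $keys.PrimarySharedKey }
}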

Here is the link to Microsoft TechNet for the full script. Please test it out and let me know how it goes, and if it helped you out, please give it a 5-star rating.

Microsoft TechNet PowerShell Gallery

If all went well, your before and after should look similar to this. I had two test VMs in my Resource Group.

Before:

After:


Connect Azure VMs to Log Analytics (OMS) via ARM Portal

Let’s say you have a bunch of machines in Azure and want them communicating with Azure Log Analytics (aka OMS). Well, I am pretty sure the last thing you want to do is deploy the Microsoft Monitoring Agent to each machine manually…

Well, now you can connect a VM to Log Analytics (OMS) with just a few clicks.

Go into the ARM (Azure Resource Manager) portal, and navigate to your “Log Analytics” blade, select your OMS workspace name, and within the Workspace Data Sources, select Virtual Machines.

Here you should have your machines that currently live within Azure. As you can see, there is one machine that is not connected to the OMS workspace. Let’s connect it now.

Select the VM in question, and you will now be presented with the following:

Make sure the VM is online/running, and select Connect. The VM must be online in order for the extension to be pushed to it.

Give it a few moments, and there we go! No manual agent deployment.

 

We can also now verify in OMS that our new machine is chatting with Log Analytics (go into the Agent Health solution/tile).
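If you would rather double-check from PowerShell than the portal, a quick sketch (assuming the Az.Compute module; resource group and VM names below are placeholders) lists the extension and its provisioning state:

# Sketch: confirm the monitoring agent extension landed on the VM.
Get-AzVMExtension -ResourceGroupName 'MyResourceGroup' -VMName 'MyVM' |
    Select-Object Name, Publisher, ExtensionType, ProvisioningState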

Dual-Homing OMS/Microsoft Monitoring Agent (MMA) — Questions

Earlier this week, I posted on how the OMS/Microsoft Monitoring Agent (MMA) can be dual-homed for multiple OMS Workspaces.

A good question from the community came up (thank you, @Manoj Mathew): “Have you noticed any performance impacts on the agents when they are multi-homed to OMS?”

In the OMS query below (making use of OMS’ Log Analytics), I checked the performance data over the last 48 hours. Unfortunately, I cannot go back any further, since the MMA was deployed earlier that day and the second OMS workspace was added later that afternoon.

There are a few spikes in the Memory and CPU, but this is also a result of a few factors:

  • Initially there is a high level of CPU/memory usage while the MMA and OMS made friends and synced up their data/solutions
  • There is a small spike when the second OMS workspace was added but this is minimal at best
  • This server was being patched with 90+ Windows Server OS patches around 8PM.

The query I used to collect the data is here:


Computer="COMPUTERNAME.FQDN" Type=Perf (CounterName="Available MBytes" OR CounterName="% Processor Time") (ObjectName=Memory OR ObjectName=Processor)

A second question being asked is, “How many OMS workspace IDs can be added to ‘dual-home’ the MMA agent?”

Unfortunately, I only have 3 OMS workspaces to work with in this environment at the moment, but with that said, I can surely say a minimum of 3. If you have the ability to test more than 3, I would love to find out!
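If you do want to test beyond three, the MMA exposes a small COM interface you can drive from PowerShell on the agent machine itself. Here is a minimal sketch (workspace ID and key are placeholders) that adds another workspace and then lists everything the agent is currently homed to:

# Sketch: run locally on the monitored server to add and list OMS workspaces on the MMA.
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'

# Add one more workspace (repeat for as many as you want to test).
$mma.AddCloudWorkspace('<WORKSPACE ID>', '<WORKSPACE PRIMARY KEY>')
$mma.ReloadConfiguration()

# List every workspace the agent is currently reporting to.
$mma.GetCloudWorkspaces()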

How to deploy OMS Agent on Linux

There are multiple ways to deploy the OMS agent on your Linux server. In this post, I am going to make use of GitHub and do a quick install on a Linux server.

In my environment, I am deploying the OMS Linux (Preview) agent, version 1.1.0-124, on a 64-bit Ubuntu server, version 14.04.4. Your Ubuntu server will of course need an Internet connection (directly or via proxy). At the time of this post, the following Linux operating systems are supported.

*image/source, Technet.Microsoft.com

Let’s get started…

Copy and save your OMS Workspace ID and Primary Key, as your OMS agent will need these to authenticate. They can be found within your OMS Settings > Connected Sources.

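If you would rather pull these with PowerShell than click through the portal, here is a hedged sketch assuming the Az.OperationalInsights module (resource group and workspace names are placeholders):

# Sketch: retrieve the workspace ID (CustomerId) and primary key.
$ws   = Get-AzOperationalInsightsWorkspace -ResourceGroupName 'MyOmsResourceGroup' -Name 'MyOmsWorkspace'
$keys = Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName 'MyOmsResourceGroup' -Name 'MyOmsWorkspace'

$ws.CustomerId          # Workspace ID
$keys.PrimarySharedKey  # Primary key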

Within your Ubuntu shell/terminal, you will need to execute the following three commands to download and install the OMS Agent. First, download the latest OMS Agent bundle from GitHub.


  • Followed by,
    • sha256sum ./omsagent-1.1.0-124.universal.x64.sh


  • Finally,
    • sudo sh ./omsagent-1.1.0-124.universal.x64.sh --upgrade -w <WORKSPACE ID> -s <WORKSPACE PRIMARY KEY>


If all goes well, you should now see your server added to your Connected Sources. Yay!


Very quickly, I can see my Ubuntu server is already transmitting data to OMS.


Like Windows servers, we can now start collecting data from the Syslog, collecting performance metrics in Near Real Time, and if your Linux box is deployed with Nagios and/or Zabbix, we can link this data into OMS too!

For additional information on configuring Linux Performance Counters, please visit the following page, HERE.

Lastly, don’t forget to add some important syslog facilities to the OMS data log collection; here is what I have configured:


Cheers!

Monitoring VMware (ESX/ESXi) with OMS

We all know monitoring Hyper-V and/or SCVMM with OMS is rather straightforward and native. However, what about VMware (ESX/ESXi)?

In my VMware environment, I am using ESXi Host version 5.5 and vCenter version 6.0.

The following post is to help you monitor your ESX/ESXi environment with OMS.

  • First, you will need to enable the ESXi Shell, or SSH on your ESXi host, see HERE how
  • Next, you will need to configure the syslog(s) on your ESXi host, see HERE how

My ESXi server’s IP is 10.10.10.30, and I will be forwarding the syslog(s) to my vCenter Windows server at IP 10.10.10.34. To be safe, I am going to configure both UDP and TCP on port 514.

[Screenshot: Configuring syslog on the ESXi host via SSH]
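As an alternative to the SSH route above, the same syslog setting can be pushed from VMware PowerCLI. This is only a sketch under a couple of assumptions: that you have the PowerCLI module installed, and that Syslog.global.logHost is the advanced setting you want to set (the IPs are the ones from my lab):

# Sketch: point the ESXi host's syslog at the vCenter Windows server over UDP and TCP 514.
# Requires VMware PowerCLI (Install-Module VMware.PowerCLI).
Connect-VIServer -Server '10.10.10.34'

$vmhost = Get-VMHost -Name '10.10.10.30'
Get-AdvancedSetting -Entity $vmhost -Name 'Syslog.global.logHost' |
    Set-AdvancedSetting -Value 'udp://10.10.10.34:514,tcp://10.10.10.34:514' -Confirm:$false

# Don't forget the ESXi firewall rule for outbound syslog.
Get-VMHostFirewallException -VMHost $vmhost -Name 'syslog' |
    Set-VMHostFirewallException -Enabled:$true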

  • Remember to disable the firewall(s) on your vCenter Windows server
  • Now on your vCenter Windows Server, you will need to deploy the OMS Agent (Microsoft Monitoring Agent), see HERE how
    • Once your vCenter server is communicating with OMS, we can move on to the next step
  • Within OMS, if you haven’t already, you will need to enable “Custom Logs”: Settings > Preview Features > Enable Custom Logs

[Screenshot: Enabling Custom Logs under OMS Preview Features]

  • Next, set up the following syslog file as your custom log on your vCenter server. In my case, my ESXi hostname is ‘RaviESXi’ and its IP is 10.10.10.30.
  • Followed by importing your syslog into OMS for the first time (see below for instructions)

C:\ProgramData\VMware\vCenterServer\data\vmsyslogcollector\yourESXiHostnameHere\syslog.log

For me, that path translates to, “C:\ProgramData\VMware\vCenterServer\data\vmsyslogcollector\RaviESXi\syslog.log

In my example, I then created an OMS custom log named “VMwareWin” for the ESXi syslog. (By default, a _CL suffix will be automatically appended, resulting in “VMwareWin_CL”.) If you are unfamiliar with OMS’ Custom Logs, see HERE.

Once you have completed this step, it may take some time for your data to start showing up in OMS. Give it an hour or so…

  • Now we can start creating some custom fields within OMS. For example, ESXi Hostname, vmkernel, hostd, etc. See HERE about OMS’ custom fields in log analytics.
    • If you have done everything correctly, you should have custom logs and custom fields similar to this:

[Screenshot: Creating custom logs]

[Screenshot: Creating custom fields]

  • Now you can start creating some dashboards with some custom queries!

For example, here’s one query I tested with and thought was worthy for its own dashboard:

All events and number of occurrences:

Type=VMwareWin_CL | measure count() by VMwareProp_CFDashboard1Example

Of course the number of queries and dashboards is endless at this point. Feel free to let me know your thoughts and some queries/dashboards you have come up with!

Lastly, don’t forget to add some important syslog facilities to the OMS data log collection; here is what I have configured:


Cheers!