Azure Sentinel: design considerations

I’ve written about Azure Sentinel before and how cloud SIEMs are changing the security landscape. Microsoft provides Azure Sentinel as-a-service, which you can enable with the click of a button, paying only for the storage you use.

However, Azure Sentinel, as with any cloud service and/or SIEM, still needs some design considerations if you are putting it into production. What are these considerations? And what options are available to me and my company?

In this article I’ll show you a couple of things to consider when designing for Azure Sentinel. From foundational choices to identity & access, data connections and dashboarding, I’ll share some real-world experiences.

Let’s look at the foundation first

Before we start, let’s make sure we are on the same page and understand the fundamentals. Azure Sentinel uses a Log Analytics workspace as its backend, storing events and other information. Log Analytics workspaces are built on the same technology that Azure Data Explorer uses for its storage. These backends are ultra-scalable, and you can get results back in seconds using the Kusto Query Language (KQL).

The first thing to plan for is the Log Analytics workspace we’ll be using. When setting up Azure Sentinel for the first time, it allows you to create a new Log Analytics workspace or to pick an existing one.

DESIGN CONSIDERATION: New or existing Log Analytics workspace?

Let’s look at why you would want to re-use an existing workspace. Of course, it would be the easy way; it is already there, you’ve set up the right access to it, data is already streaming in, and you can just add Azure Sentinel to it. No problem, right?

Well, access control is one of the bigger reasons to create a new Log Analytics workspace. A dedicated workspace allows you to tightly control who has access to the aggregated data in Azure Sentinel, which often is a CISO requirement, as we’ll discuss below.

Apart from access control reasons, you might also run into a technical challenge that forces you to create a new workspace: it is relatively hard to move an existing Log Analytics environment over to another subscription. You first need to offboard agents and remove current Solutions before you can move it. And that might cause ‘downtime’ for the monitoring solution currently using that workspace.

And of course, the last reason is history: perhaps you’ve experimented with settings in your current environment, or the workspace has a name you’d like to change, so you might want to start with a ‘clean slate’.

DESIGN CONSIDERATION: How long do we need to store our data?

One other thing to consider is how long you want to store the data. The default setting is 31 days. However, you can change this workspace setting and extend retention to up to two years. As per the documentation:

“The retention period of collected data stored in the database depends on the selected pricing plan. By default, collected data is available for 31 days, but this can be extended to 730 days. Data is stored encrypted at rest in Azure storage, to ensure data confidentiality, and the data is replicated within the local region using locally redundant storage (LRS). The last two weeks of data are also stored in SSD-based cache and this cache is encrypted.”
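Per the documentation quoted above, retention sits between the 31-day default and a 730-day maximum. As a minimal sketch, here is how you might validate a requested retention value before applying it through the portal or API; `validate_retention_days` is a hypothetical helper of my own, not part of any Azure SDK:

```python
# Bounds taken from the Log Analytics documentation quoted above.
MIN_RETENTION_DAYS = 31   # default retention
MAX_RETENTION_DAYS = 730  # maximum retention (two years)

def validate_retention_days(days: int) -> int:
    """Return the value if it is a valid workspace retention, else raise."""
    if not MIN_RETENTION_DAYS <= days <= MAX_RETENTION_DAYS:
        raise ValueError(
            f"retention must be between {MIN_RETENTION_DAYS} and "
            f"{MAX_RETENTION_DAYS} days, got {days}"
        )
    return days
```

A check like this is handy in infrastructure-as-code pipelines, where an out-of-range value would otherwise only fail at deployment time.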

No, Azure Sentinel will NOT replace Azure Security Center

An often-heard remark is: “Oh, so Azure Sentinel will replace Azure Security Center.” No, no, no. Azure Security Center has its own place in the security landscape. It acts as the primary ‘engine’ to perform detections on Microsoft Azure, in your VMs, in containers and on other properties such as Azure Stack, your on-premises infrastructure, etcetera.

Want to detect crypto miners in your Linux VM on Azure? Enable Azure Security Center. Want to get best practices and insights on securing your network in Azure? Enable Azure Security Center.

However, if you want to coordinate your security operations centrally, and aggregate multiple security solutions, such as Azure Security Center, Microsoft’s Cloud App Security, Azure ATP and others, you will want to enable Azure Sentinel.

By connecting all these data sources, you can start building a single pane of glass, and have one point of entry for your responders when they need to go threat hunting.

DESIGN CONSIDERATION: Which other security solutions will I be enabling alongside Azure Sentinel?

The identity and access piece is important

As pointed out above, the CISO office will often require you to tightly control who has access to the aggregated data in Azure Sentinel, because it could contain personally identifiable information (PII). Normally only appointed security officers will be granted ‘read access’.

DESIGN CONSIDERATION: Who needs access to the data in Azure Sentinel? Can we provide that access ‘just in time’ to these people & roles?

Microsoft is in the process of adding RBAC features to Log Analytics workspaces as Oleg Ananiev, the group program manager for both Azure Monitor and Log Analytics, points out. This will then implicitly work for Azure Sentinel. More information can be found here.

As people come and go in a company, security officers will also likely change over time. We don’t want to grant access to a specific person but to the role he or she is fulfilling. We also don’t want to grant access all the time, but only when needed; for instance, when hunting for threats, or when a specific case is raised and an investigation is opened.

This is where Azure Active Directory (AAD) Privileged Identity Management (PIM) can help. You can find more information on Azure AD PIM here. You will need either Azure AD P2 licenses or EM+S E5 licenses for the users you want to manage with Azure AD PIM.

Plan for the data connections

Azure Sentinel has a lot of possible data sources. Each and every one of those needs a data connection and potentially some configuration.

DESIGN CONSIDERATION: Which data sources will I be connecting? What configuration does that data connection need?

I won’t write up the configuration of every possible data source. But I will highlight a few that you need to think about, because there are specifics to how you can connect them:

*** Office 365 ***
You can have only ONE connection going back to your Office 365 tenant. Some of you may already be using the Office 365 ‘solution’ that was part of Microsoft’s Operations Management Suite (OMS) to ‘monitor’ security. You need to disconnect that one before you can connect that Office 365 tenant to Azure Sentinel.


*** Azure Security Center ***
If you have multiple subscriptions in your tenant, some or all containing Azure Security Center instances: no problem. However, Azure Sentinel will only be able to connect to, and aggregate information from, instances in the tenant it resides in.


*** Network appliances ***
Most of the dashboards that Azure Sentinel provides for network vendors, such as Palo Alto, Check Point, Fortinet, F5 and Barracuda, rely on the data being ingested as syslog messages. This requires you to set up a Linux-based VM with the Microsoft Monitoring Agent installed. That machine receives the syslog messages and forwards them to the Log Analytics workspace natively.
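To give an idea of what that collector machine receives: here is a minimal sketch of how facility and severity are derived from the priority (PRI) header of an RFC 3164 syslog message, the format most of these appliances emit. This is purely illustrative; the Microsoft Monitoring Agent does this parsing for you.

```python
import re

# An RFC 3164 message starts with "<PRI>", e.g. "<134>Oct 11 22:14:15 fw01 ...".
SYSLOG_RE = re.compile(r"^<(\d{1,3})>(.*)$", re.DOTALL)

def parse_priority(message: str) -> dict:
    """Split a syslog message into facility, severity and content."""
    match = SYSLOG_RE.match(message)
    if not match:
        raise ValueError("not a syslog message with a PRI header")
    pri = int(match.group(1))
    return {
        "facility": pri // 8,   # e.g. 16 = local0, commonly used by firewalls
        "severity": pri % 8,    # 0 = emergency ... 7 = debug
        "content": match.group(2),
    }
```

For example, `<134>` decodes to facility 16 (local0) with severity 6 (informational), which is typical for firewall traffic logs.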

How should I connect other SIEM systems?

Some enterprises will already have some sort of SIEM solution, such as ArcSight. And while I am NOT advocating this as the preferred way of setting up cloud governance, your CISO office might want to hook up Azure Sentinel to that current system.

DESIGN CONSIDERATION: Will I be connecting Azure Sentinel to another SIEM solution?

If that is a requirement, you will want to consider using Azure Monitor and Event Hubs to forward your alerts to this other system. By using Event Hubs, you can do this safely and reliably; even when the receiving end is offline or malfunctioning, events get stored in the queue and Azure will release them when the system is back online.
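As a minimal sketch of what forwarding could look like, here is how you might shape a Sentinel alert into the JSON body published to an Event Hub for the downstream SIEM. The field names are my own assumptions for illustration, not a documented Azure Monitor schema, and the actual send (via an Event Hubs client) is left out:

```python
import json
from datetime import datetime, timezone

def alert_to_event(alert_name: str, severity: str, entities: list) -> str:
    """Serialize an alert into a JSON event body for the downstream SIEM."""
    payload = {
        "alertName": alert_name,
        "severity": severity,
        "entities": entities,          # e.g. IPs, accounts, hosts involved
        "source": "AzureSentinel",     # lets the receiver tag the origin
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)
```

Keeping the serialization separate from the transport also makes it easy to unit-test the payload before you wire up the Event Hub connection.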

If your system is not supported by Azure Monitor or Event Hubs, there is still a fair chance of getting it integrated with Azure Sentinel. There is a growing list of third parties that have built their own integrations on top of the API that you can use. You can find the list here.

How can we support the threat hunters?

Up until now, we’ve talked about getting data into Azure Sentinel. But after it gets processed and an alert gets raised, you will want to investigate. Your threat hunting colleagues need access to the data to understand what is going on.

DESIGN CONSIDERATION: What technology will the threat hunting colleagues be using? Do they prefer Jupyter? Will they require KQL access to the workspace?

One of the ways to do threat hunting is to use the Kusto Query Language (KQL) to search through events in the workspace quickly and easily. Hunters could use Azure Data Explorer, the ‘Logs’ function of the Log Analytics workspace, a third-party application (such as Grafana) or the native Azure Sentinel UI in the Microsoft Azure portal.

That last option, going threat hunting from the Azure Sentinel portal UI, is a neat one. Microsoft provides you out of the box with pre-fab hunting queries and maps them back to the right Tactics category (e.g. Initial Access, Lateral Movement, etcetera). Either way, consider what you need to do to provide hunters with the right UI and access rights.
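For instance, a hunter might parameterize a KQL query from a script before running it in the workspace. The sketch below builds a failed-logon hunt over the SecurityEvent table (event ID 4625 is a failed Windows logon); the time window and threshold are arbitrary example values:

```python
# Template for a simple brute-force hunting query in KQL.
HUNT_TEMPLATE = """\
SecurityEvent
| where TimeGenerated > ago({window})
| where EventID == 4625
| summarize FailedLogons = count() by Account, Computer
| where FailedLogons > {threshold}
| order by FailedLogons desc"""

def build_failed_logon_hunt(window: str = "1d", threshold: int = 10) -> str:
    """Render the hunt with a concrete lookback window and alert threshold."""
    return HUNT_TEMPLATE.format(window=window, threshold=threshold)
```

The rendered query can be pasted into the ‘Logs’ blade, run through Kqlmagic in a notebook, or saved as a custom hunting query in the Azure Sentinel UI.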

Another popular option among threat hunters is Jupyter. Microsoft has a free service based on Jupyter notebooks called Azure Notebooks. Through the ‘Kqlmagic’ extension, you can use Python to directly query the workspace using KQL queries. I’ve written about that here. Consider whether they will be using Jupyter locally (e.g. in a Docker container) or in Azure Notebooks. Also consider where you will be storing the notebooks; GitHub is a great option for that. And remember, Microsoft already provides many sample notebooks to get you jumpstarted.

Dashboards: how will we visualize the Azure Sentinel data?

Azure Sentinel provides a lot of out-of-the-box dashboards. Some of them are solution focused (Office 365), some are technically focused (Insecure Protocols) and some are geared towards third parties (F5, Palo Alto, etcetera).

Technically, these are JSON files that work in the Azure Dashboards section of your portal. You import them into your Tenant, and they will be available for everyone who can access that Tenant. Of course, you can restrict this with the built-in Azure access controls as they are just resources like any other.

Microsoft regularly updates its GitHub repository with new versions of the dashboards as they receive feedback from the field. You can manually update the JSON file in your Tenant or use the built-in functions in the Azure Sentinel UI. Either way, you should plan for some change management around this.
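As a small change-management aid, you could compare the dashboard JSON deployed in your Tenant with the latest version from the repository and report which top-level sections differ. The sketch below is simplified; real Azure dashboard JSON is more deeply nested:

```python
import json

def changed_keys(deployed: str, upstream: str) -> list:
    """Return the top-level keys that differ between two dashboard JSON docs."""
    old, new = json.loads(deployed), json.loads(upstream)
    keys = set(old) | set(new)           # keys present in either version
    return sorted(k for k in keys if old.get(k) != new.get(k))
```

Even a coarse diff like this tells you whether a new repository release actually touches the dashboards you have customized, before you overwrite them.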

DESIGN CONSIDERATION: What are my requirements for visualizing Azure Sentinel data? How do I provide access to those programs and/or operators?

Another popular choice to visualize data from Azure Sentinel is to use open source visualization tools. Grafana is a great option, because it has a large ‘store’ with visualization types (most of them free), and because Microsoft provides you with a native Log Analytics connector for Grafana.

With that connector, you can use Kusto (KQL) queries to get specific data from Azure Sentinel and map it onto one of Grafana’s visualizations. For instance, a world map with network connections, or a list of Alerts. Grafana has dashboarding features that most SOCs will love, for instance rotating dashboards. You will of course need to plan access from Grafana to Azure Sentinel’s data.

Escalation and notifications

All the above are technical design considerations. However, if Azure Sentinel will be powering your Security Operations Center (SOC), you will need to design your processes as well. How will your Alerts be followed up? Do we need a connection to our ticketing system? What if alerts & tickets stay open for too long? Are the right people, and potentially management, informed in time (before the SLA is breached)?

DESIGN CONSIDERATION: What process do I need to run my Security Operations Center (SOC)? Which tools will support my Service Levels?

One of the options available to you out-of-the-box to automate the follow-up of alerts is Playbooks. In essence, these are Azure Logic Apps that can be triggered whenever a certain condition is met. For instance, an Alert with a high severity gets raised by Azure Sentinel, and you want to send it to a security engineer via text message. The Logic App could connect to Twilio and send the Alert description to a specified phone number.
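To make the routing idea concrete, here is the decision such a Playbook might encode, sketched in Python rather than as a Logic App definition. The channel mapping is an example of my own; only the High/Medium/Low/Informational severity names follow Sentinel’s convention:

```python
def route_alert(severity: str) -> str:
    """Pick a notification channel based on alert severity."""
    channels = {
        "High": "sms",        # page the on-call engineer immediately
        "Medium": "email",    # notify the SOC mailbox
        "Low": "ticket",      # queue for routine triage
    }
    return channels.get(severity, "log")  # informational alerts are just logged
```

In a real Playbook, each branch would be a Logic App action (Twilio, Office 365 mail, a ticketing connector), but the branching logic is the part worth designing up front.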

But is this reliable enough? How do I know the security engineer has read it? What if he or she didn’t, and we need to escalate to the next engineer? Or worse yet, we’re approaching SLA times and we need to start informing management. This is where 3rd party solutions like SIGNL4 come in. There are a few out there, but SIGNL4 is great because it is a cloud service where you can set escalation paths, do two-way communication (to receive acknowledgements), use multiple channels (persistent push, text and voice) and log the audit trail. They also support duty scheduling and have a 2-tier escalation model.

Key takeaways

The key takeaway from this article is that while Azure Sentinel is software-as-a-service, you should still plan for the implementation of the service. Gather your business / CISO requirements and consider, for each subject, what you should do. Also, don’t forget that it is not only a technical deployment: you will need to plan for the process side as well.

Do you have other design considerations you are taking into account when deploying Azure Sentinel? Do you have real-world experience with Azure Sentinel? I would love to hear from you in the comments below.

Stay tuned for the next installment in my multi-part deep-dive series on Azure Sentinel!

— Maarten Goet, MVP & RD