No SIEM, No Splunk, No Problem!

I have been meaning to write a post on this for a while now. There have been too many times to count where, as an analyst, I did not have the necessary tools to perform a good investigation during an engagement. Some organizations do not have Splunk, ELK, or any other platform available for investigations and log analysis, so you must do without. If you find yourself in that situation, this post is for you. I will go through acquiring and investigating logs using jq, Bash, and Python. For Windows users, I suggest using WSL with your distro of choice for this exercise. This is a multi-part series, and this post will cover the following scenario: account compromise (O365).

Lab Setup

Before we get to the log analysis, we need to set up the lab scenario.

We can do this by setting up our very own O365 environment with Microsoft’s Developer Program.

We are going to be using sample data packs, so add both “users” and “mail & events” data packs to your subscription:

Once that is done, head over to the Office 365 Security & Compliance portal with the login credentials of the sandbox O365 admin account, then go to Search > Audit Log Search. In the top pane, click the option to enable audit logging for users and admins. This process might take a few hours to complete, as indicated below.

Great, now your O365 sample environment is set up with both the “users” and “mail & events” sample data.

Before we dive into running a phishing attack simulation, we are going to set up a sample credential-harvester phishing site using BHIS’s CredSniper tool. […more on this later]

We’re now going to run a phishing attack simulation that mimics a scenario where an attacker has gained control of multiple accounts in your environment. Let’s assume that the victim users fell for a phishing link that led to a credential harvester, and that the accounts do not have multi-factor authentication enabled.

To simulate this, we are going to use O365’s built-in attack simulation tool, located in the Security Portal. Be sure to use your sandbox O365 admin credentials to log in.

Select the “Credential Harvest” technique, select any payload, and send it to all users in the organization. You can select “no training,” as it is not needed. For your landing page, the “Microsoft Landing Page” is fine, and the other details can be left as they are. Launch the simulation as soon as you’re done with the configuration.

Great, now our environment and phishing simulation are set up. We will now continue by simulating some rule creations and other malicious behaviors.

To generate some inbox rules for our user(s), we will be leveraging Microsoft’s Graph API with a custom script I’ve built for this. We will have to provision access to it via an “App Registration” and assign permissions, so let’s do that now.

To access Graph, we will need to record the following values and add them to our script:

  • Application Client ID
  • Application Client Secret
  • Directory Tenant ID

And to be able to create and generate inbox rule(s) per user, we will need the following permissions set in Graph:

  • MailboxSettings.ReadWrite
  • User.ReadWrite.All

So let’s get to it.

With your sandbox’s OrgAdmin account, head over to the Azure Portal, then locate and browse to “App Registrations”.

App Registration

Then enter a custom name for the application and provision access to your organizational directory as indicated below:

Once done, it will take you to the “App Registration” — Overview page.

Copy the values highlighted in red into a text editor, as we will use them for our script later:

On the left pane, click on “Certificates & Secrets”, enter the details as follows, and click “Add”.

Great, your client secret has now been created; copy the value and save it.

So now that we have our ClientID, TenantID, and ClientSecret values to access Microsoft’s Graph API, we need to add permissions.

On the left pane, click on “API Permissions”, then “Add a permission”, select “Microsoft Graph” > “Delegated permissions”, then search for and select the following options:

  • “MailboxSettings.ReadWrite”
  • “User.ReadWrite.All”

Click “Add Permissions” and your permissions should be added as follows:

Finally, you want to make sure the permissions are granted, so select “Grant admin consent for …”, then “Yes” on the prompt:

Alright, great, that is it for Graph API access provisioning. Now let’s get into running our script to generate some rules for our user(s).

Before running it, we have to assign our environment variables with the previously recorded ClientID, ClientSecret, and TenantID, and add our organization’s sample users to the usernames variable on line 47 of our script as follows:

We will also have to add our password via the “AZURE_USER_PASSWORD” environment variable.

Depending on your system (we’ll use Bash for this example), assign the variables with the previously saved values as follows:

export AZURE_CLIENT_ID=YourClientId
export AZURE_TENANT_ID=YourTenantId
export AZURE_CLIENT_SECRET=YourClientSecret
export AZURE_USER_PASSWORD=YourUserPassword
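The script’s internals aren’t reproduced in this post, but wiring those variables into a delegated Graph token request might look roughly like the sketch below. This assumes the msal package; the function names and the use of the `/.default` scope are my assumptions, not necessarily what the script does.

```python
import os

GRAPH_SCOPE = ["https://graph.microsoft.com/.default"]

def load_azure_config(env=os.environ):
    """Collect the values exported above; fail fast if any are missing."""
    keys = ["AZURE_CLIENT_ID", "AZURE_TENANT_ID",
            "AZURE_CLIENT_SECRET", "AZURE_USER_PASSWORD"]
    missing = [k for k in keys if not env.get(k)]
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
    cfg = {k: env[k] for k in keys}
    cfg["authority"] = f"https://login.microsoftonline.com/{cfg['AZURE_TENANT_ID']}"
    return cfg

def get_token(cfg, username):
    # Delegated username/password flow via the msal package (pip install msal).
    import msal
    app = msal.ConfidentialClientApplication(
        cfg["AZURE_CLIENT_ID"],
        authority=cfg["authority"],
        client_credential=cfg["AZURE_CLIENT_SECRET"],
    )
    result = app.acquire_token_by_username_password(
        username, cfg["AZURE_USER_PASSWORD"], scopes=GRAPH_SCOPE)
    return result["access_token"]
```

Note that the username/password flow only works because our lab accounts have no MFA, which is exactly the weakness this scenario simulates.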

For our rule creation we will be using the following, which is already defined in the script:

This forwards any email containing the values payment, covid, outstanding, usd, euro, bill, or transfer to our attacker’s address. I have changed this value to point to a custom lab email; feel free to do so as well.
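For reference, the rule body the script sends to Graph’s /users/{id}/mailFolders/inbox/messageRules endpoint would look roughly like the sketch below. Field names follow the Graph messageRule resource; the rule’s display name, the choice to match on subject only, and the forwarding address are my placeholder assumptions.

```python
# Keyword list from the lab scenario above.
KEYWORDS = ["payment", "covid", "outstanding", "usd", "euro", "bill", "transfer"]

def build_forwarding_rule(forward_to, keywords=KEYWORDS):
    """Build a Graph messageRule payload that forwards matching mail
    to an attacker-controlled address (placeholder in this sketch)."""
    return {
        "displayName": "sync",  # innocuous-looking name, as attackers often use
        "sequence": 1,
        "isEnabled": True,
        "conditions": {"subjectContains": list(keywords)},
        "actions": {
            "forwardTo": [{"emailAddress": {"address": forward_to}}],
            "stopProcessingRules": True,
        },
    }
```

Seeing the payload shape here is useful later, because these same fields show up in the “New-InboxRule”/rule-creation audit events we will hunt for.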

Once done, run the script with the “python3.9” command.

Great… now our rules have been created for our small sample of users, and our lab is set up.

And now for the log collection.

Log Collection — Export Method

Before we can analyze our logs, we need to collect the logs for our analysis.

The simplest (but most time-consuming) way to collect this data is directly from the Microsoft Compliance Center > Audit: search and export the data for a specified time frame. From my experience, the download can take some time, so be patient.

Once the export is done, it can be opened with Microsoft Excel.

Finally, we have our logs to analyze. However, as you can see, this is a lot of data to work with. Looking at the file, you can see that most of the real meat of the data is in the “AuditData” column, which contains more information about each event. We will convert the CSV to a JSON object with the few lines of code below, then extract “AuditData” for further analysis.

import pandas as pd

file = "<YourFileName>"
# Read the exported audit CSV, then write it back out as a JSON array of records.
csv_file_pd = pd.read_csv(file, sep=",", header=0, index_col=False)
csv_file_pd.to_json("AuditLog.json", orient="records", date_format="epoch",
                    double_precision=10, force_ascii=True, date_unit="ms",
                    default_handler=None)

This drops a local json object called “AuditLog.json”. We will now extract the “AuditData” field from the object and dump it as another object, using jq and the following command:

jq -r '.[] | .AuditData' AuditLog.json | jq . > AuditData.json

There, we now have a json object of the “AuditData” details stored as “AuditData.json” for further analysis.

Log Collection — API Method

An additional method of retrieving the logs is via the O365 Management APIs. I will go ahead and cover this as well; feel free to skip ahead to the “Log Analysis” section as needed.

We’re going to have to provision access to the O365 Management APIs under “App Registrations”, similar in fashion to our initial lab setup:

… with your sandbox’s OrgAdmin account, head over to the Azure Portal > “App Registrations” > “New Registration” (“O365 Management APIs” as the name), select “Accounts in this organizational directory only (<yourdomain> only — Single tenant)”, then “Register”.

Save, and record the following values in a text editor:

  • Application Client ID
  • Directory Tenant ID
  • Application Client Secret (created under “Certificates & Secrets” left pane)

There are many scripts available for retrieving logs via the API, but I have created one for the purpose of this lab.

Set your environment variables with the previously recorded values for ClientID, TenantID, and ClientSecret, and also add the password of the user you’ll be authenticating with.

export AZURE_CLIENT_ID=YourClientId
export AZURE_TENANT_ID=YourTenantId
export AZURE_CLIENT_SECRET=YourClientSecret
export AZURE_USER_PASSWORD=YourUserPassword

Run the script, and you’ll be prompted for a username. This is the username that has access to use the O365 Management API. Once the script runs, it will drop a json object to the local directory for analysis.
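Under the hood, any such script follows the Management Activity API’s subscription model: start a subscription for a content type, list the available content blobs, then fetch each blob’s URI. A minimal sketch is below (endpoint paths per Microsoft’s Management Activity API docs; the function names are mine, error handling is omitted, and the requests package is assumed):

```python
BASE = "https://manage.office.com/api/v1.0"

def feed_urls(tenant_id, content_type="Audit.General"):
    """Build the two activity-feed endpoints used in the retrieval loop."""
    return {
        "start": f"{BASE}/{tenant_id}/activity/feed/subscriptions/start"
                 f"?contentType={content_type}",
        "content": f"{BASE}/{tenant_id}/activity/feed/subscriptions/content"
                   f"?contentType={content_type}",
    }

def fetch_audit_events(tenant_id, token, content_type="Audit.General"):
    # Requires the requests package; each contentUri returns a JSON array
    # of audit records like the ones stored in AuditData.json.
    import requests
    headers = {"Authorization": f"Bearer {token}"}
    urls = feed_urls(tenant_id, content_type)
    requests.post(urls["start"], headers=headers)  # errors if already started; safe to ignore in the lab
    events = []
    for blob in requests.get(urls["content"], headers=headers).json():
        events.extend(requests.get(blob["contentUri"], headers=headers).json())
    return events
```

Note the token here must be scoped to the https://manage.office.com resource, not Graph, which is why this method needs its own app registration.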

Log Analysis

Great, so now we have logs to analyze. Let’s start by quickly browsing through the json object with the following command (press the space bar to page through more of the object’s contents, and press “q” to exit when finished):

jq . AuditData.json | less

Quickly browsing through this, we see a lot of information that could prove useful. Since this is a json object, the data is formatted as key-value pairs, e.g. "field":"value". Knowing this, let’s see if we can get a list of all the keys with this command:

jq 'keys' AuditData.json | sort -u

Voilà, all the fields available to query in our json object.

We can get some more detailed information on what each field from the audit log means here.

Of the available fields, we are going to focus on “ActorIpAddress”, “Operation”, “UserId”, “CreationTime”.

Let’s look at all the unique values for “Operation” with this command:

jq .Operation AuditData.json | sort -u

As you can see, there are a number of operations performed.

Let’s get a count of each recorded operation with this command:

jq .Operation AuditData.json | sort | uniq -c | sort -nr

In the previous screenshot, you’ll see a count of "UserLoginFailed" operations; we’ll filter on these failures with this command:

jq '. | select(.Operation == "UserLoginFailed")' AuditData.json | more

Great. You can dig into this further with the following command to get a list of unique ClientIP addresses associated with “UserLoginFailed” events.

jq '. | select(.Operation == "UserLoginFailed")' AuditData.json | jq .ClientIP | sort -u

The same can be done with “UserLoggedIn”:

jq '. | select(.Operation == "UserLoggedIn")' AuditData.json | jq .ClientIP | sort -u
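jq gives us a flat list of IPs; once the records are loaded as Python dicts (e.g. with json.loads per record), pivoting the same fields per account makes it easier to spot which users a suspicious IP touched. A quick sketch using the “Operation”, “UserId”, and “ClientIP” fields from the audit schema:

```python
from collections import defaultdict

def ips_by_user(events, operation="UserLoggedIn"):
    """Group the unique ClientIP values per UserId for one operation,
    the per-account view of the jq select/ClientIP pipeline above."""
    seen = defaultdict(set)
    for event in events:
        if event.get("Operation") == operation:
            seen[event.get("UserId")].add(event.get("ClientIP"))
    return {user: sorted(ips) for user, ips in seen.items()}
```

An account that suddenly shows logins from an IP no other account uses, right after a burst of "UserLoginFailed" events, is exactly the pattern this pivot surfaces.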

Let’s see if we can find the phishing simulation created during our lab setup, and the number of user(s) that were targeted.

***to be continued***




Just a blog for cybersecurity analysis, threat intel + engineering.

J. Isⲁⲁc 🥷

Hello world. I am an experienced security analyst, developer, and aspiring engineer here to share my adventures, knowledge, and expertise in the field.
