Mordor Labs 😈 — Part 1: Deploying ATT&CK APT29 Evals Environments via ARM Templates 🚀 to Create Detection Research Opportunities 🌎!

In late 2019, the ATT&CK Evaluations team evaluated 21 endpoint security vendors using an evaluation methodology based on APT29. On April 21st, 2020, they released the results of that evaluation, the emulation plan, all payloads used for Day 1 and Day 2, and a Do-It-Yourself Caldera plugin.

One of the main goals of the Mordor project is to create detection research opportunities for the Infosec community by releasing datasets generated after emulating adversarial techniques. Therefore, I saw this as a great opportunity to learn a little bit more about APT29, build the environment and be able to expedite the creation of datasets that I could share with the community. I hope this helps other security researchers around the 🌎 to spend less time trying to figure out a lab setup and focus more on the analysis of the data 🍻.

This post is part of a three-part series where I share my experience deploying the ATT&CK APT29 evaluation environment via Azure Resource Manager (ARM) templates and collecting free telemetry produced after executing the emulation plans for each scenario.

The other two parts can be found in the following links:

What is Mordor 😈 Labs?

The Mordor Labs project is a repository with cloud templates, configurations and scripts to deploy network environments exclusively to generate datasets for the Mordor project. Mordor Labs is committed to building/replicating the environments used for ATT&CK Evaluations once the environment design is released along with the evaluation results and emulation plans 🍻.

What APT29 Environment?

As I mentioned before, this is the environment that was released along with the evaluation results.

However, not every endpoint was used during the evaluations. After reading the emulation plans for the two scenarios, I decided to draw a similar design and use it as a reference to deploy everything in Azure via ARM Templates.

Azure Resource Manager (ARM) Templates?

To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates. The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it.
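As a quick illustration, here is a minimal, hypothetical template skeleton (the resource name and address prefix below are made up, not the ones used in Mordor Labs). Since a template is just JSON, you can sanity-check it locally before deploying:

```shell
# Minimal, hypothetical ARM template skeleton (made-up name/prefix).
# Because templates are plain JSON, they can be validated locally first.
cat > azuredeploy-sample.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "resources": [
    {
      "type": "Microsoft.Network/virtualNetworks",
      "apiVersion": "2019-11-01",
      "name": "vnet-corp",
      "location": "eastus",
      "properties": {
        "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] }
      }
    }
  ]
}
EOF
python3 -m json.tool azuredeploy-sample.json > /dev/null && echo "template JSON is well-formed"
```

If the file is not valid JSON, `python3 -m json.tool` exits non-zero and the deployment would fail anyway, so this is a cheap pre-flight check.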

Azure Resource Manager is the deployment and management service for Azure, and below you can see some of the ways you can interact with it.

A few things that I like about ARM templates are the orchestration capabilities to deploy resources in parallel, which makes deployments faster than serial ones, and the ability to track deployments via the Azure portal.

Additional Reading

Designing the Network (ARM Templates)

Virtual Networks

"name": "string",
"type": "Microsoft.Network/virtualNetworks",
"apiVersion": "2019-11-01",
"location": "string",
"tags": {},
"properties": {
  "addressSpace": {
    "addressPrefixes": [ "string" ]
  },
  "subnets": [
    {
      "id": "string",
      "properties": {
        "addressPrefix": "string"
      }
    }
  ]
}
  • I needed to deploy two virtual networks to separate the corporate and adversary environments, as well as a few subnets to group some of the endpoints in the corporate environment and host VPN connections.

Virtual Network Peering

"name": "string",
"type": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings",
"apiVersion": "2018-11-01",
"properties": {
  "allowVirtualNetworkAccess": "boolean",
  "allowForwardedTraffic": "boolean",
  "allowGatewayTransit": "boolean",
  "useRemoteGateways": "boolean",
  "remoteVirtualNetwork": {
    "id": "string"
  },
  "remoteAddressSpace": {
    "addressPrefixes": [ "string" ]
  },
  "peeringState": "string"
}
  • Virtual network peering enables you to seamlessly connect virtual networks. I needed to connect the corporate environment with the adversary environment.

Virtual Network Gateway (Point-to-Site VPN)

"name": "string",
"type": "Microsoft.Network/virtualNetworkGateways",
"apiVersion": "2019-04-01",
"location": "string",
"tags": {},
"properties": {
  "ipConfigurations": [
    {
      "id": "string",
      "properties": {
        "privateIPAllocationMethod": "string",
        "subnet": { "id": "string" },
        "publicIPAddress": { "id": "string" }
      },
      "name": "string"
    }
  ],
  "gatewayType": "string",
  "vpnType": "string",
  "enableBgp": "boolean",
  "activeActive": "boolean",
  "gatewayDefaultSite": { "id": "string" },
  "sku": {
    "name": "string",
    "tier": "string",
    "capacity": "integer"
  },
  "vpnClientConfiguration": {
    "vpnClientAddressPool": {
      "addressPrefixes": [ "string" ]
    },
    "vpnClientRootCertificates": [
      {
        "id": "string",
        "properties": { "publicCertData": "string" },
        "name": "string"
      }
    ],
    "vpnClientProtocols": [ "string" ]
  }
}
  • A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet.
  • The virtual network gateway requires a specific subnet named GatewaySubnet. I deployed that inside the virtual network used for the corporate environment and do not place anything else in it.
  • Next, I simply connect the virtual network gateway to that subnet.
  • Finally, I set the vpnClientProtocols to OpenVPN. Remember, I deploy a Point-To-Site (P2S) VPN gateway along with the environment. A Point-to-Site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client computer.

That’s it from a network perspective! Now let me show you how to create a self-signed root certificate to set up the P2S VPN gateway.

Generate Root Certificate

Certificates are used by Azure to authenticate clients connecting to a VNet over a Point-to-Site VPN connection.

Once you obtain a root certificate, you use the name and public key information to set up the gateway network. The root certificate is then considered ‘trusted’ by Azure for connection over P2S to the virtual network. You also generate client certificates from the trusted root certificate, and then install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the VNet.

Create a Self-Signed Root Certificate (NOT for production)

The Azure docs have great step-by-step guides for this:

I used part of the Linux one and ran the following commands:

Install strongSwan (macOS)

brew install strongswan

Create a root CA Key

ipsec pki --gen --outform pem > caKey.pem

Create root CA certificate signed with the CA’s root key

ipsec pki --self --in caKey.pem --dn "CN=<YOUR-OWN-NAME>" --ca --outform pem > caCert.pem

You can verify the new root certificate with the following command:

openssl x509 -in caCert.pem -text -noout

Copy the root CA cert with the following command and save it. It will be used while deploying the APT29 environment 😉

openssl x509 -in caCert.pem -outform der | base64 | pbcopy

We are going to use the root certificate during the deployment of the environment to set up the P2S VPN gateway.
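If you do not have strongSwan handy, a similar lab-only root certificate can be sketched with plain openssl (the CN value below is a placeholder; like the original, this is NOT for production):

```shell
# Lab-only sketch: self-signed root cert with plain openssl instead of strongSwan.
# "MordorRootCA" is a placeholder CN.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout caKey.pem -out caCert.pem -subj "/CN=MordorRootCA"

# Azure expects the public cert data as a single-line base64 DER blob
openssl x509 -in caCert.pem -outform der | base64 | tr -d '\n' > caCert.b64
openssl x509 -in caCert.pem -noout -subject
```

The single-line blob in caCert.b64 should be the same kind of value the deployment expects for the root certificate data.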

Generate Client Certificate

Each client computer that you connect to a VNet with a Point-to-Site connection must have a client certificate installed. You generate it from the root certificate and install it on each client computer.

Create a Self-Signed Client Certificate (NOT for production)

Provide a username

export USERNAME="xxxxxxxx"

Create the client key

ipsec pki --gen --outform pem > "${USERNAME}Key.pem"

Generate the client certificate and sign it with the root CA key and the self-signed CA certificate that we created earlier.

ipsec pki --pub --in "${USERNAME}Key.pem" | ipsec pki --issue --cacert caCert.pem --cakey caKey.pem --dn "CN=${USERNAME}" --san "${USERNAME}" --flag clientAuth --outform pem > "${USERNAME}Cert.pem"

Later, we are going to use the client certificate and the client key to set up our OpenVPN client and connect to the environment. For now, I would recommend downloading an OpenVPN client application.
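Before wiring those files into an OpenVPN profile, it is worth confirming that the client key and certificate actually belong together. A small sketch (it generates a throwaway self-signed pair just for the demo; in practice, point the two modulus commands at your ${USERNAME}Cert.pem and ${USERNAME}Key.pem):

```shell
# Throwaway demo pair; in practice use ${USERNAME}Cert.pem / ${USERNAME}Key.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demoKey.pem -out demoCert.pem -subj "/CN=demo"

# An RSA cert and key match if their moduli are identical
CERT_MOD=$(openssl x509 -noout -modulus -in demoCert.pem)
KEY_MOD=$(openssl rsa -noout -modulus -in demoKey.pem)
[ "$CERT_MOD" = "$KEY_MOD" ] && echo "key matches cert"
```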

We should be ready to deploy the environment from a network perspective. However, there are a few settings that the environment calls for to fit the emulation plans. Also, we need to deploy a logging pipeline to be able to capture the telemetry generated. The main goal is to export the data and share it with others in the community.

Windows Endpoints Configurations

Based on the APT29 environment page, the following modifications were made to the standard Azure images used during the evals:

  • WinRM is enabled for all Windows hosts
  • PowerShell execution policy is set to “Bypass”
  • Registry modified to allow storage of wdigest credentials
  • Registry modified to disable Windows Defender
  • Group Policy modified to disable Windows Defender
  • Configured firewall to allow SMB
  • Set UAC to never notify
  • RDP enabled for all Windows hosts

I added all those requirements to one of the scripts I run on every endpoint via an Azure virtual machine extension named CustomScriptExtension while I deploy the environment:

CustomScriptExtension (ARM Template)

  • Resource type: Microsoft.Compute/virtualMachines/extensions
  • This extension downloads and executes scripts on Azure virtual machines. This extension is useful for post deployment configuration, software installation, or any other configuration or management tasks. Scripts can be downloaded from Azure storage or GitHub, or provided to the Azure portal at extension run time.
"name": "string",
"type": "Microsoft.Compute/virtualMachines/extensions",
"apiVersion": "2019-03-01",
"location": "string",
"tags": {},
"properties": {
  "publisher": "Microsoft.Compute",
  "type": "CustomScriptExtension",
  "typeHandlerVersion": "string",
  "autoUpgradeMinorVersion": "boolean",
  "settings": {},
  "protectedSettings": {}
}
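For reference, the interesting part is the settings object. A hypothetical sketch of what it might contain (the script URL and file name below are placeholders, not the project's actual values):

```
"settings": {
  "fileUris": [ "<URL-TO-SETUP-SCRIPT>/Set-Initial-Settings.ps1" ],
  "commandToExecute": "powershell -ExecutionPolicy Bypass -File Set-Initial-Settings.ps1"
}
```

fileUris tells the extension what to download, and commandToExecute is what runs on the VM after download.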

Setting Patient Zero

There are several steps provided for each scenario to set up the victim, and I wanted to automate as much as I could. I had to read these two docs several times to make sure I had everything to expedite the initial access steps.

I then created a script and ran it also with the CustomScriptExtension:

Deploying Payloads (Victim & Adversary Box)

I also packed the payloads in a zip file and downloaded them automatically on each patient zero and the adversary’s box depending on the scenario.

Windows Security Auditing

One of the main goals for all this research and work is to be able to generate data and create detection research opportunities for other security researchers around the 🌎. Therefore, I wanted to configure my Windows boxes with a few configs to enable free telemetry.

Security Audit Policies

Sysmon v.11 Config

SACLs (Default ones — POC)

Windows Event Forwarding Subscriptions

What about Network Logs? .. just wait! 😉

How do I Collect Security Events?

Now that you understand a little bit more how I set up the environment to execute the emulation plans and how endpoint telemetry is enabled, it is time to show you how I collect security events at the host and network layer 🍻

Collect Endpoint Security Events

I like to use existing tools that are proven to work at scale, and this is not the exception. TL;DR — I set up Windows Event Forwarding in my Windows environment, send data to a Windows event collector, ship the data with NXLog CE over to a Logstash server, and then send it to an Azure Event Hub. Finally, I use Kafkacat to connect to the Azure event hub, read events as they arrive at the hub, and write them to a JSON file in real time.

In more detail, the following is happening in the image above:

  • First, I set up the Windows event collector (WEC) to receive security events
  • Next, I set up the Windows endpoints to send events to the WEC
  • I then install NXLog on the WEC to ship events over to Logstash via the following NXLog config.
  • I use a Linux VM with Logstash installed as a Docker container to process events sent from NXLog. Logstash is an open source data collection engine with real-time pipelining capabilities.
  • Instead of a Kafka server, I use Azure Event Hubs with the Kafka feature enabled to receive and store events from Logstash. Azure Event Hubs is a serverless big data streaming platform and event ingestion service.
  • Finally, I use Kafkacat in consumer mode to connect to the Azure event hub and read the events available in the hub in real-time. I use this kafkacat config template to connect to the Azure event hub. Kafkacat is a generic non-JVM producer and consumer for Apache Kafka.
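For context, a kafkacat (librdkafka) configuration for an Azure Event Hub with the Kafka endpoint enabled typically looks roughly like the fragment below (namespace and key values are placeholders; Event Hubs uses the literal username $ConnectionString with the connection string as the password):

```
metadata.broker.list=<NAMESPACE>.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
sasl.username=$ConnectionString
sasl.password=Endpoint=sb://<NAMESPACE>.servicebus.windows.net/;SharedAccessKeyName=<KEY-NAME>;SharedAccessKey=<KEY>
```

You would then point kafkacat at it with something like `kafkacat -F <config-file> -C -t <eventhub-name>`.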

WEC -> Logstash -> Azure Event Hub

This is the Logstash config file I use to receive events from the WEC, filter them and send them over to an Azure Event Hub. The output plugin I use is the Kafka one.

input {
  tcp {
    port => 3515
  }
}
filter {
  json {
    source => "message"
    tag_on_failure => [ "_parsefailure", "parsefailure-critical", "parsefailure-json_codec" ]
    remove_field => [ "message" ]
    add_tag => [ "mordorDataset" ]
  }
}
output {
  kafka {
    codec => "json"
    bootstrap_servers => "${BOOTSTRAP_SERVERS}"
    sasl_mechanism => "PLAIN"
    security_protocol => "SASL_SSL"
    sasl_jaas_config => "${SASL_JAAS_CONFIG}"
    topic_id => "${EVENTHUB_NAME}"
    ssl_endpoint_identification_algorithm => ""
  }
}
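To make the filter section concrete, here is a tiny local simulation (not Logstash itself, just an approximation of what the json filter does with an event whose message field contains nested JSON):

```shell
# Approximate the json filter: parse "message" into top-level fields,
# drop the original "message", and tag the event. Sample values are made up.
printf '%s' '{"message":"{\"EventID\":4624,\"Hostname\":\"victim01\"}"}' | python3 -c '
import json, sys
event = json.load(sys.stdin)
parsed = json.loads(event.pop("message"))   # source => "message", remove_field => "message"
event.update(parsed)
event["tags"] = ["mordorDataset"]           # add_tag => "mordorDataset"
print(json.dumps(event, sort_keys=True))'
```

The printed event has EventID and Hostname promoted to the top level and carries the mordorDataset tag, which is roughly the shape in which events reach the Event Hub.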

Collect Network Events 💸

Now, here is where it gets even more interesting 😆 Why? I did not know one could start a packet capture in an Azure VM with one command, filter it, and send it over to an Azure storage account once it is stopped. A serverless approach that I like 🍻.

Network Watcher Agent extension for Windows

"type": "extensions",
"name": "Microsoft.Azure.NetworkWatcher",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
  "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
  "publisher": "Microsoft.Azure.NetworkWatcher",
  "type": "NetworkWatcherAgentWindows",
  "typeHandlerVersion": "1.4",
  "autoUpgradeMinorVersion": true
}

I created the following script to start a packet capture session:

The main part of that script is the Azure CLI command that one could use to start a packet capture session on a specific virtual machine in a specific resource group with a specific filter (to filter out noise).

az network watcher packet-capture create --resource-group ${RESOURCE_GROUP} --vm ${COMPUTER} --name "${COMPUTER}_PCAP" --storage-account ${STORAGE_ACCOUNT} --filters "[ ... ]"

Filtering the noise is great for this type of lab environment, because a lot of the extensions installed on the endpoints generate a lot of network connections with information about my subscriptions, resource groups, etc. 😱

Once you start a session, you can see it in your Azure Portal > Network Watcher > Packet Capture

I also created the following script to stop and potentially delete the packet capture sessions:

az network watcher packet-capture stop --name "${COMPUTER}_PCAP" --location ${LOCATION}
az network watcher packet-capture delete --name "${COMPUTER}_PCAP" --location ${LOCATION}

Once you stop a packet capture session, it is automatically saved in the storage account that you pass as a parameter to the Azure CLI commands to start the packet capture session in the first place.

You can then simply download it and drag and drop it on Wireshark (maybe?) 😉

Apparently one could also use this packet capture feature for proactive network monitoring with alerts and Azure Functions 🙀. That’s for another blog post 🙏. Oh and you can do all that via the Azure Portal too. I just like to use Azure CLI to programmatically start and stop packet capture sessions.

Deploying the Environment

You might be wondering how you can deploy this yourself. You will need:

  • Self-signed root certificate (Base64 blob)
  • Name of root certificate (CN=<this part>)
  • Azure CLI set up (Optional — Azure CLI Option)

Azure CLI (Favorite Way)

Clone the project and change directory path to mordor-labs/environments/attack-evals/apt29

git clone
cd mordor-labs/environments/attack-evals/apt29

Run the following command and just pick which scenario you would like to deploy (e.g., Day1). The core environment is the same; only a few things change per scenario, especially on the victim’s box (defined automatically by the script) and the C2 server (Pupy C2 & Metasploit vs PoshC2).

az group deployment create --name <DEPLOYMENT-NAME> --resource-group <RESOURCE-GROUP-NAME> --template-file azuredeploy.json --parameters adminUsername=<ADMIN-USERNAME> adminPassword='<ADMIN-PASSWORD>' pickScenario="<Day1 or Day2>" clientRootCertName=<ROOT-CERT-NAME> clientRootCertData=<ROOT-CERT-DATA>
  • adminUsername: Local admin user on every endpoint (Windows & Linux)
  • adminPassword: Local admin password on every endpoint (Windows & Linux).
  • pickScenario: Day1 or Day2
  • clientRootCertName: Root certificate name (CN=<THIS>)
  • clientRootCertData: The encoded string from the root certificate. If you are working on a Mac, you can run the following:
    openssl x509 -in caCert.pem -outform der | base64 | pbcopy

Those are the only parameters you need to deploy the APT29 environment for each scenario. Easy, right?

How long does it take?

Around 30–45 minutes.

How do I check the deployment progress?

Great question! Go to Azure Portal > Resource Group Name > Deployments > Deployment Name > Overview

What do I do while I wait? 😆 👨‍🍳

I recently started documenting all my favorite food recipes and other ones from the amazing community. I just use my free time to do a non-tech activity! 😉

Connect to the Environment

Once the environment deploys successfully, you will have to download a VPN config file from Azure, update it with your client certificate and key, import it to your OpenVPN client, and connect!

Download VPN Client config from Azure

Go to your Azure Portal > Resource Group Name > Virtual Network Gateway > Point-to-site-configuration and click on Download VPN Client.

Update VPN Client Config

Once it downloads, you will have a compressed file with a few configs in it. Edit the OpenVPN\vpnconfig.ovpn file and insert the client certificate and private key into it. Open it with your favorite editor. I use Visual Studio Code.

I first comment out lines 20–21 because Tunnelblick (Mac) handles logging for you.

#log openvpn.log
#verb 3

Then, I modify lines 71–76 (these are official steps, BTW). You need to copy the contents of your self-signed client certificate and paste it between <cert></cert> as shown below:

# P2S client certificate
# Please fill this field with a PEM formatted client certificate
# Alternatively, configure 'cert PATH_TO_CLIENT_CERT' to use input from a PEM certificate file.
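After pasting, that section of vpnconfig.ovpn ends up looking roughly like this (certificate body elided; the file name placeholder assumes the strongSwan steps above):

```
<cert>
-----BEGIN CERTIFICATE-----
...contents of your client certificate (${USERNAME}Cert.pem)...
-----END CERTIFICATE-----
</cert>
```

The <key></key> block a few lines further down is filled in the same way with the contents of ${USERNAME}Key.pem.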

Next, you have to do something similar with your client private key. Open your client private key file, copy its contents, and paste it between <key></key> as shown below:

# P2S client certificate private key
# Please fill this field with a PEM formatted private key of the client certificate.
# Alternatively, configure 'key PATH_TO_CLIENT_KEY' to use input from a PEM key file.

That’s it! You are ready to connect to the environment. Open your OpenVPN Client and drag and drop the client VPN config that we just edited or double-click on it depending on what VPN client app you are using. Finally, connect!

Make sure you keep this network design handy while interacting with the environment so you know where to authenticate:

Also, you can find more information about the domain users and passwords in the mordor labs repo > apt29 section

Endpoints cannot be accessed from the Internet directly. Your P2S VPN helps!

Executing Emulation Plan (Day 1)

You can easily go through the emulation plan that I have already released:

I am releasing two other blog posts to show every single step 😉 🍻🔥 🔥 🚒



