Misconceptions: Unrestricted Release of Offensive Security Tools

Andrew Thompson
8 min read · Nov 29, 2019

Uncontrolled distribution of Offensive Security Tools is an unnecessary contribution to real threat actors' criminal and intelligence network operations.

Introduction

Uncontrolled proliferation of offensive capability is facilitating network intrusions by actors of every category and level of sophistication. That proliferation has increased the overall threat within the security environment: there are more actors with more capability than ever before. There are many accepted definitions of a threat; most involve a combination of intent, capability, and, in many cases, opportunity. That trifecta is useful for framing the related topics of vulnerabilities, exploits, and Offensive Security Tools. Vulnerabilities represent opportunities. Exploits represent capabilities. Offensive Security Tools also represent capabilities. Vulnerability disclosure and exploit publication are not the primary focus of this writing, but mentioning them up front helps frame the discussion.

Public Disclosure of Vulnerabilities

Well-planned, coordinated, and executed disclosure of vulnerabilities can produce desirable outcomes. Disclosing a vulnerability reveals an existing opportunity, and it is important to note that the opportunity exists independent of any action someone takes to discover or exploit it. Public disclosure typically reveals information an actor could use to produce an exploit (capability), though the actor still needs the skills and resources to develop or otherwise acquire one. Revealing an opportunity almost always increases risk for at least some duration, because the likelihood that a malicious party can act on it goes up. That increased risk is counteracted by the affected parties nullifying the opportunity through patching and other mitigations.

Public Release of Exploit Code

An escalation from publicly disclosing a vulnerability (opportunity) is publishing a working exploit (capability). When this raw capability is published, the bar for who can leverage knowledge of the vulnerability drops in proportion to how difficult it would have been to independently identify the vulnerability and develop the exploit. "If I can do it, adversaries could do it" is a common justification for publishing exploit code, but "could" is not the same as existing capability. The time and resources spent identifying vulnerabilities and developing working exploits represent an opportunity cost; if someone else does that work for you, you are free to focus on other aspects of your operations. The escalation in risk is easy to articulate: vulnerability disclosure increases risk by making exploitable opportunities known; publishing exploit code makes the capability accessible to everyone; the third component is intent, and it is safe to assume that when something is accessible to all, those with malicious intent will have access as well. Opportunities, at least, are negated with a degree of finality through patching.

Offensive Security Tool Proliferation

Offensive Security Tools are aggregations of disparate functionality, combined and streamlined to facilitate authorized intrusions or to circumvent existing security measures without leveraging a software bug. Each function may exist independently for legitimate purposes. Because Offensive Security Tools do not depend on a software bug (vulnerability), you cannot patch software and negate a tool's value. The phrase "disparate functionality that may exist independently for legitimate purposes" matters, because a common argument is that the "core problem should be fixed." When the core problem being leveraged is functionality designed to facilitate business processes, fixing it thoroughly enough may eventually thwart all usefulness of a system to a threat actor, and to your users.

Some industry practitioners prefer to describe the "defensive" use of these tools. However, there are tenuous differences between "defense" and "security." I distill the definition of security to "the ability to execute your will unimpeded." For the purposes of this writing, security operations involve measures taken to minimize or eliminate the impact of adversary operations. Defensive operations involve a range of activities that include security but also include actions taken to the detriment of the adversary, allowing for counterattack: you get to hit back. If you could legally use Offensive Security Tools to defend against intruders, we could consider their defensive value, but our professional base has not yet articulated a convincing argument in favor of enabling self-defense and defense of others in this environment. Debating the merits of non-military defensive cyberspace operations is not the intent of this writing. The distinction matters because the claim is that the tools are used for both "defense" and "offense," which isn't true. They are used offensively, in an authorized way, to drive a security outcome.

An example of a dual-use tool that is NOT an Offensive Security Tool is Plink. Plink is used by a number of threat actors, but it was not designed for the purpose of conducting an intrusion, authorized or not. Higher-security environments require threat actors to operate using resources organic to that environment, and those tools certainly are not considered Offensive Security Tools. Another example of a dual-use tool that is NOT an Offensive Security Tool is PsExec. PsExec is not an aggregation of disparate functionality built to facilitate authorized intrusions or to circumvent existing security measures.

Distinguishing between remote administration tools and Offensive Security Tools is nuanced and warrants industry discussion. Distinguishing characteristics could include evasive functionality, functionality to conceal, anti-forensics functionality, and functionality to abuse legitimate communication protocols, such as tunneling C2 via DNS. Interesting keywords and their derivatives (not comprehensive): evade, covert, clandestine, conceal, anti-forensic, abuse, post-exploitation, obfuscate.
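As a minimal sketch of why these characteristics matter to defenders, consider the DNS example above. The heuristic, thresholds, function names, and sample query names below are illustrative assumptions rather than any particular vendor's detection logic; the idea is simply that tunneled C2 tends to stuff long, high-entropy blobs into query labels where legitimate lookups do not.

```python
import math
from collections import Counter


def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of a string."""
    if not s:
        return 0.0
    counts = Counter(s)
    total = len(s)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())


def looks_like_dns_tunneling(qname: str,
                             min_label_len: int = 40,
                             min_entropy: float = 3.5) -> bool:
    """Flag query names whose leftmost label is unusually long and
    high-entropy, a common (though not conclusive) sign of data being
    smuggled inside DNS queries. Thresholds are illustrative and would
    need tuning against real traffic."""
    first_label = qname.split(".")[0]
    return (len(first_label) >= min_label_len
            and shannon_entropy(first_label) >= min_entropy)


# Hypothetical query names: two ordinary lookups and one encoded blob.
queries = [
    "www.example.com",
    "mail.example.org",
    "q3vnd8f0a27kjx5mplc94tyw6zgh1bseu0r5i2o7d9m3k1x8.c2.example.net",
]
for q in queries:
    print(q, "->", "suspicious" if looks_like_dns_tunneling(q) else "ok")
```

A remote administration tool has no reason to emit names like the third one; that kind of protocol abuse is exactly the distinguishing characteristic the keyword list above is getting at.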

An example of Offensive Security Tools would be any post-exploitation framework. These tools in and of themselves are not the problem; their unrestricted availability is. Upon publishing these tools to the unrestricted internet, adversaries are handed crowdsourced raw capability that, in totality, is either enough to run their network operations program or at minimum enough to supplement it. It lowers the bar for adversaries, because the capability is now uniformly accessible to all actors. Publishing Offensive Security Tools to the unrestricted internet affords threat actors free and deniable capabilities that can be used in a semi-disposable manner without incurring cost. Notwithstanding the fact that some of the released tools are on par with historically produced advanced frameworks traditionally thought to be exclusive to sophisticated and well-resourced actors, the ability to leverage a tool they have no investment in for high-risk initial access operations is a huge advantage even to well-resourced threat actors.

If Offensive Security Tools were not available, threat actors would be forced to either invest in organic development or acquire tooling from vendors. On the criminal side especially, market demand would go up, but those tools would then become proprietary and definitively linked to criminal activity. Everyone involved in that market would then be subject to targeting by intelligence and law enforcement entities. That's good. That means that, after investigations, someone may get metal bracelets placed on their wrists and be brought to justice. You cannot currently target Offensive Security Tool developers, because they are not engaged in criminal activity. In anticipation of someone misconstruing the statement, the point is this: actions can be taken against malware authors, whereas the same cannot be said for legitimate Offensive Security Tool developers. That is more missed opportunity to exact cost.

Final Thoughts and Way Forward

Existing Offensive Security Tools, just like anything else put on the unrestricted internet, are not going away. The ones that have been published or leaked are here to stay and will continue to be used until they fall below an arbitrary usability threshold. That damage is done and irreparable. Security operators will have to counter threats leveraging industry-provided tooling until the actors abandon it (unlikely). The longevity of these capabilities is due in part to how trivial it is to repackage, pack, manipulate, or otherwise obfuscate the code until the value of the capability is renewed. Unlike with vulnerabilities and exploits, the usefulness of Offensive Security Tools can be extended for years even after countermeasures are implemented. Longevity is also extended by industry professionals maintaining the projects to meet Offensive Security needs, which means the capabilities evolve to counter security measures implemented by teams trying to thwart real-world threat actors. The argument used to justify these updates to publicly available offensive capabilities is itself flawed.
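To make the repackaging point concrete, here is a minimal, hypothetical sketch (the placeholder payload bytes and the XOR scheme are illustrative assumptions, not drawn from any specific tool): re-encoding the same bytes with a throwaway key yields a new artifact and a new hash, so any countermeasure keyed on the original artifact stops matching even though the underlying capability is unchanged once a loader reverses the encoding.

```python
import hashlib
import os


def xor_encode(data: bytes, key: bytes) -> bytes:
    """XOR every byte of data against a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


# Placeholder standing in for a tool's payload; in reality this would be
# the compiled binary or script being repackaged.
original = b"illustrative offensive-tool payload bytes"
key = os.urandom(4)  # a fresh key each run means a fresh artifact each run

repacked = xor_encode(original, key)

print("original sha256:", hashlib.sha256(original).hexdigest())
print("repacked sha256:", hashlib.sha256(repacked).hexdigest())
# Hash- and byte-signature-based countermeasures keyed on the original
# artifact no longer match the repacked one.
```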

The idea is that a real threat actor could circumvent existing security measures, so instead of demonstrating that privately, Offensive Security Tool developers build the capability into publicly available releases, shortening or eliminating the real adversary's need to innovate and ultimately missing another opportunity to exact cost. What's worse, Offensive Security Tool developers often have access to privileged information and the opportunity to test those tools against industry-leading security solutions and teams, something a real threat actor could only replicate by investing heavily in technology acquisition, insider threats, and/or other intelligence collection. That is more missed opportunity to exact cost.

Offensive Security professionals often lack insight into real threat activity beyond what is made available in public threat intelligence reporting. Public Cyber Threat Intelligence (CTI) releases may have given Offensive Security professionals the perception that the showcased actors are representative of the most common threats. Reporting depends on the particular vendor, but the most interesting content to discuss is usually novel, sophisticated activity. A CTI vendor could release an exposé about an actor that represents the top 1% and has a minimal target profile affecting a fraction of total organizations. An Offensive Security professional may use that report to justify producing raw capability to emulate that actor for paying customers. That is good. Publishing that same capability to the unrestricted internet is counter to security strategy. It increases the risk for everyone merely because a 1% actor possessed a capability.

Some Offensive Security professionals reject these allegations entirely, while others have suggested that even if the allegations are true, the net gain in security for organizations is worth the damage being done. Offensive Security is a core and essential sub-function of a security program. It must be robust and capable. However, there is no viable justification for providing raw offensive capability to anyone who wants to obtain it pseudo-anonymously. Some have alleged that security for less-resourced organizations improves because the tools are available for free. That claim invites heavy skepticism; in any case, future solutions do not need to be so cost-prohibitive that these organizations cannot participate. CTI is imperfect in its execution of information sharing, but the model still works better than unrestricted access to raw Offensive Security Tools.

A proposed solution from outside of the Offensive Security community is not likely to be adopted. Offensive Security needs strong leaders who can explain why things are the way they are today without using those reasons as excuses to keep damaging security efforts out of commitment to a strategy that is not working. Many have taken this criticism as blame for intrusions; threat actors are to blame for intrusions, and no one else. There are very logical reasons for how the security industry got here, but those reasons should no longer be used as an excuse to unwittingly or irresponsibly arm threat actors. We can do better; I have faith in the brilliant minds in this industry. The first step is accepting that there is a problem to be addressed. There is no finish line in security.
