Thoughts on the Active Cyber Defense Certainty Act 2.0

On May 25, 2017, Representative Tom Graves released the second draft of proposed amendments to 18 U.S.C. 1030 (known as the Computer Fraud and Abuse Act). Representative Graves’ bill is known as the Active Cyber Defense Certainty Act (or ACDC Act). There is no universally accepted umbrella term for this category of activity, which is variously called “Active Defense,” “Active Cyber Defense,” “hacking back,” “hackback,” and “strike back.” You will find the word “active” applied almost universally in these discussions, though it frequently establishes a simple (and false) dichotomy of “passive defense” vs. “active defense” and leads to fallacious “straw man” arguments. I prefer the term “Active Response Continuum” to explicitly avoid setting up such binary choices. [Dittrich and Himma(2005)]

Without technical knowledge and a clear contextual understanding of the criminal actions that potentially trigger a legal defensive response, two paradoxes emerge. First, the “attributional technology” cited in the draft ACDC Act may not achieve its desired goals. Second, some actions disallowed by the ACDC Act include previously witnessed “strike back” actions that have motivated calls for the kind of amendments embodied in the ACDC Act. [Robinson(2017)]

Why the ACDC Act?

The motivation for the initial draft from Representative Graves was, “about empowering individuals to defend themselves online, just as they have the legal authority to do during a physical assault,” [Hawkins(2017)] and “to fight back basically and defend themselves during a cyber attack.” [Kuchler(2017)] News reports about proposals like this typically invoke the “attacker” / “victim” (self-defense with force) frame, describing the outcome of an amendment like the ACDC Act as, “[giving] cyberattack victims the go-ahead to retaliate against their attackers.” [Robinson(2017)]

It is difficult to discuss this topic without using terms like “attack” and “attacker,” so I will reluctantly use those terms in this analysis, but I think it’s important to note that this terminology accommodates only a rigid paradigm of a physical attack against a victim who (as seen in the quotes above) retaliates in kind with physical violence, to the exclusion of other, more common real-world scenarios. This self-defense with force frame invokes sympathy, appealing to emotion, in order to garner public support. Those invoking this frame do not follow through with clear definitions of what constitutes a “use of force” in cyberspace, leaving critical questions unanswered. For example, how does one determine a proportionate response? How does one target the application of force to properly discriminate between the “attacker” and innocent third parties? In short, what are the legal requirements for justifying self-defense with force?

The self-defense with force frame raises another set of issues, however. There is a distinct difference between using force to protect life (your life and/or the lives of others) and using force to protect property. This distinction exposes some fundamental flaws in reasoning about the subject, since intellectual property (or IP) is just that: property, not life. The challenge is to look at criminal acts and responses without emotion, so that rage does not drive legislation, potentially resulting in more harm than good. I have a forthcoming book on this subject.

What does an “attack” look like?

A malicious program known as WannaCry made international news when it began spreading on May 12, 2017. By the end of that weekend it had compromised tens of thousands of systems in over 100 countries, including computers at the National Health Service (NHS) in the United Kingdom, where critical-care delivery facilities had to close, and, as seen in the tweet below, the scheduling system of the Frankfurt rail (S-Bahn) service in Germany.

Tweeted photo of Frankfurt rail system scheduling system infected with WannaCry

WannaCry exploited an unpatched vulnerability in Microsoft Windows systems to gain entry to the operating system, install itself, and encrypt the contents of selected files on the host, rendering them unavailable. In the case of the NHS and S-Bahn computers, this also created a denial of service by causing the system itself to become unavailable. After encrypting the files, WannaCry displays the message seen in the tweet above, extorting money from the owner in exchange for the key to decrypt the files.

Representative Graves chose WannaCry as an example of an “attack” in statements to reporters, saying, “I do believe it would have had a positive impact potentially preventing the spread to individuals throughout the US.” [Kuchler(2017)] The problem is that WannaCry would not have been prevented or stopped had ACDC 2.0 been law, and there are other criminal acts that might be considered “attacks” but are not addressed in the 2.0 draft. I will explain why, and how to improve further drafts of this Act.

What does ACDC allow?

The second draft of the ACDC Act focuses on a vaguely defined category of “attributional technology” covering what groups like the Task Force from the Center for Cyber and Homeland Security (CCHS) call “beacons” and “dye-packs.” [Unspecified(2016)] The ACDC Act itself uses the verb “beacons” in section (k)(1).

The CCHS Task Force defines “beacon” to be:

Pieces of software or links that have been hidden in files and, when removed from a system without authorization, can establish a connection with and send information to a defender with details on the structure and location of the foreign computer systems it traverses.

They define “dye-pack” as:

In the cybersecurity context, the terms beacon and dye pack are often used interchangeably. However, with the term’s physical namesake being the dye packs used to identify bank robbers, the cybersecurity tool sometimes takes on a more aggressive connotation. Where, in bank robberies, dye packs explode and contaminate the stolen money and their environment with a recognizable dye, cyber dye packs are often thought to not only be able to collect information on a hacker’s computer (similar to a beacon) but also to be able to have a destructive impact on their surrounding environment.

Let’s leave aside for a moment the problems with these analogies, such as the fact that files cannot know when they are removed from a computer, or whether the person removing them is “authorized” (or not) to do so, and thus cannot decide whether to trigger an alarm or to cyber-explode indelible cyber-dye, marking the stolen files’ contents or the thief’s cyber-clothes and cyber-skin and identifying the thief to police. Forgive the snark here, but hopefully it helps show that physical analogies to the cyber realm are extremely difficult to get right without a very deep and sophisticated technical operational understanding on the part of both the person using the analogy and the one interpreting it.

In 2011, the government Computer Emergency Response Team (CERT) in the nation of Georgia was responding to a series of intrusions into Georgian systems. As part of their response, they created a document with an enticing name like “Georgian-NATO Agreement” that included trojan horse code that triggered when the file was opened. This trojan horse code then infected the computer of the person who opened it, enabling the computer’s web cam and taking the photograph below of the person sitting at the keyboard. [Osborne(2012)]

Image taken by trojan horse code inserted into document by Georgia CERT [Osborne(2012)]

Not only did the trojan horse code take a picture, but it also copied documents from the computer’s file system in a form of “remote search and seizure” of the contents of the suspect’s system. Is this still just a “beacon”? Would this specific example be considered allowable, or excluded, by the language in section (l)(2)(B)(ii)(IV)? Does this level of intrusiveness fit the term “dye pack” as described by the Center for Cyber and Homeland Security, whose report notes that dye packs “are an inherently riskier measure than beacons from a legal standpoint, given that they install malware on an attacker’s system after data exfiltration”? [Unspecified(2016)] Is “malware” itself no longer a viable term, other than within a post-hoc analysis of criminal intent at the moment of its use?

Due to these problems, I believe the focus needs to be on the effects in the context of a given situation, not on high-level labels categorizing a tool or technique. The new wording in the second draft does seem to move toward this focus, which is encouraging.

Previous comments on ACDC

Two of the more thoughtful analyses of the initial draft ACDC Act were done by Bobby Chesney [Chesney(2017)] and Herb Lin [Lin(2017)].

Chesney identifies the two main problems as being “mistaken attribution and unintended collateral impacts.” Of the two, I believe the latter is the greater potential problem. Several elements that Chesney called for in his analysis were implemented in this new draft:

  • “any legislative intervention [to] include some form of data-gathering and oversight mechanism regarding its use in the field”
  • “a sunset clause in order to force further deliberation informed by actual experience after a year or two.”
  • “include the phrase, ‘… or other U.S. government entities with responsibility for cybersecurity or intelligence functions.’” (The bill now includes, “or other United States Government entities responsible for cybersecurity.”)
  • Exclusions to the allowable actions that Chesney suggested needed “a lot of work” now include (see section (l)(2)(B)(ii)): “(I) destroys or renders inoperable information that does not belong to the victim that is stored on a computer of another; (II) causes physical or financial injury to another person; (III) creates a threat to the public health or safety; or (IV) exceeds the level of activity required to perform reconnaissance on an intermediary computer to allow for attribution of the origin of the persistent cyber intrusion.”

Chesney was satisfied that the original bill limited who could invoke “hacking back” to “an entity that is a victim of a persistent unauthorized intrusion of the individual entity’s computer.” The language of both versions of the ACDC Act (in my reading) also implicitly includes service providers or contractors, who would be covered under the language, “undertaken [at] the direction of, a victim.” If service providers and/or contractors are not considered part of the “victim” class, this needs to be spelled out clearly. And if they are considered to be included in the exempted “victim” category, a host of questions arise: How long should the monitoring be allowed to continue? When must the collected attribution data be delivered to federal authorities? And how should the issue of “agency” be addressed, in terms of Constitutional limits on monitoring vis-a-vis the federal criminal investigation activities that are implicitly involved when a victim of a crime uses “active cyber defense measures” to collect, preserve, and deliver evidence of crimes to federal authorities, who alone have the power to invoke criminal process?

Lin agrees with Chesney on calling out the terms “persistent” and “intrusion.” Lin suggests that DDoS could be considered an “intrusion” of packets into a network. (I would respectfully disagree with Lin on the “packets as intrusion” concept and will propose a cleaner alternative below.)

Lin’s main concern (which I share) is with the lack of a due care requirement, especially to innocent third parties who are in many cases intermediaries or stepping stones in a series of connections between “attacker” and “victim.” This concern is echoed by Yacin Nadji: “The bill currently allows any victim to hack back, but ignores the potential consequences of them doing it wrong. [Personally], I think a more prudent course is to improve the ability for law enforcement officers to do their job well.” [Kuchler(2017)] To begin addressing the due care issue, Lin called for differentiation between “proximate source or ultimate source” in terms of stepping stone connections to address the ambiguity that Chesney identified in his critique. The new draft includes a new subsection (l)(2)(D): “the term ‘intermediary computer’ means a person or entity’s computer that is not under the ownership or control of the attacker but has been used to launch or obscure the origin of the persistent cyber-attack.”

While the changed language addresses Chesney’s concern to some degree, I note that a new phrase “persistent cyber-attack” is added to (rather than replacing) the already problematic term “persistent intrusion”. In my view, this addition increases confusion in two ways. It conflates the term “attack” with “intrusion” while simultaneously not fully clarifying the issue of Denial of Service (which is often called an “attack”, but is not really an “intrusion”). This term may have been added here as an attempt to deal with the DDoS issue raised by Lin.

The original ACDC Act draft added to 18 U.S.C. 1030 only one new section (k), bearing the title, “cyber defense measures not a violation [of CFAA].” The second draft adds a total of three sections, including new sections (l) and (m) with the titles, “exception for the use of attributional technology” and “notification requirement for the use of active cyber defense measures.” Given the new wording of section (k), this draft appears to focus the act on applying only to, “a program, code, or command for attributional purposes that beacons or returns locational or attributional data in response to a cyber intrusion in order to identify the source of an intrusion.” The “[program, code, or command] that beacons” needs to originate from the victim’s computer, be copied out and executed by an “unauthorized user,” and “not result in the destruction of data [or] an impairment of the functionality of the attacker’s computer system, or create a backdoor enabling intrusive access into the attacker’s computer system.” The new section (m) requires that the victim first file a report with the FBI National Cyber Investigative Joint Task Force (NCIJTF) that includes “the type of cyber breach that the person was the victim of, the intended target of the active cyber defense measure, the steps taken to preserve evidence of the attacker’s criminal cyber intrusion, as well as steps taken to prevent damage to intermediary computers not under the ownership of the attacker.”
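
To make the section (m) requirement concrete, here is a minimal sketch of what a notification carrying the enumerated elements might look like as structured data. The bill prescribes no format, and every field name and sample value here is my own hypothetical illustration, not bill text:

```python
# Hypothetical structure for a section (m) pre-notification to the FBI NCIJTF.
# The draft enumerates required elements but prescribes no format; field names
# and sample values are illustrative assumptions only.
from dataclasses import dataclass
from typing import List

@dataclass
class ACDCNotification:
    breach_type: str                          # "type of cyber breach that the person was the victim of"
    intended_target: str                      # "intended target of the active cyber defense measure"
    evidence_preservation_steps: List[str]    # steps taken to preserve evidence of the intrusion
    intermediary_protection_steps: List[str]  # steps taken to prevent damage to intermediary computers

notice = ACDCNotification(
    breach_type="persistent unauthorized intrusion with data exfiltration",
    intended_target="command-and-control host believed to be attacker-controlled",
    evidence_preservation_steps=["retain full packet captures", "image disks of affected hosts"],
    intermediary_protection_steps=["limit activity to attribution reconnaissance", "no destructive payloads"],
)
```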

Even with this new language there remain several problems or deficiencies in the ACDC Act 2.0 draft. Some of these were already raised by Chesney and others, but I can identify even more. I will use Chesney’s style and address them each as mini-sections.

Use of the terms “intrusion,” “attack,” and “breach”

The continued use of the terms “cyber intrusion” and “breach” leaves unresolved a core question raised by Chesney regarding Denial of Service. Using the phrase “persistent cyber-attack” interchangeably with “persistent intrusion” continues to conflate this issue. A richer phrase that encompasses all of the possible ways that someone could be victimized via cyber means would be “compromise of the integrity, availability, and/or confidentiality of information or information systems owned by the victim.” I urge Representative Graves to consider using language like this instead of the terms “intrusion,” “attack,” or “breach.” Here is why.

An “intrusion” or “breach” would typically involve compromise of the integrity of information systems and the confidentiality of information, but not necessarily the availability of information or information systems. A program like WannaCry compromises the integrity of information systems and then the availability of information (i.e., the encrypted files). In the case of the NHS and S-Bahn systems, the denial of service effect compromised the availability of the infected information systems.

When intellectual property is stolen, it does not necessarily mean it is no longer available to the owner, only that it is no longer confidential and copies may now be in the possession of someone who can exploit that access for financial gain (i.e., causing financial harm to the owner of the stolen IP). Theft is not, per se, an “attack.” The term “attack” could apply to compromise of any or all of the three attributes: integrity, confidentiality, and/or availability. It is easy to step into the common logical/rhetorical trap of conflating physical violence against a person with criminal acts such as theft of intellectual property or extortion, but such conflation often leads to flawed conclusions.
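
For illustration, here is a minimal sketch (my own, not language from any draft) of how the scenarios just discussed map onto the integrity/availability/confidentiality framing I am proposing:

```python
# Minimal sketch mapping the case studies discussed above onto the
# integrity/availability/confidentiality framing. The scenario labels and
# mappings follow the prose in this post; nothing here is bill language.
from enum import Flag, auto

class Compromise(Flag):
    INTEGRITY = auto()        # e.g., malware installed, system files modified
    AVAILABILITY = auto()     # e.g., files encrypted, systems knocked offline
    CONFIDENTIALITY = auto()  # e.g., data copied or exfiltrated

SCENARIOS = {
    "typical intrusion or breach": Compromise.INTEGRITY | Compromise.CONFIDENTIALITY,
    "WannaCry ransomware":         Compromise.INTEGRITY | Compromise.AVAILABILITY,
    "intellectual property theft": Compromise.CONFIDENTIALITY,
    "DDoS":                        Compromise.AVAILABILITY,
}

for scenario, attributes in SCENARIOS.items():
    print(f"{scenario}: {attributes}")
```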

To explicitly omit compromise of availability of information and information systems leaves a gaping hole in the intent of “hack back” legislation, especially when DDoS attacks have historically been a core motivator for calls for “hack back” rights. Using another high-profile historical case study, the multiple DDoS attacks against the U.S. banking sector in 2013 resulted in actions that the FBI began investigating as possible “hack back” overreach by the private sector. [Riley and Robertson(2014)] Those attacks were followed by similar attention from members of the U.S. House of Representatives. [Robinson(2017)] Because of this, the issue of “intrusion” vs. “denial of service” is something that a future draft of the ACDC Act needs to address if it is to be adequately comprehensive.

I can completely understand the need to connect with constituents and fellow House members using something current in the global media spotlight. From a sophisticated technical operational perspective, however, I find it hard to accept the claim that the ACDC Act would have had any preventative potential against WannaCry. WannaCry was not a data exfiltration attack where a “beacon” — as described in the proposed bill — could or would have been copied by the WannaCry “attackers” to be executed on their own computers, thereby exposing their location and attributing them. The “active cyber defense measures” described in the current draft do not seem to me to apply to a response to WannaCry. The “victims” would have to use other mechanisms (such as directly scanning intermediary systems that appeared to have been the source of the WannaCry intrusions through the initial vulnerability, and/or that hosted the WannaCry malware that was then dropped onto the system), which does not fit the technical description of the proposed section (l)(2)(B). Nor would there even have been the time necessary to prepare the kind of reporting material for prior notification of the FBI NCIJTF called for by section (m)(2). This kind of highly technical difference is easy for a non-technical person to miss. It highlights one of the core issues faced by anyone wanting to develop law or policy in this area: experts in the law, in policy, in operations, and in malware technology absolutely must all be involved in developing solutions, since these issues are as much legal problems as they are policy, social, and technical problems. I believe such cooperative and collaborative teaming is necessary to develop the right solutions in this incredibly complex problem set.

Use of the term “persistent”

Chesney and Lin both rightly question the use of the qualifying phrase “persistent intrusion,” which remains in this draft. The word “persistent” has an accepted meaning that involves activities continuing over a long period of time in the face of difficulty or opposition (i.e., active defensive responses). The term “Advanced Persistent Threat” includes this word specifically because of the years-long time component in some campaigns.

Chesney is not sure if this term refers to dwell time (the amount of time an intruder retains access to a victim’s system before discovery), or if it applies to any or all of the steps in a model like the popular “cyber kill chain.” [Hutchins et al.(2011)] It is possible that the widespread use of the “cyber kill chain” may actually contribute to this “intrusion” vs. “attack” confusion, because of the model’s inherently limited focus on data exfiltration. Those referencing the “cyber kill chain” model rarely qualify their use of the model by pointing this out. The “cyber kill chain” does not work very well for the prototypical Distributed Denial of Service (DDoS) attack commonly seen using botnets. DDoS attacks are actually multi-phase events, affecting different aspects of integrity, availability, and confidentiality at different times over their lifespan. DDoS attacks are quite complex, can be devastating in terms of financial losses from disruption of revenue-generating systems and services, and are extremely difficult to trace back to the entity commanding and controlling them. This complexity again calls for very sophisticated technical operational understanding to get legislation like this right.

One of the reasons to avoid the “self-defense” argument — you punch me, I have a right to punch you back — is the complication raised by the time component of the right to use force to protect self or others. Within seconds of being punched in the face, a right to punch back may exist, and what would otherwise be considered an assault would be justifiable and excused. One could even argue that a pre-emptive punch was justifiable by someone being threatened by another, if the threatened party had stepped back, gestured with their hands for calm, repeatedly told the aggressor “hey buddy, back off!” and then, after continued (i.e., persistent) threatening behavior, punched first to remove the threat. But if you were punched over 205 days ago (the average time Mandiant has reported it takes to discover an intrusion), you may have lost the right to punch back, and you would now be considered the “attacker” committing an assault. Avoiding retributive or punitive actions is a core ethical and legal requirement in determining what is allowable in terms of “hacking back.” [Dittrich and Himma(2005)]

Again, if we look at WannaCry, the term “persistent” becomes problematic. Chesney’s assumption that the term “seems to be intended to prevent invocation of ACDC Act exemption by someone who has experienced only a fleeting intrusion” would rule out use against a rapidly spreading threat such as WannaCry. [Chesney(2017)] WannaCry was a quick, opportunistic, “in and gone” kind of attack. Once a computer’s hard drive is encrypted, the “attack” is over. Anyone attempting to disrupt it after that point would be acting to disrupt future compromises of other victims, not to “disrupt continued unauthorized activity against the victim’s own network” as spelled out in section (2)(B)(i)(II). The only thing that persists is the data that is now encrypted, unless and until the ransom is paid and the key provided to decrypt it. This is not the same as the kind of intellectual property theft and data exfiltration case studies usually invoked in calls for “active defense” and “hack back” rights. (The quote, “This is the largest theft of intellectual property in human history,” is frequently a premise in arguments leading to the conclusion that amendments such as the ACDC Act are necessary to allow victims to “fight back,” as Representative Graves puts it. [Kuchler(2017)])

No formal standards for the form and content of notice

While section (m) does list some elements required in a notification to federal authorities by someone intending to perform an “active cyber defense measure,” it does not provide any guidance or requirement to follow any established stakeholder analysis methodology, ethical evaluation methodology, technical action planning methodology, or contingency planning methodology beyond simply listing what actions were taken to minimize harm to innocent third parties. This is not entirely surprising, since none of these methodologies exist in a form accepted and usable by the private sector at this time. (Though the Department of Homeland Security has funded an effort known as the Menlo Report Working Group to help establish ethical guidelines for computer security research that could be leveraged. [Menlo Report(2012)]) Such methodologies and standard operating procedures have been called for by nearly every substantial paper or report I have read supporting the kind of amendments to 18 U.S.C. 1030 being drafted here, yet no one to date has put forward details sufficient to establish them. Without at least some standards, anyone wanting to initiate an “active cyber defense measure” is on their own as to how to move forward, and the quality of the result (as well as the amount of due care that Lin calls for) will vary widely and unpredictably. In my opinion, this will only increase the risk of harm rather than reduce the amount of cybercrime, while merely pacifying some people who feel they are doing something to strike back.

Timing of requirement to report cybercrime

This second draft requires that an entity wishing to use “active cyber defense measures” must first report to the FBI NCIJTF, a high-level multi-agency center that includes the military and intelligence community. Such reporting will no doubt increase awareness within the federal government, but their awareness comes at the last moment before someone takes an aggressive action and only provides visibility into that small fraction of victims who invest in striking back. The situations in which someone would take advantage of the ACDC Act’s provisions are part of what are widely agreed to be broad criminal campaigns spanning multiple sectors over multiple years in multiple countries.

I would recommend that Representative Graves include in future drafts a two-step notification requirement that ensures that federal law enforcement receives a report of suspected criminal activities before (or at least at the start of) the planning and preparation phases for the currently defined report. The recipients of these reports could be the FBI and/or U.S. Secret Service via their many field offices, or possibly the NCIJTF (if it has the capacity to handle the volume of reports). The key is that these reports should contribute to contemporaneous nation-wide visibility into significant cybercrime campaigns. Only those victims who have already reported cyber crimes should be able to then initiate more aggressive or intrusive actions such as those covered in this draft. Arguments for amendments such as the ACDC Act (e.g., [Rabkin and Rabkin(2016)]) frequently assert that law enforcement is “unable to protect these organizations from becoming victims” and that the “federal government is incapable or disinclined to deal with the threat.” This is not surprising when a deeper examination of the causes of this lack of capacity reveals that fewer than 1% of victims of cyber crime are willing and able to quantify their losses and report the crimes to law enforcement in the first place! [Office of the National Counterintelligence Executive(2003)] I examined this line of reasoning in a talk at NCSC One in The Hague on May 17, 2017. [Dittrich(2017)] Earlier and better reporting of (paraphrasing from the ACDC Act) “the type of cyber breach that the person was the victim of [and the evidence preserved by the victim] of the [compromise of integrity, availability, and/or confidentiality]” will help federal law enforcement be more effective. After all, in the United States, constitutional authority to investigate and prosecute crimes is vested in federal law enforcement agencies and the executive branch, and authority to punish criminals is vested in the courts, though the private sector may seek civil remedies by going to the courts.

Effectiveness of the proposed allowable actions

As mentioned before, analogies in this area are very difficult to get right. So too is achieving the desired effects without considering the full technical context. I am often part of conversations with brilliant legal or policy minds who echo the calls of frustrated Chief Information Security Officers and other corporate executives to be allowed to employ some of the techniques described in this draft, such as “beacons” and deleting or encrypting one’s stolen data that resides on a third-party system. The idea of a “beacon” or “dye-pack” invokes a simple model in the mind of a non-technical person, who thinks they understand how easy and effective the technique is. They may not be capable of recognizing when an authority (who similarly lacks a sophisticated technical operational perspective) is saying something incorrect, or when someone with technical expertise (who should, or does, know better) is putting their own agenda of selling products or services above the safety of the general public.

Let’s start with beacons. As we saw in the CCHS Task Force’s definition, the job of a beacon is to provide a feedback loop to the owner of stolen property, accurately attributing who stole it. There are many levels of feedback that can be obtained, using many different techniques. Back in 1999, the term “web bug” was used to describe embedding a small or invisible image in HTML content (e.g., a web page or email message) to track viewers of the content. A Microsoft Office document can similarly have embedded content identified by a Uniform Resource Identifier (URI) that will cause the program rendering the stolen document to attempt a connection to access the embedded object so the document can be rendered. Regardless of which method is used, the theory is that this access attempt can be seen on the server pointed to by the URI. That is the theory, but it doesn’t always work in practice. Not all documents support embedded URIs. Some embedded objects in Microsoft Office don’t work, or may show up as a broken tile or caution sign icon when opened in LibreOffice or some other non-Microsoft product. And even if the embedded URI does trigger, there is no guarantee that the IP address seen on the beacon-receiving server is actually the computer of the “attacker.” The source IP address seen is only the “last hop” in a series of network routing operations. Despite what many authors of law review articles would have you believe, you can’t just trivially use a program like the Unix “traceroute” command to fully trace back the connection. Here are just some of the reasons why a simple beacon may fail or be inaccurate (see the sketch after this list):

  • The document can be opened anywhere, from any system, causing the beacon to incorrectly or incompletely identify the source.
  • The document can be opened on a system that is not connected to the Internet, or one firewalled like an anti-honeypot to restrict outbound connections, preventing the beacon from working at all.
  • The document may be opened on a host behind a tunneled network connection, a Network Address Translation (NAT) firewall with thousands of computers behind it, a Virtual Private Network (VPN) terminator, or a Tor exit node.
  • The document can be opened on an innocent third party’s system, causing the beacon to identify the wrong source (while a new “print to PDF” copy is taken elsewhere, now without the beacon being present).
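
To make the mechanism concrete, here is a minimal sketch of a classic “web bug” style beacon receiver, using only Python’s standard library. The URI and host names are hypothetical illustrations, not anything from the bill or the CCHS report:

```python
# Minimal sketch of a "web bug" beacon receiver. A hypothetical reference such
# as <img src="http://beacon.example.com/doc42.gif" width="1" height="1">
# embedded in a document causes the rendering program to fetch this URI,
# and that fetch is what the defender observes.
from http.server import BaseHTTPRequestHandler, HTTPServer

# The classic payload: a 1x1 transparent GIF.
PIXEL_GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
             b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
             b"\x00\x00\x02\x02D\x01\x00;")

class BeaconHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # CAVEAT: client_address is only the *last hop* of the connection.
        # A NAT gateway, VPN terminator, proxy, or Tor exit node appearing
        # here says nothing reliable about who actually opened the document.
        print(f"beacon hit: path={self.path} from={self.client_address[0]}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL_GIF)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), BeaconHandler).serve_forever()
```

Every failure mode in the list above shows up here as either no request at all or a logged address that points at the wrong party.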

One would have to use far more sophisticated mechanisms to increase the reliability and effectiveness of the beacon against a sophisticated “attacker,” or, preferably, rely on multiple sources of data rather than a single TCP connection and a naive “traceroute” style traceback to the source IP address of a connection. The problem with attempts to codify something as complicated as this is that the more sophisticated the beacon — consider here the Georgia CERT trojan horse, what the Dutch federal police did with the Bredolab botnet, or what the U.S. Department of Justice and FBI did with the Coreflood botnet — the more it begins to cross over into the excluded categories of “[creating] a backdoor enabling intrusive access into the attacker’s computer system” or “[exceeding] the level of activity required to perform reconnaissance on an intermediary computer to allow for attribution of the origin of the persistent cyber intrusion.”

As for deletion or encryption of one’s stolen data found within a third party’s system, that is even less reliably effective. People without a deep understanding of computer forensics, anti-forensics, and the tools, tactics, and processes used by computer criminals would not even be aware of the technical limitations of this suggestion. Simply deleting a file by running an arbitrary program on a third party’s computer (one that is known to be compromised by the “attacker” already, mind you) assumes that the untrustworthy computer will properly, silently, and completely delete all contents of the “stolen data.” An attacker can use a trojan horse or rootkit to replace any program, such as the “rm” program used to delete files on a Unix system. That replacement may itself be a beacon, triggering an alarm to the “attacker” that someone is messing around on a system they control. Or it may not delete anything, but simply make the file appear to disappear from the file system. Even if it really is the actual Unix “rm” program, all that “rm” does is remove the directory entry (i.e., just the file’s name), making it appear the file is gone; the actual contents of the file may still exist intact on the hard drive and can be trivially recovered by anyone who knows how and has similarly gained access to the system. Lastly, the file may have already been copied elsewhere, in which case deleting just one copy has absolutely no effect (other than to give the victim a false sense of security).
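
The gap between “deleting” a file and destroying its contents is easy to show. Here is a minimal sketch; note that even the overwriting variant is best-effort only, since journaling and copy-on-write filesystems, SSD wear leveling, and pre-existing copies can all preserve the content anyway:

```python
# Minimal sketch contrasting unlinking with overwrite-then-unlink. This is
# my illustration of the forensic point above, not a secure-deletion tool.
import os

def unlink_only(path: str) -> None:
    """What "rm" effectively does: drop the name, leave the data blocks."""
    os.remove(path)  # contents typically remain on disk until reused

def best_effort_wipe(path: str) -> None:
    """Overwrite the contents in place before unlinking (still no guarantee)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)  # overwrite the visible data
        f.flush()
        os.fsync(f.fileno())     # push the overwrite to the device
    os.remove(path)
```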

Encrypting one’s stolen data on a system controlled by the “attacker” is even worse! Encryption requires a secure program, properly implementing a secure algorithm, using a secure encryption key. All of those must reside on the system that, as I just explained, cannot be trusted because it is under the control of the “attacker,” and the current ACDC Act language prevents a victim from copying their own program onto the system. The ACDC Act 2.0 only allows programs for “attributional purposes” that “originated on the computer of the defender but is removed by an unauthorized user.” This means the attacker must be the one to copy the program used to encrypt and delete the data, or the defender must only use a program (“run a command”) already present on the compromised intermediary or the attacker’s own computer! And to encrypt a file, you don’t just magically encrypt the contents in place. A new file must be opened, and the contents of the original unencrypted file are then encrypted into the new file using the encryption algorithm and a secret key or passphrase. Only after successful encryption can the original unencrypted file contents be deleted. If the act of encrypting into the new file were to fill the hard drive partition, for example, not only would the encryption attempt fail and the original clear text file still remain, but the system could be disrupted in unforeseeable ways that could fall under one or more of the exclusions listed in (l)(2)(B)(ii), or tip off the “attacker” that the victim is attempting to “hack back.”
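
The encrypt-then-delete sequence just described looks like the following minimal sketch, which assumes the third-party Python “cryptography” package (Fernet). Remember that in the hack back scenario every line of this would run on a machine the defender does not control, so none of it can be trusted; the point is only to show the moving parts and the failure mode:

```python
# Sketch of encrypt-to-a-new-file followed by deletion of the original.
# Assumes: pip install cryptography. Illustrative only.
import os
from cryptography.fernet import Fernet

def encrypt_then_delete(path: str, key: bytes) -> None:
    enc_path = path + ".enc"
    try:
        with open(path, "rb") as src, open(enc_path, "wb") as dst:
            # The key, the algorithm, and this code itself all reside on the
            # untrusted system while this runs.
            dst.write(Fernet(key).encrypt(src.read()))
    except OSError:
        # e.g., disk full: the partial .enc file is junk and, crucially,
        # the original cleartext file is still sitting on disk.
        if os.path.exists(enc_path):
            os.remove(enc_path)
        raise
    # Only after a successful write is the cleartext removed -- and as the
    # previous sketch showed, even that removal is forensically recoverable.
    os.remove(path)

# key = Fernet.generate_key()  # this secret, too, would live on the untrusted host
```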

Forgive me for going so deep into the technical weeds here, but please realize that things are far more complicated than many proponents of “hack back” rights portray them to be. The bottom line is that all of these operations — beaconing, deleting files, copying files, encrypting files — occur on a system under the control of the “attacker” and outside your zone of control or authority. Nothing you do using those systems can be completely trusted as a result, nor can the results be reliably predicted. Anyone who has actually dealt with rootkits and trojan horse programs, or has experienced the back and forth with an “attacker” trying to remove them from a system, knows this. Again, a technically unsophisticated, black/white thinking, hot-headed victim may feel better that they did something to strike back at their “attacker,” but it is entirely unknowable, without examination of many fact-specific conditions, whether their action would have or did have any beneficial or harmful results.

I believe that attempting to exempt tools or techniques by vague and coarse-grained taxonomic labels (rather than using a case-specific examination of specific actions and their effects in context) is unlikely to turn out well. Proponents of allowing the private sector to be more aggressive really want universal agreement that “beacons” are innocuous and therefore safe to codify into exemptions to statutes like 18 U.S.C. 1030. They get frustrated and angry when confronted with real-world analysis of the proposed technical actions and the resulting effects (both positive and negative), which just adds to their existing frustration and anger at having been compromised in the first place. I understand the frustration — and share it, though for other reasons — but I think it is more important to use a clear head and keep all potentially impacted stakeholders in mind so as to act more deliberately and thoughtfully. I am less interested in making a profit than I am in preventing harm to innocent third parties who just want to live their daily lives using Internet-connected devices. This is where “due care” matters.

Due care and the potential to cause harm

Lin asks, “what if during the course of a response action, the victim willfully damages a device belonging to a 3rd party through an action taken on the attacker’s computer?” [Lin(2017)] I see two issues here: analyzing the risks and benefits of the plan of action, and whether the concern should be limited to just “willful damages” rather than also covering damages caused by insufficiently well planned actions or inadequate analysis of risks.

Proponents of “Active Defense” or “hack back” rights often attempt to counter the issue of harm to intermediary third parties. They may dismiss the damage potential as exaggerated, hyperbolic, or theoretical (e.g., the disrupted hospital system with a bot used in a DDoS attack that is knocked offline by the “hack back” operator). Or they deny the rights of the harmed third parties, arguing that since the intermediary allowed their systems to be compromised and used as stepping stones to “attack” the “victim,” they are not really “innocent” and have thus forfeited their rights. This latter argument is flawed in two ways. First, it neglects to put both the intermediary and the entity claiming “victim” status to justify striking back into the same category of “victim.” Second, it leads to a false non-equivalence: granting rights to strike back to one victim while simultaneously absolving them of responsibility on the theory that the entity they harmed is not themselves an “innocent victim.” In an interview with Stewart Baker on the Steptoe Cyberlaw Podcast, Ariel Rabkin suggests, “we have a legal system; if the third party feels that they were harmed they can sue and we can have a court case.” Personally, I don’t believe it is right or fair for the entity wishing to strike back to simultaneously claim (A) a right to strike back deriving from their status as a victim of a cyber compromise and (B) absolution of responsibility for harm to third parties because those third parties allowed their systems to be compromised. That makes no sense to me.

While it is possible that a victim (or their contractual agent) may willfully take risks that result in harm to innocent third parties, I am more worried that those wishing to strike back will rush to do so and perform an inadequate job of evaluating the options and potential consequences, or inadequately test their “active cyber defense measures,” inadvertently causing harm as a result. The computer security and threat intelligence industry is more likely to be in a position to technically pull off an “active cyber defense measure” than many actual victims of compromises that result in intellectual property theft or losses due to denial of service. Some botnet takedown operations have been initiated with just enough planning and preparation to achieve the bare minimum goal of taking control away from the criminal operating the botnet, but without much contingency planning, testing, or a fallback plan in case something does not go as hoped. (For example, the Mariposa Working Group’s takedown of the Mariposa botnet failed when the “attacker” was given back control of the botnet and proceeded to use it to DDoS the Mariposa Working Group members, causing collateral damage to government, academic, and private sector entities sharing the same network.) [Dittrich(2013)] At least today, the fear of violating the law has somewhat of a limiting effect on the level of aggressiveness. Granting overly broad exemptions to 18 U.S.C. 1030 for unstructured and ad-hoc attempts to strike back will, in my opinion, increase the chances of causing harm and destabilizing the situation, not the other way around.

I believe the burden of responsibility should be on the entity taking the aggressive hack back action, both in adequately evaluating and mitigating risks and in redressing harms resulting from their actions. Were they to go to court to seek a temporary restraining order, such as those filed in a number of botnet takedown operations by Microsoft, the Federal Trade Commission, and most recently the Department of Justice, the court would likely require that a bond be put up to cover any unjustified damages to the defendant (or other parties not present in court to object). [Patterson(2013)] In this case, I believe that requiring some form of bond or professional liability insurance coverage, ensuring the hack back actor takes risks in proportion to their ability to cover harms resulting from their errors or omissions, would be more just than putting the burden on harmed third parties to fight in court to recover damages.

An additional benefit of using civil legal process is that the plaintiff must, before initiating the takedown, prepare and submit to the court extensive documentation in the form of a complaint, declarations, and pleadings that describe in great detail the damages being caused by the defendant(s), the effects of the court’s granting or denying the plaintiff’s pleas, etc. Only when the court orders it does the action then take place. These filings to the court provide a form of stakeholder analysis and risk/benefit analysis that does not exist and is not required of the private sector today. [Dittrich(2012)] I would suggest that future drafts of the ACDC Act explicitly call for more comprehensive and standardized pre-action evaluation and after-action review, rather than rely on a two-year sunset “kill switch” in case things go wrong.

Conclusion

There is a lot of work that needs to be done to ensure that allowing victims to hack back is done safely. It is not clear to me whether the resources and expertise are available in this instance to do this. I am not talking only about the language of the Act, but also about the standards and methodologies listed earlier that are necessary to fulfill the evaluation and notification requirements. I sincerely hope that Representative Graves is able to identify experts with deep and sophisticated technical operational knowledge to help guide future drafts and development of the standards and methodologies before this Act is introduced for a vote. I hope that those experts have the integrity to provide the kind of open and honest analysis and advice that I have tried to provide here. The private sector often pushes self-serving legislative proposals that help sell its products and services. That is understandable, but when the changes involve exemptions from computer crime statutes allowing entities to pursue their own aggressive remedies outside of the legal process (either criminal or civil), putting the general public at risk, consideration of things beyond corporate self-interest is necessary. (This is not hypothetical: it is happening. [Cox(2017)]) I would note and amplify a request by Nuala O’Connor [Unspecified(2016)], who urges “greater outreach to privacy and civil liberties groups beyond the experts whose views were sought” (O’Connor in relation to advice to the CCHS Task Force, and me here in relation to those advising Representative Graves).

References

[Chesney(2017)] R. Chesney. Legislative Hackback: Notes on the Active Cyber Defense Certainty Act discussion draft. Lawfare Blog, March 2017. https://www.lawfareblog.com/legislative-hackback-notes-active-cyber-defense-certainty-act-discussion-draft.

[Cox(2017)] J. Cox. This UK Company Is Making It Easier for Private Companies to ’Hack Back’. Motherboard Vice, June 2017. https://motherboard.vice.com/en_us/article/this-uk-company-is-making-it-easier-for-private-companies-to-hack-back.

[Dittrich(2012)] D. Dittrich. Thoughts on the Microsoft’s “Operation b71” (Zeus botnet civil legal action). http://www.honeynet.org/node/830, March 2012.

[Dittrich(2013)] D. Dittrich. Offensive Anti-Botnet — So you want to take over a botnet…, October 2013. Presentation to the North American Network Operators Group (NANOG) meeting 59. http://www.youtube.com/watch?v=zqUL1mUEvGg, http://staff.washington.edu/dittrich/talks/nanog59/

[Dittrich(2017)] D. Dittrich. The Active Response Continuum: Debating the Future of “Hacking Back” in Terms of Language, Ethics, and Laws. Presentation to the NCSC One 2017 conference, May 2017. http://staff.washington.edu/dittrich/talks/NCSC-One-2017-Dittrich.pdf.

[Dittrich and Himma(2005)] D. Dittrich and K. E. Himma. Active Response to Computer Intrusions. Chapter 182 in Vol. III, Handbook of Information Security, 2005. http://ssrn.com/abstract=790585.

[Hawkins(2017)] G. Hawkins. Rep. Tom Graves Proposes Cyber Self Defense Bill, March 2017. http://www.thedallasnewera.com/local-news/1657-rep-tom-graves-proposes-cyber-self-defense-bill.

[Hutchins et al.(2011)Hutchins, Cloppert, and Amin] E. Hutchins, M. Cloppert, and R. Amin. Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains. In 6th Annual International Conference on Information Warfare and Security. Lockheed Martin Corporation, December 2011. http://www.lockheedmartin.com/content/dam/lockheed/data/corporate/documents/LM-White-Paper-Intel-Driven-Defense.pdf.

[Kuchler(2017)] H. Kuchler. Push to let companies ‘hack back’ after WannaCry. Financial Times, May 2017. https://www.ft.com/content/b5d1aa98-3cc9-11e7-821a-6027b8a20f23.

[Lin(2017)] H. Lin. More on the Active Defense Certainty Act. Lawfare Blog, March 2017. https://www.lawfareblog.com/more-active-defense-certainty-act.

[Menlo Report(2012)] David Dittrich and Erin Kenneally (co-lead authors). The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research, December 2012. http://www.dhs.gov/sites/default/files/publications/CSD-MenloPrinciplesCORE-20120803.pdf.

[Office of the National Counterintelligence Executive(2003)] Office of the National Counterintelligence Executive. Annual Report to Congress on Foreign Economic Collection and Industrial Espionage — 2002. NCIX 2003–10006, February 2003. https://fas.org/irp/ops/ci/docs/2002.pdf.

[Osborne(2012)] C. Osborne. Georgia turns the tables on Russian hacker, October 2012. http://www.zdnet.com/georgia-turns-the-tables-on-russian-hacker-7000006611/.

[Patterson(2013)] T. Patterson. Litigation: The bond requirement for preliminary injunctions. Inside Counsel, September 2013. http://www.insidecounsel.com/2013/09/05/litigation-the-bond-requirement-for-preliminary-in.

[Rabkin and Rabkin(2016)] J. Rabkin and A. Rabkin. Hacking Back Without Cracking Up. Hoover Institution, Series Paper №1606, June 2016. https://drive.google.com/file/d/0B_PclSuEzVCVYUo1bE5fUjFEMHM/view.

[Riley and Robertson(2014)] M. Riley and J. Robertson. FBI Investigating Whether Companies Are Engaged in Revenge Hacking, December 2014. http://mobile.bloomberg.com/news/2014-12-30/fbi-probes-if-banks-hacked-back-as-firms-mull-offensives.html.

[Robinson(2017)] T. Robinson. Revised ‘Hack Back’ bill encourages ‘active-defense’ techniques, sets parameters. SC Magazine, May 2017. https://www.scmagazine.com/revised-hack-back-bill-encourages-active-defense-techniques-sets-parameters.

[Unspecified(2016)] Unspecified. Into the Gray Zone: The Private Sector and Active Defense Against Cyber Attacks. Center for Cyber and Homeland Security, October 2016. https://www.oodaloop.com/wp-content/uploads/2016/10/CCHS-ActiveDefenseReportFINAL.pdf.

My thanks to Cere Davis for feedback on this post.