A look at recent updates to three professional ethical codes
Advancing ethical thinking regarding responses to cyber crime
It is common for professional societies and membership organizations to have a Code of Ethics intended to guide their members. Professionals working in the field of information security (INFOSEC) are often members of one or more of these entities, as are academic cyber security researchers and students desiring to enter the INFOSEC field.
In this article I will focus on three such entities: the IEEE and the Association for Computing Machinery (ACM), which are general professional societies with broad membership across many disciplines, and the Forum of Incident Response and Security Teams (FIRST), whose members “cooperatively handle computer security incidents and promote incident prevention programs”.
Between mid-2018 and the end of 2019, all three of these professional bodies have been actively cultivating their codes of ethics:
- The ACM, who first published an extensive code of ethics in 1992, most recently updated their Code of Ethics and Professional Conduct in June 2018 after an open multi-draft revision process.
- The IEEE, whose original Code of Principles of Professional Conduct goes back to 1912, announced its most recent proposed revisions to its Code of Ethics on January 10, 2020. Member and volunteer comments will be accepted until April 10, 2020.
- The Ethics Special Interest Group of FIRST announced their proposed Ethics for Incident Response and Security Teams (EthicsfIRST) in December 2019. Their open comment period ended in January 2020.
To frame an analysis of these entities’ codes, I will assume the perspective of the subset of INFOSEC professionals involved in digital forensics and incident response (DFIR) and threat intelligence, and the kinds of actions associated with countering criminal activity by taking over and dismantling malicious botnets. I’ve examined several such case studies in my publications and presentations over the years, and participated in some of them.
So You Want to Take Over a Botnet...
Ethical codes as guides to behavior
There is a famous quotation in software engineering (variously attributed to Grace Murray Hopper, Andrew Tanenbaum, or Alan Cox, depending on which web site you check!):
“The good thing about standards is that there are so many to choose from.”
When it comes to aggressively responding to botnets and computer intrusions, it can seem like that with ethical codes, too!
Beyond the three codes listed so far, here are some other codes of ethics or codes of conduct that might apply in this space:
- EC Council Code of Ethics
- International Association of Special Investigation Units Code of Conduct
- ISC2 CISSP Code of Ethics
- ISSA Code of Ethics
- SANS GIAC Code of Ethics
- SANS IT Code of Ethics
- USENIX System Administrators’ Code of Ethics
Michael Bailey, Sven Dietrich and I analyzed several ethical codes associated with general society at large (think justifications for “self-defense”), the professional community, and the academic community. Individuals from, or groups comprised of people from, each of these three categories engage in things like: the takeover and takedown of botnets; deceiving computer users to better understand how they respond to social engineering (e.g., phishing emails); performing research studies involving access to real-time communications or manipulation of networks used by thousands of people; or demonstrating the need to fix vulnerabilities in widely used internet services or devices by breaking them and publishing functional “proof-of-concept” exploit code.
We observed that ethical codes run the gamut from implicit societal codes, where decisions are influenced by friends, family, or one’s own internal moral compass, to published codes like those of ACM, IEEE and the others listed above that members agree to follow when signing up or renewing their membership, all the way up to (in the United States) the Belmont Report’s principles of Respect for Persons, Beneficence, and Justice as codified in the United States Code of Federal Regulations (45 CFR 46, also known as the “Common Rule” because of its uniform adoption by all federal agencies and departments of the United States government).
We published our findings on the applicability and limitations of these codes and the efficacy of their enforcement mechanisms, along with over two dozen case studies with which to illustrate the ethical questions raised, in a technical report (“Towards community standards for ethical behavior in computer security research”). The case studies were adopted by the Menlo Working Group and included in the “Companion” to the Menlo Report.
Applying Ethical Principles to Information and Communication Technology Research: A Companion to…
I will start strategically by looking at ACM’s Code. You’ll see why in a moment.
The ACM Code of Ethics
From their web site, “ACM is dedicated to: Advancing the art, science, engineering and application of information technology; fostering the open interchange of information to serve both professionals and the public; and promoting the highest professional and ethical standards.”
“The 1992 Code organized ethical principles into four categories: general moral imperatives, more specific professional responsibilities, organizational leadership imperatives, and compliance.”
Like our technical report and the Companion to the Menlo Report, the most recent revision of the ACM Code of Ethics is accompanied by a set of case studies showing how to use the code of ethics for analysis and application.
The case study most relevant here is the Malware Disruption case.
Case: Malware Disruption
Rogue Services advertised its web hosting services as "cheap, guaranteed uptime, no matter what." While some of Rogue's…
This case study implicitly identifies several stakeholders: Rogue Services (the network provider covering for unnamed malicious actors sending spam emails); Rogue’s service clients (some legitimate, but the majority malicious); and the ISPs and international organizations — a group that could include FIRST members — reporting the malicious activity originating from Rogue’s network and requesting that it cease. The case study doesn’t categorize these stakeholders or go into detail about their motivations, risks, benefits, etc.
For the purposes of clearly identifying stakeholders, I will call Rogue and the criminals they are enabling negatively inclined actors and the group acting to stop the harm being caused to the general public positively inclined actors.
After Rogue refused multiple reports and requests to stop the criminal activity, citing its “no matter what” pledge of guaranteed service to its customers, multiple security vendors and governmental organizations acted to “forcibly [take spamming sources] offline through a coordinated effort [consisting of] a targeted worm that spread through Rogue’s network [in a] denial-of-service (DoS) attack successfully [taking] Rogue’s machines offline, destroying much of the data stored with the ISP in the process.”
The case goes into just enough detail to allow relevant principles to be identifiable and applied, which they state are:
- Principle 1.1 (Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing)
- Principle 1.2 (Avoid harm)
- Principle 2.8 (Access computing and communication resources only when authorized or when compelled by the public good)
- Principle 3.1 (Ensure that the public good is the central concern during all professional computing work)
The analysis points out that the negatively inclined actors violated Principles 1.1, 1.2, 2.8 and 3.1. These violations are then weighed against the actions taken by the positively inclined actors in service to the public good.
Where it gets interesting is in analyzing the employment of a destructive worm to forcibly stop the spamming and delivery of “dropper” malware through tainted ads served from Rogue’s network. I consider this to be acting at Level 4.2 — Uncooperative Cease and Desist — on the Active Response Continuum.
Active Response to Computer Intrusions
While it is acknowledged that the DoS worm violates Principle 1.2, it is justified by the negatively inclined actors’ violation of Principle 1.2 combined with (a) the worm authors’ adherence to Principle 1.1 and (b) specific targeting that minimizes any effects implicated in Principle 1.2. Additionally, the DoS worm authors’ violation of Principle 2.8 was mitigated by the targeting through a “compelling belief that the service disruption was consistent with the public good [as embodied by Principle 3.1].”
In other words, the conflict here between the duty to adhere to principles and the choice to violate them is resolved by weighing the consequences of the actions taken in terms of the public good. (You might recognize this as a utilitarian philosophical position.)
The ACM task force used an open process that involved publishing drafts of the Code (with changes tracked) and articles in ACM publications, helping us understand the thinking behind the changes. I can imagine the discussions of how to apply the evolving Code to the use cases, since we went through a very similar exercise while drafting the Menlo Report and its Companion.
With the Malware Disruption case in mind, consider the evolution of Principle 2.8 in these three screenshots:
It is clear that the original 1992 blanket “must always” prohibition on accessing the resources of others without authorization, cooperation, or coordination doesn’t allow the actions taken by the positively inclined actors in the Malware Disruption case study.
The final language acknowledges that non-cooperative criminal infrastructure takedowns do take place, that ACM members take part in them, and that the actions taken shouldn’t put members in conflict with their obligation to follow ACM’s Code of Ethics. Usually these botnet takedown actions are nowhere near as aggressive or damaging as the one described in the Malware Disruption case study, but the massive DDoS attacks stemming from insecure home network equipment and Internet of Things (IoT) devices like baby monitors and home security cameras are leading some people in that direction (e.g., see “Someone Has Hacked 10,000 Home Routers To Make Them More Secure,” “Vigilante botnet infects IoT devices before blackhats can hijack them,” and “BrickerBot is a vigilante worm that destroys insecure IoT devices”).
The case study only mentions the law in terms of (the lack of) proscribing Rogue’s negatively inclined behavior as a service provider. It does not directly address the legality of the actions taken by the DoS worm authors, but there clearly could be violations of law in one or more jurisdictions in which the positively inclined actors reside, or at least grounds for civil action by innocent third parties who are legitimate (and benign) customers of Rogue. The legal concept of tortious interference with business relations comes to mind here. There may also be a criminal violation of the Computer Fraud and Abuse Act (18 U.S. Code § 1030) by those actors residing in the United States.
I would argue that Principle 2.3 (Know and respect existing rules pertaining to professional work), while not mentioned in the case study, would be appropriate to include here. The relevant portions are:
“Rules” here include local, regional, national, and international laws and regulations, as well as any policies and procedures of the organizations to which the professional belongs. Computing professionals must abide by these rules unless there is a compelling ethical justification to do otherwise. […] A computing professional who decides to violate a rule because it is unethical, or for any other reason, must consider potential consequences and accept responsibility for that action.
This principle not only allows an exception for nuanced situations like that described in the Malware Disruption case study, but it also puts the onus on the positively inclined actors — who may believe their potential violation of law will achieve a greater moral good to society — to do their homework and be prepared to put forward an affirmative defense to justify their actions. In this case, the multiple attempts to report wrongdoing and get Rogue to take care of it, working in concert with government and non-governmental organizations, and efforts to narrowly target and minimize any harm by the DoS worm, all exhibit the kind of due diligence and responsibility called for by Principle 2.3.
Next, let’s look at IEEE’s code.
The IEEE Code of Ethics
From their web site, “IEEE is the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity [whose] core purpose is to foster technological innovation and excellence for the benefit of humanity.”
The IEEE Code of Ethics is much shorter (10 items) than that of the ACM (25 items), though they both share some fundamental elements. Part of the difference in length results from IEEE only listing items, while ACM includes some explanation and guidance.
The Code is written with what you could describe as an “inward focus,” using language that centers on professional behavior in the workplace, with the impacts to society being those resulting from use of the products and services developed by the engineering professionals to whom the code is directed. It may be the lack of explanations about application, like those accompanying the ACM Code, that gives it this feeling.
IEEE does not have a set of case studies similar to ACM’s, but we can experiment with applying their Code of Ethics to the Malware Disruption case.
- The first element of item 1 of the Code states, “to hold paramount the safety, health, and welfare of the public.” While this is similar to ACM’s Principle 3.1, it seems a weaker justification for actions such as the DoS worm.
- The last element of item 1 states, “to disclose promptly factors that might endanger the public or the environment.” This could apply to the reporting of malicious activity to Rogue by the positively inclined actors.
- Item 6 states, “to maintain and improve our technical competence and to undertake technological tasks for others only if qualified by training or experience.” I argue in some of my publications and talks that we ought to have requirements on the technical capability and maturity of those engaged in the most extreme and aggressive actions like botnet takedowns, since such situations increase the potential for harm to the public caught in the middle. This item alone does not seem to help, since the positively inclined actors may all believe they will not make any mistakes or encounter any unforeseen circumstances, and besides, there is nothing requiring them to meet any qualifications (as none exist in this space).
- Item 9 of the Code states, “to avoid injuring others, their property, reputation, or employment by false or malicious action.” While this seems like it could apply to the targeting actions designed to minimize harm from the DoS worm, the final clause “by false or malicious action” seems to negate its utility here since the actions of the positively inclined actors (as ACM describes their actions) are neither “false” nor “malicious.”
The remainder of the items in the Code don’t seem to help in terms of the Malware Disruption case study.
One of the proposed changes to the Code — adding a requirement “to engage in lawful conduct” to item 4 — actually seems to make things more difficult in application to the Malware Disruption case study.
Some nuanced language is required to accommodate real-world cases such as this one, which is one of the difficulties encountered in writing codes like this. We had to deal with this in the Menlo Working Group, and it took a great deal of patience to get to our final product. The fundamental problem has to do with using vague terms like “attack,” “breach,” and “intrusion” that can lead to fallacious logic and other misunderstandings. Legislative proposals like the ACDC Act share this same problem.
I find the concept of Information Assurance helpful for being clear, concise, and comprehensive. A worm that destroys files (compromising the integrity of information and information systems) and the resulting denial of service (compromising the availability of information and information systems) are both types of acts often encompassed by computer misuse statutes like the United States’ Computer Fraud and Abuse Act (18 U.S. Code § 1030). The latter act, when done by someone with malicious intent against a rival company, has in fact resulted in criminal indictments in the past.
I would argue that the new item 5 would also apply to the Malware case study:
“to seek, accept, and offer honest criticism of technical work, to acknowledge and correct errors, to be honest and realistic in stating claims or estimates based on available data […]”
I would hope that sufficient analysis and documentation by the positively inclined actors of the criminal activity being perpetrated from Rogue’s network and Rogue’s refusal to stop it — including the plan for the DoS worm, and how its effects were to be targeted and controlled — would be produced and vetted prior to taking such an aggressive action.
Lastly, we look at the new EthicsfIRST code.
Ethics for Incident Response and Security Teams
From their web site, FIRST describes themselves as “the premier organization and recognized global leader in incident response. Membership in FIRST enables incident response teams [from government, commercial, and educational organizations] to more effectively respond to security incidents reactive as well as proactive. […] FIRST aims to foster cooperation and coordination in incident prevention, to stimulate rapid reaction to incidents, and to promote information sharing among members and the community at large.” The Ethics SIG more directly states that, “FIRST functions similar to a professional association for CSIRT and PSIRT members as well as other cybersecurity professionals with training and experience related to the work of incident response and security teams.”
At first glance, you may notice that all of the principles are expressed as duties. The codes of IEEE and ACM also list duties, but more indirectly and subtly. Framing the code this way differs slightly from some other codes that blend duties with consequentialist or utilitarian principles that discuss outcomes (e.g., results of the acts, benefits that derive, who receives the benefits, etc.).
FIRST makes it clear that these principles are not intended to be as absolutist as you might encounter in a philosophical discussion of deontological ethical principles. The code starts by making it clear the principles are “formulated as statements of responsibility, based on the understanding that the public good is always the primary consideration.”
The introduction goes on to explain a feature shared with the ACM Code. “Each principle is supplemented by guidelines, which provide explanations to assist computing professionals in understanding and applying the principle,” and an Appendix provides further guidance on how to deal with dilemmas.
I’ll perform the same experiment of using the EthicsfIRST code to consider the ACM Malware Disruption case study.
Several principles appear to me to be implicated in application of the code to this case:
- Duty of coordinated vulnerability disclosure: This duty is intended more for disclosure of vulnerabilities that present risks to users of affected systems following public disclosure, at which point anyone developing or possessing a functioning exploit can begin causing harm. The purpose of coordinated disclosure is to maximize the ability to fix the problem and distribute patches to minimize risk exposure before widespread exploitation begins. In this case, however, the coordinated disclosure would be the reporting of the criminal activity to Rogue.
- Duty of authorization: This is a little like ACM’s Principle 2.8, and serves a similar purpose. (The concern for the public good portion is covered by the overall requirement to place this concern as the top priority.)
- Duty to inform: As with the previous principle, the DoS worm authors and others had already performed their duty to report malicious activity to Rogue, before contemplating any further coordinated (if uncooperative with respect to Rogue) actions.
- Duty to recognize jurisdictional boundaries: This principle is similar to ACM’s Principle 2.3 and IEEE’s item 4 revision, but FIRST does a much more thorough job of providing guidance appropriate to situations like the DoS worm action. The explanation of the SHOULD definition at the beginning of the code further reinforces the need for careful preparation, narrow targeting, and discrimination (following the meaning of that term in the context of the law of war) of impacts resulting from actions taken.
- Duty of evidence-based reasoning: This duty seems to be the most important for the Malware Disruption case study, since the actions are so aggressive and destructive. As I described in the ACM section, a DoS worm that deletes files on a corporate network could result in civil or criminal legal action, or at least public debate about any damage that occurs.
I like how these principles complement each other in terms of guiding careful consideration of actions, following an escalatory path towards more aggressive actions only when it appears to be necessary, and stressing an evidence-based reasoning process. The latter makes it easier to seek pre-action review, perform post-action review in light of empirical data, and justify any harm along axes of proportionality, necessity, discrimination, etc.
Observations and suggestions
The IEEE Code was the hardest to use in analyzing the Malware Disruption case and justifying the actions taken by the positively inclined actors, due to what I interpret as a focus on the discipline and professional practice of engineering in the benign context. This focus is understandable for a large society that first put forward a code of ethics over a hundred years ago — decades before INFOSEC, DFIR, and threat intelligence became disciplines and professions. There was no need to contemplate activities intended to actively counter ongoing crime! Besides, a short and concise code is easier to understand and keep in mind.
ACM and FIRST have more detailed codes that consider application by their members, including when acting in the malign context where actions are intended to achieve a greater moral good in service to the public interest.
ACM published their first Code of Ethics just one year after Eugene Spafford published “Are computer hacker break-ins ethical?” and two years after Dorothy Denning published “Concerning Hackers Who Break into Computer Systems.” The reasoning for updating the 1992 Code in 2018 acknowledges the risks presented by pervasive computing that have been growing over the last two decades:
Computing today is in our bodies — prosthetics, pacemakers, and insulin pumps. Computing is also integral to the ways in which societies wage war. Computers impact all areas of our lives and many life-preserving functions are relegated to a piece of computer guided machinery. […] The changes in technology and the kinds and number of impacted stakeholders are changing society in fundamental ways.
FIRST is the most recent organization I am aware of from the INFOSEC and DFIR space to put forward a code of ethics. It should come as no surprise that FIRST, whose membership is comprised of people focused on INFOSEC and DFIR, has a code that includes very clear principles and practical guidance for applying them.
I believe that realistic case studies prove very helpful in developing ethical codes as well as learning how to analyze real-world situations to apply ethical principles. This is the reason my colleagues and I included real-world cases in our technical report and why the Menlo Working Group created a synthetic case study for the Companion based on real historic events with many decision points for ethical consideration. I imagine that the ACM task force did exactly the same thing for the same reasons.
I would encourage IEEE and FIRST to use the ACM case studies or those included in the Companion to the Menlo Report, or to develop their own case studies based on historical events involving their membership. Using existing case studies is less work than developing new ones, and I would think it could prove helpful in producing codes that converge, rather than diverge, in their end results.
Beyond case studies, ACM also has an “Ask an Ethicist” video/blog section. As of the writing of this article, it has only two Q&A items, both involving discovery and disclosure of vulnerability information. I expect the list will grow over time, providing further guidance for those attempting to navigate tough ethical issues not covered by the existing body of case studies.
I hope that my focus here, on going beyond producing good professional engineering results to also aggressively countering ongoing harm to the public, is helpful.
Ethics is about doing the right thing in all situations you might encounter in your day to day working life, not just in the best-case scenarios. As the scale and scope of potential harm to the general public via networked computing systems increases, so does the necessity for capable and responsible people to actively counter the harm without making matters worse.
I think we have further to go, but in terms of producing actionable codes of ethics we’re heading in the right direction!