There is something tantalizing about a lone hacker using a single computer and a big brain to take down the bad guys or stick it to the man. The archetype of the hacker has a cultivated ethos of freedom, individuality, and subtle craftiness that cannot be denied. From the ’90s cult classic Hackers to the more modern (and realistic) Mr. Robot, the hacker has long held a special place in pop culture. Despite this fascination with hacking, hackers, and cyberwarfare, the field is poorly understood outside of industry professionals.

As the software industry continues to “eat the world,” the software security industry has grown alongside it. As more software is deployed, it stands to reason that more software is vulnerable to attack. Indeed, there is growing concern among professionals that cybersecurity firms are seriously understaffed, and there aren’t nearly enough of them to combat the growing number of cyberattacks. Making matters worse, the continued drive toward accelerated training programs for software developers means that more developers who have never had any formal security training are deploying code.


Lack of security fundamentals has always been a problem—many universities don’t require security training in their computer science degree programs—but the problem is further exacerbated by schools encouraging developers to do more with less training. In addition, software ecosystems increasingly encourage developers to rely heavily on third-party software, often without evaluating that software for vulnerabilities. The 2016 left-pad scandal gives us a glimpse into how increasing reliance on third-party software can open the internet to risk.

The Left-Pad Scandal

Left-pad is a simple program that “pads” text values on the left with some character (typically a 0 or a space) until it is a specified size. This function is mostly used to format textual output so it’s easier to read. The implementation is simple; at the time of the scandal, the function was 11 lines of straightforward JavaScript. Nevertheless, thousands and thousands of developers included this library in their code, and many of them unwittingly included it by including a different library that included left-pad.
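To give a sense of just how small the dependency was, here is a minimal sketch of a left-pad style function in TypeScript (a paraphrase of the idea, not the npm package’s exact code):

```ts
// Minimal left-pad sketch: keep prepending the pad character
// until the string reaches the requested length.
function leftPad(value: string | number, length: number, pad: string = " "): string {
  let result = String(value);
  while (result.length < length) {
    result = pad + result;
  }
  return result;
}

console.log(leftPad(42, 5, "0")); // "00042"
console.log(leftPad("cat", 5));   // "  cat"
```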

The scandal began when the left-pad library was unpublished from a popular tool for managing JavaScript libraries called npm. When that happened, all projects that relied on left-pad broke. All projects that relied on projects that relied on left-pad also broke. It was a huge headache for the JavaScript community, and it temporarily brought development to a halt for many hobbyists and companies.

Here is the security angle: What if instead of unpublishing the library, the maintainer of left-pad added a “feature” to log information about what was being left-padded to a server under their control, or worse, attempted to install some more holistic monitoring malware? Less maliciously, what if the library just had a small bug that could be exploited by a clever hacker?

Because so many people relied on the code unwittingly, such an exploit could easily go unnoticed by downstream developers. This web of interdependent software is one way the increasing complexity of software ecosystems amplifies the power of small vulnerabilities.

Simple Errors Can Cause Catastrophic Problems

Last year, the Atlantic published “The Coming Software Apocalypse,” a harrowing look at the extraordinary complexity of modern software systems and how simple errors hidden in that complexity can cause catastrophic problems. One example? A six-hour-long 911 outage across the entire state of Washington:

The 911 outage, at the time the largest ever reported, was traced to software running on a server in Englewood, Colorado. Operated by a systems provider named Intrado, the server kept a running counter of how many calls it had routed to 911 dispatchers around the country. Intrado programmers had set a threshold for how high the counter could go. They picked a number in the millions.
Shortly before midnight on April 10, the counter exceeded that number, resulting in chaos. Because the counter was used to generate a unique identifier for each call, new calls were rejected. And because the programmers hadn’t anticipated the problem, they hadn’t created alarms to call attention to it.

While this 911 outage was not a result of a coordinated attack, it’s easy to imagine this vulnerability as part of an Ocean’s Eleven-style montage: Hackers roll the number past the limit right before their big heist, thus preventing reports of the robbery until after they’ve escaped. This is a garden-variety mistake that’s easy to forgive in the right context, but a rejected 911 call can have tragically dire consequences.
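A hypothetical sketch of that failure pattern (not Intrado’s actual code, and the threshold here is invented) shows how little it takes: a counter that doubles as an ID generator, a hard-coded cap, and no alarm when the cap is hit.

```ts
// Hypothetical sketch of the failure pattern described above, not Intrado's actual code.
const MAX_CALLS = 40_000_000; // an arbitrary threshold "in the millions"
let callCounter = 0;

function routeCall(): string | null {
  if (callCounter >= MAX_CALLS) {
    // The dangerous part: the call is silently rejected and no alarm ever fires.
    return null;
  }
  callCounter += 1;
  return `call-${callCounter}`; // the counter doubles as the call's unique identifier
}
```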

It is absolutely within the scope of a software security professional’s job to predict and protect against a failure of this type. Thinking about all the possible failure patterns for any piece of software is crucial for system hardening and risk mitigation. Unfortunately, little mistakes like putting an arbitrary cap on a counter can have an outsized human impact—and little mistakes are all over the software world. (Remember Y2K?)

Consider another example where hackers were able to use a “smart fish tank thermometer” to steal a casino’s high roller database. In this case, the thermometer was less secure than other entry points to the casino’s network. In fact, devices on the “Internet of Things” are notoriously insecure. According to Wired magazine, these devices are frequently susceptible to attacks for a variety of reasons, including lack of commitment to security by device-makers, lack of transparency in the code running on devices, and lack of knowledge on the part of the people using and installing these devices.


It’s understandable (honestly, expected) that employees installing a “smart thermometer” wouldn’t be software security experts. Even many software-savvy individuals could be forgiven for not thinking about the thermometer as an attack vector. Unfortunately, every network-connected device opens us up to attacks. We need device-makers to start taking security seriously. Additionally, people would be wise to think carefully about how much value they get from a “smart” device versus a “dumb” one. Is the risk of a cyberattack worth the convenience of turning off your lights with your smartphone?

The growth and development of the software security field will continue to shape the trajectory of our future. Digital systems now play a crucial role in banking, payroll, distribution chains, voting, social interaction, medicine, cars, planes, trains, implanted medical devices, and so on. Each and every one of these digital systems is a potential vulnerability. As the scope of software broadens, so does the scope of software security. In 2018, the software security industry includes a wide variety of professionals with different skill sets, so who are these professionals, and what do they do?

Hackers, Penetration Testers, and Government Agents

First is the most well-known archetype. Offensive security work is all about breaking into stuff and doing things you’re not supposed to be able to do. The particular goals of any given offensive hacker can be quite varied—from executing ransomware attacks such as WannaCry to stealing database records—but the crux of the craft is always the question, “What can I do that the owner of this system doesn’t want me to do?”

Sometimes, this work involves deep knowledge of software design and implementation. Take a recent attack against the cryptocurrency wallet website MyEtherWallet.com. In this attack, the perpetrators exploited weaknesses in two critical networking protocols. First, hackers attacked the domain name system (DNS), which maps human-understandable names like MyEtherWallet.com to the computer-understandable IP addresses used to route internet traffic. This attack, called DNS poisoning, allowed the hackers to send bogus IP addresses in response to queries for MyEtherWallet.com.

Second, the hackers attacked the border gateway protocol (BGP), which uses IP addresses and actually controls how traffic is routed through the physical infrastructure of the internet. This attack, called a BGP leak, caused internet traffic to be routed through servers the attackers controlled, which allowed them to poison significantly more DNS queries.

As a result, several users who typed “MyEtherWallet.com” into their browser’s URL bar were sent to a phishing website that looked like MyEtherWallet.com. When unsuspecting users typed in their usernames and passwords, that information was sent to the attackers, who used it to empty those accounts.
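To make the DNS piece of the attack concrete, here is a minimal sketch using Node’s built-in resolver; a poisoned lookup means the addresses returned here point at the attacker’s server instead of the real one:

```ts
// Minimal sketch of the DNS step: ask a resolver which IP addresses a hostname maps to.
// DNS poisoning works by getting bogus addresses returned from this lookup, so the
// browser then connects to a machine the attacker controls.
import { Resolver } from "node:dns/promises";

async function lookup(hostname: string): Promise<void> {
  const resolver = new Resolver();
  const addresses = await resolver.resolve4(hostname); // IPv4 addresses for the name
  console.log(`${hostname} ->`, addresses); // only as trustworthy as the resolver that answered
}

lookup("myetherwallet.com").catch(console.error);
```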

Other offensive work can be more human-oriented. Take this hilarious (but terrifying) video of a social engineering expert taking over someone’s cellphone account with nothing more than a phone number spoofer, audio of a crying baby, and her charisma:

Hi there. No, it’s not my account, but I have a crying baby… can you please help me?

Breaking into systems requires creativity, flexibility, and a big-picture mindset. Hackers—whether white-hat, black-hat, or something in between—benefit from thinking about multiple options for breaking into systems. Take the phone account example: Attackers might try brute force to get their mark’s password. They could use a phishing scheme like the one used against MyEtherWallet. They could try a social engineering strategy like the one in the video. They might also try to actively break into the phone company’s network using malware.

When one strategy looks like it won’t work, attackers try something else. The story of Stuxnet and Flame is illustrative. These two programs are some of the most impressive and complex pieces of malware ever created. The two worms are believed to have been created collaboratively by hackers working for the U.S. and Israeli governments starting around 2007. Stuxnet turned out to be a worm with a specific goal: infecting and shutting down Iranian nuclear centrifuges. Flame—a huge program by malware standards, at roughly 20 megabytes when fully deployed—was more of an espionage Swiss army knife. The malware enabled its controllers to steal data, monitor keystrokes, turn on cameras and microphones, and open up remote channels to install additional malware once the virus established itself on a host machine.

The features that originally led security experts to tie the two worms to the same creator were the mechanisms the viruses used to spread themselves. Flame famously exploited the Windows Update mechanism, which allowed the virus to masquerade as a legitimate software update, obviously an effective way to spread the worm far and wide.

Stuxnet, on the other hand, was targeting a facility known to have an “air gap”—meaning no computers in the facility were connected to the internet. Stuxnet instead relied on an exploit that allowed infected USB devices to automatically infect any Windows machine they were plugged into. No one knows who the proverbial “patient zero” was, but for all we know NSA agents just scattered a few infected USB drives around the nuclear facility’s parking lot.

Flame used this exact same USB exploit, which is one of the findings that originally tied the two viruses together, but Stuxnet does not appear to have used the Windows Update exploit. The creators of Stuxnet knew they couldn’t break into the air-gapped systems this way, so they didn’t utilize that exploit to spread the virus. On the flip side, while Stuxnet always attempted to infect USB drives that were plugged into an infected computer and spread itself further, Flame turned that feature off: it didn’t copy itself onto new USB devices the way Stuxnet did.


The point is that even though the creators of Flame and Stuxnet had access to the same exploits for spreading the malware, they didn’t just throw them all at the wall to see what stuck. They thought carefully and made decisions about which exploits to use depending on their goals.

Government jobs where you create malware like Flame and Stuxnet are the NBA of hacking: A small handful of exceptional hackers do this kind of work. There are other people doing similar work with less expertise and in areas where the stakes are a little less extreme than cyberwarfare.

Penetration testers are hackers paid by companies to attempt to break into the company’s own systems. Companies pay hackers to report how they broke in so they can shore up their systems. Another kind of hacker, somewhere between a penetration tester (white-hat) and a malicious hacker (black-hat), is the private individual who tries to earn “bug bounties” offered by many companies. With a bug bounty program, a company agrees to pay anyone who can do something specific on their system (e.g., access a database) in exchange for the hacker explaining how they did it.

Much of the work done by penetration testers relies on using premade tools to execute an attack. They are tool users, not tool creators. Many of the tools still require some technical expertise to operate, but not nearly as much as it takes to build them. Penetration testers may or may not be programmers or software engineers, but they are almost universally adept computer users who enjoy learning about new techniques and emerging toolsets.

Because the skills necessary depend a great deal on the specific goal, the scope of offensive work is broad and varied, ranging from social engineering at the least technical end to the development of malware, encryption-breaking programs, and network intrusion tools at the most technical end.

Mitigation and Prevention

Mitigation and prevention are all about defense: building systems that make it difficult for hackers to do things they should not be able to do. The people doing this type of work tend to be quite technical. Software engineers, especially engineers working at the systems level, shoulder an outsized portion of this work.

That said, system admins, DevOps engineers, and network engineers will all be involved in some prevention/mitigation projects. Application developers must also do some of this work, especially to cover security holes introduced by the applications they build.

Increasingly, this kind of work has been moving from application engineers to systems engineers. Operating systems engineers define the interface that application developers use to access important things, such as files on the filesystem or the network card. Because security is now a top priority for computer systems in general, the responsibility for making things secure by default has fallen on the people who know the system best. The Windows operating system team knows best how Windows works and, therefore, how to prevent critical break-ins from occurring at the OS level. Said another way, a great operating system makes it hard for application engineers to introduce security vulnerabilities.

Big organizations have software teams dedicated to creating APIs that are inherently safe for application developers to use. By limiting the choices that an app developer can make to only “secure” ones, we can dramatically decrease the total attack surface. For example, Google announced in April that the Android OS now defaults to TLS for all connections, meaning internet connections are encrypted by default.

Using an unencrypted connection is obviously a privacy risk, and defaulting to TLS shields Android users from that risk across the board. In previous versions, it was up to application developers to ensure connections were properly encrypted. By preventing application developers from making the “wrong decision” by accident, Google has eliminated a variety of potential attack vectors.
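The same secure-by-default idea can be sketched in a few lines. This hypothetical wrapper (not Android’s actual API) refuses to make cleartext requests, so the “wrong decision” simply isn’t available to the caller:

```ts
// Hypothetical "secure by default" wrapper, not Android's actual API:
// cleartext HTTP is refused outright, so callers cannot accidentally send data unencrypted.
async function secureFetch(url: string, init?: RequestInit): Promise<Response> {
  const parsed = new URL(url);
  if (parsed.protocol !== "https:") {
    throw new Error(`Cleartext connection refused: ${url}`);
  }
  return fetch(parsed.toString(), init);
}

// secureFetch("http://example.com")  -> throws before any bytes leave the machine
// secureFetch("https://example.com") -> proceeds over an encrypted connection
```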


The proliferation of software and the increasing number of would-be attackers is bringing pressure to all kinds of developers. Obviously, it would be best if application developers couldn’t create security vulnerabilities because the OS engineers plugged every conceivable hole, but that’s not realistic. As a result, application developers and their management teams need to be more mindful of security practices throughout the software development lifecycle.

There are more ways than ever to break into programming, but most of them (including many university degree programs) are entirely unfocused on how to design secure software. Software organizations need to get serious about awareness, training, and execution for building secure software systems.

The inevitable vulnerabilities in software necessitate security practices downstream of software development as well. IT professionals, such as system administrators, also perform mitigation and prevention tasks — e.g. setting up secure VPNs to limit access to important internal servers or databases, selecting a cloud provider who has dedicated significant resources to security, and setting up monitoring and logging tools to inform stakeholders when devices on the network are behaving strangely or sending suspicious traffic.

Everyday computer users should engage in mitigation as well, ideally by choosing strong passwords, minimizing password reuse, using two-factor authentication, and using privacy- and security-focused software, such as the EFF’s HTTPS Everywhere, the Brave web browser, or Keybase.

Just like “real life” security, prevention is usually a game of being less vulnerable than others. An unlocked car is more likely to be robbed than a locked car. Pickpockets look for the wallet peeking out of a back pocket. House robbers avoid barking dogs and alarm systems. Data thieves try common passwords before resorting to an exhaustive brute force attack. Hackers look for weak points in the network (like the fish tank thermometer). One thing hackers love—the equivalent of an unlocked door—is out-of-date software.

Hackers want to spend their time and efforts breaking something that will give them access to lots of machines; finding a flaw in a major operating system, web server platform, or encryption library would be a golden goose. When such a flaw is discovered, the security teams of the flawed system react and publish updates to plug the hole. Staying up-to-date is a critical aspect of the security-focused IT manager’s job.

Finally, because of the ever-evolving nature of security work, it’s common for offense and defense teams to cross-train by switching sides in red-team-versus-blue-team exercises. Learning how to attack a system helps you better defend against attacks—and vice versa.

Forensics and Detection

Attacks are inevitable; forensics is all about investigating an attack after the fact. Forensics firms are hired in the aftermath of incidents like the now-infamous DNC email hack. It’s too late to get the emails back, but any sensible person would want to 1) stop someone else (or the same people) from breaking into the system again, 2) figure out what was stolen, and (if possible) 3) determine the identity of the hacker(s).

Ideally, the victim of a cyberattack knows it before the world does. In conjunction with mitigation and prevention efforts, security-focused engineers and IT professionals commonly add logging and reporting tools to critical software systems. This kind of reporting might involve receiving telemetry data from a crash report or logging inbound and outbound network traffic. These reporting efforts create clues that digital detectives ultimately use to figure out how someone broke into a system, what was compromised or stolen, and the potential scope of the problem.
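As a rough sketch of what that reporting can look like (a hypothetical example, not any particular product), a server might keep an append-only log of every request so that investigators have a trail to follow later:

```ts
// Hypothetical sketch: append-only request logging so investigators later have a
// record of who connected, when, and what they asked for.
import { createServer } from "node:http";
import { createWriteStream } from "node:fs";

const log = createWriteStream("access.log", { flags: "a" }); // "a" = append, never overwrite

createServer((req, res) => {
  const entry = {
    time: new Date().toISOString(),
    ip: req.socket.remoteAddress,
    method: req.method,
    url: req.url,
    userAgent: req.headers["user-agent"],
  };
  log.write(JSON.stringify(entry) + "\n"); // one JSON record per line
  res.end("ok");
}).listen(8080);
```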

Depending on the situation, forensics can involve reviewing database access logs and network traffic logs, or scouring the filesystem for clues about the break-in, such as malware and files that were created or altered by compromised users. This work is often multidisciplinary; hackers are creative and use lots of tactics to break into systems, so forensic experts need to be versed in a wide variety of attack techniques.


For most people in digital forensics, the goal is to identify the targeted data and/or systems, collect and recover information from those systems, and then analyze the collected data in order to determine how the attack took place. The results of a forensic investigation may ultimately feed into a lawsuit or a criminal prosecution. Digital forensics experts benefit from an understanding of law and legal procedure to help them know what information is likely to be relevant to attorneys, judges, and juries. They also know how to acquire information in a way that doesn’t render it inadmissible in court, which is absolutely critical for digital forensics work within the FBI, for example.

In prevention, a lot of the work is done by software engineers making the systems secure. In forensics, it’s more about using tools and understanding the big picture than about writing code. Digital forensics experts will write short scripts and programs to help them find, collect, and preserve the clues, and there are definitely people writing these tools. But for the most part, forensic work does not involve the creation of software libraries or any large-scale software engineering efforts.

Cryptography and Encryption Research

The last type of security work also involves the most math. Cryptography researchers develop new codes, ciphers, and encryption techniques to ensure that data can be stored or transmitted in a way that protects it from eavesdropping and tampering. Cryptography is a field that relies heavily on topics from computer science and mathematics.

These are the people inventing algorithms like the RSA public key encryption process or the SHA family of cryptographic hash functions. This is fundamentally different work from any of the other jobs mentioned. All the other types of work above involve securing and breaking actual systems that exist; they’re about actual phones, actual databases, and actual web servers. Cryptography is about securing data more abstractly.

Cryptographers rely heavily on mathematical principles to create algorithms that can process data in a few crucial ways. Specifically, the field of cryptography revolves around five pillars (a short code sketch of the first two follows the list).

  1. Confidentiality: Only trusted parties can read a message.
  2. Integrity: No one can tamper with or change secured data.
  3. Authentication: The identities of relevant parties can be confirmed.
  4. Authorization: Different levels of access can be established for individual trusted parties.
  5. Non-repudiation: The sender of a message cannot later deny having sent it.
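As a minimal sketch of the first two pillars, here is what confidentiality and integrity look like in practice using Node’s built-in crypto module and AES-256-GCM (the key and message are placeholders):

```ts
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

const key = randomBytes(32); // 256-bit secret shared only by the trusted parties
const iv = randomBytes(12);  // unique nonce for this one message

// Confidentiality: without the key, the ciphertext is unreadable.
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([cipher.update("meet at midnight", "utf8"), cipher.final()]);
const tag = cipher.getAuthTag(); // Integrity: any tampering makes this tag fail to verify

// Decryption throws if the ciphertext or tag was altered in transit.
const decipher = createDecipheriv("aes-256-gcm", key, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
console.log(plaintext); // "meet at midnight"
```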

Transport Layer Security (TLS), which powers secure web communication, involves encryption algorithms that provide authentication, confidentiality, and integrity. Authentication to ensure that you’re connected to the right web server, confidentiality to ensure that only you and that web server can see your communication, and integrity to ensure that no one can alter those messages while they are in transit. TLS lets the two sides negotiate among several different encryption algorithms for the bulk of the connection; during the “TLS handshake,” public-key algorithms such as RSA have traditionally handled authentication and key exchange.
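Those properties are visible when you open a TLS connection yourself. Here is a minimal sketch using Node’s tls module (the hostname is just an example):

```ts
import { connect } from "node:tls";

// Open a TLS connection and inspect what the handshake established.
const socket = connect(443, "example.com", { servername: "example.com" }, () => {
  console.log("authorized:", socket.authorized); // authentication: certificate chain verified
  console.log("protocol:", socket.getProtocol()); // e.g. "TLSv1.3"
  console.log("cipher:", socket.getCipher());     // negotiated cipher for confidentiality/integrity
  socket.end();
});
socket.on("error", console.error);
```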

Developers of TLS implementations, such as OpenSSL, rely on math researchers to invent encryption algorithms with desired properties. Just like application developers rely on operating system engineers to provide a safe API into the operating system, OS engineers rely on cryptographers to invent safe encryption algorithms. Problems at this level can cascade throughout the technology that relies on the math; for example, Flame’s ability to spread by masquerading as a legitimate Windows update hinged on a fault in a cryptographic hashing algorithm called MD5.

Many cryptographers are working on encryption algorithms that will be impervious to quantum computers. A sufficiently powerful quantum computer running Shor’s algorithm is expected to break RSA. If such a machine arrives on the scene, RSA will become entirely insecure, and huge swaths of internet traffic will have to switch to a quantum-resistant encryption algorithm instead.

Software security is a huge (and growing) market, and there’s never been a better time to dive in. If you’re already working in the software world, it’s never been more important to learn more about security best practices. Computers and the internet aren’t going to disappear anytime soon, so we might as well figure out how to secure the damn things!