The Computer Safety Industry?

Chris McNab
AlphaSOC
Jan 24, 2019

Thinking of a system as either secure or insecure is nonsensical; these are binary labels that we should abandon. Many organizations believe they operate secure computer systems, yet that goal is practically unachievable.

We should think of the problem as a safety issue instead.

Within computing, vendors create unsafe products that consumers naively deploy, which in turn leads to data being processed in an unsafe manner.

In today’s world, this truth manifests itself as PII compromise, payment card data exposure, theft of cryptocurrencies, and other acquisition of wealth, whether as information being copied or capital changing hands.

At the micro level, this resembles fraudulent credit card transactions that are recovered via insurance mechanisms within the banking system. At the macro level, these exposures can benefit entire economies through the acquisition of intellectual property (the upside) without incurring the cost of creating that material through research and development (the downside).

Hacking is the art of manipulating a system to perform a useful action.

Many unsafe systems, for example, can be manipulated by software (e.g. an exploit or a piece of malware) that results in an unintended change in state. Examples of system state changes that may benefit an adversary include:

  • Exposure of credentials (e.g. private keys)
  • Encryption of files within a system (i.e. a ransomware attack)
  • Modification of payment details (e.g. business email compromise)
  • Denial of service, resulting in a loss to a party (e.g. not being able to trade)

Unauthorized changes in system state can affect confidentiality, integrity, and availability, often to the benefit of an adversary.

Hackers are the Symptom of a Problem

The knee-jerk reaction of authorities is to prosecute hackers and curb the proliferation of their tools. The adversaries we face, however, along with the tactics they adopt, are nothing but a symptom of a serious problem:

The systems that we build are unfit for purpose.

Product safety is an afterthought for many technology companies, and the challenges we face are a manifestation of this, with adversaries adjusting system state without authorization for material gain.

This is possible not just because the underlying software products are unfit, but because the design and implementation of the systems we build are not sound.

For example, root cause analysis of the unauthorized changes to system state made at Equifax found that failure to patch a known Apache Struts flaw was to blame. The reality is that a compromise was inevitable, and had likely already occurred, as the system itself was not sufficiently safe.

Native Safety

Within the automotive industry, seatbelts, airbags, side-impact protection, and other safety features are implemented natively within the products — the vehicles themselves. Cars have become both easy to operate and very safe.

Retrofitting Safety

Within the world of computing, however, safety features are not widely implemented. This is primarily because they are costly, and organizations have not to date been sufficiently incentivized (or penalized) to build safe systems, and so they don’t.

Security vendors market products that identify and respond to perceived threats to a system. Threat blocking products look for specific changes in system state (e.g. a known piece of malware existing on disk) and protect the system by responding to them — either taking an action such as quarantining the corresponding item, or raising an alarm.
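
As a simplified illustration of that approach, the sketch below hashes files on disk and matches them against a set of known-bad signatures. The directory path and the KNOWN_BAD hash set are hypothetical, and commercial products use far richer signatures and response actions, but the matching principle is the same.

    # A minimal sketch of signature-driven detection. The directory path and
    # the KNOWN_BAD hash set are hypothetical; real products use far richer
    # signatures, but the principle is the same: match known-bad artifacts.
    import hashlib
    from pathlib import Path

    KNOWN_BAD = {
        # hypothetical SHA-256 hashes of known malware samples
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def scan(directory):
        """Return files whose SHA-256 digest matches a known-bad signature."""
        hits = []
        for path in Path(directory).rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                if digest in KNOWN_BAD:
                    hits.append(path)  # a real product would quarantine or alert
        return hits

    for hit in scan("/tmp"):
        print(f"ALERT: known-bad file found at {hit}")

Note that a scanner like this only flags state changes for which it already holds a signature; anything novel passes straight through.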

There are known problems with this approach:

  • Enterprise security products are expensive
  • Additional system elements increase complexity, and can reduce safety
  • Most products are signature-driven and catch only particular state changes
  • The products are widely available for purchase and reverse engineering!

The result is that, at best, the investment runs at a minor deficit: you spend X on these tools and the people to operate them, and move the needle only slightly with regard to security. At worst, however, the investment you’ve made represents a gross misappropriation of funds.

Venture capitalists and private equity firms recognize the size of this market and have poured money into cyber security companies to capitalize on the opportunity created by both manufacturers* and their consumers failing to design and implement sufficiently robust systems in the first place.

* Apple, Microsoft, Google, Dell, Oracle, and so on.

Considering a Safe System Architecture

By strategically allocating capital, time, and people to building robust computer systems, we can reduce our dependence on ineffective security products and implement safer platforms.

Considering a hypothetical system, an effective way to maintain safety is to:

  • Natively implement hardening principles (e.g. least privilege; see the sketch after this list)
  • Actively test the system to ensure it is running in a safe fashion
  • Instrument visibility at both the compute and network layers
  • Process telemetry to identify anomalies (unauthorized changes to state)
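
To make the first item concrete, here is a minimal least-privilege sketch, assuming a Unix host and a hypothetical service account named svc-web: acquire the one privileged resource the service needs, then permanently drop root before handling any untrusted input.

    # Least-privilege sketch: bind a privileged port as root, then drop to an
    # unprivileged account. The user "svc-web" and port 80 are assumptions.
    import os
    import pwd
    import socket

    def drop_privileges(username):
        """Permanently switch from root to the given unprivileged user."""
        entry = pwd.getpwnam(username)
        os.setgroups([])         # clear supplementary groups inherited from root
        os.setgid(entry.pw_gid)  # set group first, while we still have permission
        os.setuid(entry.pw_uid)  # after this, root privileges cannot be regained

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", 80))  # requires root (or CAP_NET_BIND_SERVICE)
    listener.listen()

    drop_privileges("svc-web")      # everything past this point runs unprivileged
    print("serving as uid", os.getuid())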

Understanding the system configuration, in terms of both network traffic and processes at the compute layer, allows us to audit the system and ensure that its state has not changed without authorization. This requires upfront investment, but the long-term gain from a safety perspective is significant.
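
A minimal sketch of such an audit follows, assuming a known-good baseline was captured while the system was in a safe state; the baseline.json file and the watched configuration files are illustrative only.

    # Audit sketch: compare the current state of watched files against a
    # baseline recorded when the system was known to be safe. File names and
    # paths are hypothetical.
    import hashlib
    import json
    from pathlib import Path

    WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]

    def snapshot(paths):
        """Hash each watched file so changes in state are detectable."""
        return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

    def audit(baseline_file="baseline.json"):
        """Return the files whose current state differs from the baseline."""
        baseline = json.loads(Path(baseline_file).read_text())
        current = snapshot(list(baseline))
        return [p for p, digest in current.items() if digest != baseline.get(p)]

    if not Path("baseline.json").exists():
        # first run: record the known-good baseline
        Path("baseline.json").write_text(json.dumps(snapshot(WATCHED), indent=2))
    else:
        for changed in audit():
            print(f"ALERT: unauthorized state change detected in {changed}")

The same pattern extends to process listings and network telemetry: the aim is to detect drift from a known-good state, rather than to match known-bad artifacts.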

Join the computer safety movement today!

Author of Network Security Assessment (O’Reilly Media) and co-founder of AlphaSOC, Inc.