Unpatchable security flaws

Loren Kohnfelder
Aug 18, 2017

How do you possibly mitigate unpatchable security flaws?

This isn’t a hypothetical question: consider, for instance, the recent alert from CERT about how attaching a small box to most modern cars can disable major components, including safety systems such as airbags and brakes. This vulnerability targets error handling for the Controller Area Network (CAN) Bus standard, and it appears that patching a fix is infeasible.

Perhaps by 2020 some new cars will have a new bus standard immune to the problem, but until then consumers just have to live with this. Securing physical access to the bus cable is the only known mitigation, hardly easy to retrofit without making service much more cumbersome.

Unpatchable security flaws are software’s worst nightmare, and almost without exception they would be classified as design-level security flaws. When flaws are by design, all options for remedy are bleak:

  • redesign and reimplement all affected code — design flaws typically impact large amounts of code
  • break some functionality in the interest of mitigation — with the risk that the cure is worse than the original problem
  • convince customers it isn’t a bug, it’s a feature — however, “loss of brakes” is going to be an extremely hard sell

In the case of this CAN bus error handling, it’s easy to see that none of these is promising in the least. Any number of components, often produced by different makers, connect to the common bus and interact in complex ways. Error handling is a critical function that is also difficult to test thoroughly, given the matrix of combinations that must play together and all the possible failure modes that other components must endure gracefully. Just getting all components updated to comply with a new standard is a Herculean task, much less redoing all the integration testing.

Patching even one component would be a nightmare, as by definition it involves intentionally implementing non-standard bus behavior that no other component could be expected to interoperate with. Since this bug involves error interactions, everything needs to change all at once for any of it to work.
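To see why this is a design flaw rather than a coding bug, here is a minimal Python sketch of the CAN 2.0 error-confinement rules that the attack abuses. It is a simplification (the real standard also tracks a receive error counter and has exception cases), and the `CanNode` class is purely illustrative, but the thresholds (+8 per transmit error, −1 per success, error-passive above 127, bus-off above 255) come from the specification:

```python
class CanNode:
    """Simplified model of CAN 2.0 transmit error confinement.

    Per the standard: the transmit error counter (TEC) increases by 8
    on each transmit error and decreases by 1 on each successful
    transmission; a node becomes error-passive above 127 and enters
    bus-off (removes itself from the bus) above 255.
    """

    def __init__(self):
        self.tec = 0  # transmit error counter

    @property
    def state(self):
        if self.tec > 255:
            return "bus-off"
        if self.tec > 127:
            return "error-passive"
        return "error-active"

    def transmit(self, ok):
        if self.state == "bus-off":
            return  # node no longer participates on the bus
        self.tec = max(0, self.tec - 1) if ok else self.tec + 8

# An attacker who can corrupt every frame a victim sends drives it
# off the bus after 32 forced errors, since 32 * 8 = 256 > 255.
node = CanNode()
frames = 0
while node.state != "bus-off":
    node.transmit(ok=False)
    frames += 1
print(frames, node.state)  # 32 bus-off
```

The point of the sketch is that the node silences itself exactly as the standard requires a faulty node to do; the attacker simply supplies the “faults,” so no component is misbehaving relative to its specification.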

Even though none of these options is worth serious pursuit, automakers are certainly not going to stop making cars until this is fixed. Imagine if this CAN bus were exposed over the air to the internet …

Furthermore, consider that there is no conceivable response plan anyone could have had in place that would help much. Even if all the automakers and component manufacturers cooperate, who would be in charge of redesigning the bus? It’s one thing for a consortium committee to design a new standard, but rapid response to a critical bug is another thing altogether.

I was unable to find any details about how this standard was created, but scanning the standard itself (version 2.0), I note that other than an unsupported claim of “a very high level of security” it contains no mention of security at all, nor any occurrence of the term “denial of service”. If anyone gave more than cursory, ad hoc consideration to security, there is no evidence of it that I could find.

A Design-level Security Review is what was sorely needed here. Any major software undertaking needs to frontload security review into the design phase. Before implementation begins is the best time to identify and remedy design flaws; it offers the best chance of avoiding unpatchable security flaws, and often catches a number of patchable ones early as well.

Design-level Security Reviews are especially important for distributed projects where separate teams of developers need to collaborate around a common design, including any kind of API, protocol, or data format.

Design-level flaws are always far fewer than implementation flaws, but they are almost always far more painful to deal with later in the process. That they are fewer is hardly surprising, given that the level of detail in design documents is minuscule compared to the full implementation. Yet even if a Design-level Security Review finds only a few issues, it should be well worth the effort. The act of reviewing with an eye to security increases awareness and sheds light on the most critical areas of the design, even if no deadly flaws are uncovered. Not to mention the added assurance against possible future unpatchable security bugs.

What exactly is a Design-level Security Review?

The term is admittedly vague, in large part because the way architects and developers do design is so ad hoc and varies from project to project. In my experience, design reviews are based on the principles of threat modeling — essentially, considering “what could go wrong?” — but are performed informally without the explicit methodology, and as a result can be much more efficient.

An experienced practitioner given a clear and complete design in written form can perform the review in a few steps:

  • Study the design and supporting documents for a basic understanding
  • Ask the design team clarifying questions about the design and considerations for basic threats
  • Identify the most security critical parts of the design for close attention
  • Write a summary report of findings and recommendations

So why are Design-level Security Reviews not a routine part of development cycles today?

It’s a good question, considering the potential benefits, but since nearly all software development happens out of the public eye, nobody really knows. Nonetheless, from my experience I believe I can hazard some good guesses.

  1. Few software teams — with the notable exception of major dominant software corporations — have the in-house expertise for the job.
  2. Experienced software leads overestimate their own abilities to think through the security implications of design.
  3. There is strong institutional inclination to outsource implementation security review (i.e. penetration testing) ahead of major releases — long after the design horse has left the barn.
  4. Software design itself is so ad hoc (and often poorly documented) the resources to capture a working design for analysis do not even exist.
  5. Designs morph over time and it is difficult to follow through with incremental security reviews to maintain currency.

These are substantial challenges, and it will probably require significant evolution within software practice to make Design-level Security Review a standard part of the development cycle. Nonetheless, the importance and potential advantages of these reviews are considerable and well worth the effort.

Bonus: The biggest design-level security flaw of all time that I am aware of was in Netscape Navigator version 1, which allowed script from one website to interact with window objects (and hence all document content) from other websites at will. [Mozilla’s archived same-origin documentation is here.] This massive oversight was fixed by blocking cross-site script access — and I recall there were websites that depended on the old behavior (since it just worked) and were completely broken. Fortunately, it was early enough that most websites were unaffected, and with Navigator dominating the browser market, once it enforced the new rules the change was relatively painless for most of the community. However, modern browsers still carry considerable security baggage in the Same Origin Policy (for details, see the Browser Security Handbook).



Loren Kohnfelder

Author of Designing Secure Software: a guide for developers. Find me at https://designingsecuresoftware.com/ Writing software since 1968. Living on Kauai.