
When tech gets its Nader moment

or how to get software liability and certification wrong or right

The ever spot-on Quinn Norton recently called for a Ralph Nader moment in technology in her ‘Stop Taking Dick Pics, But Not Because of the NSA. (SFW)’. For those younger than forty: Ralph Nader is the grandfather of safety and consumer rights all over the world, because he wrote ‘Unsafe at Any Speed’, an indictment of the then callous disregard for safety in the automotive industry. As far as safety and consumer rights are concerned, the world has changed for the better since Ralph Nader’s book ignited regulation in favour of safety and the consumer rights phenomenon in general.

The current state of product liability for information technology products and services is not unlike that of the automotive industry in the 1950s. It can be summed up by the term ‘as-is’: software license agreements typically contain an ‘as-is’ clause which expressly disclaims any fitness for any purpose of the software covered by the license agreement.

But what if a Ralph Nader moment were to happen to the information technology industry? Would it become a better place? Possibly, but not necessarily, because what transformed the automotive industry were two major developments: strict liability and ever more stringent certification requirements, such as crash testing. And pardon the jargon: ‘strict liability’ requires some explanation.

Ford Pinto from Top Secret

In most legal systems there are two extreme flavours of tort liability: liability based on causality and liability based on assumption of risk. Liability based on causality is the general rule, so general that it typically is not even designated as such. Liability based on assumption of risk is typically called strict liability. The difference is best explained in terms of the infamous Ford Pinto burn accidents. The Ford Pinto had a rather unfortunate proclivity to burst into flames in rear-end collisions. The proximate cause of the horrible burn injuries was the rear-end collision; the ultimate cause was a deeply flawed design of the Ford Pinto. So deeply flawed that Ford was well aware of its implications, but chose to go ahead with it because fixing it would cost a few tens of US dollars per car and would be more expensive than paying off injury claims. Under liability based on causality, the driver of the car that caused the rear-end collision would be solely liable. Under strict liability, Ford Motor Company would also be liable, because by putting a car on the market it assumes the risk of liability for flaws in its design. In practice, tort liability is typically decided upon a blend of these two flavours.

Reams of legal literature have been written about this topic, so all of this is an extreme simplification, if only because it leaves out variables such as foreseeability of the damages, negligence in taking mitigating measures, et cetera. The point is that most industrialised nations have chosen to introduce incentives for improving quality and safety for producers of artefacts and services that can cause bodily harm through chains of causality that are too long for traditional liability claims to be successful. Product and medical liability are the most common examples in which strict liability is in play. The fun part of it all: the ability to exonerate a supplier from this type of liability through contractual terms and conditions is typically limited or even denied by statute law.

Paired with ever more stringent certification and, in some jurisdictions, the ability to instigate class-action lawsuits, this has resulted in a significant reduction in traffic deaths and injuries per kilometre travelled, as well as in cases of medical malpractice. It has also resulted in a deeply held contempt, if not hatred, for the legal profession among health professionals, but hey, you did not go to law school to be loved by everyone, did you?

The parallels with the general brokenness of everything related to information technology are striking: certification requirements are usually non-existent, and causality chains are sufficiently long that even without onerous exoneration clauses in contracts, liability is very difficult to claim. Which has not kept legal counsels of technology companies from introducing such exoneration clauses anyway.

So yes, the technology sector’s Ralph Nader moment is probably at least a decade overdue, and it is very unlikely the sector will go another decade without a push for measures similar to those taken elsewhere taking hold. So what could possibly go wrong with implementing these same measures here?

First of all, software is speech. That it is not always terribly readable for a layperson, especially when it comes in the form of object code, does not take away from that. We do not consider a text written in cuneiform unworthy of free speech protection either, although only people with special (if not arcane) knowledge can access it. So any certification or strict liability demands will be restrictions on speech. Legal frameworks for freedom of expression typically allow for more leeway in the case of commercial speech, but that complicates matters even more: is open source software commercial speech or not? Nonetheless, there is precedent. All so-called Intellectual Property Rights (IPRs) are restrictions on the freedom of expression that do have some justifications, as are concepts such as libel. That said, the possibilities to get it wrong are endless, and if the attempts to regulate surveillance technology exports and the trade in zero-day exploits are any indication, it is unlikely that legislators will realise the unintended consequences of legislative action in time.

The two biggest potential pitfalls are the introduction of certification requirements for publishing software in general, not unlike the medieval guilds, and a further criminalisation of bona fide security research. Of the latter there are several examples, on either side of the Atlantic, of researchers pointing out security flaws being threatened or actually taken to (criminal) court for doing so. Notable examples are Ed Felten’s research into voting machines and the criminal prosecutions of Dmitry Sklyarov and Andrew ‘Weev’ Auernheimer in the USA. The (dropped) criminal prosecution of Brenno de Winter for revealing flaws in RFID passes for public transport and NXP taking the University of Nijmegen to court for revealing flaws in its MIFARE RFID chip are just two examples from the Netherlands of information technology providers and operators being similarly interested in shooting the messenger rather than fixing the issues. The former pitfall would, aside from the free speech issues, create an unnecessary barrier to entry and result in a subsequent loss of innovation in information technology. Moreover, certification may very well lead to requirements that will soon be obsolete or even counterproductive. This is acceptable for products with longer life cycles, like airplanes and cars; less so for fast-iterating products like software. The cure may very well be worse than the illness if certification were to prevent the better practices of tomorrow from replacing the best practices of today.

Both potential pitfalls can be avoided by realising that liability is not a goal in itself, but a means to create accountability for issues in software. And when accountability becomes the goal, it is much easier to prevent unintended consequences by focusing on transparency: for example, by taking into account to what extent the security of products can be assessed by downstream recipients, security warnings have been given to downstream recipients, and issues can be fixed by downstream recipients. So the extent to which strict liability can be attributed should be a function of a) source code availability and b) vulnerability disclosure and patch availability. And since such a function is by definition qualitative and not quantitative, its curve must be fuzzy.

Pseudo-scientific depiction of liability gradients
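In the same pseudo-scientific spirit, the fuzzy function described above can be sketched as a toy model. The function name, the equal weighting of the two factors, and the quadratic curve are all hypothetical illustrations, not drawn from any statute or the original text:

```python
def liability_exposure(source_available: float, disclosure_practice: float) -> float:
    """Toy model of strict-liability exposure as a fuzzy function of
    (a) source code availability and (b) vulnerability disclosure and
    patch availability, each expressed on a 0.0-1.0 scale.

    The equal weights and the quadratic curve are hypothetical choices
    for illustration only.
    """
    # Transparency on either axis reduces exposure; full transparency
    # on both drives it towards zero, full opacity towards one.
    transparency = 0.5 * source_available + 0.5 * disclosure_practice
    # A smooth (fuzzy) curve rather than a hard threshold.
    return (1.0 - transparency) ** 2

# Fully proprietary, no disclosure: maximal exposure.
print(liability_exposure(0.0, 0.0))  # 1.0
# Open source, patches released before disclosure: minimal exposure.
print(liability_exposure(1.0, 1.0))  # 0.0
# Proprietary but a diligent discloser: somewhere in between.
print(liability_exposure(0.0, 1.0))  # 0.25
```

The point of the sketch is only that the gradient is continuous: a vendor moving along either axis, more auditable source or better disclosure practice, gradually reduces its exposure, rather than flipping a binary switch.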

The above means that if the source code is proprietary (no source code available and no right to change it) and a vendor does not disclose vulnerabilities it has been made aware of, a strict liability model would come into play. This would not be the same as data breach notifications, which put disclosure obligations on operators, not producers, of information systems. Also, data breach notification obligations are typically obligations to disclose to regulators, not to users, and are usually enforced through criminal or administrative law. Making exoneration from product liability dependent on the disclosure of vulnerabilities would be more a matter of private law.

However, the ability to exonerate oneself should only come into play for vendors of products that are fully proprietary, so not even auditable, after they have provided fixes for known vulnerabilities. This is because disclosing a vulnerability without allowing or providing the means to fix it should not fully exculpate one from responsibility. Also, not every disclosure is equal: merely informing users that there is a problem may not constitute due diligence; informing them about the exact nature of the problem and providing suggestions for mitigating measures more likely does.

On the other end of the spectrum is open source software. This type of software is both auditable and fixable, since not only is the source code available, but the license also allows for changing it. Vulnerabilities are commonly disclosed after patches redressing them have been made available. Even without taking into account additional factors, such as that it is often made available for free and that the relation between producer and user is sufficiently nebulous that damage is rarely foreseeable, it is clear that there already is a sufficiently high level of accountability that the introduction of strict liability may be unnecessary or even counterproductive. The lack of product liability (all open source licenses disclaim product liability to the fullest extent possible) has clearly done no harm in this regard.

The above is not a silver bullet. Experience in the aviation industry has taught us that disclosure obligations can lead to an over-abundance of incident reporting. Also, the current emphasis on user education, diligent systems maintenance and incident response on the user side of the equation should not cease, but broaden to include accountability, if only because the availability of a security patch often means that the actual technical details of the vulnerability are also available, through documentation of the patch as much as through analysis of the patch.

That said, if a liability approach based on accountability does not happen, the temptation to remain in denial of security issues remains too high. In a way, software copyright law as well as the Convention on Cybercrime already provide perverse incentives against accountability, because they provide tools for shooting the messengers of bad news. As such it is remarkable that the current public policy debate on information security consistently overlooks accountability as an indispensable part of tackling the issue.

(With thanks to Amelia Andersdotter, Rasmus Larsson and Eric Skoglund for valuable feedback on the initial drafts, as well as to Quinn Norton herself for pointing out my crimes against the English language and being the friendly native speaker who is willing to teach the Oxford comma.)