I’ve written here often about the many aspects of the problem of software security, and have suggested that one important factor contributing to the often woeful state of security is our old friend, the economic externality. Often, the costs of a security failure are borne by someone other than the developer or vendor of a piece of software; the direct benefit, to the producer, of fixing a security flaw may be less than the cost of fixing it. Especially when it is difficult for the customer to evaluate the product’s security in advance, the market may deliver less than the optimal level of security.
Ars Technica has a good article, in its “Law & Disorder” blog, discussing some of these same issues. (The article is by Timothy B. Lee and features a discussion with Prof. J. Alex Halderman of the University of Michigan. Both have posted regularly at Freedom to Tinker, the blog run by Princeton’s Center for Information Technology Policy.)
As the article points out, software producers have traditionally not been liable for security or other defects in their products. This is probably, at least in part, a historical artifact, stemming from the early days of computing, when (mostly systems) software was bundled with the hardware, and applications were mostly written by the customer. I can think of no obvious reason that software should be treated differently, in terms of product liability, than any other complex product, such as an automobile. Today, of course, the situation is complicated by the fact that automobiles, for example, contain a great deal of software. If a manufacturer decides to replace an electro-mechanical control with a software-based system, should that allow it to shed a liability it previously had?
As Prof. Halderman says, it is probably not reasonable to expect the average software consumer to be able to evaluate a product’s security with any degree of confidence.
He [Halderman] argued that consumer choice by itself is unlikely to produce secure software. Most consumers aren’t equipped to tell whether a company’s security claims are “snake oil or actually have some meat behind them.”
Just assuming that the market will sooner or later sort out the good, the bad, and the ugly of software security falls into the realm of Management by Wishful Thinking. We already use regulation and other controls in markets for other complex goods, such as medical care and food safety, where the consumer cannot reasonably evaluate the product in advance.
There is a legitimate concern about using regulation as a tool. It is often expressed as a fear that regulation will “stifle innovation”. I think a better way of putting it is that regulation in practice tends to specify methods rather than results: a rule that mandates a particular technique can freeze in place whatever was state of the art when the rule was written.
Making producers directly liable for the economic damages caused by security faults addresses the problem of externalities directly. (This worked well in a similar situation with credit cards in the US.) In essence, liability provides a feedback mechanism to focus the producers’ minds on security.
By making companies financially responsible for the actual harms caused by security failures, lawsuits give management a strong motivation to take security seriously without requiring the government to directly measure and penalize security problems.
Requiring producers to disclose security failures would also make the market more transparent.
Making vendors liable for security flaws is just another example of addressing externalities by trying to align people’s interests with their ability to influence the outcome. Bruce Schneier has been writing about this for a long time; he wrote this essay for Wired in 2006.