Who Do You Trust, Part 3

In the first two articles in this series, I talked about the issue of trust. Thinking about trust as it pertains to computer security makes clear that it is a more complex concept than merely trusting someone not to pinch the silver: one must trust not only the honesty, but also the competence, of other people. And looking at trust in the context of food safety makes clear that, in our modern, interconnected world, deciding whether trust is warranted is far from simple, even for people who are in a position to be relatively well informed.

Although there are a number of factors involved in this problem of trust, one important issue is the availability of information. As I've noted before, the increasing complexity and specialization of our society makes it harder for any single person to know enough to evaluate all the products or services he might use or need. We already recognize this by having qualification and licensing requirements for certain services and products: for example, you cannot, just because the fancy takes you, go out tomorrow morning and start work as a neurosurgeon, or as a plumber. And we have rules that require the provision of certain information: nutrition and ingredient labels on food products are an example. Even with these protections, it is not always easy to determine the relative quality of foods, or doctors, or even plumbers.

What does this mean for software? It is a product arguably more complex than any of the examples I've cited. It is certainly not within the ability of the average person to look at a shrink-wrapped software box and decide whether the contents are of adequate quality, or give good value for the money. And, as with food safety, it is sometimes in the vendor's best economic interest to conceal problems rather than fix them.

There is one specialized area that I think offers a useful example here: the science of cryptography, the methods of encoding and decoding messages in order to keep them secret or to ensure their integrity. It has been a cardinal principle of the field for many years that the security of a cryptographic system must be maintained even if the methods or algorithms used in the system are known to potential adversaries; the security should depend only on maintaining the secrecy of the cryptographic keys. (This is analogous to saying that a combination lock must be secure, even if the bad guys have a sample of the lock, as long as the combination itself is secret.) This is sometimes called Kerckhoffs' Principle, after Auguste Kerckhoffs von Nieuwenhof, a Dutch linguist who stated it definitively in his 1883 essay, La Cryptographie Militaire:

The security of the crypto system must not depend on keeping secret the crypto-algorithm.  The security depends only on keeping secret the key.

This is sometimes described as avoiding "security by obscurity". In fact, as a practical matter, experienced security people will refuse to use any cryptographic system whose methods and algorithms are not published and open to critical examination. Getting cryptography right is hard, and experience has shown that only prolonged scrutiny by a large number of knowledgeable people can give any assurance that a system is truly secure. It is no accident that almost every proprietary security system built on a "secret sauce" has been broken soon after its general release (for example, the Content Scramble System used to encrypt DVDs, or the software copy-protection schemes of the early PC era).
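To make the principle concrete, here is a minimal sketch of my own (not taken from any particular system), using only Python's standard library. The algorithm, HMAC with SHA-256, is completely public; the integrity of a message rests entirely on the secrecy of the key.

    import hashlib
    import hmac
    import secrets

    # The key is the only secret; everything else about the scheme is public.
    key = secrets.token_bytes(32)

    message = b"attack at dawn"

    # The sender computes an authentication tag using a published algorithm.
    tag = hmac.new(key, message, hashlib.sha256).digest()

    # The verifier, holding the same secret key, recomputes the tag and
    # compares; compare_digest() does the comparison in constant time.
    assert hmac.compare_digest(
        tag, hmac.new(key, message, hashlib.sha256).digest()
    )

An adversary who knows every detail of the algorithm, but not the key, still cannot forge a valid tag. That is exactly the property Kerckhoffs demands.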

With software, we have a similar problem. The product is complex, and evaluating it requires, at least in part, expert knowledge. The usual functional testing of software does not really help: it is designed to check that the software does what it is supposed to do when it is used as it is supposed to be used. It does not attempt to find all the ways in which the software can be broken by misusing it, especially when the misuse is malicious.
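A small, hypothetical Python sketch illustrates the gap (the function name and paths are invented for illustration): a file-lookup routine passes its functional test on ordinary input, yet a malicious caller can misuse it to read files it was never meant to expose.

    import os

    def read_user_file(username):
        """Return the contents of a per-user data file.

        Functional tests call this with ordinary usernames, and it works.
        """
        path = os.path.join("/var/app/userdata", username)
        with open(path, "rb") as f:
            return f.read()

    # Intended use -- exactly what functional testing exercises:
    #   read_user_file("alice")            -> /var/app/userdata/alice
    #
    # Malicious misuse -- what functional testing never tries:
    #   read_user_file("../../etc/passwd") -> escapes the data directory,
    #   a classic path-traversal flaw.

Functional tests confirm the first call behaves correctly; only an adversarial eye, looking at how the code constructs the path, spots the second.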

Using an open-source model of software development allows us to address this problem, at least in part. When the source is available, anyone who is interested can examine it for flaws. When the software does something unexpected, one can check the source to see what is going on. And it almost goes without saying that the source is the best possible documentation, because it avoids the problem that is always the death knell of software documentation: documentation that does not agree with the software.

It's useful, too, to consider the example of science. Someone proposing a new scientific theory or principle will be taken seriously only if the work and experimental results underlying the new idea are published. No one is immune from mistakes, but the scrutiny of a large, interested community is in general much more effective at detecting errors than the efforts of any individual.
