Previously, I’ve written about the idea of trust, and how it takes on a special meaning and importance in the context of software development. In fact, it is an important consideration in all kinds of security issues. One of the current buzzwords in the IT world is cloud computing, sometimes also called Software as a Service (SaaS). The idea is that, instead of storing data and processing it on your local workstation, you use an application on a central server, via a Web browser over the Internet. Web E-mail services, such as Yahoo! Mail or Google’s Gmail, are examples of SaaS, as are many other offerings: Google Docs, Facebook, and YouTube.
Despite what the marketing folks might like you to believe, this is not a new concept. It is just a reincarnation, with new costumes and scenery, of the time-sharing model that was prevalent, with services like the original AOL and CompuServe, before PCs became ubiquitous. (I was using an American Airlines system called EAAsy Sabre to make travel reservations via CompuServe in the early 1980s.) Whether new or old, the model raises another issue of trust: do you trust Google, for example, to keep your E-mails safe and secure?
Although it is tempting to think that one can ensure one’s security by maintaining “hands on” control of one’s complete system, this is, except perhaps in very special circumstances, a pipe dream. You can use open-source software, design security into your network, and so on; but, ultimately, you have to trust someone at some level. In his Schneier on Security blog, Bruce Schneier has an excellent essay on cloud computing, in which he makes this point:
IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system and software vendors — and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. We’ve spent decades dealing with worms and rootkits that target software vulnerabilities. We’ve worried about infected chips. But in the end, we have no choice but to blindly trust the security of the IT providers we use.
SaaS moves the trust boundary out one step further — you now have to also trust your software service vendors — but it doesn’t fundamentally change anything. It’s just another vendor we need to trust.
The critical thing you must do, in order to be reasonably secure, is to make sure you know whom you are trusting with what. Getting that knowledge can be harder than you think, as Ken Thompson of Bell Labs pointed out in his 1983 Turing Award lecture, Reflections on Trusting Trust. He illustrates how a “Trojan Horse” could be inserted into a system, in the form of a malicious modification of the C compiler that, once installed, would be very hard to detect. Although he uses the compiler as an example, it is clear that there are many possible “infection vectors” for this type of attack:
In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.
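To make the mechanism concrete, here is a toy sketch in C of the two-stage attack Thompson describes. All of the names here (compile_line, the emit_* helpers) and the naive string matching are hypothetical simplifications of mine, not Thompson’s code; a real trojan would emit machine code rather than print messages:

```c
#include <stdio.h>
#include <string.h>

/* Stub "code emitters" -- a real attack would generate object code;
   here we just print what the compiler would silently do. */
static void emit_normal_code(const char *line)  { printf("code:   %s\n", line); }
static void emit_login_backdoor(void)           { printf("trojan: also accept a fixed master password\n"); }
static void emit_self_reinserting_trojan(void)  { printf("trojan: re-insert both of these tests\n"); }

static void compile_line(const char *line) {
    /* Stage 1: when the compiler recognizes that it is compiling
       the login program, it inserts a backdoor. */
    if (strstr(line, "check_password") != NULL)
        emit_login_backdoor();

    /* Stage 2: when it recognizes that it is compiling itself, it
       re-inserts both tests, so the attack survives even after every
       malicious line has been removed from the compiler's source. */
    if (strstr(line, "compile_line") != NULL)
        emit_self_reinserting_trojan();

    emit_normal_code(line);
}

int main(void) {
    compile_line("int check_password(const char *pw) {");          /* login source */
    compile_line("static void compile_line(const char *line) {");  /* compiler source */
    return 0;
}
```

The second stage is what makes the attack so insidious: inspecting the compiler’s source code tells you nothing, because the trojan lives only in the compiled binary and reproduces itself every time that binary compiles a new compiler.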
The more the various parts of the system are open to scrutiny, the more confident one can feel; but there is no getting around the fact that there are some parts you have to take on trust. This will always be an issue, but it is not a new one, as Schneier points out:
Trust is a concept as old as humanity, and the solutions are the same as they have always been. Be careful who you trust, be careful what you trust them with, and be careful how much you trust them.
I think that understanding the fundamental problem is at least half the battle: it will allow you to focus on what’s important in the context of what you’re doing, and will help you spot purported “solutions” that are basically snake oil.