Back in the early days of cellular telephony, I had (for my job) one of the early phones, which was the approximate size and weight of a paving stone, and almost as comfortable to hold for any length of time. Obviously, cellphones have gotten a lot smaller and lighter since then. They’ve become smarter, too; while in the early days it was amazing just to be able to make and receive calls (look, Ma, no wires!), today’s phones can send and receive text messages and E-mail, store and play music, browse the Web, and more. In other words, they are becoming more and more like general-purpose computers; in fact, in many developing parts of the world (such as China and India), cellphones are probably the most common means of accessing the Internet.
Of course, there’s no silver lining without a cloud, and this increase in capability means that smart phones are also becoming more and more susceptible to the kinds of malicious software attacks that already bedevil the personal computer. So far, the scope of these attacks has been limited, but as the devices are more widely used for a greater range of applications, especially in the richer countries, we can probably expect attacks to increase.
The Technology Review has an interesting article about the security provisions in the recently released Android mobile operating system from Google. One of the features of Android that Google emphasizes is that, unlike most mobile platforms, it is an open system; in particular, it is open to anyone who wants to develop applications. This is in contrast to the usual practice in the cellphone industry, at least in the US, of vendors tightly controlling the environment. The most obvious recent example is the Apple iPhone, which can be used only on the AT&T network, and all of whose applications require Apple’s approval. (As far as I know, there is not an app for that.)
One of the ways in which Google addresses security concerns is that Android treats each application as if it were a separate user on a multi-user system:
When multiple users share a single desktop machine, the operating system is designed to protect them from each other by giving each its own account. From one account, it’s not possible to see files in other accounts, or to affect another user’s data. In the same way, the Android operating system treats each application as a separate user, so that if an attacker breaks into the Web browser, for example, he won’t be able to access the address book.
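The mechanism behind the quoted model is ordinary operating-system user separation: each application gets its own user ID and a private data directory that other IDs cannot read. Here is a toy sketch in Python of that filesystem side, using nothing Android-specific, just standard POSIX permission bits (the directory names are made up for illustration):

```python
import os
import stat
import tempfile

# Toy illustration, NOT Android code: mimic per-app private storage by
# giving each "app" a directory that only its owner can read or enter.
base = tempfile.mkdtemp()
for app in ("browser", "addressbook"):
    path = os.path.join(base, app)
    os.mkdir(path)
    os.chmod(path, 0o700)  # owner-only: no group or world access

# Inspect the browser's directory: the permission bits show that any
# other user ID (i.e., any other app) is locked out at the OS level.
mode = stat.S_IMODE(os.stat(os.path.join(base, "browser")).st_mode)
print(oct(mode))  # 0o700
```

On Android the same effect comes from assigning each installed application its own Linux UID, so the kernel, not the application code, enforces the boundary.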
Android applications must also specify what access to the phone’s capabilities they need, and the user is asked to grant permission when the application is installed. This is a better scheme than asking the user every time access is needed (as, for example, Windows Vista does); under a constant stream of questions, the dialog box effectively degenerates into “Click OK to be able to work”.
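Concretely, an Android application declares the capabilities it wants in its manifest file, and that list is what the user is shown at install time. A minimal sketch (the package name here is invented for illustration):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.demo">
    <!-- Declared up front; shown to the user when the app is installed. -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />
</manifest>
```

An application that does not declare a permission simply cannot use the corresponding capability, so there is nothing to ask the user about later.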
There are some other thoughtful security features as well. In the PC environment, players for media files have been a fruitful source of security flaws; Android isolates these:
The team also looked at bits of software that are common entry points for attackers. For example, Cannings says, the software that runs media, such as audio and video on a Web browser, is very complex and a common target. In Android, that software runs apart from the browser in a separate media server, so that if it is compromised, an attacker can’t access the passwords and cookies stored in the browser.
Google does have to deal with the problem of multiple vendors selling Android phones in multiple countries, all of whom need to receive timely security updates and distribute them to their customers. Only time will tell how well, collectively, they can manage this. Still, security works best when it is designed into a system from the beginning, rather than being added later, like a hat on a horse. It’s mildly encouraging to see that Google is paying attention.