Son of Stuxnet?

July 31, 2011

Last fall, I wrote a couple of posts here about the Stuxnet worm, one of the more sophisticated bits of malware to have surfaced.  One of the installations severely affected by the worm was the Iranian nuclear facility at Natanz.  Analysis of Stuxnet showed that it was designed to focus its attacks on a particular industrial control system supplied by the German firm Siemens, a system used at Natanz to control the centrifuge cascade for enriching uranium.  (Wired has an interesting article describing the process of analyzing Stuxnet.  The worm first compromises control software running on Microsoft Windows PCs, then installs a rootkit in the programmable logic controller [PLC] used to control the machinery.)  There has been some suspicion that Stuxnet was developed specifically to target the Natanz facility, possibly by Israel or the United States.

Recently, some worries  have been expressed that Stuxnet, or a variant of it, might pose a significant threat to parts of the US infrastructure.  (If the US was indeed involved in Stuxnet’s development, the irony requires little comment.)   The Department of Homeland Security, in testimony to a congressional committee, expressed concern that Stuxnet might be used to mount such an attack.  While the Stuxnet worm was designed to attack Siemens systems, it could in principle be modified to seek out other types of control systems; various versions of the worm’s code have been available on the Internet for some time.  The effectiveness of such an attack would probably depend on how well it could manipulate the PLCs at the heart of the control system, but it could be a serious nuisance at least.

Wired also reports that a potential attack against a different segment of infrastructure is scheduled to be discussed at the DefCon 19 hacker conference, taking place later this week in Las Vegas. Apparently, many security and control systems used in prisons (used, for example, to control access and cell doors) use PLC-based systems fairly similar to the Siemens systems attacked by Stuxnet.

[John] Strauchs, who says he engineered or consulted on electronic security systems in more than 100 prisons, courthouses and police stations throughout the U.S. — including eight maximum-security prisons — says the prisons use programmable logic controllers to control locks on cells and other facility doors and gates.

Some of the networks connecting these systems are also connected to the Internet, providing an attack vector; or they may include other computers (in, for example, a prison laundry or commissary) that might be compromised by USB drives or phishing attacks.  Strauchs and a group of research colleagues have published a paper [PDF] describing the threat.

We are all used to hearing about PC viruses, malicious Web sites, and other varieties of malicious software.  SCADA [Supervisory Control and Data Acquisition] systems are mostly “out of sight, out of mind”, but the example of Stuxnet should serve to remind us of how vulnerable they can be.

Update Tuesday, 2 August, 23:24 EDT

Bruce Schneier also has a blog post on the attack against PLC-based systems in prisons.  He says, reasonably, that this is a minor risk at present.   Stuxnet was very sophisticated, and developing an equivalent for a different environment is not a trivial task. Nonetheless, the long-term lesson is clear.

As we move from mechanical, or even electro-mechanical, systems to digital systems, and as we network those digital systems, this sort of vulnerability is going to only become more common.

Think about an old-fashioned warded lock.  The technology was not sophisticated, and the lock wasn’t hard to pick — but, as the saying goes, you had to be there.  It is a truism of security that attacks only get better over time, and interconnected digital systems let a lot more players in on the action.


Make ‘Em Pay

July 30, 2011

I’ve written here often about the many aspects of the problem of software security, and have suggested that one important factor contributing to the often woeful state of security is our old friend, the economic externality.  Often, the costs of a security failure are borne by someone other than the developer or vendor of a piece of software; the direct benefit, to the producer, of fixing a security flaw may be less than the cost of fixing it.  Especially when it is difficult for the customer to evaluate the product’s security in advance, the market may deliver less than the optimal level of security.

Ars Technica has a good article, in its “Law & Disorder” blog, discussing some of these same issues.  (The author is Timothy B. Lee, and the article features a discussion with Prof. J. Alex Halderman of the University of Michigan.  Both have posted regularly at Freedom to Tinker, the blog run by Princeton’s Center for Information Technology Policy.)

As the article points out, software producers have traditionally not been liable for security or other defects in their products.  This is probably, at least in part, a historical artifact, stemming from the situation in the early days of computing, when (mostly systems) software was bundled with the hardware, and applications were mostly written by the customer.  I can think of no obvious reason that software should be treated differently, in terms of product liability, than any other complex product, such as an automobile.  Today, of course, the situation is complicated by the fact that automobiles, for example, contain a great deal of software.  If the manufacturer decides to replace an electro-mechanical control with a software-based system, should that enable him to discard a liability he previously had?

As Prof. Halderman says, it is probably not reasonable to expect the average software consumer to be able to evaluate a product’s security with any degree of confidence.

He [Halderman] argued that consumer choice by itself is unlikely to produce secure software. Most consumers aren’t equipped to tell whether a company’s security claims are “snake oil or actually have some meat behind them.”

Just assuming that the market will sooner or later sort out the good, the bad, and the ugly of software security falls into the realm of Management by Wishful Thinking.   We already use regulation and other controls in areas such as medical care and food safety, where the consumer cannot reasonably evaluate the product in advance.

There is a legitimate concern about using regulation as a tool.  It is often expressed as a fear that regulation will “stifle innovation”.  I think a better way of putting it is that regulation in practice tends to specify methods rather than results.

Making producers directly liable for the economic damages caused by security faults addresses the problem of externalities directly.  (This worked well in a similar situation with credit cards in the US.)  In essence, liability provides a feedback mechanism to focus the producers’ minds on security.

By making companies financially responsible for the actual harms caused by security failures, lawsuits give management a strong motivation to take security seriously without requiring the government to directly measure and penalize security problems.

Requiring producers to disclose security failures would also make the market more transparent.

Making vendors liable for security flaws is just another example of addressing externalities by trying to align people’s interests with their ability to influence the outcome.  Bruce Schneier has been writing about this for a long time; he wrote this essay for Wired in 2006.


Oracle Releases Java 7

July 29, 2011

Oracle has announced the release of the next major version of Java, Java 7.  The new version incorporates a number of new language features and APIs; more detail is given in the Release Notes.  It can be downloaded here.
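
For those who have not yet read through the release notes, here is a minimal sketch of a few of the new language changes (the “Project Coin” features): the diamond operator for generic type inference, underscores in numeric literals, strings in switch statements, try-with-resources, and multi-catch.  This is just an illustration of my own; the file name notes.txt is a placeholder.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class Java7Sampler {

    public static void main(String[] args) {
        // Diamond operator: type arguments on the right-hand side are inferred
        List<String> releases = new ArrayList<>();
        releases.add("Java 7");
        releases.add("Java 6 update 26");

        // Underscores in numeric literals make large constants easier to read
        long bytesPerGigabyte = 1_073_741_824L;
        System.out.println("Bytes per GiB: " + bytesPerGigabyte);

        // Strings can now be used directly in switch statements
        for (String release : releases) {
            switch (release) {
                case "Java 7":
                    System.out.println(release + ": new major release");
                    break;
                default:
                    System.out.println(release + ": current maintenance release");
                    break;
            }
        }

        // try-with-resources closes the reader automatically, even on error;
        // the multi-catch clause handles two exception types in one block
        try (BufferedReader reader = new BufferedReader(new FileReader("notes.txt"))) {
            System.out.println(reader.readLine());
        } catch (IOException | RuntimeException e) {
            System.out.println("Could not read notes.txt: " + e.getMessage());
        }
    }
}
```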

At present, this release will be primarily of interest to developers, and to those who have a portfolio of existing applications that they wish to test against the new release.  I see no reason for ordinary users to be in any hurry to update their systems until some time has passed and the new features begin to show up in real-life applications.  It is a fact of life that major new versions of software tend to have at least their share of bugs and problems.  (You should, though, make sure that your system is running the latest release of Java 6, version 6 update 26.)

In fact, even developers might wish to tread cautiously for a bit.  There is a report at the Apache Lucene project site of some compiler optimization  bugs in the new version.   So be careful out there.


Google Issues Infection Warnings

July 20, 2011

As every Internet user knows, Google has its finger in a lot of pies.  Yesterday, the company began warning some users of its search engine that their computers appeared to be infected with malware.

In an announcement on the Official Google Blog, Google security engineer Damian Menscher said that the company first noticed some unusual patterns of network traffic during a routine maintenance operation.

Recently, we found some unusual search traffic while performing routine maintenance on one of our data centers. After collaborating with security engineers at several companies that were sending this modified traffic, we determined that the computers exhibiting this behavior were infected with a particular strain of malicious software, or “malware.”

Following the investigation, Google began to return a warning message at the top of the search results for some users, notifying them that their machines appeared to be compromised.

[Image: Google's malware warning at the top of search results]

Apparently, this particular variety of malicious software causes requests sent by the infected computer to be routed via a small group of proxy servers, which are controlled by the attackers.  If the request is to a search site like Google or Bing, the proxy can then alter the returned search results to direct the user toward specific pay-per-click or malicious sites.  Google’s hypothesis is that the malware originally infected the users’ computers via a fake anti-virus program.
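
As a rough illustration of the idea (and only that: the class, the method names, and the hard-coded addresses below are my own inventions, not Google's actual mechanism), a search front end that knows which addresses the attackers' proxies use could flag requests arriving from them and prepend a warning to the results page:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/**
 * Hypothetical sketch of the warning mechanism described above: search
 * requests arriving from attacker-controlled proxy addresses are flagged,
 * and a notice is prepended to the results page.  The addresses are
 * placeholders from the documentation ranges, not real data.
 */
public class ProxyWarningFilter {

    private static final Set<String> SUSPECT_PROXIES = new HashSet<>(
            Arrays.asList("192.0.2.10", "192.0.2.11", "198.51.100.7"));

    /** True if the request came through one of the known bad proxies. */
    public boolean looksInfected(String clientAddress) {
        return SUSPECT_PROXIES.contains(clientAddress);
    }

    /** Prepends a warning banner to the results when the client looks infected. */
    public String renderResults(String clientAddress, String resultsHtml) {
        if (looksInfected(clientAddress)) {
            return "<div class=\"warning\">Your computer appears to be infected.</div>"
                    + resultsHtml;
        }
        return resultsHtml;
    }
}
```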

Because of the huge volume and diversity of Internet traffic that Google sees, it  is in an excellent position to detect this kind of thing; I think the company is to be commended for taking the trouble to notify users.

In addition to the announcement, Google has a Help Center article with advice on cleaning up an infected PC.   Brian Krebs also has an article on this development at his Krebs on Security blog.


Does Google Affect Your Memory?

July 20, 2011

Last week, the Washington Post carried a report of some new research that seemed to suggest that people’s use of Internet search engines, such as Google, is affecting the way their memories are organized.   The study, carried out by Betsy Sparrow of Columbia University, Jenny Liu of the University of Wisconsin (Madison), and Daniel Wegner of Harvard University, was published in the journal Science [abstract] on July 14.

The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it.

The four experiments that the authors carried out, using a group of Columbia undergraduates as subjects, are described in the Post article (there is an article on this research in the “Wired Science” blog at Wired as well), and were fairly ingenious.  In essence, the researchers found that, when the subjects thought that new information would later be available online, they remembered the information less well; and they remembered where to find the information better than they did the content.

This may sound like something more novel than it actually is.  As the study’s authors point out, strategies in which people rely on computer data for part of their memory are in many ways another type of transactive memory strategy.  This is a phenomenon observed, for example, in couples and in teams of co-workers, where the group develops a collective memory system based on the particular memories of individuals and on the members’ knowledge of each other’s memories.  For example, one member of a couple may be (implicitly) assigned to remember birthdays, anniversaries, and so on.  One member of a team at work might be the “go-to guy” for a particular class of technical questions.  The result is a group memory system that potentially surpasses the capabilities of any individual.  Readers, I’m sure, will be able to think of examples from their own experience.   The Internet has made a large body of information much more easily available to the average person, allowing it to be “recruited” as a member of the group.

The authors also note that, as expected, individuals shift their memory strategies when they know that looking up information later is an option.  This also is not so new; I have rarely seen a university professor in math, or physics, who did not have an office full of books.  There are a fair number of books in my office as I sit writing this.  What is perhaps new is the “democratization” of easily accessible knowledge, because of the existence of the Internet.  Of course, there is a good deal of rubbish out there; but it is also true that many people today have access to sources of knowledge that, a few decades ago, would have been completely out of reach because of geography or economics.


Ethernet History

July 17, 2011

We are all used to hearing people talk about wireless communications technology, which has grown to an astonishing degree in a very short time.  There are a significant number of households in the US, for example, that no longer have wired “Plain Old Telephone Service” [POTS], but use cellular phones exclusively.  Similarly, many firms and residences make extensive use of wireless networking technology.  Nonetheless, wired networking technology still forms the backbone for most networks in organizations around the world; the predominant technology used is Ethernet, introduced commercially in the early 1980s, and codified in a collection of standards known as IEEE 802.3.  As computer standards go, one that lasts more than 30 years is a genuine old-timer; understanding its longevity can shed some light on what makes for a scalable standard.

Ars Technica has a very good feature article that gives an overview of the history of Ethernet; it explains how the basic design principles of Ethernet were adapted to newer circumstances, gaining enormously in network speed in the process.

These days, it is perhaps easy to think that computer networking started with the development of Internet technology in the ARPANet, a project of the Defense Advanced Research Projects Agency.  But in fact, there was a fair amount of networking in use back around 1980, before PCs were introduced.  In our firm, for example, we used electronic mail extensively, as well as systems for database access, shared calendars, and document processing, across geographic locations; other firms had similar facilities.  The problem with these systems was that they were generally proprietary.  Ours was built on IBM’s networking facilities, under the general name of Systems Network Architecture [SNA]; other manufacturers, like Digital Equipment Corp., had their own networking standards.  So the idea of developing a common, standard method of networking computers was itself a significant step.

The introduction of personal computers had the effect of making computing much more accessible to a much larger group of people; but it also meant that there were, within a relatively short time, many more computers that might be connected in a network.   Ethernet was originally developed at Xerox’s Palo Alto Research Center [PARC], and ran at a speedy (for the time) 3 Mbps over coaxial cable, used as an electrical transmission line.  It was one of three leading candidates for adoption as a standard; the others were the Token Ring technology, supported by IBM, and Token Bus, backed by General Motors.  Both of these systems regulated traffic on the LAN by passing a special message, the “token”, from one station to the next in rotation.  Only the station holding the token was allowed to transmit.  This meant that no confusion could result from everyone talking at once, and that the network’s capacity was deterministic.  Ethernet, in contrast, relies on detecting and resolving such confusion (called “collisions”), rather than on prevention.  In this case, the strategy results in a simpler, cheaper technology which, in practice, works well enough.  (Sometimes an ounce of cure is better than a pound of prevention!)
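
To make the contrast concrete, here is a small sketch of the truncated binary exponential backoff that an Ethernet station applies after detecting a collision; this is my own simplified illustration of the scheme described in IEEE 802.3, not code from any real network stack.  After the n-th successive collision, the station waits a random number of slot times drawn uniformly from 0 to 2^min(n, 10) - 1, and gives up on the frame after 16 failed attempts.

```java
import java.util.Random;

/**
 * Simplified illustration of Ethernet's truncated binary exponential backoff.
 * After the n-th successive collision, a station waits a random number of
 * slot times chosen uniformly from 0 .. 2^min(n, 10) - 1, and abandons the
 * frame after 16 failed transmission attempts.
 */
public class EthernetBackoff {

    private static final int MAX_ATTEMPTS = 16;
    private static final int BACKOFF_CAP = 10;
    private final Random random = new Random();

    /** Returns the number of slot times to wait after the given collision count. */
    public int slotsToWait(int collisions) {
        if (collisions >= MAX_ATTEMPTS) {
            throw new IllegalStateException("frame dropped after " + MAX_ATTEMPTS + " attempts");
        }
        int exponent = Math.min(collisions, BACKOFF_CAP);
        return random.nextInt(1 << exponent);   // uniform in [0, 2^exponent - 1]
    }

    public static void main(String[] args) {
        EthernetBackoff backoff = new EthernetBackoff();
        for (int collision = 1; collision <= 5; collision++) {
            System.out.printf("after collision %d: wait %d slot times%n",
                    collision, backoff.slotsToWait(collision));
        }
    }
}
```

The randomized waits are essentially all the coordination Ethernet needs; a token-passing LAN, by contrast, has to create, circulate, and recover a token, which is part of why Ethernet hardware could be simpler and cheaper.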

The original Ethernet (10BASE5) used a bus topology with relatively thick coaxial cable (RG-8X), which was fairly expensive and a pain to work with.  A more practical variant (10BASE2 or Thinnet) was developed using thinner (RG-58) coaxial cable, still in a bus topology.  These systems ran at 10 Mbps, which sounds very slow today, but was blazingly fast compared to data communications with, say, a 1200 baud modem. In the early 1990s, we ran a trading floor network of 100+ workstations and about a dozen servers, broadcasting real-time market data, all on 10 Mbps Ethernet.

Ethernet finally became nearly ubiquitous when technology was developed for running it over unshielded twisted-pair cables [UTP], like those used for telephone wiring.  The 10 Mbps version was called 10BASE-T; it was not long before 100 Mbps speeds were possible, using slightly higher grade UTP cabling technology.  Soon, Ethernet over fibre-optic media was developed, now with the potential to provide 100 Gbps.

But in the end it was Ethernet that won the battle for LAN standardization through a combination of standards body politics and a clever, minimalist—and thus cheap to implement—design. It went on to obliterate the competition by seeking out and assimilating higher bitrate protocols and adding their technological distinctiveness to its own.

In some ways, Ethernet reminds me of the UNIX operating system, whose first release was in 1969, and whose derivatives and descendants are still going strong.  In both cases, the initial designers got some very important things right; Ethernet, for example, omitted complicated collision-prevention logic, and UNIX adopted the “everything is a file” access paradigm.  Though, in both cases, much of the underlying technology has changed dramatically, the original designs provided a sound foundation.


Google and Open Source

July 16, 2011

In May, Google held its annual I/O conference for developers in San Francisco. During the conference, the Austrian technology site derStandard.at published an interesting interview with Chris di Bona, Google’s manager of Open Source.  (Most of the site is in German, but the interview itself is in English.)  I have mentioned some aspects of Google’s involvement with open source here before, but the interview gives some additional insight into how pervasive open source really is at the Googleplex.

The Chrome web browser, and the Android and Chrome operating systems (both derived, in part, from Linux), are probably the best-known of Google’s open-source projects, but there are many others, as di Bona points out:

We have released something like 1,300 open source projects to the outside world in the last five years. That amounts to 24-25 million lines of code, using a variety of licenses.

Asked specifically about where Linux is used within Google, di Bona said:

Everywhere. Every production machine / server inside of Google is running Linux, Android of course, lots of desktops.

He goes on to say that engineering desktop machines overwhelmingly run Linux (Google engineers can in most cases use what they want).  Mobile devices are perhaps 70% Mac OS X (itself a UNIX derivative), with most of the rest Linux.  There is a very small population of Windows users.  (Google, as a software developer, of course needs some Windows machines for testing.)   He also described the way the internal networks for engineering are set up [emphasis added]:

We have our own Ubuntu derivative called “Goobuntu” internally for that, integrating with our network – we run all our the home directories from a file server – and with some extra tools already built-in for developers.

I was struck by this, because the idea of having all home directories (user files) on a file server is one that we used with Sun UNIX workstations for securities trading 20 years ago.  (I mentioned this in an early post on Chrome OS.)   Doing it this way — we referred to it as having “dataless” workstations, with only the OS,  X Window System binaries, and the swap space, on the local disk — had several advantages:

  • The only files that needed regular backups resided on a file server, which was under IT Operations’ control
  • The only files with internal, possibly sensitive data, were on a file server, with physical and network security
  • A faulty workstation could be replaced very rapidly with a pre-built spare, getting the user back in business quickly
  • All user machines were built with a standard configuration, making the setup of a new machine a routine exercise.

Sun was also a proponent of this approach.

Mr. di Bona also discusses some of the differences in the way that the releases of the Chrome OS and Android are handled.  Chrome OS releases, including source code, are public as soon as the code changes are officially accepted, or committed.  Android has a schedule of periodic releases, which di Bona explains is due to the differences in the mobile device market.

If you look at Android we have lots of partners. We have chipset partners, we have handset partners, we have carrier partners. They all want to use Android and they all want to have something special about themselves.

Coordinating all these players takes more time.

Finally, the interview touches on some interesting questions about the future of the Chrome OS project and Android, and their market acceptance.

The really big question here is, will people accept the Linux desktop that looks like a ChromeOS machine, will they accept a Linux desktop that looks like Android? And if the answer is yes – and I think it is actually – then the Linux desktop will grow to be quite popular. But I don’t think the “classic” Linux desktop will ever be as popular as Mac OS X or Windows.

Working in technology for years you realize quickly how insecure most peoples machines are, how compromised they are, how compromised servers are. And I know when I use a ChromeOS machine that I don’t have to worry about this anymore, because it’s actually very very difficult for it to get compromised.

I think it’s quite possible that some security-conscious organizations will find the Chrome OS or Android model quite attractive, for at least some of their users, and especially for mobile devices.  The average user is not really able to be a competent systems administrator, and I don’t expect that to change; the user’s job, after all, is to do his or her job, not to be an amateur IT person.

