… plus c’est la même chose

June 15, 2009

The Register, an IT-industry publication in the UK, has an article about a new kind of security problem currently under active development.  Motivated in part by a drive for greater efficiency, and more recently by the prospect of receiving economic stimulus funds from the US government, several electric utility companies are working on the development and deployment of so-called “smart” electric meters:

The new generation of meters will enable what utility companies call smart grids. They turn the power grid into a real-time computerized network, which has the ability to make automated decisions in real time based on data collected from millions of sensors. That would eliminate the need for meter readers to visit each customer to know how much electricity has been consumed, for instance.

Using the “smart grid” technology would facilitate the use of demand-based variable pricing; for example, rates might be lower at night, when demand is lower.  The utilities also envisage the capability to selectively limit or shut down parts of the grid, to prevent cascading failures.
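To make the pricing idea concrete, here is a minimal sketch of a time-of-use tariff.  The rates and hours are made up for illustration; they are not any utility’s actual prices:

```python
# Illustrative time-of-use tariff; the rates and off-peak window are hypothetical.
PEAK_RATE = 0.20      # $/kWh during daytime hours (assumed)
OFF_PEAK_RATE = 0.08  # $/kWh overnight, when demand is lower (assumed)

def energy_cost(kwh_by_hour):
    """Total cost for usage given as {hour_of_day: kWh consumed}.

    Hours from 22:00 through 05:59 are billed at the cheaper off-peak rate.
    """
    return sum(
        kwh * (OFF_PEAK_RATE if hour >= 22 or hour < 6 else PEAK_RATE)
        for hour, kwh in kwh_by_hour.items()
    )
```

With hourly consumption data from a smart meter, a utility could bill this way automatically; the whole point of the real-time network is that the per-hour readings exist at all.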

Unfortunately, but unsurprisingly, there does not seem to have been much attention paid to security issues in the design of these devices:

There’s just one problem: The newfangled meters needed to make the smart grid work are built on buggy software that’s easily hacked, said Mike Davis, a senior security consultant for IOActive. The vast majority of them use no encryption and ask for no authentication before carrying out sensitive functions such as running software updates and severing customers from the power grid.
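The fix for the missing-authentication part of this is not exotic.  As a rough sketch (not the meters’ actual protocol, and with a placeholder key in place of real per-device key management), a meter could require a message authentication code on every control command and refuse anything that fails the check:

```python
import hashlib
import hmac

# Hypothetical shared secret; a real deployment would need a unique key per meter,
# provisioned at manufacture or install time.
SHARED_KEY = b"per-meter-secret"

def sign_command(command: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Append an HMAC-SHA256 tag so the meter can verify who sent the command."""
    return command + hmac.new(key, command, hashlib.sha256).digest()

def verify_and_accept(message: bytes, key: bytes = SHARED_KEY) -> bool:
    """Accept a command only if its tag checks out; reject forgeries."""
    if len(message) <= 32:
        return False
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False  # tampered or forged; do not disconnect anyone
    # ... dispatch the authenticated command (firmware update, disconnect, etc.) ...
    return True
```

Twenty lines of well-understood cryptography would stop a random worm from issuing disconnect orders; the meters described in the article do not even do this much.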

Mr. Davis is promising to demonstrate a computer worm that attacks a particular type of smart meter at the Black Hat Security Conference next month.

What is particularly depressing about some of the problems that Davis and his colleagues have found is that the vulnerabilities result from the use of certain software facilities and techniques that have been known to be problematic at least since the days of the first Internet worm in 1988.

It seems like just another example of what has always been the curse of IT: There’s never time to do it right, but there’s always time to do it over.

Plus ça change …

June 15, 2009

In his “Security Fix” blog at the Washington Post, Brian Krebs has an article reporting on a scheme to steal international telephone service from more than 2500 organizations that have PBXs, to the tune of $55 million in charges.  The target firms were in the US, Canada, Australia, and Europe.  According to the indictment, a group of hackers broke into the PBX systems, and then sold access to those systems to a group in Italy that ran international calling centers:

The indictment alleges that between October 2005 and December 2008, Manila residents Mahmoud Nusier, Paul Michael Kwan and Nancy Gomez broke into PBX systems, mainly by exploiting factory-set or default passwords on the voicemail systems. The government charges that their Italian call center operators paid the hackers $100 for each hacked PBX system they found.

The call centers would advertise cheap international calls, and could turn a tidy profit, since they paid only for the initial call to the victim’s PBX, and nothing for the international portion of the call.

You’ll note that the article says the main technique for getting access to these systems was exploiting the default passwords installed on the PBX when it ships from the factory.  I have installed a few PBX systems myself, and I can vouch for the fact that at least some of the vendors do tell you that you must change the default passwords immediately when the system is installed.  So negligence and sheer stupidity on the part of the PBX owners contributed significantly to their own problems.
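Checking for this kind of negligence is trivial to automate.  A minimal sketch (the credential list here is invented for illustration, not taken from any vendor’s manual):

```python
# Hypothetical list of factory-default (user, password) pairs to audit against.
DEFAULT_CREDS = {
    ("admin", "admin"),
    ("maint", "0000"),
    ("sysadmin", "password"),
}

def audit_accounts(accounts):
    """Return the accounts still using a known factory-default credential pair.

    `accounts` is a list of (username, password) tuples pulled from the system.
    """
    return [(user, pw) for (user, pw) in accounts if (user, pw) in DEFAULT_CREDS]
```

An installer could run something like this as the last step of every deployment; the attackers in the indictment were, in effect, running the same scan from the outside.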

Unfortunately, this does not surprise me very much.  One of the installations I worked on, about 15 years ago, was an AT&T PBX for a relatively small organization; it had an attractive option that would allow the vendor’s network operations center to perform remote diagnostic tests on the equipment.  This sort of thing is, of course, a potential security hole; in this case, the remedy was to employ special modems with hardware encryption on both ends of the maintenance connection.  That added about $800 to the price, and we had to have a significant argument with some of the business managers before they agreed to the expenditure.  In that case, our argument was vindicated sooner than we expected: one of that firm’s competitors had installed the same system, but without the special modems, and had been rewarded with a $55,000 monthly telephone bill, due to the same sort of call-center scam mentioned in Mr. Krebs’s article.

Even back then, this was not a new phenomenon.  Richard Feynman, the Nobel Prize-winning physicist who was also a notable prankster, has a section in his book, Surely You’re Joking, Mr. Feynman! (New York: W.W. Norton & Co., ISBN 0-393-01921-7), about some of his experiences at Los Alamos while he was working on the Manhattan Project.  In one of the chapters, “Safecracker Meets Safecracker”, he describes how he gained a reputation as “Feynman the great safecracker” by learning about design flaws in some of the cabinets and safes used to hold classified information; and also by learning that many of his colleagues picked a common number for the combination (the value of π, 3.14159, was popular), wrote the combination down on the inside of a desk drawer, or simply never bothered to change the factory default combination.

Security is not a product, but a process, and all the steps in the process have to be considered to arrive at a truly secure solution.  Ignoring some parts of the process, particularly the human elements, is like putting five pick-resistant locks on your front door and leaving all the windows open.
