Y2K Remembered

December 31, 2009

Ten years ago today, there was a fair amount of anxiety in the world about the Year 2000 Problem, commonly referred to as the Y2K Problem.  The (potential) problem arose from the common practice among early computer programmers of representing the year portion of a date with just the last two digits; for example, the date 31 December 1999 might have been stored as ‘991231’.  Although it sounds ridiculous today, in the era of laptop PCs with 4 GB of RAM and 1 TB hard disks, storage was expensive and in short supply back when I started in computing, in about 1970.  (The IBM 360/91 on which I was working was one of the largest and fastest systems around, and had a whopping 2 megabytes of memory.)  Programmers had therefore adopted all kinds of little shortcuts to save space.
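
It is easy to see how the shortcut fails.  With two-digit years in a YYMMDD layout, a simple string comparison gives the right ordering for decades, and then suddenly does not; a minimal sketch in Python (the function name is mine):

```python
# Dates stored as two-digit-year strings in YYMMDD layout, e.g. '991231'.
def later_than(a: str, b: str) -> bool:
    """Compare two YYMMDD dates; works only while the century never changes."""
    return a > b

# Within the 1900s the shortcut behaves correctly:
print(later_than('991231', '990101'))  # True: 31 Dec 1999 is after 1 Jan 1999

# But 1 January 2000 ('000101') now sorts *before* 31 December 1999:
print(later_than('000101', '991231'))  # False: the century rollover breaks the ordering
```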

The concern was that there were an unknown number of computer applications extant then that might fail or give inappropriate results when the two-digit year stopped being a monotonically increasing value, rolling over from 99 to 00.  Adding to the uncertainty was the realization that many programs also (incorrectly) defined a leap year as any year divisible by 4.  In fact, years divisible by 100 are not leap years, unless they are also divisible by 400.  (Hence 1900 was not a leap year, but 2000 was.)  Of course, much of this old code was poorly documented and difficult to maintain — some things never change.
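
The full Gregorian rule is only slightly longer than the buggy divisible-by-4 shortcut, which may be why the shortcut was so common.  A sketch of both (function names mine):

```python
def is_leap(year: int) -> bool:
    """Gregorian rule: divisible by 4, except that century years
    are leap years only when also divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_leap_shortcut(year: int) -> bool:
    """The common (incorrect) shortcut: any year divisible by 4."""
    return year % 4 == 0

print(is_leap(1900), is_leap_shortcut(1900))  # False True -- 1900 was not a leap year
print(is_leap(2000), is_leap_shortcut(2000))  # True True  -- 2000 was (divisible by 400)
```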

(I count myself as fortunate in all this, because I wrote my first set of date calculation routines for a fixed-income investment application, keeping track of holdings of corporate and government bonds.  Even in 1971, 30-year US Treasury bonds went past the 2000 watershed, so I had to think about doing the job right.  I guess I did: the last time I checked, a few years ago, the date library was still working.)

A great deal of wailing and gnashing of teeth went on over the potential for trouble on account of Y2K.  In some places, such as the US and the UK, sizable and expensive projects were carried out to find and fix date-related defects in old software.  There were certainly some: for example, both Microsoft Excel and Lotus 1-2-3 spreadsheets had problems (and thought that 1900 was a leap year).  There were many cosmetic problems with the display of dates; you might have seen ‘02/12/101’, for example.  But, when the year 2000 did finally roll around, civilization as we know it survived.  Some people suspect that the problem was somewhat oversold, because in some countries very little preventative work was done; yet, there too, few serious problems were encountered.

So, in the end, Y2K turned out to be something of a tempest in a teapot.  If it had a real benefit, perhaps it was getting people to take the question of software maintenance more seriously.

GSM Encryption Broken

December 30, 2009

The Global System for Mobile communications [GSM], developed in the late 1980s, is the communications standard used by roughly 80% of the world’s cell phones.  It pretty much is the standard everywhere outside North America, where other standards developed by Qualcomm, namely CDMAOne [IS-95] and CDMA-2000, are used by some significant carriers, like Verizon Wireless and Sprint PCS.  It provides some security for communications: it authenticates the user to the network (although, in its basic version, not the other way around), and provides two stream ciphers, A5/1 and A5/2, for encrypting the voice data stream.  The second of these, A5/2, is a deliberately weakened version intended for export, and it has been known for some time that it can be broken using a ciphertext-only attack.  The A5/1 cipher, which is more secure, is the most commonly used; although it has been known for a few years that it was theoretically vulnerable to attack, the attack was thought to be impractical.
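
For the curious, the structure of A5/1 has been public since it was reverse-engineered in the late 1990s: three short linear-feedback shift registers (19, 22, and 23 bits long) are stepped under a majority-vote clocking rule, and the registers’ high bits are XORed to produce each keystream bit.  The sketch below shows just that keystream loop, starting from arbitrary register states rather than the real key and frame-number setup, so it illustrates the structure rather than being an interoperable implementation; the tap and clocking-bit positions follow the published descriptions of the cipher:

```python
def majority(a, b, c):
    """Majority vote of three bits."""
    return (a & b) | (a & c) | (b & c)

def step(reg, length, taps):
    """Step one LFSR: feedback is the XOR of the tap bits, shifted in at bit 0."""
    fb = 0
    for t in taps:
        fb ^= (reg >> t) & 1
    return ((reg << 1) | fb) & ((1 << length) - 1)

def a5_1_keystream(r1, r2, r3, nbits):
    """Generate nbits of keystream from given register states (majority clocking)."""
    out = []
    for _ in range(nbits):
        m = majority((r1 >> 8) & 1, (r2 >> 10) & 1, (r3 >> 10) & 1)
        # A register steps only when its clocking bit agrees with the majority,
        # so each register moves with probability 3/4 on any given tick.
        if ((r1 >> 8) & 1) == m:
            r1 = step(r1, 19, (13, 16, 17, 18))
        if ((r2 >> 10) & 1) == m:
            r2 = step(r2, 22, (20, 21))
        if ((r3 >> 10) & 1) == m:
            r3 = step(r3, 23, (7, 20, 21, 22))
        # Each output bit is the XOR of the three registers' high bits.
        out.append(((r1 >> 18) ^ (r2 >> 21) ^ (r3 >> 22)) & 1)
    return out

print(a5_1_keystream(0x1234, 0x56789, 0x345678, 16))
```

The pre-computed tables in the attack exploit exactly this small total state: 19 + 22 + 23 = 64 bits.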

Now Technology Review is reporting that a German researcher, Karsten Nohl, has presented a proof-of-concept attack that demonstrates that interception of GSM calls, eavesdropping on voice calls, and interception of SMS (text) messages are all a practical possibility.  The presentation was made at the 26th Chaos Communication Congress in Berlin.

Karsten Nohl, who has a PhD in computer science from the University of Virginia, says he demonstrated the GSM attack to encourage people to develop a more sophisticated means of protection.

Predictably, the industry association for GSM providers, the GSM Association, downplayed the importance of the announcement, saying in a statement:

All in all, we consider this research, which appears to be motivated in part by commercial considerations, to be a long way from being a practical attack on GSM,

It is true that the presentation did not cover one aspect of the attack in detail: the actual interception of the GSM radio signal.  However, the researchers say that this is because publishing that type of information is illegal in some countries, and that equipment that can perform the interception is readily available.

Some experts, not involved in the research work, said that the industry should use this as a wake-up call to implement better encryption, which is possible within the GSM standard, before the system is routinely hacked:

“It would be a good time to start transitioning GSM systems to more advanced cryptographic algorithms,” says David Wagner, a professor at the University of California at Berkeley who was involved in work in the early 2000s that proved it was possible to break A5/1. “We should be grateful. We don’t always get advance warning that it’s time to upgrade a security system before the bad guys start taking advantage of it.”

Bruce Schneier has also dismissed the industry’s claims:

Cryptographer Bruce Schneier, chief security technology officer at BT Counterpane, dismisses the association’s claims. “Companies always deny that it’s practical,” he says. “The truth about cryptography is that attacks always get better, never worse.” While Schneier believes this work further demonstrates that GSM calls could be intercepted, he says that the recent move to use GSM for payments and authentication is “a bigger reason to be concerned about this attack.”

These interceptions are very likely to be happening already, by agencies like the NSA in the US or GCHQ in Britain.  But upgrading the security of the system would seem like a very good idea, especially given the fact that mobile phones are increasingly being used for purposes beyond telephony: as part of payment systems, for example.  If you are going to put a great deal of money in your safe, it is probably a good idea to put a better lock on it first.  It would seem wise to bolster GSM security before making it a much more attractive target for the Bad Guys.

Security Snake Oil

December 30, 2009

I read a fairly wide range of traditional and online publications, but Playboy magazine has never really been on my list; and I certainly didn’t expect to be referencing it in an article about security.  But the “Threat Level” blog at Wired reports on a story in Playboy about a self-styled scientist and software expert who, it appears, conned numerous agencies of the US government out of quite a few million dollars for security software of, at best, questionable value — if it ever really existed at all.

The story involves one Dennis Montgomery, who was born in Arkansas, and received a two-year associate’s degree in medical technology from Grossmont College, near San Diego.   He apparently decided to try his hand at software development:

He maintains he invented and secured copyrights for various technologies related to “pattern recognition,” “anomaly detection” and “data compression.” Montgomery had attained some success with his media-compression software.

This claim in itself is something of a red flag: inventions are generally patented, not copyrighted.  Copyright is intended to protect particular expressions, in writing (including software), images (e.g., photography), and so on.  Registering copyright proves only that the applicant was able to fill out the necessary form and pay the registration fee.

Apparently, Mr. Montgomery approached some members of the Science & Technology Directorate of the CIA, and convinced them that he had developed a technology that could reveal previously unsuspected terrorist messages that were concealed as bar codes in images broadcast, unwittingly,  by the Qatari TV network Al Jazeera.  He claimed that these messages gave latitudes, longitudes, flight numbers, and dates for future terrorist attacks, to be carried out by “sleeper cells” in the US and Europe. Of course, the secret technology he had developed was the only way to find and interpret these codes.

Mr. Montgomery is apparently a pretty good salesman, and he was of course saying things that some people wanted to hear:

Al Jazeera was an inspired target since its pan-Arabic mission had been viewed with suspicion by those who saw an anti-American bias in the network’s coverage. In 2004 Secretary of Defense Donald Rumsfeld accused Al Jazeera of “vicious, inaccurate and inexcusable” reporting.

Ideology is a highly effective prophylactic against the influence of inconvenient facts.

Eventually, reality did win out, with an assist from the French intelligence service, because Montgomery continued to refuse to reveal his methods, and it was not clear why a terrorist organization would use such a round-about method of communication:

The CIA and the French commissioned a technology company to locate or re-create codes in the Al Jazeera transmission. They found definitively that what Montgomery claimed was there was not.

This was not the end of Mr. Montgomery’s work for the US government, however.  He also successfully sold a system that he claimed could automatically recognize weapons from video images; at least one of his then-colleagues has told the FBI that the demonstrations were faked.  He also claimed to have software that could locate submarines from a satellite photograph of the ocean’s surface.  (He also told at least one person that he had been abducted by a UFO.)

There is much more of the same recounted in the original story.  What is interesting is that this shows, once again, that wanting something to be true does not make it so, but that people can be blinded by their own preconceptions, and do really irrational things.

Security Theater

December 29, 2009

Bruce Schneier, whom I have mentioned here many times before, has an excellent new opinion piece up at the CNN Web site, “Is Aviation Security Mostly for Show?”.  (No prize will be awarded for guessing his answer.)  He makes the case once again that much of the public response to terrorist incidents is based on a form of “magical thinking”, and that what will make us more secure is better intelligence and police work, and better capacity for emergency response, rather than counter-measures designed to combat specific terrorist tactics.

Terrorism is rare, far rarer than many people think. It’s rare because very few people want to commit acts of terrorism, and executing a terrorist plot is much harder than television makes it appear.

The best defenses against terrorism are largely invisible: investigation, intelligence, and emergency response. But even these are less effective at keeping us safe than our social and political policies, both at home and abroad. However, our elected leaders don’t think this way: They are far more likely to implement security theater against movie-plot threats.

Some of the movie-plot mentality shows up even in small details.  For example, I’ve noted before that, in the Christmas Day incident on a Northwest flight, news reports said that the suspect used 80 grams of the explosive PETN.  I have seen this now in several newspaper and wire service accounts, and I find it very interesting that none of these accounts has converted the quantity to units more familiar to most Americans: 2.8 ounces.  It may not even be a conscious choice, but I suspect the omission is because “80 grams” is unfamiliar and sounds scarier.

The whole concept of a “War on Terror” is another example of security theater.  Terror is a tactic, not a government or a state actor that can be conquered by force.  The tactic is successful to the extent that people are terrorized.  As Schneier says, no conceivable terrorist attack can destroy Western civilization or the US government; only our own foolish reactions could do that.  The best response is to refuse to be terrorized:

The best way to help people feel secure is by acting secure around them. Instead of reacting to terrorism with fear, we — and our leaders — need to react with indomitability, the kind of strength shown by President Franklin D. Roosevelt and Prime Minister Winston Churchill during World War II.

The whole essay is well worth reading.

Update, Tuesday, December 29, 15:30

Bruce Schneier was interviewed on this topic on the Rachel Maddow show on MSNBC.

Killer Brussels Sprouts

December 28, 2009

We’re all familiar with the idea of the “fight or flight” reaction that animals display in threatening situations.  We don’t usually think about plants’ response to threats; but, from the perspective of natural selection, your average Brussels sprout does not want to become dinner any more than you do.  Plants, of course, generally do not have the “flight” option available, so fighting is their only choice, and their weapon of choice is generally chemical warfare.  Many of the nastier poisons that occur in nature are manufactured by plants: atropine, nicotine, digitalis, hydrocyanic acid, and strychnine, to name just a few.

All of this is pretty elementary evolutionary biology, but a recent article in the New York Times, by Natalie Angier, discusses some recent research illustrating that plant responses to threats are not only more complex, but also much faster, than we might have thought.

“I’m amazed at how fast some of these things happen,” said Consuelo M. De Moraes of Pennsylvania State University. Dr. De Moraes and her colleagues did labeling experiments to clock a plant’s systemic response time and found that, in less than 20 minutes from the moment the caterpillar had begun feeding on its leaves, the plant had plucked carbon from the air and forged defensive compounds from scratch.

In addition to producing toxic chemicals, plants can also emit chemical signals that, for example, attract the natural predators of the insects that are chewing on their leaves.  These signals can also alert other plants of the same species to begin defensive measures, even if they have not yet been attacked.

Some species of plants have evolved remarkably complex strategies to protect themselves against predators:

… when a female cabbage butterfly lays her eggs on a brussels sprout plant and attaches her treasures to the leaves with tiny dabs of glue, the vigilant vegetable detects the presence of a simple additive in the glue, benzyl cyanide. Cued by the additive, the plant swiftly alters the chemistry of its leaf surface to beckon female parasitic wasps. Spying the anchored bounty, the female wasps in turn inject their eggs inside, the gestating wasps feed on the gestating butterflies, and the plant’s problem is solved.

We’ve already begun to see, in many cases, that our assumptions about a vast gulf in intelligence between humans and “dumb animals” may be far too flattering to ourselves.  So I guess we should not be too surprised to find out that we underestimated plants, too.

Pants on Fire

December 27, 2009

By now, I’m sure everyone has seen the news stories about the attempted bombing of a Northwest Airlines flight from Amsterdam to Detroit.  The alleged terrorist, Umar Farouk Abdulmutallab, a Nigerian, was apparently listed in a database of terrorism suspects, but not on any list used to screen air passengers, despite warnings from his own father to the US Embassy in Nigeria.

Bruce Schneier has been saying for years that there were only two important security lessons to be learned from the September 11, 2001 hijackings:

  • That cockpit doors needed to be reinforced and kept locked while the aircraft was in flight.
  • That if a hijacking was attempted, the passengers needed to fight back.

At least in this case, the second lesson seems to have been learned.  When the suspect attempted to set off his device, a number of passengers overpowered him, allowing the resulting fire to be put out.

News reports have indicated that the device used contained the explosive PETN (pentaerythritol tetranitrate), which it appears is the same substance used by the unsuccessful “shoe bomber”, Richard Reid, in December 2001.  It is a fairly unstable compound compared with TNT, and is an ingredient of Semtex plastic explosive.  Standard references give its explosive power per gram as about 1.6 times that of TNT.  One news report, on ABC network news, said that the quantity of PETN in the device was about 80 grams.  Of course, the report may have it wrong, but that is slightly less than 3 ounces.  I am not at all knowledgeable about explosives, but it seems unlikely that that would be enough to destroy a commercial airliner (absent the opportunity to put it in a particularly sensitive place).  I would think, for example, that a hand grenade probably contains at least as much explosive.
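
The arithmetic behind those numbers is simple to check (28.35 grams to the ounce; the 1.6 figure is the relative-effectiveness factor cited above):

```python
GRAMS_PER_OUNCE = 28.3495
PETN_VS_TNT = 1.6   # relative explosive power per gram, as cited above

petn_grams = 80
print(round(petn_grams / GRAMS_PER_OUNCE, 2))  # 2.82 -- slightly under 3 ounces
print(petn_grams * PETN_VS_TNT)                # 128.0 grams of TNT equivalent
```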

As we have learned to expect, a flurry of new security measures was put into place after this incident.  Apparently passengers must now remain in their seats without anything on their laps during the last hour of flight, and more thorough pre-flight searches will be carried out.  There will be some additional restrictions on carry-on luggage.  Why potential terrorists will be unable to detonate their bombs 61 minutes before landing has not been explained.  Once again, like the generals who are always re-fighting the last war, we have a response directed at the most recent tactic the Bad Guys have tried to use.

I think Bruce Schneier has the best observation on this:

I wish that, just once, some terrorist would try something that you can only foil by upgrading the passengers to first class and giving them free drinks.

We really do need to try to keep a sense of perspective on this: your risk of dying in a car accident is  enormously larger than your risk of being a victim of a terrorist attack.

Update Monday, December 28, 21:25

Joel Esler over at the SANS Internet StormCenter has an interesting diary entry relating this incident to IT security.  He focuses on three main points:

  • Doing more of what didn’t work in the first place
  • Playing the blame game
  • Nonsensical allowances (e.g., you can carry on matches, but not a lighter)

It’s a quick read, and worth a look, especially if you’re involved with IT security.
