Fundamental Vulnerability Management

May 31, 2011

The security of computer systems and networks is a fairly frequent topic of conversation here, and on many other sites (check out some of the Useful Links in the right-hand sidebar).  Many of these conversations, of necessity, are filled with technical details about vulnerabilities, mitigation steps, and software patches; and all of these are important.  But that should not lead us to forget one of the most basic vulnerabilities in the whole security process: the people who implement and use it.

The Internet Storm Center, run by the SANS Institute, has recently featured a couple of diary entries, written by members of their team of volunteer incident handlers, that are useful guides to re-focusing our attention.  The first, written by Kevin Liston, is titled Vulnerability Advisory: User clicks on something that they shouldn’t have (CVE-0).  Although the tone is a bit tongue-in-cheek (it reminded me of PEBKAC, the old system admin’s shorthand for a user who has performed a successful surgical strike on his foot — an acronym for Problem Exists Between Keyboard And Chair), I think it is a good and sensible discussion of how to approach the challenge of making users active participants in your systems’ security.  For starters, though it is common for system and security admins to disparage their users, the reality is that no one is good enough to catch everything.

Remember that everyone is vulnerable, even you, dear reader. There will come a time when you haven’t had your morning wake-up juice, or you are distracted, or one of your friends/family/clients gets compromised and they send you a message, or you become specifically targeted, then you will likely click on something that you shouldn’t have.

Those responsible for security (the defenders) must resist the temptation to make more and more draconian rules:

If the defender deploys too many rules, or too restrictive policies, the users (in their bounded rationality) will organize “solutions” that circumvent these controls so that they can get their jobs done. In the worst cases, this can turn the users hostile to the defenders. When these “solutions” and workarounds are discovered, you have to resist the urge to clamp down harder, because this is a clear sign that your policy lever is already pushed too far …

Equally, the defenders must avoid infantilizing the users, and making them wards of the security function:

Another common result of this conflict between defenders and criminals is that the defenders assume more and more control from the users, so that eventually they become wards of the defenders. This works for a while as the team deploys new tools and processes. Unfortunately these efforts only serve to mask the root cause of the problem …

Security incidents will occur; they should be used as an opportunity for everyone involved to learn something.

The second, shorter article is by Chris Mohan; it focuses on strategies to help all of the folks involved understand what IT security is about, and why it’s important.

The IT security community needs to get everyone, including itself, to good quality, relevant talks, presentations and debates on what’s happening in and around IT security.

The basic argument, which I think is eminently sensible, is that people in IT, management, and user organizations should have regular training and updates on what is happening in IT security.

Compared to the situation a decade ago, I think we have made progress.  The typical user is much more aware of the necessity of making timely software updates and patches, for example.  But we still need to work on raising awareness.



Windows Malware: Alive and Well

May 30, 2011

Those of us who work in some way with computer systems tend to spend a fair amount of time talking, writing, and worrying (not necessarily in that order) about security.  I sometimes wonder how effective all of that is.  If a recent article at the Computer World site is to be believed, the answer is “not nearly as effective as we’d like.”   The article reports on some data gleaned from a new security tool released by Microsoft, the Safety Scanner, which checks a Windows PC for malware, using an extensive database compiled by Microsoft, and attempts to remove any infestations it finds.  At least in the early days following the tool’s release, the results are not altogether encouraging.

The 420,000 copies of the tool that were downloaded in the first week of its availability cleaned malware or signs of exploitation from more than 20,000 Windows PCs, Microsoft’s Malware Protection Center (MMPC) reported Wednesday. That represented an infection rate of 4.8%.

That is, almost one PC in twenty exhibited either an active malware exploit or characteristic traces of a previous successful exploit.  Considering the number of Windows PCs in the world, if ~5% are compromised, that translates to an awful lot of potential mischief-making.   Since it is at least plausible that the early adopters of a tool like this one tend to be the more sophisticated users, that percentage might be an underestimate.  The compromised machines had an average of 3.5 exploits (either current or past) each.

Of the ten most common exploits found by Safety Scanner, seven were directed at Java vulnerabilities.  That attacks against Java should predominate is not really surprising; as I’ve noted before, Java is an attractive target for the Bad Guys, since it is available in all major browsers across Windows, Mac, and Linux platforms.  (I posted a note last fall discussing whether keeping Java on your machine was worth the risk.)   Also, there is good evidence that the frequency of attacks against Java has increased significantly in the last year or so; as the Computer World article noted, in relation to the preponderance of Java exploits in the top ten:

That finding backs up a recent Microsoft security intelligence report that noted a huge spike in Java-based exploits in the second half of 2010, when the number tracked by Microsoft jumped to nearly 13 million from around 1 million in the first six months of that year.

A more than tenfold increase from one six-month period to the next is certainly worthy of notice.

The Microsoft Malware Protection Center team has a blog post that gives some more detailed information on the results.  The Safety Scanner tool itself can be downloaded from the Microsoft site.



A Fishy Project at Bletchley Park

May 29, 2011

Back in January, I wrote about a project to rebuild one of the world’s first computers, the EDSAC, at Bletchley Park in the UK, home of the National Museum of Computing.  Bletchley Park was also the home, during World War II, of the Government Code and Cypher School, also called Station X, which was the center of British code-breaking efforts against the Axis powers.  It was the site of the successful effort, in which Alan Turing played a leading part, to decrypt the German Enigma messages.

An article at the BBC site reports that another important artifact of computing history at Bletchley Park has now been reconstructed: the Tunny machine.  This was a machine designed to assist in routine decoding of the German Lorenz cipher, which was used for communications between the German High Command in Berlin and field headquarters.  Unlike the Enigma machine, which enciphered text that was then transmitted by radio in the form of Morse code, the Lorenz machine was an attachment to a teleprinter (teletype).  It implemented a Vernam stream cipher as text was typed into the teleprinter.

A Vernam cipher takes as its inputs two streams of bits or characters: a plain-text stream, T, and a key stream, K.  The enciphered output stream is produced by performing an exclusive-OR [XOR] operation between T and K.  The operation works according to the following truth table:

T   K   Output
0   0   0
0   1   1
1   0   1
1   1   0

If this cipher is implemented with a truly random key stream, then it is a one-time pad, the one provably unbreakable cipher.   However, key distribution for such a method would present serious problems, so in practice a pseudo-random number generator [PRNG] is usually used to generate the key stream.  Obviously, it is vital that the generating process produce a stream that is effectively random.
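
To make the mechanics concrete, here is a minimal sketch of a Vernam cipher in Python (my own illustration, not from the BBC article; the function and variable names are invented for the example).  Note that the very same XOR operation both enciphers and deciphers, since (T XOR K) XOR K = T:

    import secrets

    def vernam(data: bytes, keystream: bytes) -> bytes:
        # XOR each byte of the data with the corresponding keystream byte.
        # Applying the function twice with the same keystream returns the
        # original data, because (t ^ k) ^ k == t.
        return bytes(t ^ k for t, k in zip(data, keystream))

    plaintext = b"ATTACK AT DAWN"
    key = secrets.token_bytes(len(plaintext))   # truly random key: a one-time pad
    ciphertext = vernam(plaintext, key)
    assert vernam(ciphertext, key) == plaintext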

The Lorenz device was, effectively, an electro-mechanical PRNG attached to the teleprinter.  The British cryptanalysts were able to deduce its design without ever seeing the device, because the Germans made a fundamental error — what you might call a rookie mistake — in August 1941.  A Lorenz message was sent by one operator, but not correctly received at the other end.  A clear-text request for a re-transmission was made, and then the message was re-sent using the same key stream, a serious no-no.  In addition, the sender made slight changes (e.g., using abbreviations) to the original plain text.  Because of these errors, the Bletchley Park analysts were able to recover both the plain text and the key stream used, and the mathematician Bill Tutte was able to deduce how the Lorenz device worked.
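
To see why re-using a key stream is such a serious error, note that XORing two ciphertexts produced with the same key stream cancels the key completely, leaving just the XOR of the two plain texts.  Here is a small sketch of the algebra, again my own illustration with invented message text:

    import secrets

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # Two different messages enciphered with the SAME key stream:
    p1 = b"MESSAGE NOT RECEIVED PLEASE RESEND"
    p2 = b"MSG NOT RECVD PLS RESEND"            # abbreviated re-send
    k = secrets.token_bytes(len(p1))

    c1 = xor_bytes(p1, k)
    c2 = xor_bytes(p2, k)

    # (p1 ^ k) ^ (p2 ^ k) == p1 ^ p2: the key stream has vanished.
    assert xor_bytes(c1, c2) == xor_bytes(p1, p2)

With the key stream eliminated, an analyst who can guess part of one plain text can read off the corresponding part of the other, and from either plain text recover the key stream itself; that recovered key stream is what gave Tutte the raw material to deduce the Lorenz design.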

The Tunny machines were, in essence, a reverse-engineered copy of the decrypting Lorenz device.  (The name came from the practice of referring to the Lorenz intercepts as “fish” traffic; ‘tunny’ is a British English word for tuna.)   The BBC article has a short video explaining more about how they worked.

The reconstruction work is impressive, especially since, according to John Pether, one of the leaders of the project team, there was not all that much to go on:

Mr Pether said the lack of source material made the rebuild challenging.

“As far as I know there were no original circuit diagrams left,” he said. “All we had was a few circuit elements drawn up from memory by engineers who worked on the original.”

Most of the work done at Bletchley Park during the war was kept secret for many years afterwards.  It’s fascinating to see how much truly innovative work remained largely unknown for so long.


Open Source Cybersecurity

May 27, 2011

According to an article at the LiveScience site, a new initiative has been launched by the US Department of Homeland Security to investigate the suitability of open-source software to satisfy cybersecurity requirements.

A new five-year, $10 million program aims to survey existing open-source software to find those that could fill “open security” needs. Called the Homeland Open Security Technology program, or HOST, it also may plant seed investments where needed to inspire innovative solutions that can fill gaps in cybersecurity defenses.

The program does not aim to mandate open-source solutions, but to examine the degree to which they might meet identified needs.

One obvious attraction of this approach is that open-source software is often (though not necessarily) free, meaning that the government could save on licensing costs.  A potentially bigger advantage, in my view, is that open-source solutions can offer superior security.  As I’ve discussed here before, it is an accepted principle in cryptography that the only methods that can be regarded as secure are those which have been made available for scrutiny.  There is a very considerable body of evidence to support the idea that “security by obscurity” does not work, despite the intuitive appeal of keeping everything secret.  In the software world, there has certainly been no lack of security flaws in proprietary, closed-source software; perhaps more to the point, the lack of general availability of the source code has not prevented the Bad Guys from finding those flaws.

The HOST program is part of a general security push in connection with the legislative proposals recently issued by the White House.  I’ll discuss those in a future post here.


A Black Box for your Car

May 26, 2011

Most people are probably familiar with, or have at least heard of, the use of automatic flight data recorders (sometimes called “black boxes”) in commercial aircraft.  These devices record key items of aircraft performance data, such as air speed, altitude, and control settings, and are designed to be rugged enough to survive a crash.   Data collected by these devices has proved to be of enormous value in analyzing the cause of aircraft crashes, and in designing mitigation strategies.

Now, according to a post on the “Autopia” blog at Wired, the US National Highway Traffic Safety Administration [NHTSA] is considering requiring similar event data recorders [EDRs] on all new cars.   The underlying motivation is the same as with aircraft: to provide information that can be used to analyze a crash and the events leading up to it.

Now the National Highway Traffic Safety Administration is considering a proposal that would “expand the availability and future utility of EDR data” — in other words, a possible requirement that all automobiles have the devices. The proposal is expected sometime this year. A separate discussion would outline exactly what data would be collected.

One aspect of this that may surprise some readers is that many cars already have data recorders of some sort.  General Motors, for example, has been installing data recorders on all vehicles with air bags since the early 1990s.  In at least one case, data from these systems was used to identify a defect in some Chevrolet cars that caused the air bags to deploy at low speeds, and a recall of the affected models was launched.  However, there are problems with the current systems, some technical, and some legal and procedural.

One problem with the current technology is that vehicle manufacturers use a hodge-podge of different proprietary devices; there is no standard for the data items collected, nor for how they are stored.  There is also a trade-off between the desire to collect more data and the need to avoid overloading the relatively modest processing capacity of the cars’ on-board computers.   And in some cases the design of data recording systems needs to be improved, so that the devices are more able to survive a crash intact.

The procedural and legal issues are a bit thornier.  At present, the rules on who can access the data, such as they are, are determined by state law.

Other concerns involve law enforcement access to enhanced electronic data recorders or whether dealers or insurance companies could use that data to deny or support claims.

“It usually depends on state law whether they need a subpoena or a warrant,” Glancy [Prof Dorothy Glancy of Santa Clara Law School] said. “Lots of data just gets accessed at the crash scene or the tow yard, as I understand actual practice.”

The situation with cars is special, because cars are generally owned and operated by individuals as a “freelance” affair, in contrast to other modes of transportation (airlines, railways) where there is a business organization selling the service.  I don’t know of any significant objection to flight data recorders in aircraft on privacy grounds, but there is some potential for abuse of event recorders in personal vehicles.  (Suppose, for example, that coordinates from GPS were among the data recorded.)

Still, there are very legitimate safety uses for recorded data, and it seems reasonable to establish some common standards for data collection and recording.   There are trade-offs to be made; we can only do our best to make them on a reasonable basis.


Google Chrome Update

May 25, 2011

Google has released a new version, 11.0.696.71, of its Chrome Web browser for all platforms (Mac OS X, Linux, Windows, and Chrome Frame).   The new release fixes four identified security vulnerabilities, two of which are rated as Critical.  It also includes fixes for four miscellaneous bugs.   More details are available in the release announcement on the official Chrome Releases blog.

I recommend installing this update as soon as you conveniently can. Windows users can obtain the new version via the built-in update mechanism (Help / About Google Chrome). Linux users should be able to get the new version using standard package update tools (e.g., apt-get, synaptic).

