Hacking WiFi Routers

December 30, 2011

Earlier this week, the US Computer Emergency Readiness Team [US-CERT] issued a warning about a new vulnerability in WiFi routers that implement the WiFi Protected Setup [WPS] standard.   The WPS standard, established by the WiFi Alliance, provides a simple means of setting up and configuring wireless routers, requiring the user to enter an eight-digit PIN, typically from a label or display on the device.

Stefan Viehböck discovered that the design of this protocol makes it susceptible to a particular form of brute-force attack.  Specifically, if an attacker sends an incorrect PIN to the router, the error response reveals whether the first half of the PIN is correct.  Also, the last digit of the PIN is a check digit, and hence is known.  So, while one might naively assume that there are 10^8 possible PINs, in fact only 10^4 + 10^3 = 11,000 need to be tried.  The vulnerability is made worse because some routers do not have any lock-out provision after the entry of several consecutive incorrect PINs.  Mr. Viehböck’s technical paper can be downloaded here [PDF].
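
As a quick sanity check on that arithmetic, here is a minimal sketch in Python of the attempt counts involved; it is purely illustrative and does not implement any part of the WPS protocol:

    # Worst-case number of WPS PIN guesses, as described above.
    naive_attempts = 10 ** 8    # all eight digits guessed blindly
    first_half = 10 ** 4        # the router confirms the first four digits separately
    second_half = 10 ** 3       # digits 5-7; the eighth digit is a computable check digit

    print(first_half + second_half)                       # 11000 guesses in the worst case
    print(naive_attempts // (first_half + second_half))   # roughly a 9,000-fold reduction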

US-CERT summarizes the impact of this vulnerability:

An attacker within range of the wireless access point may be able to brute force the WPS PIN and retrieve the password for the wireless network, change the configuration of the access point, or cause a denial of service.

Most major brands of wireless equipment are affected (there is a list in the US-CERT Vulnerability Note).   Frequently, WPS is enabled by default.  The SANS Internet Storm Center also has a diary entry on this vulnerability; they suggest, as does US-CERT, that the only available mitigation in the near term is to disable WPS.

I will post again if I discover any significant new information on this.


New Web Service Vulnerability

December 29, 2011

According to a report at Security Week, a new assessment of a vulnerability in the hash table implementations of some Web development platforms indicates that many current software tools are vulnerable to a particular type of denial-of-service attack.  Research done by German security firm n.runs AG indicates the vulnerability impacts PHP 5, Java, .NET, and Google’s V8, while PHP 4, Ruby, and Python are somewhat vulnerable.  The complete n.runs advisory (which is in English) is available here [PDF].

The use of hash tables is common in software applications that need to construct a lookup table of values at execution time.  (One might think, for example, of a compiler constructing a table of variable names.)   The idea is to compute some inexpensive function of the identifier, or item key, that produces a “sort of random” distribution of results.  That in turn makes searching for a value quicker, since the whole list need not be checked.

We sometimes use a very primitive manual hash function if we are setting out, say, name badges for a group of people: we arrange them based on the first letter of the last name.   Typically, of course, there will be more than one name that begins with a given letter (and at least in English-speaking countries, some letters, like ‘M’, will be much more common than others, like ‘X’); in the context of hash tables, this is called a collision.  This is, usually, perfectly OK; it is still quicker to look through all the ‘M’s than through the whole list.
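
To make the idea concrete, here is a minimal sketch of a bucketed hash table in Python.  The class and its deliberately weak hash function are my own illustration, not the implementation used by any of the affected platforms:

    class SimpleHashTable:
        """A toy hash table: each key is assigned to one of a fixed set of buckets."""

        def __init__(self, num_buckets=64):
            self.buckets = [[] for _ in range(num_buckets)]

        def _bucket_index(self, key):
            # An inexpensive (and deliberately weak) hash function: the sum
            # of the character codes, modulo the number of buckets.
            return sum(ord(c) for c in key) % len(self.buckets)

        def put(self, key, value):
            self.buckets[self._bucket_index(key)].append((key, value))

        def get(self, key):
            # Only one bucket is scanned, not the whole table (like looking
            # through just the 'M' name badges).
            for stored_key, value in self.buckets[self._bucket_index(key)]:
                if stored_key == key:
                    return value
            raise KeyError(key)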

The problem is that some hash functions, used in these Web development toolsets, have the property that it is possible to deliberately induce a large number of collisions by using specially crafted identifiers.  (Imagine our name tag example if everyone’s last name began with ‘M’.)  An attack like this could cause the Web server to use up huge amounts of time repeatedly scanning a long list of identifiers, leading to a denial of service.
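
Continuing the sketch above: because that toy hash function just sums character codes, any rearrangement of the same characters lands in the same bucket, so an attacker can flood a single bucket and turn every lookup into a linear scan.  The real attacks exploit weaknesses in the string-hash functions of the affected platforms, but the effect is the same:

    import itertools
    import time

    table = SimpleHashTable()

    # Every permutation of the same eight letters has the same character-code
    # sum, so all of these keys collide into a single bucket.
    colliding_keys = ["".join(p) for p in itertools.permutations("abcdefgh")][:20000]
    for key in colliding_keys:
        table.put(key, None)

    start = time.perf_counter()
    table.get(colliding_keys[-1])    # forced to walk roughly 20,000 entries in one bucket
    print("lookup time: %.4f seconds" % (time.perf_counter() - start))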

According to Security Week, the security teams for Ruby and Tomcat have addressed the issue, and Microsoft has issued a Security Advisory, which rates the vulnerability as Critical for all supported versions of Windows.  (Note, however, that Microsoft’s Web server, IIS, is not enabled by default in any version of Windows.)   Oracle (for Java) says that nothing needs to be done.

The Microsoft advisory lists some mitigation steps, some of which are applicable on other platforms (a rough sketch of how such limits might be enforced follows the list):

  • Limit the size of acceptable POST requests
  • Limit the allowable CPU time used per request
  • Limit the maximum number of parameters in a request
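
As an illustration of how the first and third of these limits might be enforced in front of a Web application, here is a minimal WSGI middleware sketch in Python.  The class name and the specific limits are hypothetical, chosen for the example; they are not taken from Microsoft’s advisory or any vendor’s product:

    import io

    MAX_BODY_BYTES = 1024 * 1024    # hypothetical 1 MB cap on POST bodies
    MAX_PARAMETERS = 1000           # hypothetical cap on form parameters per request

    class RequestLimitMiddleware:
        """Rejects over-large or over-parameterized POST requests before the
        application parses the form body into a hash table."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            if environ.get("REQUEST_METHOD") == "POST":
                try:
                    length = int(environ.get("CONTENT_LENGTH") or 0)
                except ValueError:
                    length = 0
                body = environ["wsgi.input"].read(length)
                # Counting '&' separators approximates the parameter count
                # without building any data structure from the body.
                too_many = body.count(b"&") + 1 > MAX_PARAMETERS
                if length > MAX_BODY_BYTES or too_many:
                    start_response("413 Request Entity Too Large",
                                   [("Content-Type", "text/plain")])
                    return [b"Request rejected by size/parameter limits\n"]
                # Put the consumed body back so the application can still read it.
                environ["wsgi.input"] = io.BytesIO(body)
            return self.app(environ, start_response)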

Microsoft has also announced that it will release an out-of-schedule security fix for its ASP.NET framework later today.

Update Thursday, 29 December, 13:11 EST

Microsoft has now issued Security Bulletin MS11-100, which addresses this vulnerability, as well as three privately-reported vulnerabilities (CVE numbers are in the bulletin).  The .NET framework is affected on all supported versions of Windows.

If you are using .NET on Windows, I recommend that you apply this update as soon as you conveniently can.


Fixing Fingerprint Flaws

December 28, 2011

I’ve written here before about some of the problems with fingerprints, one of the older biometric technologies; with DNA analysis, currently the “gold standard” of biometric identification; and about some of the issues involved with biometrics in general.  These techniques, of course, always work brilliantly in the movies and on TV shows like CSI; but, as we are frequently reminded, the real world can be a messier place.  It should be obvious, at least, that biometric evidence is necessarily statistical in nature; it is no more possible to prove that fingerprints are unique than it is to prove that no two snowflakes are alike.  There have been miscarriages of justice, and near misses, because this fundamental principle was not understood, or just ignored.

A recent article at the New Scientist reports on some further evidence of problems with fingerprint forensics.  The Scottish government in the UK set up a Fingerprint Inquiry, chaired by The Rt Hon Sir Anthony Campbell, under the Inquiries Act 2005, to look into the fingerprint analysis used in the case of H.M. Advocate v McKie.  Shirley McKie was a police detective involved in a murder investigation, who was subsequently tried for perjury based on a fingerprint found at the crime scene; her trial included conflicting expert testimony on whether the fingerprint in question matched McKie’s.  She was found not guilty by the jury.  (More background on the case is here.)

The Fingerprint Inquiry report was published on December 14, and provides a comprehensive history of the case, an examination of current practices with respect to fingerprint evidence, and a set of recommendations for improvements.

The report, published on 14 December, concludes that human error was to blame and voices serious concerns about how fingerprint analysts report matches. It recommends that they no longer report conclusions with 100 per cent certainty, and develop a process for analysing complex, partial or smudged prints involving at least three independent examiners who fully document their findings.

The New Scientist article also reports on findings of two other studies that looked at possible biases in the customary analysis of fingerprint evidence.  In the first, a team led by Itiel Dror of University College London tested whether fingerprint analysts’ results for crime-scene prints were affected if they saw suspects’ fingerprints at the time of analysis.  It should not come as a huge surprise to find out that there was a difference.  The same team also examined how analysts checked potential matches provided by an automated fingerprint identification system [AFIS].  They found that potential matches that came earlier on the AFIS-generated lists were more likely to be identified as matching by the examiners, and that changing the order of the entries on the AFIS list could change the “match” selected.  This research does suggest that how fingerprint evidence is analyzed matters.

In addition to the recommendations of the Fingerprint Inquiry, he [Dror] says examiners should always analyse crime-scene prints and document their findings before seeing a suspect’s print, and should have no access to other contextual evidence.

Despite the impression of scientific rigor and infallibility one may get from watching TV cop shows, fingerprint evidence, in addition to being based on a statistical assertion, is collected and analyzed by people, and is therefore subject to the same kind of errors people in other fields make as a matter of course.  If we want to make the administration of justice as fair as possible, we need to make sure that message is understood and reflected in forensic practice.


Another UNIX Retrospective

December 27, 2011

I have written here a couple of times about the 40+ year history of the UNIX® operating system, and its many “descendants”, including Linux and Mac OS X.  UNIX began as a sort of “under the corporate radar” project at AT&T Bell Labs, following AT&T’s withdrawal from the MULTICS project.  This month’s issue of the IEEE Spectrum has a feature article on the history of UNIX, which began as a simple system running on an orphaned PDP-7 minicomputer, made by Digital Equipment Corporation.  The name of the system was a takeoff on the MULTICS name:

The name Unix stems from a joke one of [Ken] Thompson’s colleagues made: Because the new operating system supported only one user (Thompson), he saw it as an emasculated version of Multics and dubbed it “Un-multiplexed Information and Computing Service,” or Unics. The name later morphed into Unix.

The article gives a good overview of the many twists and turns, technical, legal, and otherwise, of the history of UNIX and its derivatives.  It’s worth a read if the history of technology interests you.


Firefox & Thunderbird 9.0.1 Released

December 27, 2011

Shortly after the release last week of version 9.0 of Firefox and Thunderbird, Mozilla released an updated version 9.0.1 for both the browser and the E-mail client.  Apparently, one of the fixes for a security vulnerability introduced a new bug, potentially affecting all platforms, with Mac users being most affected.  Bugzilla has the relevant bug record; there is also a brief article at ThreatPost.

You can get the new version via the built-in update mechanism, or from the download pages for Firefox and Thunderbird.


Choosing a Cloud Provider

December 26, 2011

I’ve talked here from time to time about the development of “cloud computing”: the idea that many functions and applications that have, in the past, been performed on local PC workstations could instead be carried out by an Internet service, with users accessing them via a Web browser.  It should not surprise anyone that Google is a major provider of cloud services, with GMail and its Google Apps services.  Microsoft, too, is working to get in on the action with its Office 365 service; and there are other, smaller providers as well.  Many businesses and other organizations have expressed interest in evaluating the potential of these offerings.

In a recent article, Wired reports on an evaluation carried out recently by the University of California at Berkeley, focused on the E-mail and calendar facilities provided by Google Apps for Education and by Microsoft Office 365.  Both products are aimed at the same market, but have some significant differences.

Though both Google Apps and Microsoft Office 365 are billed as “cloud” services, they are very different things. Google is built to operate entirely on the web, while Microsoft’s suite still leans on local software.

The University ended up choosing the Google product, as it announced on December 21.  This particular evaluation is interesting, not so much because of the choice that was finally made, but because the reasons behind the choice have been published in considerable detail.  The University has released an Assessment Matrix, showing their main evaluation criteria and their ratings.  One key advantage of the Google solution was that it could be put in place more quickly, with less infrastructure investment.  But the decision was not entirely one-sided, and Berkeley will continue to use Microsoft products under a site license.

If you or your organization are thinking about cloud services as a possibility, I think the Berkeley material will be an interesting and thought-provoking worked example.

