New Orleans May Dump Security Cameras

October 31, 2010

According to a report by David Hammer of The Times-Picayune, Mayor Mitch Landrieu of New Orleans has proposed discontinuing the city’s use of its security camera network.  The cameras began to be installed in 2003, in response to the city’s high crime rate.  However, despite the expenditure of several million dollars, a recent report by the city’s inspector general found that only 41 of the 211 installed cameras were actually working.  And the cameras’ overall contribution to fighting crime has not been impressive, although, as the report notes, it has at least been partly focused on an appropriate target:

In seven years, New Orleans’ crime camera program has yielded six indictments: three for crimes caught on video and three for bribes and kickbacks a vendor is accused of paying a former city official to sell the cameras to City Hall.

Quite apart from the apparent venality involved in this project, which is the subject of a 63-count indictment obtained by the US Attorney, there has always been, to my mind, a significant question of whether these wide-scale security camera projects actually do much to improve security.

There are some specific situations in which security cameras can be useful.  Typically they involve a somehow limited physical environment, where the camera coverage and available lighting can be controlled.  Cameras at ATMs, for example, can be valuable, because the user has to be fairly close to the machine to use it, and the machine needs to be well-lighted, anyway.  Similarly, cameras and lighting in parking garages have helped reduce crime; in that environment, legitimate pedestrian traffic is light, so picking out individuals is usually fairly easy.

In some places, however, widespread installation of cameras in all sorts of public places has been the norm.  (London is probably the poster child for this approach, with on the order of half a million cameras installed.)  Here, it is much less clear that the benefits are commensurate with even the monetary cost, not to mention the intangible cost of reduced personal privacy.  As Bruce Schneier points out in an essay on the issue at CNN, there is also a troubling history of misuse of these camera networks by the police and other authorities.  The benefits, meanwhile, are open to considerable question.  With respect to preventing crime, the possibilities are limited, since it is not possible to have someone watching the video from every camera all the time.  A policeman’s time would be better spent actually watching the area in person, since he has no fixed blind spots or predictable patterns of coverage.

Video recorded from cameras can sometimes be helpful in identifying criminals after a crime has been committed, but even there it is far from infallible.  Schneier cites the example of the January assassination of Hamas leader Mahmoud al-Mabhouh in Dubai.  The obviously professional team that carried out the hit was captured on video by security cameras.

Team members walk through the airport, check into and out of hotels, get into and out of taxis. They make no effort to hide themselves from the cameras, sometimes seeming to stare directly into them. They obviously don’t care that they’re being recorded, and — in fact — the cameras didn’t prevent the assassination, nor as far as we know have they helped as yet in identifying the killers.

Widespread installation of security cameras, like many other measures introduced since 9/11, falls into the category that Schneier calls security theater.  Such measures tend to be produced by a process along the lines of: “This is terrible.  We must do something.  X is something; therefore we must do X.”  They may make us feel better, temporarily; but, in addition to being a waste of time and money, they may also damage those aspects of our society that make it worth defending in the first place.

More on Cookies

October 30, 2010

This past week, I posted a note here about a new Firefox extension called Firesheep, which is designed to automatically eavesdrop on public wireless networks and capture the session identification information included in browser cookies.  As I mentioned in that post, the cookie mechanism was grafted onto the Web as a way of introducing the concept of sessions, with at least temporarily persistent state, on top of the stateless HTTP protocol.

I have just come across an interesting blog post, by Michal Zalewski, a security researcher, that gives an overview of the development of the cookie mechanism, and points out some of its inherent problems.   (Mr. Zalewski is also the author and maintainer of the Browser Security Handbook, available at Google’s code site.)  The first cookie mechanism was implemented in the Netscape (remember them?) Navigator browser by 1995.  Although the idea was quickly copied by other browser developers, it would be a couple of years before the first attempt at a formal specification was made, in RFC 2109.

The document captured some of the status quo – but confusingly, also tried to tweak the design, an effort that proved to be completely unsuccessful; for example, contrary to what is implied by this RFC, most browsers do not support multiple comma-delimited NAME=VALUE pairs in a single Set-Cookie header; do not recognize quoted-string cookie values; and do not use max-age to determine cookie lifetime.
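For readers who want to poke at the basic NAME=VALUE, attribute-list shape being argued about, a quick sketch using Python’s standard http.cookies module may help.  (This is a server-side helper, not a browser, so it illustrates only the header format, not any particular browser’s quirks.)

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie header of the general shape discussed in the RFCs.
c = SimpleCookie()
c.load('session=abc123; Path=/; Max-Age=3600')

morsel = c['session']
print(morsel.value)       # abc123
print(morsel['path'])     # /
print(morsel['max-age'])  # 3600

# And serialize it back out as a Set-Cookie header line.
print(c.output())
```

Note that, in keeping with the quirks Mr. Zalewski describes, even this library’s handling of attributes like Max-Age tells you nothing about whether a given browser will actually honor them.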

Browser makers continued to make their own “improvements” to the mechanism.  Three years later, another attempt to set a standard for the cookie mechanism was made in RFC 2965. As an attempt to establish standard behavior for browsers, this attempt was also notably unsuccessful.

All these moves led to a very interesting situation: there is simply no accurate, official account of cookie behavior in modern browsers; the two relevant RFCs, often cited by people arguing on the Internet, are completely out of touch with reality.

This led to a situation, which to a significant degree still exists, in which Web developers had to infer, by experiment, the rules by which cookies were processed.

In addition to the session hijacking risk that I outlined in that earlier post, the cookie mechanism introduces other risks.  Clearly, since the values carried by cookies have to be stored somewhere, both at the server and the client, there is some finite limit on the amount of information that can be stored.  In addition, servers typically limit the size of the requests they are willing to process, in order to make mounting a denial-of-service attack a bit more difficult.  And there are limits to how much information the client browser can store.  Unfortunately, there are no generally recognized safe limits for the size of these objects, and the limits that do exist do not suffer from any “foolish consistency” (according to Emerson, the “hobgoblin of little minds”).

For example, the later RFC 2965 standard specifies that browsers should support cookie sizes of at least 4096 (4K) bytes, and at least 20 cookies per host.  That works out to 80K per host, considerably in excess of what real servers will typically accept (for example, Apache servers will typically not accept requests longer than 8K).  The standard also allows for the use of characters from extended character sets, but does not specify any particular method of encoding them.  In the face of such fuzzy standards, developers have often taken the path of least resistance, to the detriment of browser security.
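The 80K figure is just the two minimums multiplied together, and a short sketch (again using http.cookies purely as a convenient serializer) shows how quickly even one large cookie approaches the server-side limits mentioned above:

```python
from http.cookies import SimpleCookie

PER_COOKIE_MIN = 4096      # bytes per cookie RFC 2965 asks browsers to accept
COOKIES_PER_HOST_MIN = 20  # cookies per host it asks browsers to accept
print(PER_COOKIE_MIN * COOKIES_PER_HOST_MIN)  # 81920 bytes, i.e. 80K per host

# A single cookie with a 4000-byte value already rivals Apache's typical
# default limit (~8K) for an entire request header block.
c = SimpleCookie()
c['data'] = 'x' * 4000
header = c.output(header='Set-Cookie:')
print(len(header))  # just over 4000 bytes for this one header line
```

So a browser that dutifully stored the RFC’s minimums, and sent them all back, could easily generate requests that a typical server would simply reject.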

There are many more examples of problems in Mr. Zalewski’s post.  Cookies are another example, it seems, of what is apparently the mantra of Internet technology: “There’s never time to do it right,  but there’s always time to do it over.”

Adobe Security Advisory

October 28, 2010

Adobe Systems has released a new Security Advisory [APSA10-05] for its Flash Player, Reader, and Acrobat software.  There is a critical security vulnerability present in all current versions of these packages, on all platforms (Windows, Mac OS X, and UNIX/Linux):

A critical vulnerability exists in Adobe Flash Player and earlier versions for Windows, Macintosh, Linux and Solaris operating systems; Adobe Flash Player and earlier versions for Android; and the authplay.dll component that ships with Adobe Reader 9.4 and earlier 9.x versions for Windows, Macintosh and UNIX operating systems, and Adobe Acrobat 9.4 and earlier 9.x versions for Windows and Macintosh operating systems.

Adobe says that:

Adobe Reader and Acrobat 8.x are confirmed not vulnerable. Adobe Reader for Android is not affected by this issue.

The vulnerability is serious, and it appears that it is currently being exploited via Flash content embedded in PDF documents.

This vulnerability (CVE-2010-3654) could cause a crash and potentially allow an attacker to take control of the affected system. There are reports that this vulnerability is being actively exploited in the wild against Adobe Reader and Acrobat 9.x. Adobe is not currently aware of attacks targeting Adobe Flash Player.

The announcement says that Adobe is working on a fix, and expects to deliver it by the middle of November.

The vulnerability is in a shared library, called authplay.dll on Windows systems and AuthPlayLib on Mac OS X, with a corresponding shared library on Linux.  Mitigation steps are detailed in the Security Advisory, but basically entail renaming, relocating, or removing this library.  This will in some cases cause a non-exploitable crash when a document file that uses these features (even innocently) is opened.

I will post any updated information on this as I receive it.

Mozilla Updates Thunderbird

October 28, 2010

Mozilla has, in addition to the Firefox update I discussed in the last post, released a new version, 3.1.6, of its Thunderbird E-mail client, for Linux, Windows, and Mac OS X.  This update addresses the same vulnerability as the Firefox update; the risk for Thunderbird is lower, since the flaw can’t be exploited through normal E-mail usage.  More details of the update are in the Release Notes.   You can get the new version using the built-in update mechanism (Menu: Help / Check for Updates), or you can download an installation package here.

Although the risk from the patched vulnerability is not as great for Thunderbird as it is for Firefox, I do recommend installing this update as soon as you conveniently can.

Critical Firefox Update

October 28, 2010

Mozilla has released a new version, 3.6.12, of its Firefox browser, for Mac OS X, Linux, and Windows.  This update fixes a critical security vulnerability, which potentially allows a remote attacker to run arbitrary code on the target system.  You can get the new version via the built-in update mechanism (Menu: Help / Check for Updates); alternatively, you can download installation packages for all platforms, in many different languages.  Further information about the new version is available in the Release Notes, and specific information about the patched vulnerability is in the Security Advisory MFSA 2010-73.

Because of the seriousness of the flaw, and because there have been reports of exploits for the flaw on the Internet, I recommend installing this update as soon as you can.


Session Hijacking Made Easy

October 27, 2010

The past ten years or so have seen tremendous growth in the use of wireless networking, sometimes called “Wi-Fi”, among most groups of Internet users.  Getting rid of the umbilical cord connecting one’s laptop to a wired network has certainly been a convenience, and has made working while at least somewhat mobile a much more practical proposition.  Free or low-cost public wireless access is now offered by many public libraries, coffee shops, shopping centers, and other places.  This has prompted security folks to warn that, since Wi-Fi uses radio transmissions, it is in principle possible for others to listen in on one’s Internet session, possibly intercepting login credentials, for example.  For private Wi-Fi facilities, like those in a business or a home, this can be addressed by encrypting the entire wireless network; open public Wi-Fi networks, however, typically do not use encryption.  Many Web sites also ensure that their login transactions are done over a secure connection (usually indicated in the browser by a little lock icon).

But public networks are still dangerous.  The Hypertext Transfer Protocol [HTTP], the core protocol of the Web, was designed to be a “connectionless” (or stateless) protocol, focused only on requests for pages, and responses to those requests.  The entire idea of a logged-in session at a Web site was more or less grafted on top of HTTP, principally by using cookies, small bits of text that are stored by the client browser, and are used to pass information back and forth.  You may have seen notices at Web sites to the effect that “you must have cookies enabled to log in”.  In general, when you log in, the site returns a cookie to your browser; the cookie, in effect, contains a temporary secret that allows you access to the site, because your browser returns its value with subsequent requests.

This means that someone who can eavesdrop on your unencrypted Wi-Fi session can capture the value of the login cookie, and use it to impersonate you, at least for a time.  Doing this has generally required a bit of detailed networking knowledge.  Now, however, according to a report at ThreatPost, a pair of security researchers has developed a proof-of-concept extension for the Firefox browser that allows essentially “one click” session hijacking on an unprotected wireless network, in order to dramatize the risks involved.
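The underlying problem is easy to see in miniature.  In the toy sketch below (all names are hypothetical, not taken from any real site), the session cookie is a pure bearer token: the server has no way to distinguish the real user from anyone who has sniffed the cookie’s value off the air.

```python
import secrets

sessions = {}  # server-side table: cookie value -> logged-in user

def login(user):
    """Issue the secret that would travel back in a Set-Cookie header."""
    token = secrets.token_hex(16)
    sessions[token] = user
    return token

def handle_request(cookie_value):
    """The server sees only the cookie value, not who really sent it."""
    return sessions.get(cookie_value, 'anonymous')

alice_cookie = login('alice')
print(handle_request(alice_cookie))   # alice

# An eavesdropper on an open Wi-Fi network needs no password at all --
# the captured cookie value alone is enough to impersonate the user.
stolen_cookie = alice_cookie
print(handle_request(stolen_cookie))  # alice
```

Firesheep essentially automates the “capture” step of this sketch for real sites; everything after that is just ordinary browsing with someone else’s cookie.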

But now a pair of researchers have created a tool to identify and capture the social networking sessions of those around you. The tool, a Firefox browser extension dubbed “Firesheep,” was demonstrated at the ToorCon Hacking Conference in San Diego on Sunday. Its primary purpose is to underscore the lack of effective transaction security for many popular social networking applications, including Facebook, Twitter, Flickr and iGoogle: allowing users to browse public wifi networks for active social networking sessions using those services, then take them over using a built-in “one-click” session hijacking feature.

The Firesheep extension is set up to automatically detect and log sessions from some popular services, like Facebook.  It is important to emphasize that using a secure connection for the login transaction will not prevent this attack, because the session is hijacked after the login is completed, by “sniffing” the session cookie(s).  (Slides from the ToorCon conference presentation are available here.)

One way to avoid this risk is, of course, never to use public wireless networks for anything remotely confidential.  Another, somewhat less drastic, risk mitigation, described in a post at TechCrunch, involves installing another Firefox extension called Force-TLS.  This will attempt to force the use of an encrypted session for Web sites specified by the user.  This solution is not perfect; some sites may not be able to serve all their content using secure connections, even if, for the most part, the site supports it.  (Some more technical detail is available at the developer’s site.)  Some sites have resisted making full secure sessions available, arguing that it would adversely affect performance.  It is worth noting that Google’s GMail service began offering full SSL session encryption in January; according to Google, the impact was minimal: “We had to deploy no additional machines, and no special hardware.”
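On the server side, part of the fix is simply marking session cookies so that the browser will only ever send them over encrypted connections.  As a sketch of the header format (not of any particular site’s code), Python’s standard http.cookies module can emit such flags:

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c['session'] = 'abc123'
c['session']['secure'] = True    # browser returns this cookie over HTTPS only
c['session']['httponly'] = True  # and hides it from in-page scripts

header = c.output()
print(header)  # the Set-Cookie line, with both flags appended
```

The Secure flag only helps, of course, if the site actually serves the whole session over SSL; a Secure cookie on a site that falls back to plain HTTP after login simply never gets sent.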

Still, I think that raising people’s awareness of this risk is an important first step in making the Web more secure.  I hope it will motivate Web site developers to take their part of the security responsibility seriously, by supporting secure connections properly.

If you use GMail, you should turn the full SSL feature on.  To do this, go to your GMail account.  Click on Settings in the top right corner of the page.  Click on the General tab.  The fifth item down is Browser Connection; select “Always use https”.

Bees One-Up Computers

October 26, 2010

The continuing improvements in computer technology and algorithms, and the accompanying improvement in solving complex problems, tend to get a lot of attention.  Just yesterday, for example, I posted a note here about a new algorithm for solving linear systems, and have written previously about IBM’s attempt to build software to play Jeopardy!.  So it is probably salutary for us to be reminded occasionally that Nature has a few tricks up her sleeve.

According to some new research from Queen Mary and Royal Holloway, University of London, it turns out that ordinary bumblebees manage to solve a complex mathematics problem, even though they are hardly over-endowed in the brain department.  When the bees forage, they initially come across desirable flowers more or less at random; but they quickly learn the shortest path that allows them to visit all the flowers and then return home.  In effect, the bees are solving the Traveling Salesman problem, one of the most carefully studied problems in optimization.  It is known to be a very complex problem in the general case, and in fact to be NP-complete, meaning that the difficulty of computing a solution is likely to increase exponentially with the size of the problem.  Nonetheless, the bees have cracked it.

The Travelling Salesman must find the shortest route that allows him to visit all locations on his route. Computers solve it by comparing the length of all possible routes and choosing the shortest. However, bees solve it without computer assistance using a brain the size of grass seed.
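For comparison with the bees, here is what the exhaustive approach described above looks like in miniature (the “flower” coordinates are made up): it examines every possible ordering of the stops, which is exactly why the running time explodes as stops are added.

```python
from itertools import permutations
from math import dist

nest = (0.0, 0.0)
flowers = [(1.0, 5.0), (4.0, 1.0), (6.0, 4.0), (2.0, 2.0)]

def tour_length(order):
    """Total distance of nest -> each flower in the given order -> nest."""
    path = [nest, *order, nest]
    return sum(dist(a, b) for a, b in zip(path, path[1:]))

# Brute force: n flowers means n! candidate tours to compare.
# Here 4! = 24, but 20 flowers would already mean roughly 2.4
# quintillion tours -- the exponential blow-up mentioned above.
best = min(permutations(flowers), key=tour_length)
print(best)
print(round(tour_length(best), 2))
```

The bees, needless to say, are not enumerating permutations; how they converge on short tours so quickly is precisely what makes the result interesting.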

The research is being published [abstract] in the journal American Naturalist.

Assuming that the results can be confirmed, and studied further, they might lead us to some new understanding of ways to address this class of problems — though that might not be very pleasing to our collective ego.
