Browsing Incognito — Sort Of

August 21, 2010

Most modern Web browsers have implemented a browsing mode that is meant to keep others who may have access to the machine from discovering a user’s browsing history (searches, sites visited, etc.).  This capability goes by various names: Google’s Chrome calls it “Incognito Mode”, Firefox and Safari call it “Private Browsing”, and Internet Explorer calls it “InPrivate Browsing”.   Regardless of the name, the idea is that the browser either does not record, or erases on exit, things like the history list of URLs accessed.

An article at Ars Technica describes some experiments on private browsing carried out by a team of researchers from Stanford and Carnegie-Mellon, who presented a paper [abstract PDF] at the USENIX 2010 Security Symposium.  The authors begin by trying to specify a bit more precisely what the objective of private browsing is: to avoid any persistent state changes on the browser machine that would reveal the user’s past behavior.  (As the authors correctly point out, none of this matters if the attacker can have access to the machine during the browsing session, since in that case there are many ways in which the user’s actions can be tracked.)  They distinguish among four types of state information:

  1. Changes initiated by a Web site without user interaction, such as adding an entry to the URL history list, or setting a cookie.
  2. Changes initiated by a Web site with user interaction, such as setting a password that is saved.
  3. Changes initiated by the user, such as setting a bookmark, or downloading a file.
  4. Changes that are not user-specific, such as updating the browser or add-ons, or updates to a “no phishing” list.

The existing implementations focus on trying to remove traces of changes in the first category; changes in the other categories fall into something of a gray area, and are often treated inconsistently, even though they may provide an attacker with a good deal of information.

The team found a variety of weaknesses in the existing implementations of private browsing modes.  Some of these are direct consequences of how certain features are implemented; for example:

  • There is an HTML 5 feature called custom protocol handlers, which allows a Web site to register a handler for a custom protocol (for example, the ‘xyz’ protocol invoked by URLs of the form xyz://).  In the Firefox implementation, these registrations persist after private mode is exited, thus potentially leaking part of the browsing history.
  • Internet Explorer, Firefox, and Safari support SSL client certificates.  A Web site can request that the browser generate an SSL client public/private key pair.   The keys are retained when private mode is exited, potentially leaking the site’s identity.  Internet Explorer and Safari also retain self-signed SSL certificates encountered during a private browsing session.

There is also potential information leakage from facilities in the machine environment that are not directly controlled by the browser.   The operating system can, for example, swap pages of virtual memory to disk; if these pages happen to contain private browsing data, they potentially can be recovered later.  Preventing this would require the browser to make all these data pages “unswappable”.  This can get messy, and might well adversely affect the performance of the system.

The authors go on to examine the example of Firefox in more detail, first by using a manual code review of how changes to persistent state were implemented, and second by using an automated tool to check for state changes.  They found a number of areas where information could “leak” from private browsing mode.

The problem becomes even more complicated when browser add-ons are taken into account, since the exact function of an extension is typically not controlled by the browser developer.  The browsers themselves have different default policies with respect to add-ons in private mode:  Chrome disables most extensions (but not plugins) by default, Firefox allows them by default, and Internet Explorer disables extensions but allows ActiveX controls (cf. “swallow a camel but strain at a gnat”).

There is much more in the paper of interest to security-minded browser users.  To the basic question of whether private browsing is secure, the answer is: it depends.  It’s probably enough to discourage your nosy kid brother, but it probably wouldn’t be wise to count on it for anything very important.   But then, you’d never do that anyway, would you?

The Atlantic Garbage Patch, Surveyed

August 20, 2010

Last August, I posted a couple of notes about Project Kaisei, an expedition to the Great Pacific Garbage Patch, a huge collection of plastic bottles and miscellaneous rubbish, concentrated by prevailing winds and currents into an area of the North Pacific ocean about the size of Texas.  Then in June, the existence of a similar Atlantic “garbage patch” was confirmed by a French ocean survey in the Sargasso Sea.

The “Wired Science” blog at Wired now has a report of a project that uses 22 years’ worth of survey data to map the extent of the Atlantic patch.

Scientists have gathered data from 22 years of surface net tows to map the North Atlantic garbage patch and its change over time, creating the most accurate picture yet of any pelagic plastic patch on earth.

The data were gathered by thousands of undergraduates aboard the Sea Education Association (SEA) sailing semester, who hand-picked, counted and measured more than 64,000 pieces of plastic from 6,000 net tows between 1986 and 2008.

Most of the pieces of plastic found were small — less than half an inch long.  (The nets used would not capture pieces smaller than about 0.01 inch.)  The highest concentrations of plastic were found in the area between the latitude of Virginia and the latitude of Cuba.  Because of the routes taken by the collecting ships, the east-west extent of the patch is less clearly known.  However, experiments with drift buoys tracked by satellite suggest that most of the plastic originates from the east coast of the US.

One possibly hopeful sign is that the average concentration of plastic has not increased over the 22-year survey period, although consumption of plastics certainly has.  Possibly, this reflects the plastic being gradually broken down into pieces too small to collect.

I hope that at some point we can convince people to stop using the oceans as a trash can.

Google Patches Chrome

August 19, 2010

Google has released a new version of its Chrome browser, 5.0.375.127, which incorporates fixes for nine security vulnerabilities (two of which are rated Critical), as well as a workaround for a Windows kernel bug.  The new stable release is available for all platforms (Mac OS X, Linux, and Windows).  You should get the new version via the built-in updating mechanism, or via your Linux distro’s repository.

Update Friday, 20 August, 8:05 EDT

Details of the fixes included in this update are in the release announcement on the “Chrome Releases” blog.

Adobe Patches Reader, Acrobat

August 19, 2010

As promised, Adobe Systems today released security updates for its Acrobat and Reader software.  The updates address a critical security vulnerability [CVE-2010-2862], as outlined in Adobe’s Security Bulletin [APSB10-17].   They also include the fix for the Flash vulnerability patched earlier this month, as it applies to the Flash component used by these two products.

Either product can be updated using the built-in updating mechanism (Main menu: Help / Check for Updates).  Alternatively, the updates for Reader can be downloaded using the following links:

If you are using Windows or Mac OS X, the patch is an incremental update from version 9.3.3 to 9.3.4, not a full installation.  If you are using UNIX or Linux, the package files (which will be in the FTP directory  9.3.4/ENU for US English) are apparently full installation packages (there are .rpm, .deb, and .pkg formats available, as well as a tarball).   If you use these download links, make sure you scroll down the page, if necessary, to get to version 9.3.4, which is the updated one.

Links for the Acrobat updates are in the Security Bulletin.

If you are using these products, especially Reader, I recommend that you install the update(s) as soon as you conveniently can.  Like Adobe’s Flash, Reader is very widely installed across a variety of platforms, and that makes it an attractive target.

Adobe to fix Reader, Acrobat

August 18, 2010

Adobe Systems has announced that it plans to release security updates tomorrow, Thursday, August 19, for its Reader and Acrobat software.  In an updated version of its Security Advisory for Adobe Reader and Acrobat [APSB10-17], the company said it would release updates for Reader on the Mac OS X, Windows, and Unix/Linux platforms, and for Acrobat on Mac OS X and Windows.   All versions of Reader up to and including 9.3.3 are affected by the vulnerability, CVE-2010-2862, as are all versions of Acrobat up through 9.3.3.

The vulnerability being addressed by this update is rated Critical by Adobe; you will probably want to patch this as soon as you can, especially for Reader, which is a popular vector for malware.  I’ll post a note here when the updates have actually been released.

A Probability Processor

August 18, 2010

By now, most of us are tolerably familiar with the basics of how computers process information, and with the idea that numbers and other bits of information are represented as a pattern of 0s and 1s.   You may be watching a YouTube video on your PC, but deep under the covers, the processor(s) involved are just carrying out a lot of arithmetic and logical operations on a string of bits (short for binary digits).

Now, however, an article in Technology Review describes a new type of processing chip developed by Lyric Semiconductor, based in Boston.  The company has developed what it claims is the first probability processing chip.   Instead of using solid-state components to implement digital logic operations, Lyric’s chips use them to represent Bayesian probabilities.

Whereas a conventional NAND gate outputs a “1” if neither of its inputs match, the output of a Bayesian NAND gate represents the odds that the two input probabilities match. This makes it possible to perform calculations that use probabilities as their input and output.
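
As a rough model (my own sketch, not a description of Lyric’s actual circuitry), the behavior of a probabilistic NAND over independent inputs can be expressed in a few lines of Python:

```python
def prob_nand(p_a: float, p_b: float) -> float:
    """Probability that NAND(a, b) = 1, for independent random bits a and b
    that are 1 with probability p_a and p_b respectively.
    A NAND gate outputs 0 only when both inputs are 1."""
    return 1.0 - p_a * p_b

# Certain inputs reduce to the conventional truth table:
assert prob_nand(1.0, 1.0) == 0.0   # NAND(1, 1) = 0
assert prob_nand(1.0, 0.0) == 1.0   # NAND(1, 0) = 1

# Uncertain inputs yield an output that is itself a probability:
assert abs(prob_nand(0.9, 0.8) - 0.28) < 1e-9  # both 1 with prob 0.72
```

The point of the hardware, of course, is to do this natively in analog circuitry rather than by simulating it with conventional digital arithmetic.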

The company, which was founded in 2006, has largely worked out of the public eye, with most of its funding coming from DARPA [the Defense Advanced Research Projects Agency].   The company has now announced its first commercial product, a probability processing chip that, it claims, can perform error checking and correction for flash memory devices using less than 10% of the power of traditional digital logic, in about 1/30 of the space.

Potentially, this might be a very valuable niche product.  As flash memory components get smaller and smaller, the absolute amount of electrical charge used to represent a 0 or 1 bit gets smaller, too.  According to Ben Vigoda, CEO of Lyric, on whose PhD thesis the technology is based, the difference between a 0 and 1 on some devices is only the charge of 100 electrons.  So error correction is of great importance if storage densities are to be increased.
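
To make the role of error correction concrete, here is a toy sketch of my own (in conventional digital logic, not Lyric’s probabilistic approach) using the classic Hamming(7,4) code, which can detect and correct any single flipped bit in a 7-bit codeword:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword.
    Parity bits sit at (1-based) positions 1, 2, and 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Fix at most one flipped bit and return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based error position; 0 = no error
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# Round trip with one corrupted bit:
data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[4] ^= 1                          # simulate a flipped cell in flash
assert hamming74_correct(word) == data
```

Real flash controllers use much stronger codes (BCH and the like), but the principle is the same: spend some redundant bits and logic to tolerate cells that occasionally misread.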

The company also plans to introduce a General-Purpose Programmable Probability Processing Platform [GP5] chip that will be able to handle general probability problems, as well as a programming language [PBSL, Probability Synthesis for Bayesian Logic].   There may well be a fairly formidable learning curve associated with employing this technology usefully; still, there are enough probability-based computing problems — ranging from Amazon’s book recommendations to credit card fraud detection — that the approach is intriguing.

New Micro-Supercapacitors

August 17, 2010

I’ve written here a number of times about new developments in energy storage technology.  Improvements in this area are of great importance, both to facilitate the use of renewable energy sources (because the wind does not always blow, nor does the sun always shine), and to improve the performance and range of electric vehicles.

Recent articles, including one at Ars Technica, describe a new supercapacitor technology developed by a research team from the US and France.   The team’s work is described in a letter published in the journal Nature Nanotechnology [abstract].  As you may recall from physics class, the most basic capacitor is just two conductive plates separated by an insulator.  The larger the area of the plates, the larger the electrical charge that can be stored.  For this reason, activated carbon is sometimes used in supercapacitors, because it has a very high ratio of surface area to volume.  The research team took a slightly different approach, and used “onion-like carbon” [OLC] as an electrode material; in OLC, the individual particles of the material are made up of concentric spheres of carbon atoms, giving a total particle size of 6-7 nm.  The particles are deposited on an electrode substrate by electrophoresis.
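
The area dependence mentioned above is just the textbook parallel-plate formula, C = ε₀εᵣA/d; a quick back-of-the-envelope sketch (my own, not from the paper):

```python
EPSILON_0 = 8.854e-12  # vacuum permittivity, in farads per meter

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Parallel-plate capacitance: C = eps_0 * eps_r * A / d."""
    return EPSILON_0 * eps_r * area_m2 / gap_m

# Doubling the plate area doubles the capacitance, and hence the
# charge Q = C * V that can be stored at a given voltage:
c1 = plate_capacitance(1.0, 1e-3)
c2 = plate_capacitance(2.0, 1e-3)
assert abs(c2 - 2 * c1) < 1e-18
```

Hence the appeal of porous or nanostructured electrode materials: they pack an enormous effective plate area into a small volume.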

The team found that activated carbon supercapacitors had modestly higher capacitance than the new-design “micro-supercapacitors” of the same electrode size.  But the new devices had a discharge rate (that is, an energy delivery rate) about ten times higher (at about 200 volts/second), and a power density about 100 times that of the activated carbon devices.  The devices could easily find an application niche:

Compared to a thin film lithium battery, its energy per volume is an order of magnitude lower, but its power is over 10,000 times higher. In the paper’s final graph, which compares these two measures, the authors show that the device is the only one with this sort of performance, which means that OLC could find a home for applications that require large bursts of power, long lifetimes, and a decent storage capacity.

There is still room for improvement in these devices, but it is a hopeful sign that significant progress is being made.
