Unvanishing Act

September 22, 2009

Back in July, I posted a note about a new approach to ensuring the deletion of sensitive information on the Internet.  Researchers at the University of Washington had developed an experimental software package called Vanish, which first encrypted sensitive data, and then stored the encryption keys on a public peer-to-peer [P2P] file-sharing network, using a trick that would result in the keys' effectively disappearing within a predictable time.

Now the New York Times has a report that another group of researchers, from the University of Texas at Austin, Princeton University, and the University of Michigan, has developed a way to defeat the Vanish system, and ensure that the data can be "un-Vanished".  The researchers claim that their technique is highly effective:

“In our experiments with Unvanish, we have shown that it is possible to make Vanish messages ‘reappear’ long after they should have ‘disappeared’ nearly 100 percent of the time,” the researchers wrote on a Web site that describes their experiment.

The Vanish system works, as I noted above, by encrypting a data object [VDO] with a secret encryption key generated by the Vanish system, and then breaking that key into a large number of pieces.  The pieces are then stored in a distributed hash table [DHT] on a peer-to-peer file-sharing network, Vuze.  (A hash table is essentially a database consisting of {name, value} pairs, like a list of user IDs and real names.  A distributed hash table exists in sections on different networked machines.)  The implementation of the Vuze DHT requires each node to store copies of data held by its "neighbor" nodes.  The stored values are deleted after a defined time interval (usually eight hours) has passed.  The Vanish approach relies on this deletion to make the encryption key "disappear" after a time; once the key is lost, the encrypted data is no longer readable.
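The mechanism is easier to see in miniature.  The little sketch below is only an illustration of the idea, not the Vanish implementation: the real system splits the key with Shamir threshold secret sharing and stores the shares in the Vuze DHT, while here a simple XOR-based n-of-n split and an ordinary dictionary with timestamps stand in for those pieces.  All the names are my own.

```python
import os
import time

def split_key(key: bytes, n: int) -> list:
    """Split a key into n shares; all n are needed to reconstruct.
    (Stand-in for Vanish's threshold secret sharing.)"""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        # XOR the key with each random share; the final piece is the residue
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def recombine(shares: list) -> bytes:
    """XOR all shares back together to recover the key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

class ToyDHT:
    """Stand-in for the Vuze DHT: stored values expire after a fixed interval."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}  # name -> (time stored, value)

    def put(self, name, value):
        self.store[name] = (time.time(), value)

    def get(self, name):
        entry = self.store.get(name)
        if entry is None or time.time() - entry[0] > self.ttl:
            return None  # expired: the share has "vanished"
        return entry[1]

# Encrypting the VDO, splitting the key, and scattering the shares:
key = os.urandom(16)
shares = split_key(key, 10)
dht = ToyDHT(ttl_seconds=8 * 3600)  # Vuze's usual eight-hour interval
for i, share in enumerate(shares):
    dht.put(f"share-{i}", share)

# Within the window, the shares can be fetched and the key rebuilt:
recovered = recombine([dht.get(f"share-{i}") for i in range(10)])
assert recovered == key
```

Once the DHT's timeout passes, `get` returns nothing, the shares cannot be collected, and the key (and hence the VDO) is unrecoverable.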

The approach used in Unvanish is straightforward.  It uses a small number of machines to masquerade as a much larger set of machines.  By gaining access to the Vuze DHT, Unvanish software is able to read and store any {name, value} pairs that resemble Vanish keys:

Unvanish reads the shares comprising a VDO’s encryption key at any time during the window between its creation and expiration. It then stores an archive version of the key outside of the DHT.

As long as the keys are still available, the original data object can be decrypted.  As far as I can see, the Unvanish system can only work contemporaneously; that is, it cannot recover the keys for data objects that have already disappeared.
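In outline, the attack amounts to running a harvester during the window when the shares are still live, and copying anything share-shaped into permanent storage outside the DHT.  The sketch below is my own illustration of that idea, with invented names and a dictionary standing in for the Vuze DHT; the "resembles a Vanish key" test here is a crude size heuristic, not the researchers' actual filter.

```python
import time

class ToyDHT:
    """Stand-in for the Vuze DHT: stored values expire after a fixed interval."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}  # name -> (time stored, value)

    def put(self, name, value):
        self.store[name] = (time.time(), value)

    def get(self, name):
        entry = self.store.get(name)
        if entry is None or time.time() - entry[0] > self.ttl:
            return None  # expired from the DHT
        return entry[1]

SHARE_SIZE = 16  # assumed size of a key share, for illustration only

def harvest(dht: ToyDHT, archive: dict):
    """Copy anything that resembles a key share into an archive
    outside the DHT, before the DHT's timeout deletes it."""
    for name, (stored_at, value) in dht.store.items():
        if len(value) == SHARE_SIZE:  # crude "looks like a Vanish share" test
            archive[name] = value

dht = ToyDHT(ttl_seconds=0.05)       # a very short "eight hours"
dht.put("share-1", b"A" * 16)

archive = {}
harvest(dht, archive)                # run while the share is still live

time.sleep(0.1)                      # let the DHT's timeout pass
assert dht.get("share-1") is None          # the share has vanished...
assert archive["share-1"] == b"A" * 16     # ...but the archived copy survives
```

This is why the attack must run contemporaneously: the harvester can only copy shares that are still present in the DHT, so anything that expired before harvesting began stays gone.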

This kind of back-and-forth of claims and counter-claims is normal, and healthy, in the security field especially.  It is the mechanism by which good systems are eventually found (if they are).  It is more or less a truism that anyone can design a security system that he himself cannot break.   But then, he’s not the one we’re worried about.

More on Two-Factor Authentication

September 22, 2009

A couple of days ago I posted a note about a new trend in attacks on two-factor authentication systems.  Bruce Schneier also has a post on this in his Schneier on Security blog.  He argues, and I agree, that the fundamental issue here is that the two-factor approach is solving the wrong problem, that what is needed is to authenticate the transaction, not the user.

Credit cards are a perfect example. Notice how little attention is paid to cardholder authentication. Clerks barely check signatures. People use their cards over the phone and on the Internet, where the card’s existence isn’t even verified. The credit card companies spend their security dollar authenticating the transaction, not the cardholder.

To put it another way, the two-factor approach is fundamentally a defense against a particular type of attack: stealing or guessing passwords.  Focusing on authenticating the transaction is more fundamental, in the sense that it is focused on preventing the crime (fraud) rather than on foiling particular criminal tactics.
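To make the distinction concrete, here is a minimal sketch, with invented names, of what authenticating the transaction (rather than the user) might look like: a shared secret is used to sign the details of each transfer, so a man-in-the-middle who alters the payee or the amount invalidates the authentication, even if he has captured the user's credentials.  This is only an illustration of the principle, not any real banking protocol.

```python
import hmac
import hashlib

def sign_transaction(key: bytes, payee: str, amount_cents: int, nonce: str) -> str:
    """Compute an authentication tag over the transaction's details.
    The nonce prevents a captured tag from being replayed later."""
    message = f"{payee}|{amount_cents}|{nonce}".encode()
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_transaction(key: bytes, payee: str, amount_cents: int,
                       nonce: str, tag: str) -> bool:
    """The bank recomputes the tag over the details it actually received."""
    expected = sign_transaction(key, payee, amount_cents, nonce)
    return hmac.compare_digest(expected, tag)

key = b"secret-shared-with-the-bank"  # e.g., held in a hardware token

# The customer authorizes a specific transfer:
tag = sign_transaction(key, "ACME Corp", 12500, "nonce-001")
assert verify_transaction(key, "ACME Corp", 12500, "nonce-001", tag)

# An attacker who intercepts the session and changes the payee
# cannot reuse the tag, because it no longer matches the details:
assert not verify_transaction(key, "Mallory", 12500, "nonce-001", tag)
```

The point is that the tag binds the authentication to *this* transfer, to *this* payee, for *this* amount; a stolen password, or even a stolen one-time code, buys the attacker nothing he can redirect.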

This, of course, is primarily a job for the banks, rather than for their customers.  I think it still makes sense for customers to take reasonable steps to protect themselves.

I think there is one more lesson to be learned from the credit card example.  The card issuers started to take security seriously when legislation was enacted that put a $50 limit on the cardholder’s liability for fraudulent use in most cases.  As I’ve discussed before, this removed an economic externality, and made the issuers, who are the ones in a position to address the fraud problem, responsible for the costs of not doing so.
