Correspondence Courses

September 25, 2009

When I moved, not too long ago, one of the things I was sorting through was a bunch of old files containing letters to and from friends and colleagues.  (For younger readers who are unfamiliar with this idea, we used to actually write messages on paper, put them in envelopes, and send them via snail-mail to people we knew.  It was fun to get and send them, providing a break from sharpening our stone axes and hunting mastodons.)  Looking back on that, it is really amazing how much the technology of personal communications has changed; it’s tempting to think that the technological change has produced a corresponding change in our habits.

However, people’s communication habits have stayed remarkably consistent, according to an article on the PhysOrg.com Web site reporting a study by researchers at Northwestern University, published today in Science [abstract]:

A new Northwestern University study of human behavior has determined that those who wrote letters using pen and paper — long before electronic mail existed — did so in a pattern similar to the way people use e-mail today.

The study examined the correspondence history of sixteen well-known historical personalities, ranging from Sir Francis Bacon, as far back as 1574, to writer Carl Sandburg, as recently as 1966.  It has been suggested that people’s use of E-mail is driven primarily by the need to respond to others (and that may be the case for a certain amount of business E-mail), but the study found that personal correspondence by E-mail followed the same patterns as pen-and-ink mail.

No matter what their profession, all the letter writers behaved the same way. They adhered to a circadian cycle; they tended to write a number of letters at one sitting, which is more efficient; and when they wrote had more to do with chance and circumstances than a rational approach of writing the most important letter first.

The researchers found that, with some adjustments to time scales, the same behavior models could describe both the historical correspondence and contemporary E-mail.

(As an aside, the time scale adjustment may in some cases be less than you might think.  In the late 19th and early 20th centuries, it was possible, even commonplace, for someone in London to send a letter to a friend in Oxford, inviting him to dinner that evening — and to receive a reply by a later post that day.)

People in some ways are amazingly adaptable when it comes to using technology; but there are some parts of our psychological make-up that tend to be pretty stable.


Chrome-Plated Browsing

September 24, 2009

Sometimes you almost feel sorry for Microsoft.  First, Bill Gates, in the first edition of his book, The Road Ahead, in 1995, dismissed the Internet as a fad.  Then, after realizing it might turn out to be a bit more than that (and rushing out a new, corrected edition of the book), he focused Microsoft’s energies on adapting to the Internet.  And despite some inconveniences, like an anti-trust trial, Microsoft managed to see off Netscape, the maker of the only competitive Web browser.  No sooner had they relaxed, though, than the Mozilla organization introduced the Firefox browser (originally called Phoenix, then Firebird), and began to eat some of Microsoft’s lunch.

Now Google has gotten into the act, too.  The Google Chrome browser was introduced in September of last year, and has quickly gained a reputation for its speed, as well as for some new approaches to stability and security.  In particular, Chrome incorporates a new JavaScript engine called V8, which was designed for high performance; it also does a considerably better job of following Web standards than Microsoft’s Internet Explorer does, especially in IE’s older versions.  Google makes its money by selling advertising, so it has an obvious interest in getting more people to spend more time surfing the Web.

Google has recently upped the ante by announcing the introduction of a browser “plug-in” for Internet Explorer, called Chrome Frame, that it hopes can give users a more up-to-date browsing capability:

A number of modern Web features cannot be used pervasively on the Internet because Microsoft’s dominant browser, Internet Explorer, often fails to support current and emerging standards. Google has a plan to drag IE into the world of modern browsing by building a plugin that will allow it to use Chrome’s HTML renderer and high-performance JavaScript engine.

In essence, the plug-in replaces most of the user-visible parts of Internet Explorer with corresponding pieces of Chrome.  One motivation is to allow organizations that cannot quickly switch to a non-Microsoft browser, because of legacy applications that depend on Internet Explorer’s “features”, to have access to more modern Web functionality.  The plug-in, like Chrome itself, will be open source, so users have the opportunity to do their own tweaking.

Google is opening the source code now to get feedback and assistance with testing. The plugin will include Google’s speedy V8 JavaScript engine, support for Canvas, SVG, and all of the other features that users enjoy today in Chrome.

The set-up of Frame allows a Web page designer to add a tag to the page’s HTML to indicate that it can take advantage of new features that Frame provides.  The user can also specify that Frame be used to render a particular page, instead of the normal Internet Explorer rendering code.  Potentially, the Chrome plug-in could be something of a Trojan horse to help Google get more ensconced on the Windows desktop.
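
For the curious, here is a minimal sketch, in Python’s standard library, of what that opt-in looks like from the server side.  This is nothing Google ships; the “X-UA-Compatible” / “chrome=1” tag and header values are taken from Google’s Chrome Frame documentation of the time, so treat the exact details as illustrative rather than authoritative.

    # A minimal sketch of a server opting a page in to Chrome Frame.
    # The meta tag and the equivalent HTTP header are as described in
    # Google's Chrome Frame documentation; treat the values as assumptions.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"""<!DOCTYPE html>
    <html>
      <head>
        <!-- Ask IE (if the Chrome Frame plug-in is installed) to render with Chrome -->
        <meta http-equiv="X-UA-Compatible" content="chrome=1">
        <title>Chrome Frame demo</title>
      </head>
      <body><p>With Chrome Frame installed, this page uses Chrome's renderer and V8.</p></body>
    </html>"""

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            # The same opt-in can also be delivered as an HTTP response header.
            self.send_header("X-UA-Compatible", "chrome=1")
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()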

Microsoft countered the Google announcement more or less immediately by issuing a statement that using the Chrome plug-in might make Internet Explorer less secure.  Considering IE’s security record, this might seem to some of us like carrying coal to Newcastle.  It is true that plug-ins can be problematic from a security perspective, not least because there is often no effective means of ensuring that users apply security updates when they are released.  But Microsoft cited no specific issues; I doubt that was because they just forgot.

A preliminary set of performance tests with the Chrome Frame plug-in has been carried out by the Computerworld / Techworld publications.  As with the Chrome browser itself, the increase in JavaScript performance was quite impressive:

According to tests run by Computerworld, Internet Explorer 8 (IE8) with the Chrome Frame plug-in was 9.6 times faster than IE8 on its own. Computerworld ran the SunSpider JavaScript benchmark suite three times each for IE8 with Chrome Frame, and IE8 without the plug-in, then averaged the scores.

Of course, performance isn’t everything. But JavaScript is becoming more heavily used all the time to add features to Web sites; some sites, like Facebook or WordPress.com, effectively could not function without it.  In any case, having more competition in the Web browser market is good news for consumers.

The Chrome Frame plug-in will work with Internet Explorer versions 6, 7, and 8, running under Windows XP or Vista.  The currently available version is intended for testing and development, and is not recommended for sensitive or production applications.  You can find more information and downloads here.


Quantum Computing Chip

September 23, 2009

“I think I can safely say that nobody understands quantum mechanics.”
— Richard Feynman, in The Character of Physical Law

Over the weekend, I posted a note about Mozilla Firefox getting an exemption from some of the normal regulations on the export of strong cryptography.  The cryptography in question is used to implement TLS/SSL secure browsing (the https: protocol) between the Web site and the user’s computer.  It employs a cryptographic technique called a public-key algorithm.  Without going into the details, these algorithms rely for their security on the difficulty of factoring very large numbers.  (By factoring, I’m referring to the unique factorization of any positive integer as a product of prime numbers — according to the fundamental theorem of arithmetic.)  More specifically, there is no known classical algorithm that can compute factorizations in polynomial time.
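
To make the asymmetry concrete, here is a toy Python sketch (not real cryptography, and not any particular algorithm’s key-generation code) showing the obvious approach.  Trial division works instantly on small numbers, but its cost grows roughly with the square root of N, which is exponential in the number of bits.

    # A toy illustration of why factoring is the hard problem behind these
    # algorithms: trial division finds small factors quickly, but its running
    # time grows roughly with sqrt(N) -- exponential in the bit length of N --
    # so it is hopeless against real public-key moduli.
    def trial_division(n):
        """Return the prime factorization of n, smallest factors first."""
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    # A toy 32-bit "modulus" built from two 16-bit primes: factored instantly.
    print(trial_division(65521 * 65537))      # [65521, 65537]
    # A real 1024- or 2048-bit modulus is hundreds of decimal digits long; no
    # known classical algorithm factors numbers of that size in practical time.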

The difficulty of factoring large integers using conventional algorithms and computers is fairly well understood.  It is always possible, of course, that a breakthrough will be found tomorrow, but the problem has been extensively studied, and such a breakthrough seems unlikely.  However, there have been discussions of the use of quantum computing, which embodies principles from quantum mechanics and in theory can solve problems like factorization much more rapidly than conventional computers.  A technique called Shor’s algorithm can, in theory, compute factorizations in polynomial time.  It is difficult to explain how this works without actually digging into the mathematics, but the effect is similar to a massively parallel attack on the problem.
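
For those who want a slightly more concrete picture, here is a toy Python sketch of the classical scaffolding around Shor’s algorithm.  The only step a quantum computer is needed for is order-finding, which is simply brute-forced below; this is an illustration of the reduction, not an implementation of the quantum algorithm itself.

    # The quantum speedup in Shor's algorithm lives entirely in order-finding:
    # the smallest r with a**r = 1 (mod N).  Given r, the factors usually fall
    # out of a pair of gcd computations, as sketched here.
    from math import gcd
    import random

    def find_order(a, n):
        """Smallest r >= 1 with a**r % n == 1 (the step a quantum computer accelerates)."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_reduction(n):
        """Factor n (odd, composite, not a prime power) via the order-finding reduction."""
        while True:
            a = random.randrange(2, n)
            g = gcd(a, n)
            if g > 1:
                return g, n // g                  # lucky: a already shares a factor with n
            r = find_order(a, n)
            if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
                f = gcd(pow(a, r // 2, n) - 1, n)
                if 1 < f < n:
                    return f, n // f

    print(shor_reduction(15))                     # e.g. (3, 5) -- the same toy case as the experiment below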

This is a very cool bit of mathematics.  It has not had any practical application to date, because no one has yet figured out a way to build a quantum computer of reasonable scale.  Now the IEEE Spectrum has a report that researchers at the University of Bristol in the UK have constructed a chip-scale quantum computer, and used it to factor a number using Shor’s algorithm.

There have been various dire and hyperventilating predictions about what this might mean for security, Internet commerce, and the future of Western civilization.  I think we can relax for a while.  First, the number that was actually factored was only four bits long: 15.  And 15 is not just any four-bit number; it has the special form:

(2ⁿ + 1)(2ⁿ – 1), with n = 2, or

5 · 3 = 15

the binary forms of which make factoring easier.  This is not to minimize the researchers’ achievement, just to say that you will probably not see this technology at Best Buy in time for Christmas.  Current, readily available cryptography software can use 4096-bit keys, and the scaling problems of the quantum technology are formidable.  Mordaxus, at the Emergent Chaos blog, has an essay on the likely near-term impact of this technology (answer: very little).

Perhaps more to the point, Bruce Schneier has pointed out that all of this is still many years away.  Also, although the effects on current public-key cryptography would be severe, the impact on traditional, symmetric (secret-key) cryptography would be considerably less, amounting essentially to reducing the effective key length by half (the speed-up offered by Grover’s quantum search algorithm).

This is interesting and important research, but your browsing sessions and PGP-encrypted E-mails are probably safe for a while.  And, as Schneier also points out, security is a chain, only as strong as its weakest link; there are many links in current security protocols much weaker than the cryptography they use.


Unvanishing Act

September 22, 2009

Back in July, I posted a note about a new approach to ensuring the deletion of sensitive information on the Internet.   Researchers at the University of Washington had developed an experimental software package called Vanish, which first encrypted sensitive data, and then stored the encryption keys on a public peer-to-peer [P2P] filesharing network, using a trick that would result in the keys’ effectively disappearing within a predictable time.

Now the New York Times has a report that another group of researchers, from the University of Texas at Austin, Princeton University, and the University of Michigan, has developed a way to defeat the Vanish system and ensure that the data can be “un-Vanished”.  The researchers claim that their technique is highly effective:

“In our experiments with Unvanish, we have shown that it is possible to make Vanish messages ‘reappear’ long after they should have ‘disappeared’ nearly 100 percent of the time,” the researchers wrote on a Web site that describes their experiment.

The Vanish system works, as I noted above, by encrypting a data object [VDO] with a secret encryption key generated by the Vanish system, and then breaking that key into a large number of pieces.  The pieces are then stored in a distributed hash table [DHT] on a peer-to-peer file sharing network, Vuze.  (A hash table is essentially a database consisting of {name, value} pairs, like a list of user IDs and real names.  A distributed hash table exists in sections on different networked machines.)  The implementation of the Vuze DHT requires each node to store copies of data held by its “neighbor” nodes.  The stored values are deleted after a defined time interval (usually eight hours) has passed.  The Vanish approach relies on this deletion to make the encryption key “disappear” after a time; once the key is lost, the encrypted data is no longer readable.
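
A toy sketch in Python may make the mechanism clearer.  This is a simplification, not the real system: actual Vanish splits the key with threshold secret sharing and scatters the shares across the live Vuze DHT, whereas the sketch below uses a simple XOR split (all shares required) and an in-memory dictionary standing in for the network.

    # A toy model of the Vanish idea: split the VDO's encryption key into shares,
    # push the shares into a "DHT" that forgets values after a fixed lifetime.
    import secrets, time

    DHT = {}                  # {name: (value, stored_at)} -- stands in for the Vuze nodes
    LIFETIME = 8 * 3600       # stored values are forgotten after roughly eight hours

    def xor_split(key, n):
        """Split key into n shares; XOR-ing all of them together recovers it."""
        shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
        last = key
        for s in shares:
            last = bytes(a ^ b for a, b in zip(last, s))
        return shares + [last]

    def xor_join(shares):
        out = shares[0]
        for s in shares[1:]:
            out = bytes(a ^ b for a, b in zip(out, s))
        return out

    def vanish_store(key, n=10):
        """Split the VDO's key and store the shares in the (simulated) DHT."""
        names = []
        for share in xor_split(key, n):
            name = secrets.token_hex(8)            # pseudo-random DHT index
            DHT[name] = (share, time.time())
            names.append(name)
        return names                               # kept with the VDO for later lookups

    def dht_lookup(name):
        """Return a stored share, or None once the node has 'forgotten' it."""
        if name in DHT:
            value, stored_at = DHT[name]
            if time.time() - stored_at < LIFETIME:
                return value
            del DHT[name]
        return None

    key = secrets.token_bytes(16)                  # the key that encrypted the VDO
    names = vanish_store(key)
    assert xor_join([dht_lookup(n) for n in names]) == key   # works inside the window
    # Once the DHT entries expire, the lookups return None, the key cannot be
    # rebuilt, and the encrypted VDO is effectively unreadable.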

The approach used in Unvanish is straightforward.  It uses a small number of machines to masquerade as a much larger set of machines.  By gaining access to the Vuze DHT, the Unvanish software is able to read and store any {name, value} pairs that resemble Vanish keys:

Unvanish reads the shares comprising a VDO’s encryption key at any time during the window between its creation and expiration. It then stores an archive version of the key outside of the DHT.

As long as the keys are still available, the original data object can be decrypted.  As far as I can see, the Unvanish system can only work contemporaneously; that is, it cannot recover the keys for data objects that have already disappeared.
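
Continuing the toy simulation above, the Unvanish counter-move amounts to a hoarding node that copies anything resembling a Vanish key share into its own archive before the expiry window closes.  The filter below is a purely hypothetical placeholder, not the researchers’ actual heuristics.

    # Unvanish, in miniature: harvest likely key shares from the DHT while they
    # still exist, and keep them somewhere that never expires.
    ARCHIVE = {}

    def looks_like_vanish_share(value):
        return len(value) == 16                    # placeholder heuristic, not the real test

    def hoard():
        """Run periodically, well inside the eight-hour window, by the attacker's nodes."""
        for name, (value, _stored_at) in DHT.items():
            if looks_like_vanish_share(value):
                ARCHIVE[name] = value              # stored outside the DHT, so it never expires

    # After expiry, dht_lookup() fails, but ARCHIVE still holds the shares, so the
    # key -- and therefore the "vanished" data -- can be reconstructed.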

This kind of back-and-forth of claims and counter-claims is normal, and healthy, in the security field especially.  It is the mechanism by which good systems are eventually found (if they are).  It is more or less a truism that anyone can design a security system that he himself cannot break.   But then, he’s not the one we’re worried about.


More on Two-Factor Authentication

September 22, 2009

A couple of days ago I posted a note about a new trend in attacks on two-factor authentication systems.  Bruce Schneier also has a post on this in his Schneier on Security blog.  He argues, and I agree, that the fundamental issue here is that the two-factor approach is solving the wrong problem, that what is needed is to authenticate the transaction, not the user.

Credit cards are a perfect example. Notice how little attention is paid to cardholder authentication. Clerks barely check signatures. People use their cards over the phone and on the Internet, where the card’s existence isn’t even verified. The credit card companies spend their security dollar authenticating the transaction, not the cardholder.

To put it another way, the two-factor approach is fundamentally a defense against a particular type of attack: stealing or guessing passwords.  Authenticating the transaction is more fundamental, in the sense that it aims at preventing the crime (fraud) rather than at foiling particular criminal tactics.
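
As a concrete, and entirely hypothetical, sketch of what transaction authentication can look like, here is a Python fragment that computes a message authentication code over the details of one specific transfer.  The field names and key handling are illustrative only, not any bank’s actual protocol.

    # Authenticate the transaction, not the user: a token (or the bank's app)
    # computes a MAC over this particular transfer, so malware that hijacks an
    # authenticated session still cannot get a different payee or amount approved.
    import hmac, hashlib

    def sign_transaction(shared_key: bytes, payee: str, amount_cents: int, nonce: str) -> str:
        message = f"{payee}|{amount_cents}|{nonce}".encode()
        return hmac.new(shared_key, message, hashlib.sha256).hexdigest()

    def verify_transaction(shared_key: bytes, payee: str, amount_cents: int,
                           nonce: str, tag: str) -> bool:
        return hmac.compare_digest(sign_transaction(shared_key, payee, amount_cents, nonce), tag)

    # Because the payee and amount are bound into the MAC (and ideally confirmed on
    # the token's own display), tampering with them in the browser invalidates the tag.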

This, of course, is primarily a job for the banks, rather than for their customers.  I think it still makes sense for customers to take reasonable steps to protect themselves.

I think there is one more lesson to be learned from the credit card example.  The card issuers started to take security seriously when legislation was enacted that put a $50 limit on the cardholder’s liability for fraudulent use in most cases.  As I’ve discussed before, this removed an economic externality, and made the issuers, who are the ones in a position to address the fraud problem, responsible for the costs of not doing so.


FCC on Net Neutrality

September 21, 2009

Today, Julius Genachowski, chairman of the Federal Communications Commission, gave a widely-awaited speech on the subject of Net Neutrality.  He pointed out (correctly, in my view) that much of the success of the Internet, including its success in areas undreamed-of by its founders, is in large part due to its open standards and architecture.

His proposal for moving forward is centered on the development of four principles that the FCC has already articulated for addressing individual cases:

To date, the Federal Communications Commission has addressed these issues by announcing four Internet principles that guide our case-by-case enforcement of the communications laws. These principles can be summarized as: Network operators cannot prevent users from accessing the lawful Internet content, applications, and services of their choice, nor can they prohibit users from attaching non-harmful devices to the network.

He proposes extending this framework by adding two additional principles of non-discrimination and transparency:

  • Broadband providers cannot discriminate against particular Internet content or applications.
  • Providers of broadband Internet access must be transparent about their network management practices.

He also believes that this framework should in principle apply to all broadband providers, whether fixed-line or mobile, with the understanding that some details may need to be adjusted for particular service environments:

Even though each form of Internet access has unique technical characteristics, they are all different roads to the same place. It is essential that the Internet itself remain open, however users reach it. The principles I’ve been speaking about apply to the Internet however accessed, and I will ask my fellow Commissioners to join me in confirming this.

The chairman’s intention is to embody some more specific proposals in a forthcoming Notice of Proposed Rulemaking, to make them available for public discussion and comment.  The FCC has also launched a new Web site, www.openinternet.gov, to provide a focus for discussion.  (Perhaps taking a leaf out of Google’s book, the site is even labeled as “beta”.)

This seems to me to be a positive development for consumers. We can expect to hear some noisy opposition, particularly from the large network providers, who would very much like to find a way to skim some revenue from the streams of data that are flowing through their particular “tubes”; if past experience is any guide, some of these arguments will be (to use a lovely British phrase) quite economical with the truth.

I will discuss this in more detail in a subsequent post.  You can read Mr. Genachowski’s speech at the new Web site; you can also download the text [PDF].

