And the Winner Is …

March 30, 2011

A little over a year ago, I wrote about Google’s proposed high-speed Internet access experiment, in which the company planned to select a community in the US in which to install a 1 Gbps fiber network.   This stimulated a “beauty contest” among the candidates, which I have written about before.   The selected community was originally supposed to be announced by the end of 2010; but, as I noted in December of last year, Google felt it should take some extra time over the decision, owing to the very high level of interest: more than 1,000 communities applied.

According to an announcement on the Official Google Blog, a winner has now been selected: Kansas City, Kansas.  (For readers outside the US, this is not quite as redundant as New York, New York.  There are two cities called Kansas City: one, the winner, in Kansas; the other in Missouri, just across the Missouri River.)

After a careful review, today we’re very happy to announce that we will build our ultra high-speed network in Kansas City, Kansas. We’ve signed a development agreement with the city, and we’ll be working closely with local organizations, businesses and universities to bring a next-generation web experience to the community.

In a way, this is an inspired choice.  If one wished to select a quintessential mid-American city, it would be hard to top one of the Kansas Cities. (Disclaimer: a significant part of my extended family came from within ~200 miles of Kansas City.)   It has always been a solid, decent, comfortable, mainstream USA kind of place.   At the same time, as far as I know, the area is not a hotbed of new ventures; perhaps Google’s initiative will spark some new enterprises.

Slashdot noted this development, with an amusing bit of (apparent) geographic confusion:

The city of Topeka had actually temporarily renamed itself Google, Kansas, the capital city of fiber optics, in a move to get Google to lay fiber there. It seems to have worked, because a deal has just been signed to roll out fiber in the city, which should be available to everyone in the area by 2012.

(I wrote about Google, Kansas, here.)  Topeka is ~45 miles away from Kansas City and, as far as I know, has no connection to it other than being in the same state.

I will be most interested to see how this works out.

HTML 5: WebSockets

March 29, 2011

With the recent releases of Mozilla’s Firefox 4 and Microsoft’s Internet Explorer 9, and Google’s frequent updating of Chrome, current releases of the major browsers now include significant support for HTML 5, the latest iteration of the standard markup language for constructing Web pages.  One of the aims in developing HTML 5 has been to standardize HTML usage, which has developed into a sort of organic hodge-podge of various standards, features introduced by specific software packages (e.g., Web browsers), and miscellaneous hacks.  This has allowed the Web to be developed well beyond its initial conception, but is a bit messy.

One of the areas that HTML 5 attempts to address is application programming interfaces [APIs] for some basic functions.  One interface that is supported in HTML 5 is a new Web protocol called WebSockets.  The Technology Review has an overview article describing this new technology, which provides Web-based applications with a means of maintaining persistent communications between the client and the server, and of passing data back and forth as the application runs.

The original HTTP [Hyper-Text Transfer Protocol], formalized in version 1.0 in 1996, was designed around the original idea of the Web as a network of inter-linked documents.  It is a request-response protocol, and was designed to be stateless, meaning that the server did not maintain any “memory” of a client, once any outstanding request had been satisfied.   (This means that the concept of “logging in” to a Web site is, with pure HTTP, meaningless.)   This was done for a reason; a stateless server is much easier to restart, or to replace with a standby server;  stateless operation also makes the use of caching proxies much easier.  The familiar phenomenon of browser “cookies” was introduced in order to save state on the client, and pass it back to the server with the next request.
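The cookie mechanism described above can be sketched with Python’s standard `http.cookies` module.  This is only an illustration of the round trip: the `session_id` name and value are made up for the example, not taken from any particular site.

```python
# Sketch: how a cookie carries state across stateless HTTP requests.
# The "session_id" name and value are made up for illustration.
from http.cookies import SimpleCookie

# Server side: the first response attaches a Set-Cookie header,
# handing the state to the client for safekeeping.
response_cookie = SimpleCookie()
response_cookie["session_id"] = "abc123"
print(response_cookie.output(header="Set-Cookie:"))
# Set-Cookie: session_id=abc123

# Client side: the browser stores the cookie and echoes it back with
# the next request, restoring the "memory" the stateless server lacks.
request_cookie = SimpleCookie()
request_cookie.load("session_id=abc123")
print(request_cookie["session_id"].value)
# abc123
```

Since the server itself keeps nothing between requests, it can be restarted or swapped for a standby at any time, which is exactly the property the protocol’s designers wanted.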

Although the cookie mechanism was an effective hack around the lack of state on the server, HTTP still required a request-and-response pattern, in which the client sends a request and then waits for the response, for all communications between the browser and the server.  This is a potential handicap for interactive applications generally, and especially for applications that have timing requirements or dependencies.  What the new protocol does is to provide the ability to set up an alternate communications channel, which, once opened, stays open until closed, much like an ordinary file.  This permits data to be passed back and forth with much less processing and network overhead.

This protocol [WebSockets] allows a Web client to create a connection, keep it open as long as it wants, and both send and receive data continuously.
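The persistent connection begins with an ordinary HTTP request in which the client asks to “upgrade” the connection; the server proves it understood by hashing a nonce the client supplies.  The IETF draft specifies the computation exactly, so a minimal sketch of the server’s side of the handshake can be written with nothing but the standard library (the sample nonce below is the one used in the specification itself):

```python
# Sketch of the WebSocket opening handshake computation, per the IETF draft
# (later RFC 6455): the server concatenates the client's Sec-WebSocket-Key
# with a fixed "magic" GUID, then returns the base64-encoded SHA-1 digest.
import base64
import hashlib

GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed string from the spec

def accept_key(client_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value for a client's key."""
    digest = hashlib.sha1((client_key + GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# The sample nonce given in the specification:
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Once the client checks this value in the server’s `101 Switching Protocols` response, both sides drop out of the request-and-wait pattern and exchange framed messages freely over the same TCP connection.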

Some of the application providers showing early interest are financial trading sites, and gambling sites (but then, I repeat myself).

If you are interested in the technical nitty-gritty, the IETF site has a draft of the protocol specification; the W3C has a specification draft for the WebSockets API.

Air Freshener

March 28, 2011

Today is the 192nd anniversary of the birth, in 1819, of Sir Joseph Bazalgette.  Sir Joseph was a civil engineer, not a physician; but, through his work, he probably contributed at least as much to human health as any doctor then living.  He was the Chief Engineer of the Metropolitan Board of Works in London, and designed a new sewer system for London, a design accepted by the Board in 1859.

For centuries, one of London’s primary sources of water for drinking and washing had been the River Thames and its tributaries.  Unfortunately, many of these streams were highly polluted with sewage and industrial waste, from local businesses like tanneries, slaughterhouses, and breweries.  The problem of contaminated water had become acute by the 19th century, due to the growth of industry and increases in population.  Ironically, another factor contributing to the problem was the introduction of the water closet (a water-flushed toilet), which greatly increased the volume of sewage produced, overflowing the cesspits that were a principal disposal mechanism, and sending raw sewage into the rivers via the storm drains.

Epidemics of cholera became common in the first half of the century.  Although a London physician, Dr. John Snow, had concluded that the disease was caused by contaminated water, the idea was not widely accepted in the 1850s; consensus opinion was that cholera was caused by a miasma, or contaminated air, recognizable by its foul smell.   (I’ve noted in another context that travelers in the 17th century, approaching a city at night, could smell its proximity long before they could see lights.)

In the summer of 1858, the weather was unusually hot and dry, and the flow of the Thames and its tributaries was below normal.  Exposing more of the beds of these streams, which were in many cases essentially open sewers, resulted in a pervasive smell of sewage throughout London, an event still known as the Great Stink.  There was grave concern for health, especially since most people still thought cholera was spread by bad air.  Curtains soaked in chloride of lime were hung at the windows of Westminster Palace to protect Members of Parliament, and plans were underway to evacuate Parliament and the Law Courts to upstream locations away from London.  (The Today in Science site has a short satirical piece from a then-current issue of Punch, the humour magazine.)

Cooler weather and heavy rains finally brought the Stink to an end, and Bazalgette’s sewer system design was undertaken to prevent such “bad air” from returning.   The system was constructed, at very considerable expense, during the period 1859-1865, and is still in use today, in part because Sir Joseph was a very conservative engineer, and provided capacity considerably in excess of the requirements at the time.  The system created a network of sewer “tributaries”, with small local sewers leading into about 450 miles of main sewers, which in turn emptied into six large “intercepting” sewers, three on each side of the Thames, which carried the sewage to the Thames Estuary, downstream from London.  (Today, of course, it goes through treatment plants first.)

When asked to think of actions that improved human health, many people cite the discovery of antibiotics, or perhaps the discoveries of how diseases are transmitted (by mosquitoes, for example).  These were certainly important, but major improvements came from civil engineering projects, like the London sewers, that finally provided people, especially in cities, with reliably clean water.

Rewriting the Present

March 27, 2011

Yesterday, I posted a note about some of the methods that political dissidents and their supporters are using to circumvent attempts by authoritarian governments to block access to the Internet.  China also has an authoritarian government, run by the Communist party, although it has allowed more economic freedom than has been typical under such regimes.  The Chinese government’s approach to controlling the Internet has also been a bit more nuanced than that of Libya or Egypt.

Technology Review had an article this week on one approach that, according to Google, the Chinese government is using to alter the results of Web interactions “on the fly”.   The company believes this activity is responsible for apparent “technical problems” experienced by some Chinese users.

The Chinese government is thought to have tightened communications in response to political unrest in the Middle East. Google discovered that problems with Gmail from within China came in the form of an attack that caused the Web application to freeze when a user took certain actions, such as clicking the “send” button.

The technique that China is apparently using is not new, and in fact is in common use by many organizations.  It employs a Web “proxy server” between the user and the global Internet; all traffic to and from the Web must pass through the proxy.  This means that software on the proxy server can examine, record, or modify messages passing in either direction.  It is, of course, enormously easier to introduce this kind of proxy if the government controls all of the available communications links with the outside world.  Defending against this kind of attack is very difficult, if the proxy system has authentic cryptographic credentials (which an ISP run by a national government almost certainly has).
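To make the proxy’s position concrete, here is a conceptual sketch (not a working network proxy) of what software at that vantage point can do to each message passing through.  The function name and the particular header it strips are made up for illustration:

```python
# Conceptual sketch: what software on an intercepting proxy can do to
# traffic in transit -- examine, record, and modify.  All names here
# (proxy_filter, the example request) are made up for illustration.

def proxy_filter(raw_request: str, log: list) -> str:
    """Examine, record, and optionally modify one request in transit."""
    lines = raw_request.splitlines()
    # 1. Examine: parse out the request line.
    request_line = lines[0]
    # 2. Record: the proxy operator sees the message contents.
    log.append(request_line)
    # 3. Modify: e.g., silently drop a header before forwarding.
    kept = [ln for ln in lines if not ln.lower().startswith("accept-encoding:")]
    return "\r\n".join(kept)

log = []
request = ("GET /mail HTTP/1.1\r\n"
           "Host: example.com\r\n"
           "Accept-Encoding: gzip\r\n\r\n")
filtered = proxy_filter(request, log)
print(log)                            # ['GET /mail HTTP/1.1']
print("Accept-Encoding" in filtered)  # False
```

Neither endpoint can easily detect the tampering, which is why the attack is so hard to defend against when the proxy also holds credentials the client will accept.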

As in basic cryptology, there is always an arms race between those who want to protect information, and those who want to disclose it.

Apple Releases Mac OS X 10.6.7

March 26, 2011

Apple has released a new version, 10.6.7, of its OS X operating system for Mac computers.  This update includes fixes for a number of security vulnerabilities, along with other bug fixes and miscellaneous improvements.   More details on what’s in the update are available here, and you can download the update from the Support Downloads page.

Because of its security content, I recommend that you install this update as soon as you conveniently can.

Dissident Tech

March 26, 2011

With the recent rise in anti-government protests in the Middle East, there has been renewed interest in the use of the Internet as a means of communication.   Authoritarian governments, such as those in Egypt and Libya, have reacted by trying to shut down Internet service, or to sever connections with the outside world.

Last week’s issue of The Economist has an article on some of the methods that dissidents have used to circumvent these restrictions.  Some of the techniques used, such as making directional antennas for cell phones from metal cans and other hardware, in order to extend their range considerably, have been used by do-it-yourself enthusiasts for some time.  Others, like adapting satellite dishes to deliver wireless network connections over long distances, are relatively new.

Yahel Ben-David, an electrical engineer at the University of California, Berkeley, who has designed secret cross-border links to the internet for people in several countries, does so by adding standard USB dongles designed for home Wi-Fi networks. Thus equipped, two properly aligned dishes as much as 100km apart can transmit enough data to carry high quality video.
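The 100 km figure in the quote is plausible on a back-of-the-envelope link budget.  A rough sketch, assuming the 2.4 GHz Wi-Fi band, 60 cm dishes, and typical transmit power and receiver sensitivity (all of the specific numbers here are my illustrative assumptions, not from the article):

```python
# Back-of-the-envelope link budget for a dish-to-dish Wi-Fi link.
# All numbers (frequency, dish size, power, sensitivity) are
# illustrative assumptions, not taken from the article.
import math

freq_mhz = 2400.0    # 2.4 GHz Wi-Fi band
dist_km = 100.0      # dish-to-dish distance from the quote
dish_m = 0.6         # assumed 60 cm dish diameter
efficiency = 0.5     # typical parabolic aperture efficiency

# Free-space path loss (dB): standard formula with d in km, f in MHz.
fspl_db = 32.44 + 20 * math.log10(dist_km) + 20 * math.log10(freq_mhz)

# Gain of one parabolic dish (dBi): 10*log10(eta * (pi*D/lambda)^2).
wavelength_m = 299.792458 / freq_mhz   # c/f, with f in MHz, gives metres
gain_dbi = 10 * math.log10(efficiency * (math.pi * dish_m / wavelength_m) ** 2)

tx_power_dbm = 20.0          # ~100 mW, a common consumer limit
rx_sensitivity_dbm = -90.0   # rough Wi-Fi threshold at low data rates

rx_power_dbm = tx_power_dbm + 2 * gain_dbi - fspl_db
print(round(fspl_db, 1))     # ~140 dB of path loss over 100 km
print(round(gain_dbi, 1))    # ~20 dBi of gain per dish
print(rx_power_dbm > rx_sensitivity_dbm)  # True: the link closes, on paper
```

With roughly 20 dBi from each dish making up for 140 dB of path loss, the received signal lands above the receiver’s threshold with about 10 dB to spare; the same dongles with their stock omnidirectional antennas would fall far short, which is the whole point of pressing the dishes into service.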

Techniques have also been worked out that can use satellite transmissions, ham radio, and portable “backpack” FM radio transmitters.  One advantage of many of these techniques, particularly those that use directional transmissions, is that they are much harder for the authorities to track down than the omni-directional transmissions from more conventional setups.

Some of the more creative approaches even use conventional consumer products as covert radio transmitters.

Kenneth Geers, an American naval-intelligence analyst at a NATO cyberwar unit in Tallinn, Estonia, describes a curious microwave oven. Though still able to cook food, its microwaves (essentially, short radio waves) are modulated to encode information as though it were a normal radio transmitter.

From time to time, various idealistic folks have forecast a future in which the Internet would bring free and open communications to everyone.  As with many idealistic predictions, reality has often refused to cooperate.  But the proliferation of methods does make a difference, since a government that wants to censor information effectively has to control all channels, not just some.  For those authoritarian regimes, it creates a situation a former colleague described (admittedly in a different context) as similar to “being attacked by ants”.   You can step on a lot of them, but it’s awfully difficult to make sure none gets through.

