Rebuilding the Tunny Machine

August 19, 2011

Back in May, I wrote about a project to reconstruct a Tunny machine at Bletchley Park in the UK, home of the National Museum of Computing.  Bletchley Park was also the home, during World War II, of the Government Code and Cypher School, also called Station X, which was the center of British code-breaking efforts against the Axis powers.  The Tunny machine was used in breaking the German Lorenz cipher, which carried communications between the Nazi high command in Berlin and its field commanders.

The UK magazine PC Pro has an interesting article giving a closer look at the reconstruction project.  The volunteers who undertook the task did not, at least initially, have a lot to go on.

A single photograph, scraps of circuit diagrams drawn from memory and a pile of disused components – it isn’t much to go on, but from such meagre beginnings, engineers rebuilt one of the precursors to the modern computer.

The team did have one photograph of a Tunny machine, but it was only a general image of the room that housed it.  There were, however, one or two other lucky breaks.

Another lucky break was the discovery of the lead engineer’s notes stashed away in an envelope in a toilet after the war. Pether’s [John Pether, one of the reconstruction team] fellow restorer, John Whetter, jokes that the team would have had more to work from if only British forces had stocked more toilet paper.

Perhaps the team’s most valuable asset was its collective experience working in telecommunications.  The Tunny machines, along with the Colossus computer, were built by technical staff from the Post Office’s telephone operations.  Naturally enough, the original engineers designed and built the equipment using the telephone system parts and circuit elements they were familiar with; those parts were also reasonably available during the war.

Pether worked for the GPO and BT for 36 years, and the Colossus and Tunny machines were built using standard telecommunications equipment – the very same bits that made up the GPO’s, and subsequently BT’s, network.

The use of telephone exchange components turned out to be advantageous to the restorers in another way.

The team started work on the code-breaking machines in the mid-1990s, about the same time as another massive project was coming to an end: BT’s move to digital exchanges. The decommissioning of the old equipment gave the engineers their pick of parts for the restoration.

“BT was kind enough to donate a lot of the components from these old exchanges that we needed for the Colossus rebuild and the Tunny,” Pether said.

Had it not been for the supply of old components, some of the electro-mechanical parts, such as relays, might have required custom manufacture.

Looking at photographs of the reconstructed machine, one is impressed by the enormous amount of skilled labor that was required to build either the original or the replica.  Many hundreds of wires had to be carefully routed, according to a detailed plan, through the frame of the device, which contains ~5,000 solder joints.  Modern test equipment did make calibration of the machine’s temperamental timing circuits a bit easier.

The Museum now has a complete exhibit replicating the Lorenz code-breaking operation.

The National Museum of Computing now has a full, working WWII code-breaking system set up at Bletchley Park: visitors can hear the radio signal, see the Colossus being programmed, and watch the message being processed through the Tunny.

It’s instructive and fun to see this early technology brought back to life.  It is also good that the really extraordinary work that was done at Bletchley Park is being recognized, after having been kept secret for so long.

 


IBM Unveils Cognitive Computing Chips

August 18, 2011

According to an article at Technology Review, IBM has developed a new type of processor chip intended for use in “cognitive computing” applications.  (IBM’s press release on the technology, called SyNAPSE, for Systems of Neuromorphic Adaptive Plastic Scalable Electronics, is here.)  What makes the new chip special is that it attempts to replicate, in hardware, the structures that carry out processing in a human brain.

… a new microchip made by researchers at IBM represents a landmark. Unlike an ordinary chip, it mimics the functioning of a biological brain—a feat that could open new possibilities in computation.

The brain’s inner workings are quite different from those of a typical computer.  It is a massively parallel processor, containing ~10¹¹ neurons and ~10¹⁴ synapses connecting them.  Each neuron is, in a sense, a combination of processor and memory; as the brain learns, the strengths of the synapse interconnections change.

I’ve talked before about the brain’s ability to solve some very difficult problems, such as facial recognition or understanding speech, seemingly with little effort, while a digital computer can do some things — find the roots of a polynomial, for example — very much faster than a person can.  IBM has had some success in using conventional computer technology to tackle more “human” problems, like playing chess and winning at Jeopardy!.  But those successes required a great deal of hardware horsepower.  The Watson system that won the Jeopardy! exhibition match used 10 racks of servers, containing 2,880 processor cores and 16 terabytes of memory.  I have not come across figures on its electricity consumption, but I’m sure it was substantial.  There are also basic physical constraints that limit how much faster and more densely packed circuit elements can become.  We now have processor chips that run at 5 GHz clock speeds; but the clock speed of our neurons is about 10 Hz, and the brain runs on about 10 watts.  Speed isn’t everything.

The idea of replicating the processing used by the brain is not new.  It underlies the neural network approach used in artificial intelligence research.  Most of this work has traditionally been done by constructing a model of neurons and synapses in software.  The IBM research group, led by Dharmendra Modha, started out with simulations run on very large-scale computers.
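
To make the idea of “constructing a model of neurons and synapses in software” a bit more concrete, here is a minimal, illustrative sketch of a toy spiking neuron with adjustable synaptic weights.  It is not IBM’s model; the class name, constants, and learning rule are all invented for the example.

```typescript
// Illustrative only: a toy neuron with adjustable synaptic weights,
// in the general spirit of software neural simulation.  Nothing here
// reflects IBM's actual design.

class ToyNeuron {
  potential = 0;                     // accumulated "membrane potential"
  readonly threshold = 1.0;          // fire when the potential crosses this
  weights: number[];                 // one adjustable "synapse" per input

  constructor(numInputs: number) {
    // Start with small random synaptic strengths.
    this.weights = Array.from({ length: numInputs }, () => Math.random() * 0.1);
  }

  // Process one time step of input spikes (1 = spike, 0 = quiet).
  step(inputs: number[]): boolean {
    // Accumulate weighted input, with a simple "leak" toward zero.
    const drive = inputs.reduce((sum, s, i) => sum + s * this.weights[i], 0);
    this.potential = this.potential * 0.9 + drive;

    const fired = this.potential >= this.threshold;
    if (fired) {
      this.potential = 0;            // reset after firing
      // Crude Hebbian-style learning: strengthen the synapses whose
      // inputs were active when the neuron fired.
      inputs.forEach((s, i) => {
        if (s === 1) this.weights[i] = Math.min(1, this.weights[i] + 0.01);
      });
    }
    return fired;
  }
}

// Drive the toy neuron with random spike trains for a while.
const neuron = new ToyNeuron(8);
for (let t = 0; t < 1000; t++) {
  const spikes = Array.from({ length: 8 }, () => (Math.random() < 0.3 ? 1 : 0));
  neuron.step(spikes);
}
console.log("learned weights:", neuron.weights.map(w => w.toFixed(2)));
```

The point of the toy is simply that each “neuron” keeps its own state (memory) right next to its computation, and learning happens by adjusting connection strengths; scaling that up to billions of neurons is what makes the software-only approach so expensive on conventional hardware.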

Modha’s group started by modeling a system of mouse-like complexity, then worked up to a rat, a cat, and finally a monkey. Each time they had to switch to a more powerful supercomputer. And they were unable to run the simulations in real time, because of the separation between memory and processor that the new chip designs are intended to overcome.

By placing memory and processing elements in close physical proximity, and by making the “synapse” connections adjustable, the team hopes to be able to run much more complex software faster and more efficiently.  Preliminary estimates are that the new chips might reduce the energy cost of running such software by as much as a factor of 1,000.

The research, which is being funded in part by the Defense Advanced Research Projects Agency [DARPA], is still at an early stage, but it has the potential for some ground-breaking work.  If we had a working model of a brain, we might even manage to understand a bit more about the ones inside our skulls.

The “Wired Science” blog at Wired has an interview with Dr. Modha on the team’s work.


Mozilla Releases Thunderbird 6.0

August 16, 2011

In addition to the Firefox release today, Mozilla has released a new version, 6.0, of its Thunderbird E-mail client, for Windows, Linux, and Mac OS X.  The new version incorporates the latest Gecko 6.0 layout engine, and numerous other fixes and improvements.  Further details are in the Release Notes.

You can get the new version via the built-in update mechanism, or you can get versions for all platforms in many languages from the download page.


Firefox 6.0 Released

August 16, 2011

The Mozilla organization today released a new version, 6.0, of its Firefox Web browser, for Mac OS X, Linux, and Windows.  Version 6.0 incorporates a number of new features, several of particular interest to Web developers, including:

  • Added support for the latest draft version of WebSockets with a prefixed API (see the brief example after this list)
  • Added support for EventSource / server-sent events
  • The address bar now highlights the domain of the website you’re visiting
  • Added Scratchpad, an interactive JavaScript prototyping environment
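
As a quick illustration of the first two items, here is a small sketch of how page script might use the prefixed WebSocket constructor and server-sent events.  The endpoint URLs are placeholders, and my understanding is that Firefox 6 exposes the WebSocket constructor under the MozWebSocket name; check the Mozilla developer documentation for the authoritative details.

```typescript
// Illustrative sketch only; the URLs below are placeholders, not real endpoints.

// Firefox 6 ships the draft WebSocket API behind a vendor prefix
// (MozWebSocket, as I understand it), so feature-detect both names.
const WS = (window as any).WebSocket || (window as any).MozWebSocket;

if (WS) {
  const socket = new WS("ws://example.com/updates");      // placeholder URL
  socket.onopen = () => socket.send("hello");
  socket.onmessage = (event: MessageEvent) =>
    console.log("ws message:", event.data);
} else {
  console.log("No WebSocket support in this browser");
}

// EventSource provides a simpler, one-way stream of server-sent events.
const source = new EventSource("/events");                // placeholder URL
source.onmessage = (event) => console.log("sse message:", event.data);
source.onerror = () => console.log("event stream error");
```

Snippets like these can be pasted into the new Scratchpad tool and run against a live page, which is handy for experimenting.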

More changes and links to further information can be found in the Release Notes.  The new release also incorporates several stability and security fixes.

You can obtain the new version, in 70+ languages, from the download page.  Alternatively, you can use the built-in update mechanism (Help -> About Firefox -> Check for Updates).  Because of its security content, I recommend that you upgrade to the new version reasonably quickly.

If for some reason you are still using Firefox 3.6.x, Mozilla has released version 3.6.20, incorporating security fixes; you can download it here.  The 3.6.x series will only be supported for a limited time, so I encourage you to migrate to the current version as soon as you can.

Update Tuesday, 16 August, 19:20 EDT

For those using the 3.6.x browser version, Mozilla has published a Security Advisory [MFSA 2011-30], detailing the security vulnerabilities fixed in 3.6.20, several of which are rated Critical.  If you must stay with 3.6.x, the upgrade to 3.6.20 is strongly recommended.

Update Tuesday, 16 August, 20:37 EDT

The Mozilla Foundation Security Advisory [MFSA 2011-29] is now available, and lists the security fixes, many Critical, in Firefox 6.0.  I suggest you upgrade your installation as soon as you conveniently can.


GPS Troubles, Revisited

August 15, 2011

Back in July, I posted a note about a controversy involving the Global Positioning System [GPS] and a proposed new service to provide wireless broadband Internet access, offered by a firm called LightSquared.  The company holds licenses for a portion of the spectrum reserved for Mobile Satellite Services, using frequencies just below those used by the GPS.  When the licenses were originally acquired by SkyTerra Communications, a predecessor company to LightSquared, the plan was to make connections primarily with satellite links, with some small ground stations to fill in holes in the coverage.  The controversy has come about because LightSquared persuaded the Federal Communications Commission [FCC] to amend its license to allow a service based almost entirely on a network of 40,000 ground transmitters.

A recent article in the “Babbage” science and technology blog at The Economist provides some updated information on the situation.  The evidence that implementing the LightSquared plan would create an enormous interference problem with the GPS continues to mount.  The fundamental problem is that, as I’ve discussed before, GPS signals as received at the Earth’s surface are quite weak, coming as they do from satellites at altitudes of ~20,000 km.  The original SkyTerra plan, which also used satellite transmitters, would not have presented a major interference problem, but the current LightSquared plan is a different animal altogether, using ground-based transmitters far more powerful than those on the GPS satellites.

The company intends to build a broadband wireless network comprising 40,000 base-stations across the United States. These stations will put out 15,000 watts apiece. Typical mobile-phone transmitters in urban areas radiate between five and ten watts. Even the 100-foot towers used in open countryside transmit no more than 60 watts.

As the article points out, LightSquared’s complaint that existing GPS receivers are not very well protected against interference has some validity.  However, building a receiver that can reject a signal in an adjacent band that is a billion times as strong, while still doing a decent job of receiving the desired weak signal from the GPS satellites, is just not feasible.
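
To put a rough number on that disparity, a back-of-the-envelope free-space path-loss comparison is sketched below.  The 15,000 watt base-station figure comes from the article; the distances, frequencies, and the satellite’s radiated power are my own simplifying assumptions, meant only to show the order of magnitude.

```typescript
// Back-of-the-envelope comparison of signal levels at a GPS receiver.
// The 15,000 W base-station figure is from the article; the distances,
// frequencies, and satellite power are rough assumptions of my own,
// and antenna gains and other real-world factors are ignored.

// Free-space path loss in dB, for distance d in km and frequency f in MHz.
const fsplDb = (dKm: number, fMHz: number) =>
  20 * Math.log10(dKm) + 20 * Math.log10(fMHz) + 32.44;

const toDbw = (watts: number) => 10 * Math.log10(watts);

// GPS L1: assume roughly 500 W of effective radiated power from a
// satellite about 20,000 km away, at 1575.42 MHz.
const gpsAtReceiver = toDbw(500) - fsplDb(20000, 1575.42);   // ≈ -155 dBW

// LightSquared base station: 15,000 W near 1550 MHz, assumed to be
// about 3 km from the receiver.
const lsAtReceiver = toDbw(15000) - fsplDb(3, 1550);         // ≈ -64 dBW

const diffDb = lsAtReceiver - gpsAtReceiver;                 // ≈ 90+ dB
console.log(`terrestrial signal stronger by ~${diffDb.toFixed(0)} dB, ` +
            `i.e. a factor of roughly 10^${Math.round(diffDb / 10)}`);
```

Even with these crude assumptions, the ground signal arrives around nine orders of magnitude stronger than the GPS signal it sits next to in the spectrum, which is why receiver-side filtering alone is such a tall order.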

The FCC ordered that a technical review be conducted to evaluate the potential for interference.  A report of the review, prepared by RTCA, Inc., an organization that serves as a federal advisory committee on aviation matters, was submitted in late June (the public version is available here [PDF]).  The conclusion of the review was that implementation of the LightSquared system would cause major interference with GPS-based aviation systems.  From the Executive Summary of the report:

The study concludes that the current LightSquared terrestrial authorization would be incompatible with the current aviation use of GPS, however modifications could be made to allow the LightSquared system to coexist with aviation use of GPS.

···

The impact of a LightSquared upper channel spectrum deployment is expected to be complete loss of GPS receiver function. Because of the size of the single-city station deployment, GPS-based operations below about 2000 feet will be unavailable over a large radius from the metro deployment center (assuming no other metro deployments are nearby). Given the situation in the high altitude U.S. East Coast scenario, GPS-based operations will likely be unavailable over a whole region at any normal aircraft altitude.

In essence, the planned deployment could make GPS unavailable over a large portion of the eastern United States.  A separate study [PDF], conducted by the National Executive Committee on Space-Based Positioning, Navigation, and Timing’s Systems Engineering Forum, recommended that deployment of the LightSquared system not be allowed to proceed, due to major adverse effects on GPS.  The group also found that proposed mitigations based on modification or replacement of existing GPS equipment were impractical, and probably insufficient for applications requiring high precision.

It seems clear, both from the reports of various study groups and from first principles of physics, that LightSquared’s planned system would cause serious disruption of the GPS, leading to many problems.  As the “Babbage” article noted, the costs just in the aviation sector would be huge.

The Federal Aviation Administration (FAA) reckons it would cost airlines, in particular, more than $70 billion over the next ten years if they had to find fixes to cope with a GPS blackout.

Many others would be affected, too, including mobile phone users, drivers, the armed forces, and emergency services.  “Babbage” concludes, and I have to agree, that, although the US badly needs an alternative broadband supplier, especially in less-populated areas, degrading the GPS is a price too high.


Chrome 14 Beta has Native Client

August 14, 2011

Google recently announced the release of a new beta version, 14.0.835.35, of the Chrome browser, for Linux, Windows, Mac OS X, and Chrome Frame.  Besides representing a new major version, the new beta release is notable for being the first to incorporate Google’s Native Client technology.  Essentially, Native Client is a set of software tools that allows the Chrome browser to run compiled C or C++ code, in the same way that it can run JavaScript.  The technology provides a security “sandbox” in which the code runs, along with a set of APIs [application programming interfaces], called “Pepper”, that allow connections between the compiled code and the capabilities of HTML 5.
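
To give a feel for how a page interacts with a Native Client module, here is a hedged sketch of the embedding side, based on my reading of Google’s documentation.  The manifest file name, element IDs, and message strings are invented for the example; see Google’s Native Client developer pages for the authoritative API.

```typescript
// Illustrative sketch: loading a (hypothetical) Native Client module from
// page script and exchanging messages with it.  File names, element IDs,
// and messages are invented for this example.

// A Native Client module is declared with an <embed> element whose src
// points at a manifest (.nmf) file describing the compiled executables.
const embed = document.createElement("embed");
embed.setAttribute("src", "my_module.nmf");            // hypothetical manifest
embed.setAttribute("type", "application/x-nacl");
embed.setAttribute("width", "0");
embed.setAttribute("height", "0");

// Events from the module don't bubble, so listen on a wrapper element
// with capture enabled (the pattern used in Google's tutorials).
const listener = document.getElementById("nacl-listener")!;
listener.appendChild(embed);

// Messages the compiled C/C++ code posts through the Pepper messaging
// interface arrive in JavaScript as 'message' events.
listener.addEventListener("message", (event: any) => {
  console.log("from native module:", event.data);
}, true);

// Once the module has loaded, the page can post messages to it.
listener.addEventListener("load", () => {
  (embed as any).postMessage("hello from the page");
}, true);
```

On the native side, the module implements a corresponding message handler (HandleMessage in the C++ Pepper interface, as I understand it) and does its heavy lifting inside the sandbox.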

The Native Client technology is likely to be most attractive to developers who want to incorporate computation-intensive applications in the browser.  Some may have existing libraries of C or C++ routines for applications like video editing or statistical modeling, and may find that Native Client makes using them more straightforward.  Others may have performance issues with using JavaScript for some tasks, even though JavaScript performance has improved markedly in most browsers in recent years.

The official Google Chrome Blog has a short article discussing the new features in a bit more detail, and the Google Code site has a page with some example Native Client applications.  The “Webmonkey” blog at Wired also has an article about the new technology.

You can get the Chrome 14 beta from the beta channel download page.