Alan Turing Centenary, Part 2

June 23, 2012

As one might expect, the BBC News site has a number of articles related to the Alan Turing Centenary.  In particular, it has been publishing  a series of essays on Turing’s life and work.   I have tried to give a brief overview of these below.  (The essays are set up as separate pages, but there is a set of links to all of them at the top of each article.)

The first essay, on “Turing’s Genius”, is by Google’s Vint Cerf, whom I have mentioned before in connection with the ACM’s participation in the Turing Centenary, and who is a recipient of the ACM’s Turing Award.  (As he mentions in his essay, he also, coincidentally, shares a birthday with Turing: June 23.)  He discusses the many ways in which Turing’s original work relates to the technological world we all take for granted today.

The second essay, by Prof. Jack Copeland, University of Canterbury, Christchurch, New Zealand, relates Turing’s involvement in code-breaking at the Government Code and Cypher School at Bletchley Park (also called Station X).  It mentions Turing’s personal contributions to breaking the naval version of the German Enigma encryption system and the Lorenz cipher.  These mathematical, cryptanalytic contributions would by themselves have been impressive; but Turing also made an enormous contribution to turning Station X into what was, in effect, the world’s first code-breaking factory.  He helped develop the bombes, electro-mechanical computers used to break Enigma messages on a production basis, and the Tunny machine, used for the Lorenz cipher.  (A project to reconstruct a Tunny machine is underway.)  As in many aspects of wartime intelligence, time was of the essence.

The faster the messages could be broken, the fresher the intelligence that they contained, and on at least one occasion an intercepted Enigma message’s English translation was being read at the British Admiralty less than 15 minutes after the Germans had transmitted it.

The third essay, “Alan Turing: The Father of Computing?”, is by Prof. Simon Lavington, author of Alan Turing and His Contemporaries: Building the World’s First Computers.  He observes that Turing’s ideas were not terribly influential on some of the early computer implementations.

It was not until the late 1960s, at a time when computer scientists had started to consider whether programs could be proved correct, that On Computable Numbers came to be widely regarded as the seminal paper in the theory of computation.

Turing’s paper, On Computable Numbers, with an Application to the Entscheidungsproblem [PDF], nonetheless proved to be of immense importance.  In it, Turing laid out, for the first time as far as I know, the idea of a theoretical machine that, as his mathematical analysis demonstrated, could solve any solvable problem.
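To make the idea a bit more concrete, here is a minimal sketch of a Turing machine simulator in Python.  This is my own illustration, not anything from Turing’s paper; the transition-table encoding and the little bit-flipping program are just assumptions chosen for brevity.

    # Minimal Turing machine simulator (illustrative sketch only).
    # The program maps (state, symbol) -> (new_symbol, move, new_state).

    def run_turing_machine(program, tape, state="start", blank="_", max_steps=10_000):
        cells = dict(enumerate(tape))  # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            new_symbol, move, state = program[(state, symbol)]
            cells[head] = new_symbol
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Example program: flip every bit of a binary string, then halt
    # when the first blank cell is reached.
    flip_bits = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run_turing_machine(flip_bits, "10110"))  # prints 01001_ (trailing blank)

The point, of course, is not this particular toy program, but that one fixed mechanism — a state, a tape, and a transition table — suffices, in principle, for any computation.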

The fourth essay, by Prof. Noel Sharkey of the University of Sheffield, discusses the Turing Test, proposed by Turing in his 1950 paper, Computing Machinery and Intelligence.  That paper begins with a statement of the fundamental problem:

I propose to consider the question, ‘Can machines think?’  This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’.

Turing’s paper was provocative, in part, because he realized how woolly the question, “Can machines think?”, really is.  There are ongoing discussions of whether the test that Turing proposed is the right one, but it does have the considerable virtue of being realizable in practice.


“Safe Browsing” Turns Five

June 21, 2012

A little more than five years ago, Google launched its Online Security Blog, as part of an augmented effort to fight malware and phishing attacks, an effort the company called “Safe Browsing”.  Niels Provos, of Google’s security team, has just posted a brief summary of some of the knowledge gleaned from the Safe Browsing work.

A key part of the safe browsing effort is an infrastructure that can detect and catalog dangerous sites across the Internet.  Google uses this data to issue warnings with its search results, of course, but it also provides a free, public Safe Browsing API, so that other applications can check sites against Google’s list.  This protection, implemented in Chrome (of course), Firefox, and Safari, results in several million warnings being issued each day.   The scale of the effort is staggering; Google estimates that it identifies about 9,500 new malicious sites every day.  These are, in many cases, legitimate sites that have been compromised so that they attempt to install malware, or redirect the user to a site that does.  In other cases, the sites are built specifically for malicious purposes.
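By way of illustration, here is a rough sketch of how a client might query a Safe Browsing-style lookup service.  I should stress that the endpoint URL, parameter names, and protocol version below are assumptions modeled on the public Lookup API of this era, not verified details; consult Google’s API documentation for the real interface.

    # Illustrative sketch of checking a URL against a Safe Browsing-style
    # lookup service.  The endpoint and parameter names are assumptions,
    # not verified details of Google's API.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    LOOKUP_ENDPOINT = "https://sb-ssl.google.com/safebrowsing/api/lookup"  # assumed
    API_KEY = "YOUR_API_KEY"  # placeholder; issued by Google

    def check_url(url_to_check):
        params = urlencode({
            "client": "example-app",  # assumed parameter names throughout
            "apikey": API_KEY,
            "appver": "1.0",
            "pver": "3.1",
            "url": url_to_check,
        })
        with urlopen(LOOKUP_ENDPOINT + "?" + params) as response:
            # Convention for this kind of lookup service: HTTP 204 means
            # "not on the list"; HTTP 200 returns the matching list name(s).
            if response.status == 204:
                return "ok"
            return response.read().decode()

    print(check_url("http://example.com/"))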

The general trend in these attacks, as we’ve seen before, is to get more polished and professional as time passes.  Google says that some sites use a given URL for an hour or less, in order to make detection more difficult.   Targeted phishing (or “spear phishing”) attacks are increasingly common, as are social engineering attacks, such as fake anti-virus warnings.  And the traditional “drive-by download” technique, in which the attacker attempts to compromise the user’s machine via a vulnerability in the browser or the OS, is still popular.

As Niels Bohr (the physicist and gunslinger) reportedly said, “Prediction is very difficult, especially about the future.”   Nonetheless, it seems unlikely that the current trends will change very much; we’ll continue to see more, and more sophisticated, attacks.  So pay attention to those security warnings, and be careful out there.


US-CERT: Intel CPU Vulnerability

June 19, 2012

The US Computer Emergency Readiness Team (US-CERT) has published a Vulnerability Note (VU#649219) about a newly discovered security vulnerability involving 64-bit operating systems or virtual machine hypervisors running on Intel x86-64 CPUs.   This does not affect Intel’s 64-bit Itanium processors. The vulnerability means that an attacker might be able to execute code at the same privilege level as the OS or hypervisor.

Some 64-bit operating systems and virtualization software running on Intel CPU hardware are vulnerable to a local privilege escalation attack. The vulnerability may be exploited for local privilege escalation or a guest-to-host virtual machine escape.

The x86-64 architecture was originally developed by AMD, with the aim of producing a 64-bit CPU that was backward-compatible with the 32-bit IA32 architecture, as implemented in, for example, Intel’s Pentium processor.  The vulnerability exists because of a subtle difference between AMD’s implementation and Intel’s.  The good folks at the Xen open-source virtualization project have posted a detailed technical explanation of the problem on the Xen community blog; I will attempt a brief summary here.

Whether one is running a standard operating system, such as Linux or Windows, or a virtual machine hypervisor, such as Xen, a mechanism is needed to switch from an application, which runs with limited privileges, to the OS or hypervisor, which typically has no restrictions.  Of course, the mechanism must allow switching back, too.  The most commonly used mechanism on the x86-64 platform employs a pair of instructions, SYSCALL and SYSRET.  The SYSCALL instruction does the following:

  • Copy the instruction pointer register (RIP) to the RCX register
  • Change the code segment selector to the OS or hypervisor value

A SYSRET instruction does the reverse; that is, it restores the execution context of the application.  (There is more saving and restoring to be done — of the stack pointer, for example — but that is the responsibility of the OS or hypervisor.)

The difficulty arises because the x86-64 architecture does not use 64-bit addresses; rather, it uses 48-bit addresses.  This gives a 256 terabyte virtual address space, which is considerably more than is used today.  The processor has 64-bit registers, but a value to be used as an address must be in a canonical form (see the Xen blog post for details); attempting to use a value not in canonical form results in a general protection (#GP) fault.
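To illustrate the rule (this is my own sketch, not code from the Xen post): an address is canonical if bits 48 through 63 are all copies of bit 47, i.e. the value is sign-extended from 48 bits.

    # Sketch of the x86-64 canonical-address rule: in a 48-bit implementation,
    # bits 48..63 of a virtual address must all be copies of bit 47.
    def is_canonical(addr, va_bits=48):
        sign_bit = (addr >> (va_bits - 1)) & 1
        upper = addr >> va_bits  # bits 48..63
        expected = (1 << (64 - va_bits)) - 1 if sign_bit else 0
        return upper == expected

    print(is_canonical(0x0000_7FFF_FFFF_FFFF))  # True: top of the lower half
    print(is_canonical(0xFFFF_8000_0000_0000))  # True: bottom of the upper half
    print(is_canonical(0x0000_8000_0000_0000))  # False: inside the "hole"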

The implementation of SYSRET in AMD processors effectively changes the privilege level back to the application level before it loads the application RIP.  Thus, if a #GP fault occurs because the restored RIP is not in canonical form, the CPU is in application state, so the OS or hypervisor can handle the fault in the normal way.  However, Intel’s implementation effectively restores the RIP first; if the value is not in canonical form, the #GP fault will occur while the CPU is still in the privileged state.  A clever attacker could use this to run code with the same privilege level as the OS.
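A toy model may make the difference clearer.  The following sketch is purely illustrative (the real behavior lives in CPU microcode, not software); it simply models the order of operations in each vendor’s implementation, and shows which privilege mode the #GP fault lands in.

    # Toy model of the SYSRET ordering difference (illustrative only).
    def is_canonical(addr):  # compact form of the check shown above
        return (addr >> 47) in (0, 0x1FFFF)

    class GPFault(Exception):
        def __init__(self, cpu_mode):
            self.cpu_mode = cpu_mode  # privilege mode when the fault fired

    def sysret_amd(cpu, new_rip):
        cpu["mode"] = "user"            # privileges dropped first...
        if not is_canonical(new_rip):
            raise GPFault(cpu["mode"])  # ...so the fault fires in user mode
        cpu["rip"] = new_rip

    def sysret_intel(cpu, new_rip):
        if not is_canonical(new_rip):   # RIP restored/checked first...
            raise GPFault(cpu["mode"])  # ...so the fault fires in kernel mode
        cpu["mode"] = "user"
        cpu["rip"] = new_rip

    for impl in (sysret_amd, sysret_intel):
        cpu = {"mode": "kernel", "rip": 0}
        try:
            impl(cpu, 0x0000_8000_0000_0000)  # non-canonical return address
        except GPFault as fault:
            print(impl.__name__, "raises #GP in", fault.cpu_mode, "mode")
    # sysret_amd raises #GP in user mode; sysret_intel raises it in kernel
    # mode, where a carefully prepared attacker can hijack the fault handling.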

According to the most recent version of the Vulnerability Note, the following systems are known to be affected by this vulnerability when run on Intel CPUs: Citrix, Microsoft Windows 7 and Windows Server 2008R2, NetBSD, FreeBSD, Oracle, Xen, Red Hat Linux, SUSE Linux, and Joyent.  VMware and Apple systems are known not to be affected; and, of course, no system running on an AMD processor is affected.  Check the links in the Vulnerability Note for more information.

Intel says that this is not a flaw in their CPU, since it works according to their written spec.  However, since the whole point of their implementation was to be compatible with the architecture as defined originally by AMD, this seems a bit disingenuous.

The immediate risk of successful exploits “in the wild” is probably not that high.  However, hardware flaws don’t get fixed overnight, so I hope the OS vendors can implement a software work-around (as Xen has already done) without too much delay.


Alan Turing Centenary, Part 1

June 19, 2012

I’ve written here a couple of times about the Alan Turing Centenary, marking the 100th anniversary of the birth of the English mathematician, cryptanalyst, and pioneer computer scientist; and about some of the events planned for the occasion.  This coming Saturday, June 23, is Turing’s birthday, so there will undoubtedly be more events and tributes to follow.  In this and subsequent posts, I’ll attempt to highlight some of the more interesting items that I come across.

Although it is not new, one item that deserves to be on the list is the wonderful biography of Turing by Andrew Hodges, Alan Turing: The Enigma.   Hodges also maintains The Alan Turing Home Page, a Web site dedicated to Turing.  It includes a short on-line biography, a scrapbook, and links to documents and publications.

Ars Technica has an article about Turing’s life and work, “The Highly Productive Habits of Alan Turing”, by Matthew Lasar, lecturer in history at the University of California, Santa Cruz.  It gives a good brief overview of Turing’s work, organized under seven “productive habits”:

  1. Try to see things as they are.
  2. Don’t get sidetracked by ideologies.
  3. Be practical.
  4. Break big problems down into smaller tasks.
  5. Just keep going.
  6. Be playful.
  7. Remember that it is people who matter.

If you aren’t familiar with Turing at all, this article is a good place to get the highlights quickly.

Wired has a couple of items on Turing.  The first is another brief biographical sketch, in the form of a time line of Turing’s life and work.  It mentions one occasion that I had forgotten: in the early 1950s, Turing wrote a program to play chess.  This was (pace Habit 3 above) not a very practical exercise, since at that time there was no computer powerful enough to run the program.  Turing tested the program by using an emulator — himself — executing the program with pencil and paper.

The second article at Wired is a more subjective look at some of Turing’s accomplishments.  It focuses mostly on his wartime work at the Government Code and Cypher School at Bletchley Park (also known as Station X), breaking the German Enigma encryption system, and on his work in computer science.  It also mentions Turing’s only paper on biology, “The Chemical Basis of Morphogenesis”, published in 1952.  Oddly, it doesn’t mention one of his best-known works, the essay Computing Machinery and Intelligence, published in October 1950 in the Oxford journal Mind, in which he proposes the “imitation game”, the Turing test of intelligence.

I’ll post additional items as I come across them.


Top 500: Sequoia is Number One

June 18, 2012

Since 1993, the TOP500 project has been publishing a semi-annual list of the 500 most powerful computer systems in the world, as a barometer of trends and accomplishments in high-performance computing.   The systems are ranked based on their speed in floating-point operations per second (FLOP/s), measured on the LINPACK benchmark, which involves the solution of a dense system of linear equations.
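For a sense of what the benchmark measures, here is a toy version in Python.  This is emphatically not the real LINPACK (the list uses the highly tuned HPL implementation); it just times a dense solve and converts the elapsed time to a flop rate, using the conventional operation count of roughly (2/3)n³ for an LU-based solve.

    # Toy flop-rate measurement in the spirit of LINPACK (illustrative only;
    # the actual benchmark is the highly tuned HPL code, not this).
    import time
    import numpy as np

    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n)

    start = time.perf_counter()
    x = np.linalg.solve(a, b)  # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    flops = (2 / 3) * n**3 + 2 * n**2  # conventional count for the solve
    print("about %.2f gigaflops" % (flops / elapsed / 1e9))
    print("residual:", np.linalg.norm(a @ x - b))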

The latest version of the list has just been released, in conjunction with the 2012 International Supercomputing Conference, currently being held in Hamburg, Germany.  The top system this time is the Sequoia system at the Lawrence Livermore National Laboratory, which clocked in at over 16 petaflops (16 × 10¹⁵ flops):

For the first time since November 2009, a United States supercomputer sits atop the TOP500 list of the world’s top supercomputers. Named Sequoia, the IBM BlueGene/Q system installed at the Department of Energy’s Lawrence Livermore National Laboratory achieved an impressive 16.32 petaflop/s on the Linpack benchmark using 1,572,864 cores.

The Japanese K computer, at RIKEN, ranked number 1 on the November 2011 TOP500 list, is now ranked second.  Ranked third is the Mira system at the Argonne National Laboratory, an IBM BlueGene/Q system with 786,432 processing cores, running at 8.15 petaflops.  The Chinese Tianhe-1A system, ranked second in November 2011 with 2.57 petaflops, is now ranked number 5.  The total capacity of the entire list is now 123.4 petaflops, compared with 74.2 petaflops in November.

As has been true for some time, the distribution of operating systems used is rather different from that in the desktop computing market:

OS Family    Number   % of Systems
Linux           462           92.4
Unix             24            4.8
BSD-based         1            0.2
Windows           2            0.4
Mixed            11            2.2

Microsoft’s dominance of the desktop OS market clearly does not cut much ice in this area.

You can see the complete list here.


Thunderbird 13.0.1 Released

June 17, 2012

Mozilla has released a new version, 13.0.1, of its Thunderbird E-mail client, for all platforms: Mac OS X, Windows, and Linux.  This release fixes several bugs, but does not appear to include any security patches.  More details are available in the Release Notes.

If you have enabled automatic checking for updates, Thunderbird should inform you of the new version.  Otherwise, you can obtain the new version via the built-in update mechanism (Help / About Thunderbird / Check for Updates), or you can download a complete installation package, in a variety of (human) languages.

