“Safe Browsing” Turns Five

June 21, 2012

A little more than five years ago, Google launched its Online Security Blog as part of an expanded effort to fight malware and phishing attacks, an effort the company called “Safe Browsing”.  Niels Provos, of Google’s security team, has just posted a brief summary of some of the knowledge gleaned from the Safe Browsing work.

A key part of the safe browsing effort is an infrastructure that can detect and catalog dangerous sites across the Internet.  Google uses this data to issue warnings with its search results, of course, but it also provides a free, public Safe Browsing API, so that other applications can check sites against Google’s list.  This protection, implemented in Chrome (of course), Firefox, and Safari, results in several million warnings being issued each day.   The scale of the effort is staggering; Google estimates that it identifies about 9,500 new malicious sites every day.  These are, in many cases, legitimate sites that have been compromised so that they attempt to install malware, or redirect the user to a site that does.  In other cases, the sites are built specifically for malicious purposes.
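By way of illustration, a client-side check against the Lookup API is just an HTTP GET.  The sketch below is written from memory of the API documentation, so treat the endpoint, the parameter names, and the use of a 204 response for “no match” as assumptions to verify against Google’s docs; the API key is a placeholder.

    import urllib.parse
    import urllib.request

    API_KEY = "your-api-key"   # placeholder; you must register with Google for a real key

    def lookup(url: str) -> str:
        """Ask Safe Browsing about one URL; return "ok" or the threat type."""
        params = urllib.parse.urlencode({
            "client": "example-app",   # assumed client identifier
            "apikey": API_KEY,
            "appver": "1.0",
            "pver": "3.0",             # assumed protocol version
            "url": url,
        })
        endpoint = "https://sb-ssl.google.com/safebrowsing/api/lookup?" + params
        with urllib.request.urlopen(endpoint) as resp:
            if resp.status == 204:        # 204 No Content: URL not on any list
                return "ok"
            return resp.read().decode()   # e.g. "malware" or "phishing"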

The general trend in these attacks, as we’ve seen before, is to get more polished and professional as time passes.  Google says that some sites use a given URL for an hour or less, in order to make detection more difficult.   Targeted phishing (or “spear phishing”) attacks are increasingly common, as are social engineering attacks, such as fake anti-virus warnings.  And the traditional “drive-by download” technique, in which the attacker attempts to compromise the user’s machine via a vulnerability in the browser or the OS, is still popular.

As Niels Bohr (the physicist and gunslinger) reportedly said, “Prediction is very difficult, especially about the future.”   Nonetheless, it seems unlikely that the current trends will change very much; we’ll continue to see more, and more sophisticated, attacks.  So pay attention to those security warnings, and be careful out there.


US-CERT: Intel CPU Vulnerability

June 19, 2012

The US Computer Emergency Readiness Team (US-CERT) has published a Vulnerability Note (VU#649219) about a newly discovered security vulnerability involving 64-bit operating systems or virtual machine hypervisors running on Intel x86-64 CPUs.   This does not affect Intel’s 64-bit Itanium processors. The vulnerability means that an attacker might be able to execute code at the same privilege level as the OS or hypervisor.

Some 64-bit operating systems and virtualization software running on Intel CPU hardware are vulnerable to a local privilege escalation attack. The vulnerability may be exploited for local privilege escalation or a guest-to-host virtual machine escape.

The x86-64 architecture was originally developed by AMD, with the aim of producing a 64-bit CPU that was backward-compatible with the 32-bit IA32 architecture, as implemented in, for example, Intel’s Pentium processor.  The vulnerability exists because of a subtle difference between AMD’s implementation and Intel’s.  The good folks at the Xen open-source virtualization project have posted a detailed technical explanation of the problem on the Xen community blog; I will attempt a brief summary here.

Whether one is running a standard operating system, such as Linux or Windows, or a virtual machine hypervisor, such as Xen, a mechanism is needed to switch from an application, which runs with limited privileges, to the OS or hypervisor, which typically has no restrictions.  Of course, the mechanism must allow switching back, too.  The most commonly used mechanism on the x86-64 platform is a pair of instructions, SYSCALL and SYSRET.   The SYSCALL instruction does the following:

  • Copy the instruction pointer register (RIP) to the RCX register
  • Load RIP with the OS or hypervisor entry point (taken from a model-specific register)
  • Change the code segment selector to the OS or hypervisor value

A SYSRET instruction does the reverse; that is, it restores the execution context of the application, reloading its instruction pointer from RCX.  (There is more saving and restoring to be done — of the stack pointer, for example — but that is the responsibility of the OS or hypervisor.)

The difficulty arises because the x86-64 architecture does not actually use full 64-bit addresses; current implementations use 48-bit virtual addresses.  This gives a 256 terabyte virtual address space, which is considerably more than is used today.   The processor has 64-bit registers, but a value to be used as an address must be in a canonical form (see the Xen blog post for details); attempting to use a value not in canonical form results in a general protection (#GP) fault.
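To make “canonical form” concrete: with 48-bit addressing, bits 48 through 63 of a pointer must all be copies of bit 47, i.e. the address is sign-extended.  Here is a minimal sketch of that check in Python; the function name and test values are mine, for illustration only.

    def is_canonical(addr: int) -> bool:
        """Check the x86-64 canonical-form rule for 48-bit addressing:
        bits 63:47 must be all zeros (low half of the address space)
        or all ones (high half)."""
        top = addr >> 47              # bit 47 plus the 16 bits above it
        return top == 0 or top == 0x1FFFF

    print(is_canonical(0x00007FFFFFFFFFFF))   # True:  top of the low half
    print(is_canonical(0xFFFF800000000000))   # True:  bottom of the high half
    print(is_canonical(0x0000800000000000))   # False: falls in the "hole"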

The implementation of SYSRET in AMD processors effectively changes the privilege level back to the application level before it loads the application RIP.  Thus, if a #GP fault occurs because the restored RIP is not in canonical form, the CPU is in application state, so the OS or hypervisor can handle the fault in the normal way.  However, Intel’s implementation effectively restores the RIP first; if the value is not in canonical form, the #GP fault will occur while the CPU is still in the privileged state.  A clever attacker could use this to run code with the same privilege level as the OS.
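To make the difference concrete, here is the same story as pseudocode.  This is my own simplified model, not vendor microcode or any OS’s actual patch; the second function sketches only the general shape of the software work-around.

    def is_canonical(addr: int) -> bool:     # as in the sketch above
        top = addr >> 47
        return top == 0 or top == 0x1FFFF

    def sysret(return_rip: int, vendor: str) -> str:
        """Model where the #GP fault lands on each implementation."""
        if is_canonical(return_rip):
            return "returned to application"
        if vendor == "AMD":
            # Privilege was dropped before RIP was loaded, so the
            # fault is taken in user mode and handled normally.
            return "#GP in user mode (safe)"
        # Intel: RIP is loaded, and faults, while the CPU is still
        # at kernel privilege, with attacker-influenced state.
        return "#GP in kernel mode (exploitable)"

    def return_to_user(return_rip: int) -> str:
        """Sketch of the work-around: never hand SYSRET a
        non-canonical return address in the first place."""
        if is_canonical(return_rip):
            return "SYSRET"   # fast path
        return "IRET"         # slower, but faults safely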

According to the most recent version of the Vulnerability Note, the following systems are known to be affected by this vulnerability when run on Intel CPUs: Citrix, Microsoft Windows 7 and Windows Server 2008 R2, NetBSD, FreeBSD, Oracle, Xen, Red Hat Linux, SUSE Linux, and Joyent.   VMware and Apple systems are known not to be affected, and, of course, no system running on an AMD processor is affected.  Check the links in the Vulnerability Note for more information.

Intel says that this is not a flaw in their CPU, since it works according to their written spec.  However, since the whole point of their implementation was to be compatible with the architecture as defined originally by AMD, this seems a bit disingenuous.

The immediate risk of successful exploits “in the wild” is probably not that high.  However, hardware flaws don’t get fixed overnight, so I hope the OS vendors can implement a software work-around (as Xen has already done) without too much delay.


Alan Turing Centenary, Part 1

June 19, 2012

I’ve written here a couple of times about the Alan Turing Centenary, marking the 100th anniversary of the birth of the English mathematician, cryptanalyst, and pioneer computer scientist, and about some of the events planned for the occasion.  This coming Saturday, June 23, is Turing’s birthday, so there will undoubtedly be more events and tributes to follow.  In this and subsequent posts, I’ll attempt to highlight some of the more interesting items that I come across.

Although it is not new, one item that deserves to be on the list is the wonderful biography of Turing by Andrew Hodges, Alan Turing: The Enigma.   Hodges also maintains The Alan Turing Home Page, a Web site dedicated to Turing.  It includes a short on-line biography, a scrapbook, and links to documents and publications.

Ars Technica has an article about Turing’s life and work, “The Highly Productive Habits of Alan Turing”, by Matthew Lasar, lecturer in history at the University of California, Santa Cruz.  It gives a good brief overview of Turing’s work, organized under seven “productive habits”:

  1. Try to see things as they are.
  2. Don’t get sidetracked by ideologies.
  3. Be practical.
  4. Break big problems down into smaller tasks.
  5. Just keep going.
  6. Be playful.
  7. Remember that it is people who matter.

If you aren’t familiar with Turing at all, this article is a good place to get the highlights quickly.

Wired has a couple of items on Turing.  The first is another brief biographical sketch, in the form of a time line of Turing’s life and work.  It mentions one occasion that I had forgotten: in the early 1950s, Turing wrote a program to play chess.  This was (pace Habit 3 above) not a very practical exercise, since at that time there was no computer powerful enough to run the program.  Turing tested the program by using an emulator — himself — executing the program with pencil and paper.

The second article at Wired is a more subjective look at some of Turing’s accomplishments.  It focuses mostly on his wartime work at the Government Code and Cypher School at Bletchley Park (also known as Station X), breaking the Germans’ Enigma encryption system, and on his work in computer science.  It also mentions Turing’s only paper on biology, “The Chemical Basis of Morphogenesis”, published in 1952.  Oddly, it doesn’t mention one of his best-known works, the essay “Computing Machinery and Intelligence”, published in October 1950 in the Oxford journal Mind, in which he proposes the “imitation game”, the Turing test of intelligence.

I’ll post additional items as I come across them.


Top 500: Sequoia is Number One

June 18, 2012

Since 1993, the TOP500 project has been publishing a semi-annual list of the 500 most powerful computer systems in the world, as a barometer of trends and accomplishments in high-performance computing.   The systems are ranked based on their speed in floating-point operations per second (FLOP/s), measured on the LINPACK benchmark, which involves the solution of a dense system of linear equations.
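Conceptually, the measurement is simple: time a dense solve and divide the known operation count by the elapsed time.  Here is a toy version in Python with NumPy, purely my own illustration; the real benchmark (HPL) is distributed, heavily tuned, and far more careful about what it counts.

    import time
    import numpy as np

    def linpack_flavor(n: int = 2000) -> float:
        """Time the solution of a random dense n-by-n system and
        return an approximate FLOP/s figure."""
        rng = np.random.default_rng(0)
        a = rng.standard_normal((n, n))
        b = rng.standard_normal(n)
        start = time.perf_counter()
        np.linalg.solve(a, b)          # LU factorization plus triangular solves
        elapsed = time.perf_counter() - start
        flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard LU operation count
        return flops / elapsed

    print(f"{linpack_flavor() / 1e9:.2f} GFLOP/s")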

The latest version of the list has just been released, in conjunction with the 2012 International Supercomputing Conference, currently being held in Hamburg, Germany.  The top system this time is the Sequoia system at the Lawrence Livermore National Laboratory, which clocked in at over 16 petaflops (16 × 10¹⁵ flops):

For the first time since November 2009, a United States supercomputer sits atop the TOP500 list of the world’s top supercomputers. Named Sequoia, the IBM BlueGene/Q system installed at the Department of Energy’s Lawrence Livermore National Laboratory achieved an impressive 16.32 petaflop/s on the Linpack benchmark using 1,572,864 cores.
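For perspective, that works out to a bit more than 10 gigaflop/s per core: 16.32 × 10¹⁵ / 1,572,864 ≈ 1.04 × 10¹⁰.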

The Japanese K computer at RIKEN, ranked number 1 on the November 2011 TOP500 list, is now ranked second.  Ranked third is the Mira system at the Argonne National Laboratory, an IBM BlueGene/Q system with 786,432 processing cores, running at 8.15 petaflops.  The Chinese Tianhe-1A system, ranked second in November 2011 with 2.57 petaflops, is now ranked number 5.  The total capacity of the entire list is now 123.4 petaflops, compared with 74.2 petaflops in November.

As has been true for some time, the distribution of operating systems used is rather different from that in the desktop computing market:

OS Family    Systems    Share (%)
Linux          462        92.4
Unix            24         4.8
BSD-based        1         0.2
Windows          2         0.4
Mixed           11         2.2

(The percentages are each family’s share of the 500 systems on the list.)

Microsoft’s dominance of the desktop OS market clearly does not cut much ice in this area.

You can see the complete list here.


Thunderbird 13.0.1 Released

June 17, 2012

Mozilla has released a new version, 13.0.1, of its Thunderbird E-mail client, for all platforms: Mac OS X, Windows, and Linux.  This release fixes several bugs, but does not appear to include any security patches.  More details are available in the Release Notes.

If you have enabled automatic checking for updates, Thunderbird should inform you of the new version.  Otherwise, you can get it via the built-in update mechanism (Help / About Thunderbird / Check for Updates), or you can download a complete installation package, in a variety of (human) languages.


Critical Updates for Java Released

June 16, 2012

Oracle has released its quarterly security fixes for Java.  The new Version 6 Update 33 addresses 14 identified security vulnerabilities; at least one of these is extremely serious, because it can be exploited remotely without a login.  (There is also a Version 7 Update 5 available for developers, with the same fixes.)  The new versions also fix some minor bugs.  Further information is available in the Critical Patch Update Advisory.

The new version is available for almost all platforms: Linux, Windows, and Solaris.  Apple supplies its own versions of Java for Mac OS X; there is usually a time lag of at least a few days after Oracle releases a new version before an updated Mac version is available.

Because of the security content of this release, if you have Java installed on your system, I recommend that you install this update as soon as you conveniently can.  You can obtain the new version, including the browser plug-in, from the download page for Version 6 Update 33, or the download page for Version 7 Update 5.  Windows users can also use automatic updates to get the new release.


ACM to Celebrate Turing Centenary

June 16, 2012

Last October, I posted a note here about the upcoming 100-year anniversary of the birth of Alan Turing, the English mathematician and pioneer computer scientist.  Turing was a central figure in the successful British effort, at Bletchley Park, to break coded messages produced by the Germans’ Enigma cipher machine.  Some of Turing’s theoretical papers on cryptanalysis have been declassified only recently.

Network World has an article about some additional activities planned by the Association for Computing Machinery (ACM) around the anniversary, which is June 23.   Vint Cerf of Google, a noted computer scientist in his own right, president-elect of the ACM, and chair of the organization’s commemorative events, points out how fundamental Turing’s work is to modern computer science.

“Alan had such a broad impact on so many aspects of computer science,” says Cerf. “The deep notion of computability is so fundamental to everything we do in computing.”

In designing a hypothetical computing device, which we now know as a Turing machine, Turing provided a framework for analyzing the possibilities and limitations of mechanical and electronic computing devices.
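For readers who have never seen one: a Turing machine is nothing but a tape, a read/write head, a current state, and a table of transition rules.  The toy simulator below, entirely my own illustration, increments a binary number, and shows how little machinery the model requires.

    from collections import defaultdict

    def run(tape: str, rules: dict, state: str = "start", blank: str = " ") -> str:
        """Run a Turing machine: rules maps (state, symbol) to
        (new state, symbol to write, head move "L" or "R")."""
        cells = defaultdict(lambda: blank, enumerate(tape))
        pos = 0
        while state != "halt":
            state, write, move = rules[(state, cells[pos])]
            cells[pos] = write
            pos += 1 if move == "R" else -1
        return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip()

    # Binary increment: scan right to the end of the number, then carry leftward.
    rules = {
        ("start", "0"): ("start", "0", "R"),
        ("start", "1"): ("start", "1", "R"),
        ("start", " "): ("carry", " ", "L"),
        ("carry", "1"): ("carry", "0", "L"),
        ("carry", "0"): ("halt", "1", "R"),
        ("carry", " "): ("halt", "1", "R"),
    }

    print(run("1011", rules))   # 11 in binary; prints "1100", which is 12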

Since 1966, the ACM has given out its annual Turing Award, sometimes referred to as the “Nobel Prize” of computer science, to “an individual selected for contributions of a technical nature made to the computing community”.   (Vint Cerf received a Turing Award in 2004.)  This year, at an event to be held in San Francisco June 15-16, the ACM is trying to assemble all living Turing Award recipients; the event will feature talks and panel discussions on Turing’s life and work.

Turing, who was named one of Time magazine’s 100 Most Important People of the [20th] Century, would have been an important figure even if the war had never occurred.  It is good to see that his contributions are being more fully appreciated.

