First Petaflop Computer to be Retired

March 31, 2013

I’ve posted notes here before about the Top500 project, which publishes a semi-annual list of the world’s fastest computer systems; my most recent note followed the latest update to the list, in November 2012.

An article at Ars Technica reports that the IBM Roadrunner system, located at the US Department of Energy’s Los Alamos National Laboratory, will be decommissioned and, ultimately, dismantled.  The Roadrunner was the first system whose performance exceeded a petaflop (1 petaflop = 1 × 10^15 floating-point operations per second).  It held the number one position on the Top500 list from June 2008 through June 2009, and was still ranked number two in November 2009.  The Roadrunner system contained 122,400 processor cores in 296 racks, covering about 6,000 square feet.  It was one of the first supercomputers to use a hybrid processing architecture, employing both IBM PowerXCell 8i CPUs and AMD Opteron dual-core processors.

The system is being retired, not because it is too slow, but because its appetite for electricity is too large.  In the November 2012 Top500 list, Roadrunner is ranked number 22, delivering 1.042 petaflops while consuming 2,345 kilowatts of electricity.  The system ranked number 21, a bit faster at 1.043 petaflops, required less than half the power, at 1,177 kilowatts.
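Put another way, the issue is performance per watt.  A quick back-of-the-envelope check, sketched below in Python using the figures quoted above, shows that the neighboring system was roughly twice as efficient:

    # Performance per watt, using the November 2012 figures quoted above.
    systems = {
        "Roadrunner (#22)": (1.042e15, 2345e3),   # (flops, watts)
        "System #21":       (1.043e15, 1177e3),
    }
    for name, (flops, watts) in systems.items():
        print("%s: %.0f megaflops per watt" % (name, flops / watts / 1e6))
    # Roadrunner (#22): 444 megaflops per watt
    # System #21:       886 megaflops per watt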

It will be interesting to see how the list shapes up in June, the next regular update.


The Internet Surveillance State

March 30, 2013

One of the hardy perennial issues that comes up in discussions of our ever more wired (and wireless) lives is personal privacy.  Technology in general has invalidated some traditional assumptions about privacy.  For example, at the time the US Constitution was being written, I doubt that anyone worried much about being able to have a private conversation.  All anyone had to do, in an age before electronic eavesdropping, parabolic microphones, and the like, was to go indoors and shut the door, or walk to the center of a large open space.  It might be somewhat more difficult to conceal the fact that some conversation took place, but it was relatively easy to ensure that the actual words spoken were private.

Similarly, before the advent of computer databases, getting together a comprehensive set of information about an individual took a good deal of work.  Even records that were legally public (e.g., wills, land records) took some effort to obtain, since they existed only on paper, probably moldering away in some obscure courthouse annex.  Even if you collected a bunch of this data, putting it all together was a job in itself.

People whose attitudes date back to those days often say something like, “I have nothing to hide; why should I care?”  They are often surprised at the amount of personal information that can be assembled via technical means.  The development of the Internet and network connectivity in general has made it easy to access enormous amounts of data, and to categorize and correlate it automatically.  Even supposedly “anonymized” data is not all that secure.
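A toy example makes the point about “anonymized” data.  Suppose one data set has had names removed but still carries a few demographic fields, while a second, public data set carries names along with those same fields; a simple join re-identifies the “anonymous” records.  (A minimal Python sketch, with hypothetical records:)

    # "Anonymized" medical records: names removed, demographics kept.
    medical = [
        {"zip": "20037", "dob": "1957-07-31", "sex": "F", "diagnosis": "X"},
    ]
    # Public voter roll: the same demographics, plus names.
    voters = [
        {"zip": "20037", "dob": "1957-07-31", "sex": "F", "name": "Jane Doe"},
    ]
    # Joining on the shared fields re-attaches names to "anonymous" data.
    for m in medical:
        for v in voters:
            if (m["zip"], m["dob"], m["sex"]) == (v["zip"], v["dob"], v["sex"]):
                print(v["name"], "->", m["diagnosis"])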

Bruce Schneier, security guru and author of several excellent books on security (including Applied Cryptography, Secrets and Lies, Beyond Fear, and his latest book, Liars and Outliers), as well as the Schneier on Security blog, has posted an excellent, thought-provoking article on “Our Internet Surveillance State”.  He begins the article, which appeared originally on the CNN site, with “three data points”: the identification of some Chinese military hackers; the identification (and subsequent arrest) of Hector Monsegur, a leader of the LulzSec hacker movement; and the disclosure of the affair between Paula Broadwell and former CIA Director Gen. David Petraeus.  All three of these incidents were the direct result of Internet surveillance.

Schneier’s basic thesis is that we have arrived at a situation where Internet-based surveillance is nearly ubiquitous and almost impossible to evade.

This is ubiquitous surveillance: All of us being watched, all the time, and that data being stored forever. This is what a surveillance state looks like, and it’s efficient beyond the wildest dreams of George Orwell.

Many people are aware that their Internet activity can be tracked by using browser cookies, and I’ve written about the possibility of identifying individuals by the characteristics of their Web browser.  And many sites that people routinely visit have links, not always obvious, to other sites.  Those Facebook “Like” buttons that you see everywhere load data and scripts from Facebook’s servers, and provide a mechanism to track you — you don’t even need to click on the button.  There are many methods by which you can be watched, and it is practically impossible to avoid them all, all of the time.
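To make the mechanism concrete: when your browser fetches an embedded button or script, it volunteers the page you are reading (the Referer header), any cookie previously set for that third-party domain, and a User-Agent string that can feed browser fingerprinting.  Here is a minimal, hypothetical sketch in Python of what the third-party server gets to log:

    # A minimal stand-in for a third-party "widget" server (hypothetical).
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class WidgetHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Headers the browser sends with every embedded request:
            page   = self.headers.get("Referer", "-")     # page being read
            cookie = self.headers.get("Cookie", "-")      # visitor's ID here
            agent  = self.headers.get("User-Agent", "-")  # fingerprint fodder
            print("visit: page=%s cookie=%s agent=%s" % (page, cookie, agent))
            self.send_response(200)
            self.send_header("Content-Type", "application/javascript")
            self.end_headers()
            self.wfile.write(b"/* widget script */")

    if __name__ == "__main__":
        HTTPServer(("", 8000), WidgetHandler).serve_forever()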

If you forget even once to enable your protections, or click on the wrong link, or type the wrong thing, and you’ve permanently attached your name to whatever anonymous service you’re using. Monsegur slipped up once, and the FBI got him. If the director of the CIA can’t maintain his privacy on the Internet, we’ve got no hope.

As Schneier also points out, this is not a problem that is likely to be solved by market forces.  None of the collectors and users of surveillance data has any incentive, economic or otherwise, to change things.

Governments are happy to use the data corporations collect — occasionally demanding that they collect more and save it longer — to spy on us. And corporations are happy to buy data from governments.

Although there are some organizations, such as the Electronic Privacy Information Center [EPIC]  and the Electronic Frontier Foundation [EFF], that try to increase awareness of privacy issues, there is no well-organized constituency for privacy.  The result of all this, as Schneier says, is an Internet without privacy.


Interview with James Randi

March 28, 2013

I’ve written here before about James Randi, the retired professional magician and skeptic of the occult, and his James Randi Educational Foundation, which investigates claims of the paranormal, the supernatural, and the occult.

The self-described “News for Nerds” site, Slashdot, has an interview with Randi, in which he answers questions submitted by readers.  As one might expect, the discussion focuses on the work, by Randi and the Foundation, to combat irrational and magical thinking.  It’s a brief but entertaining read.  The page also contains comments from Slashdot readers, which are worth glancing through: there are some insightful ones, though there is, as usual, a lot of dreck as well.


Document Freedom Day 2013

March 27, 2013

The Free Software Foundation Europe [FSFE] has designated today, March 27, as Document Freedom Day [DFD] for 2013, to mark the importance of open standards for the exchange of documents and other information via the Internet.

It is a day for celebrating and raising awareness of Open Standards and formats which takes place on the last Wednesday in March each year. On this day people who believe in fair access to communications technology teach, perform, and demonstrate.

This year’s DFD is being sponsored by Google and openSUSE.

One of the key aims of DFD is to promote the use and promulgation of open standards for documents and other information.  The DFD site gives the FSFE’s definition of an open standard; as the Wikipedia article on the subject suggests, there is a range of definitions from different organizations.  The FSFE’s definition is fairly strict: essentially, it requires that a standard be open to assessment, implementation, and use without restrictions, and that it be defined through an open process, not controlled by any single party.  That there is considerable similarity between the concepts of open standards and open source software is, of course, not a coincidence.

As I have mentioned before, I am a fairly enthusiastic proponent of open source software, and I’m a fan of open standards, too.  Since there are several different definitions of open standards, I think it is useful to realize that “openness” can be a matter of degree.

The standards for HTML (HyperText Markup Language, the language used to create Web pages) and for the C programming language would meet most definitions of an open standard.  At the other extreme, Microsoft’s original document formats for its Office products were not at all open: undocumented binary formats, entirely under the vendor’s control.  The Portable Document Format [PDF] for text documents was originally defined by Adobe Systems, but the format definition was published; beginning in 1994, with the release of Adobe’s Acrobat 2.0 software, the viewing software (Acrobat Reader, now Adobe Reader) was available free.  (PDF was officially released as an open standard on July 1, 2008, published by the International Organization for Standardization as ISO 32000-1:2008.)

While, in an ideal world, one might have wished, prior to 2008, to have the PDF specification fully open, the situation was far better than having an entirely closed spec: it was possible to evaluate the PDF definition, and developers other than Adobe were able to develop software to work with PDF files.  (I still use a small, fast program called xpdf to view PDF documents on my Linux PC.  It lacks a good deal of functionality, compared to Adobe’s Reader, which I also use regularly, but it is much faster for routine, “let’s have a look at this” usage.)
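The practical value of a published specification is easy to illustrate.  Every PDF file begins with a plain-text header such as “%PDF-1.4”, so a few lines of code, written with no help from the vendor, can recognize a PDF and report its version.  (A minimal Python sketch; the file name is hypothetical.)

    def pdf_version(path):
        """Return the version string from a PDF file's header."""
        with open(path, "rb") as f:
            header = f.read(8)               # e.g. b"%PDF-1.4"
        if not header.startswith(b"%PDF-"):
            raise ValueError("not a PDF file")
        return header[5:].decode("ascii")

    print(pdf_version("example.pdf"))        # prints, e.g., "1.4"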

I think that the principle of open standards is worth supporting, for the very practical reasons that the FSFE has identified; they enable you to

  • Collaborate and communicate with others, regardless of which software they are using
  • Upgrade or replace your apps and still be able to open and edit your old files
  • Choose which phone / tablet / computer you want to use without worrying about compatibility

These are benefits worth having.


Google Releases Chrome 26

March 26, 2013

Google today released a new major version, 26.0.1410.43, of its Chrome browser for Linux, Mac OS X, Windows, and Chrome Frame.  This release incorporates fixes for 11 identified security vulnerabilities, two of which Google rates as High severity.  The new version also includes some new features:

  • “Ask Google for suggestions” spell checking improvements (e.g. grammar and homonym checking)
  • Desktop shortcuts for multiple users (profiles) on Windows
  • Asynchronous DNS resolver on Mac and Linux

Further details are available in the Release Announcement.

Because of the security content of this release, I recommend that you update your systems as soon as you conveniently can.   Windows and Mac users can get the new version via the built-in update mechanism; Linux users should check their distribution’s repositories for the new version.

Update Tuesday, 26 March, 22:14 EDT

Ars Technica has an article on the new Chrome release; it has a useful description of some of the new spell-checking features.


Bletchley Park Trust Joins Google Cultural Institute

March 25, 2013

I’ve written here previously about Bletchley Park, the home during World War II of the UK Government Code and Cypher School, also known as Station X.  The work of the cryptanalysts at Bletchley Park was responsible for breaking the German Enigma machine encryption on a large scale, as well as the more difficult Lorenz cipher, used by Hitler to communicate with his field commanders.  Some historians estimate that this work shortened the war in Europe by two or more years.  The site is now run by the Bletchley Park Trust, and also houses the UK National Museum of Computing.

A project to restore the Bletchley Park facility, along with some of its specialized equipment, was launched a couple of years ago.  I noted then that Google had taken an active role in supporting the project.

A recent post on the Official Google Blog describes some further developments in this relationship.  The Bletchley Park Trust has become a member of the Google Cultural Institute, which features an online gallery of exhibits dealing with (relatively) recent history.  The Bletchley Park exhibit has an overview of the work that was done at Station X.  It includes images of the Bombe machines that were used to break the Enigma cipher on a production basis, and of Colossus, the electronic computer used, along with the Tunny Machine, in breaking the Lorenz cipher.
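Part of what made the Bombe’s search feasible was a quirk of the Enigma’s design: because the electrical path bounced off a reflector and retraced itself, no letter could ever encrypt to itself, and enciphering and deciphering were the same operation.  The toy, single-rotor sketch below (in Python; nothing like the real three-rotor machine with its plugboard) illustrates the reflector principle; note that running the ciphertext back through the same function recovers the message.

    import string

    ALPHA = string.ascii_uppercase
    ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"      # wiring of historical rotor I
    REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # historical reflector B

    def crypt(text, start=0):
        """Toy Enigma: one rotor plus a reflector; self-reciprocal."""
        out, pos = [], start
        for ch in text.upper():
            if ch not in ALPHA:
                continue
            c = ROTOR[(ALPHA.index(ch) + pos) % 26]   # forward through rotor
            c = REFLECTOR[ALPHA.index(c)]             # bounce off reflector
            c = ALPHA[(ROTOR.index(c) - pos) % 26]    # back through rotor
            out.append(c)
            pos = (pos + 1) % 26                      # rotor steps each letter
        return "".join(out)

    secret = crypt("ATTACKATDAWN")
    print(secret)           # ciphertext
    print(crypt(secret))    # the same call decrypts: ATTACKATDAWN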

The blog post also has an interesting short video presentation by Ms. Jean Valentine, one of the original Bombe operators.

In her role operating the Bombe, Jean directly helped to decipher messages encoded by Enigma. In this film Jean gives us a firsthand account of life at Bletchley Park during the war, and demonstrates how the Bombe worked using a replica machine now on show at the museum.

Much of this history remained a closely-guarded secret for many years after the end of WWII.  It’s fascinating to see how much truly creative work was done under very difficult conditions.

