New Top 500 List

November 30, 2009

Since 1993, the TOP500 project has been publishing a semi-annual list of the 500 most powerful computer systems in the world, as a barometer of trends and accomplishments in high-performance computing.  The latest list has just been released this month, and there is a new speed champion, the Jaguar Cray XT5 system at Oak Ridge National Laboratory in Oak Ridge, Tennessee.  Unlike many previous Oak Ridge computers, which have been used to model nuclear explosions for military purposes, the Jaguar system is being used for civilian purposes, principally climate modeling.

The speed ratings for the systems on the list are based on 64-bit floating-point performance on the LINPACK benchmark, which solves a dense system of linear equations.  Of course, a single rating cannot possibly capture all aspects of a system’s performance; but since systems in this class are usually employed to solve extremely computation-intensive problems, the LINPACK measure is a reasonable first approximation.  The following table shows the top five systems, with their locations, speeds (in teraflops, 1 × 10¹² floating-point operations per second), and basic processor technology:

Rank  System Name  Country  Speed (teraflops)  Technology
1     Jaguar       USA      1800               Cray XT5
2     Roadrunner   USA      1042               AMD / Cell
3     Kraken       USA      831                AMD Opteron
4     Jugene       Germany  825                IBM Blue Gene
5     Tianhe-1     China    563                Intel / AMD GPU
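
For the curious, the LINPACK idea can be sketched in miniature in Python with NumPy: solve a dense n × n system and estimate the floating-point rate from the nominal operation count for an LU-based solve.  This is only an illustration, not the actual benchmark, which uses carefully tuned solvers, much larger problems, and strict reporting rules.

    # Toy LINPACK-style measurement (illustrative only, not the real benchmark).
    import time
    import numpy as np

    n = 2000                                  # problem size; real runs are far larger
    A = np.random.randn(n, n)
    b = np.random.randn(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)                 # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # nominal operation count for an LU solve
    print("residual:", np.linalg.norm(A.dot(x) - b))
    print("approx. rate: %.1f gigaflops" % (flops / elapsed / 1e9))

On an ordinary desktop machine a sketch like this reports something in the gigaflops range, which gives a rough sense of the gap between commodity hardware and the systems on the list.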

The TOP500 site allows you to generate charts and graphs of the systems categorized in various ways: by vendor, by country, and by processor architecture, for example. One of the more interesting categorizations, perhaps, is by operating system:

OS Family      No. of Systems
Linux          446
UNIX           25
MS Windows     5
BSD            1
Mixed / Misc.  23

Clearly, Microsoft’s near monopoly on the desktop does not cut much ice in this market.

As far as I know, there’s no prize, other than bragging rights, awarded to the winner.  But it is interesting to see the progress being made in really high-performance computing.


The Royal Society: 350 Years

November 30, 2009

This week marks the beginning of the 350th year for The Royal Society of the UK, founded November 28, 1660, and the oldest scientific society in the world.  To mark the occasion, the Society has planned a number of special activities and events.  One that you can look at now is the Trailblazing web site, which has a timeline of selected historical publications from Philosophical Transactions, the Society’s journal.  Among the featured articles are:

  • Sir Isaac Newton’s 1672 paper on the theory of light and colours
  • Observations of the solar eclipse of 1715
  • Ben Franklin’s 1752 report of flying a kite in an electrical storm
  • James Clerk Maxwell’s 1865 paper on the electromagnetic field
  • Observations of the 1920 solar eclipse, testing General Relativity
  • A 1954 paper by Crick and Watson on the structure of DNA

There are also markers on the timeline for historical events, such as the Great Fire of London and the American Civil War.  For each of the papers listed, there is a brief summary, and a facsimile of the paper itself can be downloaded as a PDF.

It’s fascinating to be able to see some of this work as it was originally presented.

Update Tuesday, December 1, 23:18

Wired Science now has a short article reviewing their “greatest hits” from the scientific papers at the Trailblazing site.


Policing the Cloud

November 29, 2009

I’ve posted several notes here about some of the privacy and security implications of the trend toward “cloud computing”, in which data storage and processing are carried out, not on the user’s machine, but on servers provided by an Internet-based computing utility.  In a post back in September,  I talked about a new class of security vulnerabilities that had the potential to compromise cloud services run on virtual machines, if the attackers could run a virtual machine on the same physical server as their target.

These vulnerabilities exist because current virtual machine technology is, for the most part, unable to isolate virtual machines from one another completely; and, in any case, at some level the software that manages the virtual machines running on a server must know about them.  It turns out that this knowledge can be used to improve security, too.  A recent article in Technology Review discusses work carried out at IBM’s Watson Research Center and Zürich Research Lab, in which the researchers developed a technique they call “introspection monitoring” to try to detect malicious behavior by one or more virtual machines.

“It works by looking inside the virtual machine and trying to infer what it does. You don’t want malicious clients to give you all kinds of malware in their virtual machines that you will run in the cloud,” says Radu Sion, a computer scientist at Stony Brook University, who was not involved in the research.

In effect, the technique takes advantage of the same lack of complete isolation that creates the vulnerabilities, using it instead to watch for behavior that might indicate malicious activity or intent.
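
The article does not go into implementation detail, but the general shape of such a monitor can be sketched.  The fragment below is purely hypothetical: the metrics, the collect_metrics() hook, and the thresholds are my own placeholders for illustration, not anything from the IBM work.  The idea is simply that a privileged component outside the guest samples each virtual machine’s observable behavior and flags machines that suddenly deviate from their own baseline.

    # Hypothetical sketch of a hypervisor-side monitor; not the IBM technique itself.
    from collections import defaultdict
    from statistics import mean, stdev

    history = defaultdict(list)      # per-VM history of metric samples

    def collect_metrics(vm_id):
        """Placeholder for a privileged hypervisor hook that would return a
        tuple of per-VM observations (e.g. syscall rate, outbound connections
        per second, pages of executable memory mapped)."""
        raise NotImplementedError

    def is_suspicious(vm_id, sample, min_history=20, z_threshold=4.0):
        """Flag a VM whose latest sample deviates sharply from its own baseline."""
        past = history[vm_id]
        flagged = False
        if len(past) >= min_history:
            for i, value in enumerate(sample):
                series = [s[i] for s in past]
                mu, sigma = mean(series), stdev(series)
                if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                    flagged = True
        history[vm_id].append(sample)
        return flagged

A real monitor would need privileged hooks into the hypervisor and a far more robust behavioral model than a per-metric outlier test, but the general shape, observing guests from outside and comparing against a baseline, is the same.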

The research was presented at the recent ACM Cloud Computing Security Workshop, sponsored by Microsoft Research and held in conjunction with the ACM Conference on Computer and Communications Security.  At this point, the actual papers do not seem to be available on the Web, but there are two sets of slides from the presentations, here [PDF] and here [PDF].

The cloud computing environment does present some new security challenges; but it also opens up some new possibilities for detecting and preventing attacks.  It’s good to see that those possibilities are not being neglected.


If Not to Waist, to Waste

November 28, 2009

We have all been hearing reports for years about how rapidly Americans (and to a lesser degree, people in some other rich countries) are getting fatter.  Certainly, anyone who has walked around a shopping mall recently knows that there is plenty of pork on the hoof out there; and it’s been suggested that the industrialization of food production, and the push to sell more food, have contributed to this.

The current issue of The Economist has a report on some new research suggesting that, in addition to having larger waistlines, Americans are wasting more food than ever before.  A new paper by four researchers at the National Institute of Diabetes and Digestive and Kidney Diseases, published in PLoS ONE, the Public Library of Science’s online journal, compares the amount of food produced in the US, adjusted for imports and exports, with daily calorie consumption as estimated from nutritional surveys carried out by the Centers for Disease Control and Prevention.

Despite the clear evidence that a lot of food is being consumed, the researchers found that a lot was being thrown away, as well.  The amount wasted works out to about 40% of total calories produced:

They found that the average American wastes 1,400 kilocalories a day. That amounts to 150 trillion kilocalories a year for the country as a whole—about 40% of its food supply, up from 28% in 1974.

(A kilocalorie here is the same unit that is usually called just a calorie on food labels.)  This degree of waste is obviously a bit troubling in a world where a large number of people have trouble getting enough to eat.  It also has significant environmental costs.
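
The headline figures are easy to sanity-check.  Taking a US population of roughly 300 million (my round number, not the paper’s exact input), 1,400 wasted kilocalories per person per day does indeed scale up to about 150 trillion kilocalories a year:

    # Back-of-the-envelope check; the population figure is a rough assumption.
    population = 300e6            # people
    waste_per_person = 1400       # kilocalories wasted per person per day
    days_per_year = 365

    annual_waste = population * waste_per_person * days_per_year
    print("%.2e kilocalories per year" % annual_waste)   # about 1.5e14, i.e. ~150 trillion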

Producing these wasted calories accounts for more than one-quarter of America’s consumption of freshwater, and also uses about 300m barrels of oil a year. On top of that, a lot of methane (a far more potent greenhouse gas than carbon dioxide) emerges when all this food rots.

Although the United States as a whole has adequate supplies of fresh water, they are not uniformly distributed; and irrigated agriculture is a prodigious consumer of water in some relatively dry areas.

As the Economist article points out, some of this waste probably occurs because food is inexpensive enough that it can make economic sense for suppliers to carry, on average, some excess inventory, in order to avoid the opportunity costs of being out of stock.  Still, particularly in the current economic climate, in which some people even in rich societies are struggling to make ends meet, the idea of squandering so much of an essential resource is troubling.


Power by Osmosis

November 27, 2009

Back in July, I posted a note about some new research in power generation, which attempts to extract energy from the entropy increase that occurs when salt and fresh water are mixed.   Although that process is still at the research stage, it demonstrates that there are actually an awful lot of energy sources around us — we just need to figure out how to use them.

The New Scientist now has a report on another attempt to extract energy from combining salt and fresh water.  A pilot power plant has just been opened on the Oslo fjord in Norway that will use pressure differences resulting from osmosis to drive a turbine and thereby generate electricity:

Osmosis occurs wherever two solutions of different concentrations meet at a semipermeable membrane. The spontaneous passage of water from dilute to concentrated solutions through the membrane generates a pressure difference that can be harnessed to generate power.

Fresh and salt water naturally mix near the mouth of the fjord.  They are pumped into an osmotic cell, which generates water pressure equivalent to a column of water 120 meters high.
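
To put that figure in more familiar units, the pressure of a 120-meter water column follows from the usual hydrostatic relation P = ρgh, which works out to roughly 1.2 megapascals, or about 12 atmospheres.  The small calculation below also shows, for a turbine efficiency and flow rate that are purely my own illustrative assumptions rather than figures from the article, how such a pressure difference translates into a few kilowatts:

    # Hydrostatic pressure of the quoted 120 m water column, plus an illustrative
    # power estimate.  The efficiency and flow rate are assumptions, not figures
    # from the article.
    rho = 1000.0        # density of water, kg/m^3
    g = 9.81            # gravitational acceleration, m/s^2
    h = 120.0           # equivalent water-column height, m

    delta_p = rho * g * h                  # pressure difference in pascals (~1.18e6)
    print("pressure difference: %.2f MPa" % (delta_p / 1e6))

    efficiency = 0.8                       # assumed turbine efficiency
    flow = 0.005                           # assumed flow of 5 litres per second, in m^3/s
    power = efficiency * delta_p * flow    # hydraulic power in watts
    print("power at that flow: %.1f kW" % (power / 1e3))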

Tapping the energy potential inherent in mixing salt and fresh water, by whatever process, has some interesting advantages.  It is available on a nearly continuous basis, unlike wind or solar power, although seasonal variations in river flows would cause some fluctuations.  Also, many large cities are located on or near river estuaries, meaning that power could potentially be produced close to the ultimate customers.

The Norwegian plant is only a pilot; net of the power used to pump water into the plant, it can only produce a few kilowatts of power.  So it might be able to run the engineers’ coffee machines, but perhaps not much more.   However, building a pilot plant is normal procedure in engineering; it allows practical problems that may not have been apparent from the design work to be detected and, hopefully, solved.  (In the osmosis cell, for example, the possible effects of bacterial contamination or silt are not well understood.)  Still, it is a good sign that the work is being done.


Splitting Time and Space

November 26, 2009

The twentieth century saw the development of two new frameworks in theoretical physics: Einstein’s theory of General Relativity, which updated our understanding of gravity, and quantum mechanics, which gave us a window into processes that happen on a subatomic scale.  Predictions from both theories have been verified by experiments, but there is just one problem: the theories don’t seem to be consistent with each other.  Reconciling the two is the central problem in theoretical physics today.  Much of the work on various forms of string theory, for example, is directed towards this goal.

The December issue of Scientific American has an article describing another attempt to bridge this theoretical divide.  Gravity is really the anomaly here, since the other fundamental forces can already be described successfully in quantum-mechanical terms:

For instance, the electromagnetic force can be described quantum-mechanically by the motion of photons. Try and work out the gravitational force between two objects in terms of a quantum graviton, however, and you quickly run into trouble—the answer to every calculation is infinity.

Now a new theoretical approach has been developed by Petr Hořava, a physicist at the University of California at Berkeley.  His approach reverses one of the core concepts of General Relativity, namely that time is, in essence, just another dimension of the universe, like the more familiar spatial dimensions.  Prior to Einstein’s work, time had been conceived of as a sort of giant clock ticking away in the background, unaffected by the presence or absence of matter.  What Hořava’s approach does is restore the special character of time, “decoupling” it from space.  The effect of this is apparent, in the theory, only at very high energies; at lower energies, the solutions of General Relativity emerge as a special case:

The solution, Hořava says, is to snip threads that bind time to space at very high energies, such as those found in the early universe where quantum gravity rules. “I’m going back to Newton’s idea that time and space are not equivalent,” Hořava says. At low energies, general relativity emerges from this underlying framework, and the fabric of spacetime restitches, he explains.
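
The magazine article stays away from the mathematics, but in Hořava’s published work the “decoupling” shows up as an anisotropic scaling of time and space at very high energies (here b is an arbitrary rescaling factor and z is the so-called dynamical critical exponent):

    t \to b^{z}\, t, \qquad x^{i} \to b\, x^{i}

In Hořava’s proposal z = 3 in the high-energy, quantum-gravity regime, while at low energies the theory flows back to z = 1, the ordinary relativistic case in which time and space scale together.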

This is an intriguing idea: perhaps, just as Newtonian mechanics is a special case that applies under “normal” conditions, General Relativity is a special case that applies at the energy levels we see in the universe today.  There is some early evidence that Hořava is onto something:

So far it seems to be working: the infinities that plague other theories of quantum gravity have been tamed, and the theory spits out a well-behaved graviton. It also seems to match with computer simulations of quantum gravity.

There is also some evidence that the new theory might account for the mysterious “dark matter” that seems to be required to explain observed gravitational effects, such as the rotation speeds of galaxies.

The theory is still being developed, and is far from perfect.  But it’s another interesting attempt to reconcile two very successful ideas in theoretical physics; everyone’s intuition, including Einstein’s, has been that the reconciliation is possible, but that there is a piece of the puzzle we have not yet found.