Getting Enough Sleep

June 17, 2010

Back in the early days of PCs, when MS-DOS more or less ran the world, reducing computers’ energy consumption was fairly simple: like incandescent lamps, PCs were turned off when they were not being used.  (I remember the educational effort we had to put forth, when we first introduced networked UNIX workstations, to teach users not to just switch them off.)  Today, most people leave their machines on almost all the time, especially if they are networked; in enterprise networks, this is often required so that applications for backups, network management, malware scanning, and the like can run.  Modern hardware generally has the ability to go into “sleep mode”, which substantially reduces energy consumption, but sleep is hard to monitor or manage on a machine-by-machine basis.

A new report from Microsoft Research discusses an interesting approach to managing sleep.  Instead of relying on individual sleep settings on each machine, they set out to design a sleep management system that will work across a network segment.  The research is described in a paper to be presented at the annual USENIX technical conference in a few days; you can download the paper here [PDF].   The system that the team developed and deployed has two main components:

  • A sleep notifier process that runs on each of the clients whose slumber is to be managed.  When a client is about to go to sleep, it notifies the other component.
  • A sleep proxy application, which runs on a dedicated machine on the same network segment as the clients.  This machine monitors network traffic on behalf of the sleeping clients, and wakes them when something requires their attention.

In essence, when the client is about to go to sleep, usually because a stated time interval has elapsed with no activity, the following occurs:

  1. The sleep notifier on the client tells the sleep proxy that the client is going to sleep.  The notification also specifies which TCP ports the client is listening on.
  2. The sleep proxy uses ARP probes to assume the sleeping client’s IP address.
  3. The sleep proxy monitors network traffic directed to the client; if it sees traffic that the client needs to handle (e.g., a TCP SYN request on an active port), it sends the client a “wake on LAN” packet.  The original requestor will re-send the request when the first attempt times out, and the awakened client will then respond as usual.
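
The paper does not reproduce the proxy’s code, but the “wake on LAN” packet it sends in the last step has a well-known, standard format: 6 bytes of 0xFF followed by the target machine’s 6-byte MAC address repeated 16 times, conventionally sent as a UDP broadcast.  A minimal sketch (the MAC address and port are illustrative, not from the paper):

```python
import socket

def magic_packet(mac):
    """Build a Wake-on-LAN "magic packet": 6 bytes of 0xFF followed by
    the target's 6-byte MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet on the local segment (UDP port 9 is
    conventional; port 7 is also commonly used)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

pkt = magic_packet("00:11:22:33:44:55")
print(len(pkt))  # 102 bytes: 6 + 16 * 6
```

Because the packet is recognized by the network interface hardware itself, the rest of the sleeping machine need not be running at all.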

The idea is a clever one.  Some current network cards have a limited ability to filter traffic intended for a sleeping machine, and to “awaken” it only for certain types of packets.  Because the sleep proxy runs on a general-purpose computer, however, it can accommodate fairly complex rules (which might, for example, be time-of-day dependent); it also, of course, allows for monitoring of how much sleep is actually achieved, and of what events cause some machines to suffer from “insomnia”.  (The team found that, on their network, IT’s network applications, like malware scanning, accounted for most of the “wakefulness” observed.)

The specific implementation tested does have some areas for improvement.  It produces a noticeable delay when a client machine first wakes from sleep, although most of this is due to the time the client itself takes to wake up.  It also currently handles only TCP traffic; some modifications would be necessary to handle UDP-based applications.

Still, this is interesting work.  Managing energy consumption is a relative newcomer to the list of IT management concerns.  Tools that help us understand how to do this job better are potentially quite valuable.

Ken Jennings v. Watson?

June 16, 2010

Back in May of last year, I wrote a couple of posts here about an IBM project to build a software system that could be a successful contestant on Jeopardy!, the popular, long-running TV game show.  IBM had already managed, in 1997, to have its software, running on its Deep Blue supercomputer, win a chess match against Garry Kasparov.  Jeopardy! is in many ways a tougher nut to crack: the clues, which are grouped into categories, are given in natural language, and the contestant must come up with a question that the clue answers.  For example, a recent category was “The 50 US States”, and the clue was, “The only state with a two-word name where neither word occurs in any other state name.”  The correct response is, “What is Rhode Island?”  (As a long-time viewer of Jeopardy!, I’d characterize that as a fairly easy one.)  Another example, from many years ago, was in the category “Words”; the clue was “A moral reservation, or an apothecary’s unit”.  Answer: “What is a scruple?”

The New York Times now has a magazine preview article up that gives an overview and progress update on the project.   The IBM team knew from the outset that it was tackling a tough assignment.

Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.

Playing the game successfully requires not only a large store of factual knowledge, but the ability to identify relationships and links quickly, often on the basis of word play in the categories or clues.  There are existing systems that answer natural language questions (notably Wolfram Alpha, developed by Stephen Wolfram), but they rely on carefully constructed databases crafted to include the links necessary to answer particular types of questions.

IBM apparently now thinks that the system, which runs on a Blue Gene supercomputer and is called Watson (after Thomas, not Dr. John H.), is close to being ready for a public test.  The company has been running in-house matches against human contestants, and gradually refining the set of algorithms that Watson employs.  One difference from some previous “artificial intelligence” approaches is that Watson uses a large number of algorithms to look for relationships, and takes a statistical view of the world, trying to determine which potential answers are most likely to be right.  This kind of approach is made feasible, in part, because the development of the Internet has made an enormous body of written material, of all kinds, available in digital form.  And, of course, much cheaper processing power and memory capacity mean that Watson can “learn” from a truly immense “textbook”.
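
Watson’s actual machinery is proprietary and vastly more elaborate, but the statistical idea can be sketched in a few lines: several independent algorithms each score a candidate response, and a weighted combination determines which candidate is most likely to be right.  (The candidates, scorers, and weights below are invented purely for illustration.)

```python
def rank_candidates(candidates):
    """Rank candidate responses by a weighted sum of scorer outputs --
    a crude stand-in for the statistical model that estimates which
    answer is most likely to be right."""
    def confidence(evidence):
        return sum(weight * score for weight, score in evidence)
    return sorted(candidates, key=lambda c: confidence(candidates[c]),
                  reverse=True)

# Hypothetical evidence for the "Rhode Island" clue: each candidate
# gets (weight, score) pairs from three imagined scorers -- answer-type
# match, supporting-passage search, and source popularity.
evidence = {
    "What is Rhode Island?": [(0.5, 0.9), (0.3, 0.8), (0.2, 0.6)],
    "What is New Jersey?":   [(0.5, 0.2), (0.3, 0.4), (0.2, 0.7)],
    "What is New York?":     [(0.5, 0.1), (0.3, 0.3), (0.2, 0.9)],
}
ranked = rank_candidates(evidence)
print(ranked[0])  # the highest-confidence response
```

The interesting engineering is, of course, in the scorers themselves and in learning the weights, not in the final weighted sum.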

Unlike the Deep Blue chess software, which impressed many but had little commercial application, the technology in Watson is something IBM sees as potentially very applicable to real-world systems.

John Kelly, the head of I.B.M.’s research labs, says that Watson could help decision-makers sift through enormous piles of written material in seconds. Kelly says that its speed and quality could make it part of rapid-fire decision-making, with users talking to Watson to guide their thinking process.

One idea mentioned in the article is a medical diagnostic “assistant”, which could help doctors cope with the constant stream of new information on diseases and treatments.

He [Kelly] imagines a hospital feeding Watson every new medical paper in existence, then having it answer questions during split-second emergency-room crises. “The problem right now is the procedures, the new procedures, the new medicines, the new capability is being generated faster than physicians can absorb on the front lines and it can be deployed.”

The producers of Jeopardy! have agreed to have a special televised match between Watson and selected former winners, possibly as early as this fall.  It should be fascinating to watch.

Apple Updates Snow Leopard

June 16, 2010

Apple has released a new version, 10.6.4, of the OS X “Snow Leopard” operating system for the Mac.  This release fixes more than 20 security flaws in the system; further details are in Apple’s security bulletin.  The new version can be obtained through the built-in Software Update mechanism, or can be downloaded here.  (There are several different download versions for different system configurations.)

Because of its security content, I recommend that you install this update as soon as you reasonably can.

Windows XP Flaw Being Exploited

June 15, 2010

There have been several reports today that a recently-discovered flaw in the Help and Support Center component of Microsoft Windows XP and Server 2003 is being actively exploited.   The vulnerability is serious; if a user were to view a maliciously-crafted Web page, or click on a malicious link in an E-mail message, the attacker could remotely execute code with the same privileges as the local user.  The attack makes use of the HCP protocol used by the Help and Support Center.

Microsoft has not yet released a fix for this problem; it does, however, have a suggested work-around that disables the HCP protocol.  Instructions for doing this manually (which involves editing the Windows Registry — not for the faint of heart) are included in the Security Advisory mentioned above, under the heading “Workarounds”.  Alternatively, you can visit this Microsoft Support page, and click on the FixIt link under Enable this Fix.  Either method will unregister the HCP protocol and prevent the exploit from succeeding.  The downside of the work-around is that some functions of the Help and Support Center will not work, or will crash.

I’ll post another note here if additional information or a patch becomes available.

Encrypted Cloud Computing

June 14, 2010

The Technology Review has an interesting report on a new cryptographic technique that might, in the future, allow computing to be done “in the cloud” using encrypted data, without the data ever having to be present in clear text.   The technique is called “fully homomorphic encryption”, and its feasibility was proved in a PhD thesis by an IBM researcher.

In 2009 Craig Gentry of IBM published a cryptographic proof that was that rare thing: a true breakthrough. He showed that it was possible to add and multiply encrypted data to produce a result that–when decrypted–reveals the result of performing the same operations on the original, unencrypted data.

In other words, suppose we have two numbers, α and β, and suitable encryption and decryption functions E(x) and D(x), respectively.  If

α + β = S

and we compute the sum of the encrypted values,

E(α) + E(β) = S*

then it will be true that

D(S*) = S

So we are able to add the two encrypted values to get a sum that, when decrypted, is the sum of the original (unencrypted) numbers.  A similar trick also works for multiplication.  (For the mathematically literate and adventurous, Mr. Gentry’s original thesis can be downloaded here [PDF, 209 pages].)
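
Gentry’s lattice-based construction is well beyond the scope of a blog post, but the flavor of homomorphic encryption can be seen in a much older observation: “textbook” RSA is homomorphic with respect to multiplication.  The tiny parameters below are, of course, completely insecure; the point is only that an operation on the ciphertexts corresponds to the same operation on the plaintexts.

```python
# Tiny, thoroughly insecure "textbook" RSA parameters, for illustration only.
p, q = 61, 53
n = p * q          # modulus: 3233
e = 17             # public exponent
d = 2753           # private exponent; e * d == 1 (mod (p-1)*(q-1))

def E(x):
    return pow(x, e, n)   # encrypt: x^e mod n

def D(x):
    return pow(x, d, n)   # decrypt: x^d mod n

a, b = 7, 12
product_of_ciphertexts = (E(a) * E(b)) % n
print(D(product_of_ciphertexts))  # 84, i.e. a * b
```

What made Gentry’s result a breakthrough is that his scheme supports both addition and multiplication, and arbitrarily many of them — which is enough, in principle, to compute any function on encrypted data.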

Two European researchers, Nigel Smart, of Bristol University in the UK, and Frederik Vercauteren, of Katholieke Universiteit Leuven, in Belgium, have recast Gentry’s framework in a somewhat simpler form, using integers and polynomials, rather than vectors and matrices.

The original scheme’s reliance on large matrices and vectors made it impractical because of the complexity of working with every element of the matrices at each step, and the fact that their complexity grows significantly with each extra operation on the data. Smart and Vercauteren’s rewrite of the scheme sidesteps that enough to allow testing of actual implementations of Gentry’s idea on a desktop computer.

Although the test implementation is still somewhat limited (to about 30 consecutive arithmetic operations), it does give a wider group a chance to experiment with the technique and, hopefully, improve it.  Gentry and an IBM colleague, Shai Halevi, have been experimenting with another variant of the technique.

At this point, no one can really say when or if a practical implementation of this approach will be developed; but the interest in it is high, because it potentially allows even very sensitive information to be processed in the cloud.  It has the potential to change the way we think about some aspects of information security.

The Atlantic Garbage Patch

June 13, 2010

Last summer, I posted a couple of notes about Project Kaisei, an expedition to the Great Pacific Garbage Patch, a huge collection of plastic bottles and miscellaneous rubbish, concentrated by prevailing winds and currents into an area of the North Pacific ocean about the size of Texas.   There has been some speculation that similar “garbage patches” might exist in other oceans, as well.

According to a report at the site, just such a floating rubbish heap was encountered in the Sargasso Sea by a French ocean survey.  As in the Pacific, much of the detritus was plastic containers of various kinds, which tend to become entangled in the Sargassum seaweed for which the area is named.  The expedition’s findings confirm an earlier report, which found a pervasive “soup” of plastic fragments in the North Atlantic.

Long trails of seaweed, mixed with bottles, crates and other flotsam, drift in the still waters of the area, known as the North Atlantic Subtropical Convergence Zone. Cummins’ team even netted a Trigger fish trapped alive inside a plastic bucket.

But the most nettlesome trash is nearly invisible: countless specks of plastic, often smaller than pencil erasers, suspended near the surface of the deep blue Atlantic.

No one has come up with a practical way of removing all of this stuff from the ocean, so the only remedy is to try to prevent its getting there in the first place.  It is sad to think that the most lasting artifact of our civilization might be an island of crap.

Improving Lithium-Air Batteries

June 12, 2010

There is a lot of ongoing discussion about the relative merits of different energy sources, and the desirability of moving to more use of renewable resources, such as solar and wind power, and less use of hydrocarbon-based fuels, such as oil and coal.  There is also considerable interest in the development of new, more practical electric vehicles, and the infrastructure to support them, in place of cars powered by internal combustion engines.  One key problem that affects all of these efforts is that of energy storage in general, and of electrical energy storage in particular.   The sun does not shine, and the wind does not blow, all the time; better storage facilities would reduce the need for relatively dirty backup generating facilities.  Better batteries could give electric vehicles better performance and longer range.

According to an article in the Technology Review, a group of researchers at MIT has developed a new form of catalyst, using gold and platinum nanoparticles, that significantly improves the efficiency of lithium-air batteries.  These batteries have many attractive characteristics: they are much lighter than lithium-ion batteries of comparable capacity, for example, with an energy density potentially three times as great.   However, they have been plagued by limited lifetimes and low efficiency.   The new catalyst appears to have the potential to address both of these problems:

When lithium-air batteries are discharged, the lithium metal reacts with oxygen to form lithium oxide and release electrons. When charged, oxygen is released and lithium metal reforms. The new catalysts promote these reactions, and so reduce the amount of energy wasted as the cells are charged and discharged. The gold atoms in the catalyst facilitate the combination of lithium and oxygen; the platinum helps the opposite reaction, freeing the oxygen.

Making these reactions more efficient also reduces the accumulation of lithium oxide, which is what shortens the battery’s useful life.
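
Written as a chemical equation, the cell reaction described above runs left to right on discharge and right to left on charge.  (I am assuming here that the “lithium oxide” the article refers to is lithium peroxide, the discharge product usually cited for nonaqueous lithium-air cells.)

```latex
% Discharge proceeds left to right; charging reverses it.
2\,\mathrm{Li} + \mathrm{O_2} \;\rightleftharpoons\; \mathrm{Li_2O_2}
```

The gold in the catalyst helps the forward (discharge) reaction; the platinum helps the reverse (charge) reaction.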

Research into improvement of the catalyst is continuing: one path is exploring techniques that require smaller amounts of the precious metals; another is examining the use of manganese oxide as an alternative.
