Another Sleep Aid

June 24, 2010

Last week, I posted a note about a new sleep management system, developed by Microsoft Research, which uses the sleep state available in modern PCs to save energy across a network in a managed way.  Now there is a report on the PhysOrg.com site about another approach to tackling the same problem.  The SleepServer system, developed by computer scientists at the University of California, San Diego, has many features in common with the Microsoft system I discussed before; yet there are some key differences that make it worth examining.  SleepServer is described in a paper [PDF download] to be presented at the USENIX Annual Technical Conference in Boston this week.

Like the Microsoft Research system, SleepServer uses a sleep proxy, running on a server, to stand in for each sleeping client machine.  The overall SleepServer system is managed by the SSR-Controller application, running on the server.  Each managed client machine runs an SSR-Client application, which does two things: it keeps the SSR-Controller informed of the client’s current “network state” (e.g., on which ports it is listening), and it informs the SSR-Controller when the client is about to enter the sleep state.  When the client goes to sleep, the SSR-Controller sets up the sleep proxy to stand in for it, using gratuitous ARP probes to redirect traffic addressed to the client’s IP address to the proxy.
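
To make the redirection step concrete, here is a minimal sketch (my own illustration, not code from the SleepServer paper) of how a proxy might broadcast a gratuitous ARP announcement claiming a sleeping client’s IP address, so that switches and neighbors start delivering that address’s traffic to the proxy.  It assumes Linux raw sockets and root privileges; the interface name, MAC address, and IP address are placeholders.

import socket
import struct

def send_gratuitous_arp(iface, proxy_mac, client_ip):
    """Broadcast an ARP announcement binding client_ip to proxy_mac."""
    eth_broadcast = b"\xff" * 6
    # Ethernet header: broadcast destination, proxy source, ARP ethertype
    eth_header = eth_broadcast + proxy_mac + struct.pack("!H", 0x0806)
    ip_bytes = socket.inet_aton(client_ip)
    arp_payload = struct.pack(
        "!HHBBH6s4s6s4s",
        1,            # hardware type: Ethernet
        0x0800,       # protocol type: IPv4
        6, 4,         # hardware / protocol address lengths
        1,            # opcode: request (a gratuitous ARP announcement)
        proxy_mac,    # sender MAC = the sleep proxy
        ip_bytes,     # sender IP  = the sleeping client's address
        b"\x00" * 6,  # target MAC (unused in an announcement)
        ip_bytes,     # target IP  = the same client address
    )
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0806))
    sock.bind((iface, 0))
    sock.send(eth_header + arp_payload)
    sock.close()

# Hypothetical values, for illustration only.
send_gratuitous_arp("eth0", bytes.fromhex("001122334455"), "192.168.1.50")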

So far, this is almost the same as the Microsoft system.  What makes SleepServer different is that the proxy is not just an application that processes network requests; it is a virtual machine image of the client, running under a hypervisor (virtual machine monitor), such as Xen.  This is potentially a more powerful solution, since the images can naturally reflect idiosyncratic characteristics of individual clients; the images can also incorporate what the authors call “stub applications” — minimal but functional versions of real applications running on the client.  (For example, a long-running data transfer might continue to run in the virtual machine, using only the data communications “core” of the application, without any user interface.)  This approach provides more flexibility, if the network’s owner is willing to expend some effort in customization.  It is also more suitable for use in a mixed-platform environment, since network profiles and responses can be tailored at the client level.
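
To illustrate the flavor of a “stub application”, here is a toy example of my own (not code from the paper): the data-transfer core of a download client, with no user interface, of the sort a proxy VM could keep running while the real machine sleeps.  The URL and file name are hypothetical placeholders, and the sketch assumes the server honors HTTP Range requests.

import os
import urllib.request

def continue_download(url, path, chunk_size=64 * 1024):
    """Resume an HTTP download from wherever the local file currently ends."""
    offset = os.path.getsize(path) if os.path.exists(path) else 0
    request = urllib.request.Request(url, headers={"Range": "bytes={}-".format(offset)})
    with urllib.request.urlopen(request) as response, open(path, "ab") as out:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)

continue_download("http://example.com/large-dataset.tar.gz", "large-dataset.tar.gz")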

The authors claim that their tests demonstrate that the SleepServer system can achieve significant energy savings in a real network environment.

We detail results from our experience in deploying SleepServer in a medium scale enterprise with a sample set of thirty machines instrumented to provide accurate real-time measurements of energy consumption. Our measurements show significant energy savings for PCs ranging from 60%-80%, depending on their use model.

Probably there is no “one size fits all” approach to managing network “sleep” that applies to all environments; but it is good to see that some thoughtful work is being done on the problem.


Mozilla Releases Firefox 3.6.4

June 23, 2010

The folks at Mozilla have released a new version of the Firefox Web browser, version 3.6.4, for all platforms: Mac OS X, Linux, and Windows.  This version incorporates a new feature, intended to provide better resilience in the (unfortunately frequent) cases where a plugin, such as Flash or Silverlight, crashes.  These plugins will now be run in a “sandbox”, as a separate process.   If the plugin hangs or crashes, you will be able to reload the page to try again; if the plugin is causing other problems, its process can be killed without terminating the browsing session. (I have been using this feature in a beta build of Firefox for several weeks, and it works well.)  The new version also incorporates fixes for seven security vulnerabilities, and some miscellaneous other bugs.  Further information is in the Release Notes.

You can get the new version via the built-in update mechanism (main menu: Help / Check for Updates); this requires that you have administrative privileges.  Alternatively, installation binaries for all platforms, in 70+ languages, are available from the download page.


A 17th Century Wish List

June 22, 2010

As I’ve mentioned here before, the Royal Society of London is celebrating its 350th year of existence; it is the oldest scientific society in the world.   It has been putting on a series of exhibitions, and making a collection of historic scientific papers available online, as part of its celebration.

Today’s Washington Post reports that, as part of a new exhibition at its headquarters in Carlton House Terrace, London, the Royal Society is displaying a document written in the 1660s by Robert Boyle, an English chemist who was a founder of the Society.

In the 1660s, English chemist Robert Boyle wrote an extraordinary document, a combination of wish list and predictions of what science might achieve in the coming centuries. Found in his private papers, the list is a centerpiece of the exhibition “The Royal Society: 350 Years of Science,” running until November at the society’s headquarters in London.

The list is fascinating as a glimpse of how an eminent scientist of the 17th century thought the future might develop.  Some of the items reflect fairly basic human wishes:

  • The Prolongation of Life
  • The Recovery of Youth, or at least some of the Marks of it, as new Teeth, new Hair colour’d as in youth.

Some of them reflected problems of the day that have subsequently been solved, although not necessarily as Boyle envisioned the solution:

  • The makeing of Glass Malleable.  (Arguably addressed by plastics.)
  • The use of Pendulums at Sea and in Journeys, and the Application of it to watches.  (Accurate timekeeping at sea was vital to determining longitude.  The problem was initially solved by John Harrison, who invented the marine chronometer.)
  • The Art of Flying.

And there were a couple in which Boyle might have been channeling Dr. Timothy Leary:

  • Potent Druggs to alter or Exalt Imagination, Waking, Memory, and other functions, and appease pain, procure innocent sleep, harmless dreams, etc.
  • Freedom from Necessity of much Sleeping exemplify’d by the Operations of Tea and what happens in Mad-Men.

Some of these, of course, such as drugs for pain relief, have become commonplaces of modern medicine.

The exhibition runs until November.


Cheaper Chilling

June 21, 2010

Summer is now officially with us, and here in the metro Washington DC area, it’s been here unofficially for a while.  We are supposed to be getting some of our 95² days later this week: 95° F, 95% relative humidity.  Going outside from an air-conditioned building feels like being swallowed.  Legend has it that at one time, presumably before air conditioning, British diplomats in Washington got extra pay for a tropical assignment.

So we are pretty glad to have air conditioning around here, but it can get expensive.  However, a new air conditioning process developed by the National Renewable Energy Laboratory [NREL] may help lessen the heat on our bank accounts.  The process, described in a press release from the NREL, uses a combination of evaporative cooling to lower air temperature and a desiccant solution to lower humidity, producing conditioned air with significant energy savings:

The U.S. Department of Energy’s National Renewable Energy Laboratory has invented a new air conditioning process with the potential of using 50 percent to 90 percent less energy than today’s top-of-the-line units. It uses membranes, evaporative cooling and liquid desiccants in a way that has never been done before in the centuries-old science of removing heat from the air.

Evaporative cooling is based on the fact that the phase change of water from liquid to vapor requires a great deal of heat.  It is the same principle your body uses to cool you by sweating.  I have visited a number of places in hot but very dry climates (for example, around Palm Springs CA) where evaporative cooling is used even in outdoor spaces.  A fine mist of water is sprayed into the air, and the resulting cooling can make it quite pleasant, at least in the shade, even when the air temperature is 100° F.  Of course, it does not work terribly well in a humid climate like ours here.
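
To put a rough number on this (my own back-of-the-envelope figures, not from the NREL release): the latent heat of vaporization of water is roughly 2.45 MJ/kg near room temperature, so evaporating even a modest amount of water carries away a lot of heat.

# Rough estimate of evaporative cooling power.  The latent heat value is an
# approximate figure for water near room temperature; the evaporation rate is
# a made-up example.
LATENT_HEAT_J_PER_KG = 2.45e6      # J removed per kg of water evaporated
evaporation_rate_kg_per_hour = 1.0

heat_removed_per_hour_j = evaporation_rate_kg_per_hour * LATENT_HEAT_J_PER_KG
cooling_power_watts = heat_removed_per_hour_j / 3600.0

print("Evaporating {} kg of water per hour removes about {:.0f} W of heat".format(
    evaporation_rate_kg_per_hour, cooling_power_watts))
# -> about 680 W of cooling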

In humid climes, adding water to the air creates a hot and sticky building environment. Furthermore, the air cannot absorb enough water to become cold.

The NREL’s invention, called the Desiccant-Enhanced eVaporative air conditioner (DEVap), addresses the humidity problem by using a syrupy liquid desiccant, typically a concentrated solution of calcium or lithium chloride, to absorb water from the air.  The device also uses a special hydrophobic membrane to keep the water separated from the cooled air stream; the membrane has pores big enough to allow the passage of water vapor, but not of the liquid solution.    The water can be evaporated out of the desiccant solution by heat, possibly from solar energy.

In addition to its energy efficiency, the DEVap has the advantage of not requiring any CFC or HCFC refrigerant gases.  These are of environmental concern, because they are much more potent greenhouse gases than carbon dioxide.  (In terms of global warming impact, one pound of CFC refrigerant is approximately equivalent to 1 ton, or 2000 pounds, of carbon dioxide.)

The NREL is still working on improving the efficiency of its device, but it plans to license the technology for industrial use.

The Technology Review also has an article on this development.


Google Unveils Command-Line Access

June 20, 2010

For many computer users, mentioning anything about the command line may evoke memories of early PCs with MS-DOS; for some others, it may just evoke a “Huh?”.   But those of us who have been around for a while do remember using computers, even before MS-DOS, when the command line was the only game in town; and we still managed to do some useful work.  Smart systems administrators, and particularly server administrators, know that the command line is still a very efficient way of accomplishing some jobs, especially since it lends itself so well to use in scripts.

So I was interested to see an announcement on the Google Open-Source blog about the introduction of the Google Command-Line Tool.  This tool, written in the Python language, uses the Google Python Data APIs to provide command-line access to Google Web services (the examples, taken from Google’s documentation, are all based on Linux/UNIX command syntax):

Ever wanted to upload a folder full of photos to Picasa from a command prompt? We did, a lot, last summer. It made us want to say:

> google picasa create --title "My album" ~/Photos/vacation/*.jpg

So we wrote a program to do that, and a whole lot more.

The tool currently supports Google’s Blogger, Calendar, Contacts, Docs, Picasa, and YouTube services; other Google services are promised for future releases.  More information is available from the GoogleCL project home page.

To give a simple example of how this tool might be useful, observe that it allows you to submit a post to Blogger with the command:

> google blogger post --blog "MyBlog" --tags "stuff,software" post.html

This probably is not the way you would submit an ordinary post, but it might be quite useful in the context of a system that could generate a post automatically from a template.  A system for announcing software updates and releases might be built to post blog announcements in this way, for example (a small sketch of the idea follows below).  Particularly in the UNIX/Linux environment, the underlying philosophy of command-line tools is that they are designed to be used together (with pipes, for example), which makes even a simple scripting language like that of the shell (such as sh, bash, or csh) very powerful.
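
Here is that sketch (my own example, not from Google’s documentation): it fills in a trivial HTML template and hands the result to the same google blogger post command shown above.  It assumes GoogleCL is installed and authorized, and that a blog named “MyBlog” exists.

import subprocess

TEMPLATE = """<h2>{title}</h2>
<p>Version {version} of {package} is now available for download.</p>
"""

def announce_release(package, version):
    """Generate a small announcement page and post it to Blogger via GoogleCL."""
    html = TEMPLATE.format(title="{} {} released".format(package, version),
                           package=package, version=version)
    with open("post.html", "w") as f:
        f.write(html)
    # Same command and options as the example above; "MyBlog" is a placeholder.
    subprocess.call(["google", "blogger", "post",
                     "--blog", "MyBlog",
                     "--tags", "stuff,software",
                     "post.html"])

announce_release("ExampleApp", "1.2.3")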

So, if you have spent all, or almost all, of your computing life using a graphical user interface [GUI], the idea of having a command line tool may seem quaint; but it is quite consistent with Google’s focus on computing in the cloud, and with the concepts underlying the forthcoming Chrome OS.

The current version of the software is available for download from the project page in two forms: as source code in a compressed tar archive, and as a Debian (.deb) package for Linux systems.  Using the tool also requires the gdata-python-client library.  The project page also has examples of the tool’s use.


Internet Fraud Alert Launched

June 19, 2010

This past week saw the launch of a new global security project, Internet Fraud Alert, which aims to provide a single, secure channel by which security researchers can report stolen consumer credentials (such as passwords and credit card numbers).   According to an article about the announcement at Ars Technica, the service should make it easier to ensure that, when stolen information is found, it can be communicated promptly to the appropriate organizations.

It is often difficult for people who discover vast amounts of stolen credentials stashed on servers and sites such as Pastebin.com to bring it to the attention of the proper authorities. Many organizations don’t bother to make reporting stolen data easy, and even then, it can be difficult to convince a bank or law enforcement that the information found is legitimate.

The technology used at the site was developed by Microsoft, and donated to the National Cyber-Forensics and Training Alliance, a non-profit organization that provides training to fight cyber-crime.  Other sponsoring organizations include Accuity, the American Bankers Association, the Anti-Phishing Working Group, Citizens Bank, eBay, the Federal Trade Commission, the National Consumers League, and PayPal.

One potential problem with the new service is that anonymous submissions are not allowed.  This might prevent an insider at a questionable organization from providing information for fear of the consequences if his identity is disclosed.  Still, it seems like a worthwhile step to make life a bit more difficult for the Bad Guys.

Update Saturday, 19 June, 15:40 EDT

Microsoft has a news release available with some additional background information on the project.

