Building the Analytical Engine

April 30, 2012

I’ve written here before about the project, launched by John Graham-Cumming, a British writer and programmer, to build a working model of the Analytical Engine, designed in the 19th century by the British mathematician Charles Babbage.  The Engine, which has a fair claim to being the world’s first design for a stored-program computer, was never built, owing to its size (about that of a steam locomotive) and complexity.  Lord Byron’s daughter Ada, Lady Lovelace, for whom the Ada programming language is named, wrote a program for the Analytical Engine to compute Bernoulli numbers, and was possibly the world’s first programmer.
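
For the curious, the underlying computation is easy to express in modern terms.  Here is a minimal sketch, in Python, of the standard recurrence for Bernoulli numbers; this is just the mathematics, not a simulation of the Engine, and it makes no attempt to reproduce Lovelace’s actual method.

```python
# Bernoulli numbers from the standard recurrence (illustration only):
#   B_0 = 1, and for m > 0:  sum_{k=0..m} C(m+1, k) * B_k = 0,
# which we solve for B_m at each step.
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0 .. B_n as exact fractions."""
    b = [Fraction(1)]                                  # B_0 = 1
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, k) * b[k] for k in range(m))
        b.append(-acc / (m + 1))                       # solve for B_m
    return b

for i, b_i in enumerate(bernoulli(8)):
    print(f"B_{i} = {b_i}")   # B_1 = -1/2, B_2 = 1/6, B_4 = -1/30, ...
```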

Last fall, the Science Museum in London undertook the digitization of Babbage’s various designs for the Engine (he was an inveterate tinkerer), with the aim of coming to a final design for the proposed replica.

A video is now available of a TED talk that Mr. Graham-Cumming gave at Imperial College, London, on the Analytical Engine project; in it, he discusses the design of the Engine and how the project is proceeding.  Although not a comprehensive description, it’s an entertaining overview of the problem.


Prof. Felten’s New Blog

April 30, 2012

In discussing technology policy and security issues here, I’ve frequently mentioned Professor Ed Felten of Princeton, director of the University’s Center for Information Technology Policy [CITP], who is serving a term as the Chief Technologist of the US Federal Trade Commission [FTC].  I’ve just discovered that, in his new capacity, he has recently started a blog, Tech@FTC; he describes the goal this way:

Our goal is to talk about technology in a way that is sophisticated enough to be interesting to hard-core techies, but straightforward enough to be accessible to the broad public that knows something about technology but doesn’t qualify as expert.  Every post will have an identified author–usually me–who will speak to you in the first person.  We’ll aim for a conversational, common-sense tone–and if we fall short, I’m sure you’ll let us know in the comments.

I have not yet had a chance to read all the posts that are there, even though there are not that many yet, but I am sure that they will be worth reading.  I’ll mention two recent posts that I have read.  The first explains why “hashing” data, such as Social Security numbers, does not make the data anonymous.  The second discusses why pseudonyms aren’t anonymous, either.  (I’ve previously written a couple of times about the difficulty of “anonymizing” data.)
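
The basic problem with hashing an identifier like a Social Security number is that the space of possible inputs is tiny by computational standards, so an attacker can simply hash every candidate and look for a match.  Here is a toy sketch of that dictionary attack (my own illustration, not code from Prof. Felten’s post), using a well-known specimen SSN:

```python
# A toy dictionary attack on a hashed SSN (illustration only).
# There are fewer than 10^9 possible SSNs, so hashing one does not
# hide it: an attacker hashes every candidate and compares.
import hashlib

def sha256_hex(s):
    return hashlib.sha256(s.encode()).hexdigest()

# A record "anonymized" by publishing only the hash of the SSN:
published_hash = sha256_hex("078-05-1120")   # a well-known specimen SSN

def recover_ssn(target_hash):
    """Brute-force the entire SSN space until the hash matches."""
    for n in range(10**9):
        candidate = f"{n:09d}"
        candidate = f"{candidate[:3]}-{candidate[3:5]}-{candidate[5:]}"
        if sha256_hex(candidate) == target_hash:
            return candidate
    return None

# This loop is slow in pure Python, but commodity GPU hashing tools
# cover the whole space in minutes; either way, the value comes back.
# recover_ssn(published_hash)  ==>  "078-05-1120"
```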

I’m looking forward to reading the rest of what’s there, and to Prof. Felten’s future posts.  At the time his appointment to the FTC post was announced, I was pleased that someone so well-qualified had been chosen.  Reading the new blog reinforces that feeling.


Ubuntu Linux 12.04 LTS Released

April 28, 2012

Canonical Ltd, the corporate sponsor of the Ubuntu Linux distribution, has announced the availability of version 12.04 LTS, code named “Precise Pangolin”†, for Desktop, Server, Cloud, and Core products.

There are 54 product images and 2 cloud images being shipped with this 12.04 LTS release, with translations available in 41 languages.  The Ubuntu project’s 12.04 archive currently has 39,226 binary packages in it, built from 19,179 source packages, so lots of good starting points for your imagination!

This is a long-term support [LTS] release.  A new version of Ubuntu is released twice yearly, in April and October, giving version numbers of the form YY.MM, from the year and month of the release.  Most releases receive updates for security issues and bug fixes for 18 months, but every two years an LTS release is made.  Historically, these have received three years of update support on the desktop, and five years for the server edition.  In this case, Canonical has said that all versions will receive five years of updates.  The LTS releases are especially helpful to those who may have sizable Ubuntu deployments, as well as those who just want less frequent OS updates.
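
For anyone keeping score at home, the numbering convention is simple enough to write down.  The sketch below is my own illustration; note that the rule for identifying LTS releases (the April release of even-numbered years) has one historical exception, version 6.06, which slipped from April to June 2006.

```python
# Ubuntu's YY.MM version numbering, sketched (illustration only).
def ubuntu_version(year, month):
    """Version string from the release date, e.g. (2012, 4) -> '12.04'."""
    return f"{year % 100}.{month:02d}"

def is_lts(year, month):
    # LTS releases have been the April releases of even-numbered years
    # (6.06, delayed from April to June 2006, is the lone exception).
    return month == 4 and year % 2 == 0

assert ubuntu_version(2012, 4) == "12.04" and is_lts(2012, 4)
assert ubuntu_version(2011, 10) == "11.10" and not is_lts(2011, 10)
```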

In addition to the Linux operating system, the distribution contains a large number of applications, including the Firefox browser, the LibreOffice office suite, and media players.  Many more applications are available in the Ubuntu software repositories, and can be downloaded and installed easily using the Software Center tool included in the distribution.  As usual, the CD images available for download can be used as a bootable “live CD”, so that you can try things out without any modifications to your system; it also allows you to do a standard installation to the hard disk.  More information about this version is available in the Release Notes.

The base Ubuntu distribution for desktop and laptop computers uses Canonical’s Unity desktop shell [GUI].  Other versions are also available.  The Kubuntu version uses the KDE graphical interface, which some users prefer; it is available for download here.  Another variant, Xubuntu, uses the Xfce desktop manager; users with older hardware, especially, may find it of interest, since its resource requirements are more modest.  You can download Xubuntu here.  The announcement from Canonical also lists some other, more specialized, variants.

The Ubuntu Linux system, and the tools included with it, are all free software; you are not only allowed, but also encouraged, to share the software with others.

Ars Technica has an initial review of the new release.

† The Ubuntu project uses alliterative animal names for its releases.  So we have had Dapper Drake, Hardy Heron, Intrepid Ibex, and Oneiric Ocelot, among others.


Another Look at DC Power

April 25, 2012

Back in December, I posted a note about the resurgence of interest in DC power distribution systems, especially within data centers.  Although large-scale electricity distribution systems (such as regional or national grids) have used AC for years, since the resolution of the “War of the Currents”, and obviously constitute a workable solution — I am, after all, writing this at about 10:00 PM — the data center environment differs in some significant ways from that of the average utility customer.  The electronic devices themselves almost all work on DC power (converted from the AC supplied by the grid); backup power supplies for emergencies almost always use batteries, which supply DC.  As I noted in that earlier post, the use of DC distribution in large data centers could potentially produce significant increases in energy efficiency.
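
To see where the potential efficiency gains come from, consider the chain of conversions between the utility feed and the server itself.  The comparison below is a back-of-the-envelope sketch; the stage efficiencies are assumptions chosen for illustration, not measurements from any particular facility.

```python
# Back-of-the-envelope comparison of conversion losses (the stage
# efficiencies here are illustrative assumptions, not measured data).
from math import prod

# Conventional AC distribution: a double-conversion UPS (rectifier,
# then inverter), then each server's own AC-to-DC power supply.
ac_path = {"UPS rectifier": 0.95, "UPS inverter": 0.95, "server PSU": 0.90}

# DC distribution: rectify once at the facility level, leaving each
# server with only a simpler DC-to-DC conversion stage.
dc_path = {"facility rectifier": 0.95, "server DC-DC": 0.96}

for name, stages in (("AC path", ac_path), ("DC path", dc_path)):
    print(f"{name}: {prod(stages.values()):.1%} end-to-end")
# AC path: 81.2% end-to-end
# DC path: 91.2% end-to-end
```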

Technology Review has a new article that discusses the possible use of DC power distribution on a larger scale.  According to Greg Reed, director of the Power & Energy Initiative at the University of Pittsburgh, the growth in the use of electronic devices, especially consumer electronics, has meant that a growing share of the total demand for power is, ultimately, demand for DC.  Currently, this DC power is supplied by the battery chargers, power supplies, and “wall warts” of our PCs, smart phones, flat-screen TVs, and other gadgets.  Reed thinks that this trend will continue.

“Within the next 20 years we could definitely see as much as 50 percent of our total loads be made up of DC consumption,” he [Reed] says. “It’s accelerating even more than we’d expected.”

He goes on to argue that a “DC takeover” of the grid is “inevitable”, due to improvements in efficiency, from eliminating AC/DC conversions, and to the increased use of consumer electronics, solar panels, and LED lighting, all of which are more “at home” in a DC-powered world.

It is certainly true that there is technology today, unavailable in Edison’s time, that makes high-voltage DC transmission over significant distances feasible.  I expect this type of distribution will be used more in the future for installations where it makes sense.  I also think that the use of DC power distribution in data centers will increase; moreover, this kind of local grid installation probably makes sense in a number of other contexts, like large commercial buildings.  Electric vehicles, too, use batteries that are recharged with DC power, so there is probably a role for local DC grids there.

Dragan Maksimovic, an expert in power electronics at the University of Colorado in Boulder, estimates that solar-powered vehicle chargers his group is developing should cut power losses from 10 percent of what the panels produce to just 2 percent.

So, I think there is a pretty good case for deployment of DC power distribution on the local scale.  In a data center, it makes little sense to have a large number of servers, each with its own power supply, taking AC from the local utility and turning it into, say, 24 volt DC.  However, I very much doubt that we will see any wholesale switch to DC power distribution on a large scale.  The US power grid represents a huge capital investment, and it does work.


Harvard Library’s Faculty Advisers Push for Open Access

April 24, 2012

The movement toward providing open access to scholarly research seems to be continuing.  I’ve noted before the decisions by a number of different organizations, including Princeton University, the Royal Society, the JStor research archive, and, most recently, the World Bank, to provide open access to some or all of their research publications.  According to an article at Ars Technica, a faculty advisory council to the Harvard University Library has just issued a memorandum urging all faculty members to move to open access publication as much as possible, because of what it terms “untenable” and “unsustainable” trends in the pricing of traditional academic journals.

… the Faculty Advisory Council is fed up with rising costs, forced bundling of low- and high-profile journals, and subscriptions that run into the tens of thousands of dollars. So, it’s suggesting that the rest of the Harvard faculty focus on open access publishing.

The library’s current budget for journal subscriptions runs to about $3.75 million.  Admittedly, this is not a large sum compared to Harvard’s endowment of roughly $32 billion; but it is clear from the language of the memorandum that the members of the council have had enough of continually increasing prices that, in their view, have little economic justification.  Some of their complaints, such as the “bundling” of journal subscriptions, will sound familiar to anyone following the boycott of Reed Elsevier journals, organized via the Web site thecostofknowledge.com.  (Incidentally, when I first wrote about the boycott back in January, 1,335 researchers had signed up to participate; the current total is 10,200.)  The council also feels that the increasing consumption of library resources by these expensive journals will compromise other parts of the library’s mission.

The Faculty Advisory Council to the Library, representing university faculty in all schools and in consultation with the Harvard Library leadership,  reached this conclusion: major periodical subscriptions, especially to electronic journals published by historically key providers, cannot be sustained: continuing these subscriptions on their current footing is financially untenable.

They urge faculty members to submit research to open access journals, or at least those with reasonable access policies; to try to raise the prestige of open access publication; and to consider resigning from the editorial boards of journals with unreasonable subscription policies.

The recommendations are not binding on the faculty, but I hope that they will realize, along with academics elsewhere, that they do have the power to effect considerable change.  After all, they supply the “raw material”, in the form of their papers, that the journals need to exist, and they also supply most of the editorial work, usually for no compensation.  For too long, some of these journal publishers have not only bitten the hand that feeds them, but charged the rest of the body for the privilege.


Mozilla Releases Thunderbird 12.0

April 24, 2012

Along with the Firefox 12.0 release today, Mozilla has released version 12.0 of its Thunderbird E-mail client, for Mac OS X, Linux, and Windows.  The new version includes improvements to global search, and to RSS feed handling.  It also fixes 13 security flaws, 6 of which Mozilla rates as Critical.  (Firefox was affected by many of the same vulnerabilities; the two packages share a substantial amount of code.)  More details of the changes are available in the Release Notes.

You can get the new version via the built-in update mechanism (Help / Check for Updates), or you can download versions for all platforms, in more than 50 languages.  Because of the security content of this release, I recommend that you update your system fairly soon.

Update Tuesday, 24 April, 18:06 EDT

The link for the Release Notes, above, originally pointed to the notes for version 11.0; it has now been fixed to point to version 12.0.  My apologies for the error.

