Take the Tablets …

January 31, 2010

If you have not spent the last couple of weeks at the bottom of an abandoned mine shaft, or in a submerged nuclear submarine, you have undoubtedly heard about Apple’s introduction of its new iPad tablet computer.  Visually, the device looks like a really big iPhone; it uses the same multi-touch interface technology.  Opinions on the device have been divided.  Many people think that the iPad will be a “game changing” product; they point to the success of the iPod and the iPhone as examples.  Others, myself included, are somewhat more skeptical.

Tablet computers with a touch-screen interface are not a new idea.  Apple itself introduced a small one, called the Newton, back in the early 1990s; it was not a commercial success.  Microsoft has also tried to promote tablet devices.  They have achieved some success in niche markets, but the category has never been much of a money-spinner.  I think it is worth thinking about why.

To me, the tablet computer has always seemed like a solution in search of a problem, at least as far as the mass market (the market in which people buy ordinary PCs and laptops) is concerned.  One might conceive of tablets as replacements for laptop computers, or portable media devices (like an iPod, but for all types of media), or perhaps something else.  However, I think there are two key issues that have prevented tablets from gaining much ground in the market.

If a tablet is considered as a replacement for a laptop computer, it has one outstanding handicap: the lack of a keyboard.  Now keyboards can be annoying, and we all know the story about how the QWERTY layout is sub-optimal, but there is really no good alternative for getting a bunch of text or data input into the device quickly.  (Long before the era of PCs, I was delighted to get a portable typewriter when I was in high school, since I could, and still can, type much faster than I can write in longhand.)   Yes, touch screen devices can draw a keyboard on the screen; but, for a touch typist, using one is painful.  Handwriting recognition takes a slow process, writing in longhand, and makes it even slower with extra errors as a bonus.

The success of tablets in some niche markets, like construction and health care, actually emphasizes this point.  In both cases, the device can be used to store and retrieve important documents (engineering drawings, or patients’ records), and either a touch-screen keyboard or handwriting recognition can be used to capture notes, amendments, and so on.  In these applications, portability is a real plus, and the amount of data that has to be input by the human user is relatively small.

The other issue, which affects laptop computers as well as tablets, is battery life.  The very limited time that these devices can be used away from an AC power source really limits their attractiveness as multi-media devices.  Longer battery life is a key reason that E-book readers, like the Amazon Kindle®, use a different display technology, so-called “electronic ink”.  In its current incarnations, this display method is much less power-hungry than the backlit LCD displays used in laptops and tablets, but it only works in grayscale, and has a slow refresh rate that makes it unsuitable for any sort of video.

The Apple iPad doesn’t really address the keyboard problem at all, although apparently an auxiliary keyboard will be available.  Apple claims a battery life of 10 hours, which is very good by comparison with many current laptops, but manufacturers’ battery-life ratings are so often ridiculously optimistic that I think the jury is still out on that point.

If I think about the average business person who travels today with a cell phone and a laptop, it is not clear to me that the iPad buys that user much.  It can’t replace the cell phone (it doesn’t have a phone), and it’s unlikely to be a satisfactory replacement for the laptop.  Even if it could replace the laptop, the user is still saddled with two devices, and their attendant chargers, cords, and other impedimenta.

Apple certainly has a knack, which should not be discounted, for introducing new gadgets that people really want; and perhaps they’ve done it again.  But to me, this seems a bit more like the original introduction of the iPod, which I think really took off only once the iTunes store completed its ecosystem.  It does seem likely that Apple’s move will refocus people’s attention on the tablet category, and perhaps Apple, or someone else, will come up with a really innovative ecosystem for it.


Is Your Browser Unique?

January 29, 2010

The Electronic Frontier Foundation [EFF] is conducting an interesting research project, called Panopticlick (Bentham fans will recognize the reference), to attempt to find out whether it is possible to track individuals across the Web without employing the usual suspects: Web bugs, cookies, and so on.  The hypothesis, basically, is that because browsers can report a good deal of configuration information to the Web server, it might be possible to identify individuals passively, just by tracking browser characteristics.

Many users are surprised at the amount of data that their browsers can be coaxed into disclosing.  The list below will give you some idea (but is not exhaustive):

  • User Agent string, which identifies the browser and version.  On this machine, running Firefox 3.6 under Kubuntu Linux 8.4, my browser reports: “Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6”
  • Time zone
  • Screen size and color depth
  • Plugin configuration
  • System fonts
  • Cookie settings
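
To get a feel for how little effort this takes, here is a small TypeScript sketch (my own illustration, not the EFF’s actual collection code) that reads most of the items above using ordinary, unprivileged browser APIs.  Note that system fonts are the one item a plain script cannot enumerate directly; Panopticlick reportedly relies on a plugin such as Flash or Java for that.

```typescript
// Illustrative only -- not Panopticlick's code.  Every value below comes from
// standard browser APIs available to any web page, with no special permissions.

interface FingerprintData {
  userAgent: string;
  timezoneOffsetMinutes: number;
  screenSize: string;
  colorDepth: number;
  plugins: string[];
  cookiesEnabled: boolean;
}

function collectFingerprint(): FingerprintData {
  return {
    userAgent: navigator.userAgent,                           // browser and version
    timezoneOffsetMinutes: new Date().getTimezoneOffset(),    // time zone
    screenSize: `${screen.width}x${screen.height}`,           // screen size
    colorDepth: screen.colorDepth,                            // color depth
    plugins: Array.from(navigator.plugins).map(p => p.name),  // plugin configuration
    cookiesEnabled: navigator.cookieEnabled,                  // cookie settings
    // System fonts are omitted: there is no direct JavaScript API for them.
  };
}

// A tracker only needs to reduce this to a single identifier, for example by
// hashing the JSON encoding and logging it alongside the visitor's requests.
console.log(JSON.stringify(collectFingerprint()));
```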

From admittedly limited anecdotal evidence, it appears that plugin configurations and font collections tend to be most distinctive.

If you’d like to test your own browser, you can do so by visiting this page.  (Note that you must have JavaScript enabled for the full test to run.)  I’ve run it on Firefox 3.6 here (as mentioned earlier), and also with Google Chrome 4.0.249.43 on this machine.  My Chrome configuration was unique among the 208,662 browsers that had been tested at that time, and both the plugin and font configurations were individually unique.  I tested Firefox a couple of minutes later; its configuration was unique in a sample of 209,884, although no individual configuration item was unique.  If you test your browser, I invite you to leave a comment with your results.

I have not tried it yet, but will be interested to see to what extent, if any, the “private” or “incognito”  modes in some browsers make a difference.

The EFF has a page of suggested defenses against browser tracking; I’m not sure how useful they really are.  Perhaps a Firefox or Chrome extension could be developed that would allow the returned values to be modified by the user, or randomized.
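
For what it’s worth, the core of such an extension is not hard to sketch.  The fragment below is only a toy — plain page-level TypeScript rather than a real extension, and browsers vary in how much of this they allow — but it shows the idea of replacing a reported value with something deliberately generic:

```typescript
// Toy illustration only: shadow one fingerprintable property with a bland value.
// A real extension would have to inject this before any page script runs, and
// cover many more properties (plugins, screen size, time zone, and so on).

const genericValue = "Mozilla/5.0 (generic)";  // deliberately bland, made-up string

Object.defineProperty(navigator, "userAgent", {
  get: () => genericValue,   // every read now returns the spoofed value
  configurable: true,
});

console.log(navigator.userAgent);  // -> "Mozilla/5.0 (generic)"
```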


Preventing Deadlocks

January 28, 2010

The PhysOrg.com site has an article about an interesting new development in the ongoing battle against software defects.  The article itself, unfortunately, is written in such general, vague language that it is hard to understand what the new software, called Dimmunix, actually does.  Here is the opening paragraph:

A new IT tool, developed by the Dependable Systems Lab at EPFL in Switzerland, called “Dimmunix,” enables programs to avoid future recurrences of bugs without any assistance from users or programmers.

Now, if this tool could actually prevent the recurrence of a bug without any intervention by the user or the programmer, that would be pretty terrific — one might say almost magical.  Magic, unfortunately, is still in short supply, but Dimmunix does do something that, although a bit more limited, is still very useful.

The tool, which was developed by researchers at the Dependable Systems Lab of the École Polytechnique Fédérale de Lausanne [EPFL], is basically designed to detect and prevent deadlock conditions (sometimes called a “deadly embrace”).  A deadlock is a condition in which two or more processes or threads issue requests for system resources in such a way that the requests can never be satisfied; typically, the system hangs or freezes when this occurs.

Consider the following simple example.  Suppose we have two processes, numbers 100 and 200, and two resources, A and B,  that must be allocated exclusively to a process (that is, the resources cannot be shared).  Suppose the following sequence of events takes place:

  1. Process 100 starts
  2. Process 200 starts
  3. Process 100 requests and is granted resource B
  4. Process 200 requests and is granted resource A
  5. Process 100 requests resource A
  6. Process 200 requests resource B
  7. Deadlock!
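
For readers who like to see this in code, here is a small sketch of the same sequence (TypeScript with a toy promise-based lock; the particular language doesn’t matter):

```typescript
// Minimal promise-based mutex: acquire() resolves to a release callback once
// every earlier holder of the lock has released it.
class Mutex {
  private tail: Promise<void> = Promise.resolve();
  acquire(): Promise<() => void> {
    let release!: () => void;
    const released = new Promise<void>(resolve => (release = resolve));
    const acquired = this.tail.then(() => release);
    this.tail = released;
    return acquired;
  }
}

const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));

const resourceA = new Mutex();
const resourceB = new Mutex();

async function process100() {
  const releaseB = await resourceB.acquire();   // step 3: granted B
  await sleep(50);                              // give the other task time to run
  console.log("100 is waiting for A");
  const releaseA = await resourceA.acquire();   // step 5: never granted
  releaseA(); releaseB();
}

async function process200() {
  const releaseA = await resourceA.acquire();   // step 4: granted A
  await sleep(50);
  console.log("200 is waiting for B");
  const releaseB = await resourceB.acquire();   // step 6: never granted
  releaseB(); releaseA();
}

// Step 7: each task holds one resource while waiting for the other, so
// "both finished" is never printed.  Swapping the two acquire() calls in
// process100, so that both tasks request A before B, removes the deadlock.
Promise.all([process100(), process200()]).then(() => console.log("both finished"));
```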

It should be obvious at this point that, absent some external intervention, both processes will wait forever.  Now in a simple case like this one, there are some relatively easy fixes.  For example, the system might require that resources be requested in alphabetical order; failure to do so would cause the offending process to be terminated.  But in a real system, with complex inter-relationships between resources, the problem is more complicated.

What Dimmunix does is to monitor and record the resource requests made by processes in the system.  When a deadlock occurs, the pattern of requests preceding it is noted.  If that pattern of requests occurs again, the system takes action to prevent the deadlock.  Thus, over time, a learning effect occurs, and the system gradually acquires an “immunity” to deadlocks.  It is also possible to pool the information collected across a network, so that machines in a group can all benefit from their individual experiences.

Dimmunix is a tool for giving software systems such an immune system against deadlocks, without any assistance from programmers or users. Dimmunix is well suited for general purpose software (desktop and enterprise applications, server software, etc.) and a recent extension allows application communities to collaborate in achieving enhanced immunity.
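
To make the “remember the bad pattern” idea a bit more concrete, here is a deliberately simplified sketch.  It is nothing like Dimmunix’s actual implementation — the names and data structures below are invented purely for illustration — but it shows the basic shape of recording a deadlock signature and checking later requests against it:

```typescript
// Toy sketch of signature-based deadlock avoidance (illustration only, not Dimmunix).

interface TaskState {
  id: string;
  holds: string[];        // resources currently held
  wants: string | null;   // resource currently being requested, if any
}

// Canonical, task-id-independent description of "who holds what, who wants what".
function signature(tasks: TaskState[]): string {
  return tasks
    .map(t => `${[...t.holds].sort().join(",")}->${t.wants ?? ""}`)
    .sort()
    .join(" | ");
}

const knownDeadlockSignatures = new Set<string>();

// Called when a deadlock has been diagnosed: remember its pattern.
function recordDeadlock(tasks: TaskState[]): void {
  knownDeadlockSignatures.add(signature(tasks));
}

// Called before granting a request; if true, the runtime can stall or reorder
// the request instead of walking into the same deadlock again.
function wouldRepeatKnownDeadlock(tasks: TaskState[]): boolean {
  return knownDeadlockSignatures.has(signature(tasks));
}

// The pattern from the example above: 100 holds B and wants A, 200 holds A and wants B.
recordDeadlock([
  { id: "100", holds: ["B"], wants: "A" },
  { id: "200", holds: ["A"], wants: "B" },
]);

// A later pair of tasks heading into the same shape is flagged before it hangs.
console.log(wouldRepeatKnownDeadlock([
  { id: "300", holds: ["A"], wants: "B" },
  { id: "400", holds: ["B"], wants: "A" },
])); // true
```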

The software is available in two versions: one for Java applications, and one for POSIX applications written in C or C++; the code is available here.   The research paper, presented at the 8th USENIX Symposium on Operating Systems Design and Implementation in December 2008, can be downloaded here.

Despite the rather wooly summary article that I mentioned earlier, this is an interesting and potentially very useful piece of work.  Squashing bugs that lead to deadlocks is notoriously difficult in complex systems, because they are frequently timing-dependent, and are difficult to replicate on purpose.  So an automatic monitoring tool could be of great value.


Happy Birthday, Nat Geo

January 27, 2010

It was on this date, January 27, 1888, that the National Geographic Society was founded in Washington DC, for “the increase and diffusion of geographic knowledge”.  Today, it is one of the largest non-profit scientific and educational institutions in the world. (An article at the Wired Web site has a more detailed account.)

The Society started publishing a journal less than a year after it was founded, but the early publication didn’t bear much resemblance to the iconic yellow-bordered National Geographic magazine of today.  It was originally a scholarly journal, a collection of articles sent out to Society members.  As far as I can tell, there were no pictures!

One of the founding members of the Society was the inventor Alexander Graham Bell, who became president of the Society following the death of its first president, Gardiner Hubbard.  When Bell took over, the Society was losing money and still had a very limited membership.  He decided to emphasize the magazine, and hired a full-time editor, Gilbert Grosvenor, to develop it; he paid Grosvenor out of his own pocket.  The idea of tying membership to the magazine proved to be a success; over the next ten years the Society’s membership grew from 1,400 to 74,000, and it increased nearly ten-fold in the following decade.

One of the factors that contributed to the success was the decision to make the magazine’s content more accessible to ordinary readers.  The other was the focus on visually striking photography, which, oddly, came about almost by accident:

Surprisingly, National Geographic’s hallmark photojournalism began as a desperate attempt to fill 11 pages of the January 1905 issue before it went to press.

Fortunately, Grosvenor had received a submission of photographs from Lhasa, Tibet, and decided to include them, even though he didn’t know how they would be received.  Needless to say, the idea caught on, and the magazine has been a leader in photojournalism ever since.

It was the first U.S. publisher to establish a color-photo lab in 1920, the first to publish underwater color photographs in 1927, the first to print an all-color issue in 1962, and the first to print a hologram in 1984.

The Society’s growing revenue stream enabled it to stay true to the scientific spirit of its founders, by financing many well-known exploration projects:

Some notable projects it has sponsored include Robert Peary’s expedition to the North Pole, Hiram Bingham’s excavation of the ancient Incan city Machu Picchu, Jacques-Yves Cousteau’s underwater exploration, Louis and Mary Leakey’s research on the history of human evolution in Africa, and Dian Fossey’s and Jane Goodall’s respective studies of gorillas and chimpanzees.

It’s also opened at least a small window onto a wider world for a lot of people.  I was given a subscription and membership back when I was in elementary school, and I can still remember how fascinating it was to see pictures of jungles, tigers, glaciers, and undersea creatures, and to realize that there was a lot more out there than I saw routinely in the suburbs of Washington DC.  It prompted my interest in photography and travel, too.

So I hope the Society will be around for many years more, and wish them all the best.  If you are ever in Washington DC, try to make time to drop in at the Society’s headquarters at 17th and M Streets, NW.  There’s always an interesting exhibit or two on offer.


Detecting Tsunamis via Internet

January 26, 2010

Most readers will remember the appalling destruction and loss of life resulting from the Indian Ocean tsunami in December, 2004.  That tsunami was caused by a magnitude 9.2 earthquake in a subduction zone near Sumatra, which produced waves up to 30 meters high, and affected coastal areas thousands of miles from the earthquake’s epicenter.

The disaster pointed up two problems in defending coastal areas against tsunamis:

  • Detection: There are existing networks of pressure sensors on the ocean floor, which can detect tsunamis by the change in the weight of the water column above the sensor.  However, sensor networks are expensive to install, and only five countries have them (Australia, Chile, Indonesia, Thailand, and the US); their coverage is far from complete.
  • Warning: Even when a suspected tsunami is detected, existing mechanisms for disseminating warnings may not be able to get the word out quickly enough.

According to an article in New Scientist, a group of researchers at the US National Oceanic and Atmospheric Administration [NOAA] has come up with a new approach to the first problem, detection.  They believe that existing undersea communications cables could be used to detect the passage of tsunamis, by using sensors to measure the change in electric field caused by the movement of an unusually large quantity of salt water (which of course contains many electrically-charged ions) through the Earth’s magnetic field.  Their simulations indicate that the induced voltage might be on the order of 0.5 volt, which ought to be detectable after correcting for background noise.  This information would not provide any directional indication, but might be quite useful when integrated with data from other sources.
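
To see why a number like 0.5 volt is plausible, here is my own back-of-envelope estimate (the figures are representative deep-ocean values, not the NOAA team’s): the horizontal motion of conductive seawater through the vertical component of the Earth’s magnetic field induces an electric field along the cable, which is then integrated over the stretch of cable lying under the wave.

```latex
% Rough order-of-magnitude estimate with assumed, illustrative values.
% Shallow-water theory: horizontal water velocity under a tsunami of height
% \eta ~ 0.5 m in water of depth h ~ 4000 m:
u \;\approx\; \eta\sqrt{g/h} \;\approx\; 0.5 \times \sqrt{9.8/4000} \;\approx\; 0.025\ \mathrm{m/s}
% Motional electric field in a vertical geomagnetic field B_z ~ 4\times10^{-5}\ \mathrm{T}:
E \;\approx\; u\,B_z \;\approx\; 0.025 \times 4\times10^{-5} \;\approx\; 10^{-6}\ \mathrm{V/m}
% Integrated over the few hundred kilometres of cable under the wave (L ~ 3\times10^{5}\ \mathrm{m}):
V \;\approx\; E\,L \;\approx\; 10^{-6} \times 3\times10^{5} \;\approx\; 0.3\ \mathrm{V}
```

That lands in the same ballpark as the figure quoted from the simulations.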

This is quite a clever idea, and might be a valuable way to augment the information gathered by the existing sensor networks without great incremental expense.  However, the second part of the problem, getting the warning information distributed in time, is still a tough nut, particularly in those poorer countries that have especially vulnerable coastlines.


Google Releases Chrome 4

January 26, 2010

Google has announced the official release for Windows of version 4 of their Chrome Web browser.  The main change in this version, as discussed in an article at Ars Technica, is a much improved mechanism for installing and managing extensions.  The new release also has a feature called Bookmark Sync, which allows you to use your Google account on the Web to synchronize your bookmarks across different machines.  Google says that the new version also features improved performance.  The Google Chrome blog has more details about the changes.

I’ve been using a beta version of Chrome on Linux for several weeks now; on the whole, it works very well, although I don’t think it can quite match Firefox for stability yet.

You can download Google Chrome here.

