Mozilla Releases Thunderbird 10.0

January 31, 2012

Along with the Firefox 10.0 release today, Mozilla has released version 10.0 of its Thunderbird e-mail client for Mac OS X, Linux, and Windows.  The new version includes improved search capability (including the ability to do a Web search), as well as a number of bug fixes.  More details are available in the Release Notes.

You can get the new version via the built-in update mechanism (Help / Check for Updates), or you can download versions for all platforms, in more than 50 languages.

Update Wednesday, 1 February, 10:58 EST

The new version also fixes seven security vulnerabilities, five of which Mozilla rates as Critical; the details are here.


Mozilla Releases Firefox 10.0

January 31, 2012

The folks at Mozilla have released a new major version, 10.0, of their Firefox browser for Mac OS X, Windows, and Linux.  The new version brings a few new features, including:

  • Improvements to the extension compatibility check
  • Anti-aliasing for WebGL
  • CSS3 3-D transforms implemented
  • User interface tweaks

More detailed information is available in the Release Notes.

You can obtain the new version via the built-in update mechanism (Help / About Firefox / Check for Updates), or you can download a complete installation package, in a variety of (human) languages.

Update Wednesday, 1 February, 10:55 EST

The new version also fixes eight security vulnerabilities, five of which Mozilla rates as Critical.  The details are here.


Boycotting Pricey Journals

January 28, 2012

I’ve written here occasionally about efforts to make more academic and research materials available on the Internet, most recently with the trial program to provide free access to part of the JStor archive of academic journals.  One of the reasons that these changes matter is that, as a rule, academic journals are quite expensive.  (If you have ever tried to access an article from one of these journals online, you will have seen a request for payment just to read a single article.  Charges of $25-30, or more, are not uncommon.)  From one perspective, this is hard to justify.  Although there is obviously some fixed cost in running a journal, and in producing a printed version, the marginal cost of allowing an additional person to read it is effectively zero.

This is especially annoying because free access to information is a foundation of scientific and other academic inquiry, and because much of the value of a peer-reviewed journal is added at no cost to the publisher.  Scholars and scientists write and submit papers, of course; but they also serve as reviewers and members of editorial boards, frequently on a volunteer basis.  To add insult to injury, the publishers also engage in other questionable practices, such as only offering journals in pre-defined “bundles”, as if they were cable TV channels.

There is now a nascent protest movement among academics to boycott their part of this process.  The action was sparked by a blog post by Timothy Gowers, a mathematician at the University of Cambridge and a Fields Medal recipient.  A Web site, thecostofknowledge.com, has been set up, where researchers can make a public pledge not to do any or all of the following:

  • Submit articles for publication
  • Referee articles submitted by others
  • Perform editorial work

The initial action is being taken against the publisher Reed Elsevier, which publishes some of the highest-priced journals.  At this writing, 1335 researchers have signed up.  John Baez, a mathematical physicist at the University of California, Riverside, has an overview post on his Azimuth blog.

This is an encouraging development.  For too long, some of these journal publishers have not only bitten the hand that feeds them, but charged the rest of the body for the privilege.  Or, as Adlai Stevenson once said, “Eggheads of the world, unite!  You have nothing to lose but your yolks.”


When Did We Start this Password Thing?

January 27, 2012

I’ve talked many times here about the problems with passwords as a means of authenticating computer users (most recently here and here), and about the search for better alternatives.  Just a few days ago, I mentioned DARPA’s Active Authentication project to develop new methods of authentication.  How did this all get started, anyway?

Wired has posted an article, by Robert McMillan, that attempts to answer this question.  It’s amusing to reflect on some of this history, and perhaps it holds a few lessons, too.

As the article points out, the idea of passwords in general has a long history, going back at least to the Romans.  However, it is not entirely clear when the idea was first applied to computer system access.  One possible candidate is the SABRE Reservation System, developed by IBM for American Airlines in 1960.  But McMillan thinks the most likely candidate is the Compatible Time-Sharing System [CTSS], developed at MIT in the mid-1960s, under the direction of Fernando Corbató.  (The photo accompanying the article, showing Corbató standing amidst the system’s equipment, is perhaps of interest to historians of computers or fashion.)

It probably arrived at the Massachusetts Institute of Technology in the mid-1960s, when researchers at the university built a massive time-sharing computer called CTSS. The punchline is that even then, passwords didn’t protect users as well as they could have.

The article goes on to suggest that even in CTSS, passwords were something of a security failure.  I think this argument is a bit unfair.  The two security breaches cited in the article were both the result of someone obtaining a copy of the password file, either due to a system error or deliberate subterfuge.  (The password file was, apparently, not encrypted, of which more anon.)  Criticising the use of passwords based on attacks of this kind is like criticising a lock because the burglar opened it with a stolen key.  The lesson to be taken away from these examples is that effective security is a system, not a particular technology.  No matter how good your passwords are, someone who can steal a list of plain text passwords (or capture them with a keystroke logger) can still access your system.  Similarly, being careful with passwords while leaving an unencrypted copy of the password file accessible will not protect you, any more than putting three deadbolt locks on your door while leaving the windows open.
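
The point about the unencrypted password file deserves a concrete illustration.  The standard remedy today (far beyond anything CTSS could have run, and purely my own sketch in Python, not anything from the article) is to store only a salted hash of each password, so that a stolen copy of the password file does not directly reveal anyone’s password:

```python
# A minimal sketch (my illustration, not the CTSS design): store a salted
# hash of each password instead of the password itself, so a stolen copy
# of the password file does not directly reveal anyone's password.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a (salt, digest) pair suitable for storage."""
    salt = os.urandom(16)  # a random salt defeats precomputed-table attacks
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("open sesame", salt, digest))                   # False
```

An attacker who steals the stored (salt, digest) pairs must still guess passwords one at a time; the salt rules out precomputed tables, and the deliberately slow key derivation makes each guess expensive.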

Having an authentication system that is better, in principle, than passwords, is a good thing.  At the time CTSS was developed, though, passwords could have provided effective security had there not been some serious goofs in the implementation.  (We should also remember the very limited resources of that system, by today’s standards.  A system that would take 30 minutes to process a login would not be worth much, although it might be very secure.)   Also, the history of security systems seems to show that, even with a “provably secure” system, like one-time pad cryptography, implementation and user errors can create embarrassing failures.



Critical Flaws in pcAnywhere

January 26, 2012

Symantec’s pcAnywhere software provides remote access and remote desktop capabilities for Windows-based systems.  pcAnywhere is not likely to be installed on the typical home system, but it is fairly widely used by businesses.  Organizations’ help desks use it, for example, so that the technical staff on the phone with a troubled user can see the same screen that the user sees.

Symantec has just taken the somewhat unusual step of issuing a white paper, Symantec pcAnywhere Security Recommendations [PDF], which discusses potential security risks from using the product, and recommends that, because of several current vulnerabilities, pcAnywhere be disabled until Symantec has issued appropriate patches.

At this time, Symantec recommends disabling the product until Symantec releases a final set of software updates that resolve currently known vulnerability risks. For customers that require pcAnywhere for business critical purposes, it is recommended that customers understand the current risks, ensure pcAnywhere 12.5 is installed, apply all relevant patches as they are released, and follow the general security best practices discussed herein.

Some of the vulnerabilities are, according to the white paper, linked to a theft of some Symantec source code back in 2006.  The stolen code apparently included some encryption and other security functions that were implemented in a vulnerable way.  The principal risk is of a man-in-the-middle attack against the encryption and encoding weaknesses, but other attacks are also possible.  The white paper describes some mitigation steps and gives a summary of recommended security practices for pcAnywhere users.  Besides the pcAnywhere product itself, the vulnerable software is bundled with three other Symantec products: Altiris Client Management Suite; Altiris IT Management Suite versions 7.0 or later; and Altiris Deployment Solution with Remote v7.1.

Symantec has also released a Security Advisory for pcAnywhere and associated products, regarding two serious vulnerabilities that do not seem to be related to the code theft.   Successful attacks against these flaws might result in remote execution of arbitrary code, or unauthorized modification of local files.  The code execution vulnerability is very serious, since the relevant execution context will often be System.  There is a hot fix available for supported versions of pcAnywhere.

The SANS Internet Storm Center has a diary entry on the pcAnywhere issues.  They report seeing some evidence of systematic probes of TCP port 5631, which pcAnywhere uses.  This probably indicates attempts to discover and exploit vulnerable systems, so the ISC’s advice, and mine, is: patch now.
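
If you want a quick check of whether machines on your own network are exposing the pcAnywhere service, a rough first pass (a sketch only, with placeholder addresses; a closed port does not prove the product is absent, nor does an open one prove it is vulnerable) is to see whether anything is listening on TCP port 5631:

```python
# A rough exposure check (host addresses below are placeholders): see
# whether anything is listening on pcAnywhere's TCP data port, 5631.
import socket

def port_open(host: str, port: int = 5631, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["192.168.1.10", "192.168.1.11"]:  # only scan hosts you administer
    state = "OPEN" if port_open(host) else "closed/filtered"
    print(f"{host}:5631 {state}")
```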

Using any remote access facility involves some risk, especially if the remote user is in an insecure location.  Users of pcAnywhere should keep an eye on the security news, and on Symantec’s site, so that they can stay on top of this one.


Evolving Autonomous Autos

January 25, 2012

A little over a year ago, I wrote about Google’s research project to develop and test a self-driving automobile.  This was not a totally novel idea; the DARPA Grand Challenge, a prize competition for driverless vehicles, had been running for several years.  Though Google has tested its technology on normal California roads (always with a human back-up driver on board), there are still technical, legal, and cultural obstacles to be overcome before we will be able to sit back and enjoy our coffee while the car drives us to work.

Technology Review reports that auto manufacturers, especially in Europe, are taking a more gradual approach to automated driving technology.  I’ve written here about a European project to allow communicating vehicles to form up as “road trains” on highways, to improve energy efficiency and safety.  And the automakers have already begun to introduce incremental driving assistance capabilities.

[BMW’s Werner] Huber and executives at other European automakers say the automated driving revolution is already here: new safety and convenience technologies are beginning to act as “copilots,” automating tedious or difficult driving tasks such as parallel parking.

The expectation is that these features will be introduced first on high-end models, then gradually make their appearance on a broader range of cars, depending of course on their reception by customers.  Some current models provide parallel parking assistance, and other capabilities are appearing as well.

For example, for $1,350, people who purchase BMW’s 535i xDrive sedan in the United States can opt for a “driver assistance package” that includes radar to detect vehicles in the car’s blind spot. For another $2,600, BMW will install “night vision with pedestrian detection,” which uses a forward-facing infrared camera to spot people in the road.

Probably one of the marketing goals for these features is to get people more accustomed to the idea of the car “thinking for itself”.  One doesn’t have to look at very many car advertisements to realize that the product is often sold as an extension of the driver, a framing that is probably not ideal for selling a fully autonomous car.  There are also legal obstacles to be dealt with.  Traffic codes assume, at least implicitly, that a person is in control of the vehicle while it is moving.  There will also be interesting issues of software liability, if it appears that a failure of the automatic system caused a collision.

Still, this is potentially valuable experimentation.  Travel by automobile is a big consumer of fossil fuels, as well as being fairly dangerous compared with other forms of transport.  Anything that might make it safer and more energy efficient is worth a look.


