Google Releases Chrome 21

July 31, 2012

Google today released version 21 of its Chrome browser in the stable channel for all platforms.  The new version number is 21.0.1180.57 for Mac OS X and Linux, and 21.0.1180.60 for Windows and Chrome Frame.   The new version fixes 15 identified security vulnerabilities, one of which is rated as Critical severity; six are rated High severity.   It also includes new APIs for high quality audio and video communication, allowing authorized applications to access the computer’s microphone and camera without a plugin.  Further details are available in the Release Announcement; the new API capabilities are described further in a post on the Official Chrome Blog.

Because of its security content, I recommend that you update your systems as soon as you conveniently can.   Windows and Mac users should get the new version via the built-in update mechanism.  Linux users should get the updated package from their distributions’ repositories, using their standard package maintenance tools.

You can check the version of Chrome that you have by clicking on the tool menu icon (the little wrench), and then selecting “About Google Chrome”.


New Tool Breaks MS-CHAPv2 Passwords

July 31, 2012

The ThreatPost security news service from Kaspersky Lab has an article reporting on a new password-cracking tool developed by the security researcher whose nom de guerre is Moxie Marlinspike.  The tool, ChapCrack, which was described in a presentation by Marlinspike and David Hulton at the DefCon 20 security conference last weekend, is designed to crack passwords used in the MS-CHAPv2 (Microsoft Challenge Handshake Authentication Protocol version 2) protocol.  ChapCrack extracts the credentials from a captured protocol negotiation “handshake”; the credentials can then be submitted to a cloud-based service, which recovers the password.

Marlinspike’s ChapCrack tool has the ability to take packet captures that include an MS-CHAPv2 network handshake–the back-and-forth negotiation that sets up the secure connection between machines–and remove the relevant credentials from the capture. The user can then submit the encrypted credentials to CloudCracker and will eventually receive in return an encrypted packet that he can insert into ChapCrack again. The tool then will crack the password.

The protocol is used as part of PPTP (Point-to-Point Tunneling Protocol), Microsoft’s protocol for implementing VPNs (Virtual Private Networks).  It has been available as a component of Windows since Windows 95, and has been quite popular, even though it has some known security vulnerabilities; Bruce Schneier and Mudge analyzed the protocol in 1999.  In his blog post discussing the attack in detail, Marlinspike says that the protocol is still widely used in two cases:

  • PPTP-based VPNs
  • Enterprise WPA2 wireless networks
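
The reason the attack reduces to a single DES crack is worth spelling out.  The sketch below is my own illustration (with dummy byte values, not code from ChapCrack), following the response structure Marlinspike described:

```python
# Illustration of the MS-CHAPv2 response structure (dummy values): the 16-byte
# NT password hash is zero-padded to 21 bytes and split into three 7-byte DES
# keys, each of which encrypts the same 8-byte challenge.
nt_hash = bytes(range(16))            # stand-in for MD4(password)
padded = nt_hash + b"\x00" * 5        # padded to 21 bytes per the protocol
k1, k2, k3 = padded[0:7], padded[7:14], padded[14:21]

# k3's last five bytes are the zero padding, so only two bytes are unknown:
k3_candidates = 2 ** (8 * 2)          # 65,536 possibilities -- trivially brute-forced

# k1 and k2 encrypt the *same* challenge plaintext, so one sweep of the 2**56
# DES keyspace can test candidates for both at once; the total effort is
# essentially one DES crack, not 2**57 operations.
print(k3_candidates, len(k1), len(k2), len(k3))
```

Once all three keys are recovered, the attacker has the password hash, and for MS-CHAPv2 the hash itself is sufficient to authenticate.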

He claims a success rate of 100% in recovering passwords.  The cloud-based component of the attack uses a specialized piece of hardware for cracking DES keys, built by Pico Computing.

David Hulton’s company, Pico Computing, specializes in building FPGA hardware for cryptography applications. They were able to build an FPGA box that implemented DES as a real pipeline, with one DES operation for each clock cycle. With 40 cores at 450mhz, that’s 18 billion keys/second. With 48 FPGAs, the Pico Computing DES cracking box gives us a worst case of ~23 hours for cracking a DES key, and an average case of about half a day.

(This, incidentally, shows that the push to replace DES, the Data Encryption Standard, with the newer AES was not alarmist — Moore’s Law, and all that.)
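
The arithmetic behind the quoted figures is easy to check:

```python
# Sanity-check the DES cracking rate and worst-case time quoted above.
keyspace = 2 ** 56                 # DES uses a 56-bit key
per_fpga = 40 * 450_000_000        # 40 pipelined cores, one key tested per clock at 450 MHz
box_rate = 48 * per_fpga           # 48 FPGAs in the Pico Computing box
worst_case_hours = keyspace / box_rate / 3600
print(f"{per_fpga / 1e9:.0f} billion keys/s per FPGA; "
      f"worst case {worst_case_hours:.1f} hours")
```

which reproduces both the 18 billion keys/second rate and the roughly 23-hour worst case from the quote.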

Marlinspike concludes his blog post with these recommendations:

1) All users and providers of PPTP VPN solutions should immediately start migrating to a different VPN protocol. PPTP traffic should be considered unencrypted.

2) Enterprises who are depending on the mutual authentication properties of MS-CHAPv2 for connection to their WPA2 Radius servers should immediately start migrating to something else.

As I’ve mentioned before, it is a truism of security that attacks always get better.

Update Wednesday, 1 August, 11:29 EDT

Ars Technica has an article that discusses this attack, and its implications, in some detail.


Still Getting Warmer

July 29, 2012

Back in October of last year, I posted a note here about a new study that examined historical records for evidence of global warming.   The study, which confirmed previous results that the global average temperature had increased by 1°C since 1950, was somewhat noteworthy, because it was led by a self-described former global warming  skeptic, Prof. Richard A. Muller, who is a professor of physics at the University of California, Berkeley, and a Faculty Senior Scientist at the Lawrence Berkeley National Laboratory.  The research work, which is available at the BerkeleyEarth.org project site, examined some potential methodological problems with earlier research — notably the “urban heat island” hypothesis, and the questionable quality of some of the historical data — and found that correcting for them did not have any significant effect on the result.  (My earlier post, linked above, has a more complete discussion of the research.)  That study, though, did not address the question of why the warming was occurring.

Now Prof. Muller has published an Op-Ed column in the New York Times, in which he describes some new research that provides some significant evidence that the human-caused increase in atmospheric “greenhouse gases” (such as carbon dioxide and methane) is the cause of the warming.

Last year, following an intensive research effort involving a dozen scientists, I concluded that global warming was real and that the prior estimates of the rate of warming were correct. I’m now going a step further: Humans are almost entirely the cause.

The historical pattern observed by the team (the data are also available at the project web site) shows some expected short-term temperature dips following large volcanic eruptions, which eject large amounts of particulate matter into the air, thereby shading the Earth’s surface from some solar radiation.   There are also some small, short-term variations in temperature that can be attributed to fluctuations in ocean currents, such as El Niño and the Gulf Stream.  But only one indicator was strongly correlated with the long-term temperature trend.

We tried fitting the shape to simple math functions (exponentials, polynomials), to solar activity and even to rising functions like world population. By far the best match was to the record of atmospheric carbon dioxide, measured from atmospheric samples and air trapped in polar ice.
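
The kind of model comparison described in the quote is easy to illustrate.  The sketch below uses entirely made-up numbers, not the Berkeley Earth data; it just shows the mechanics of fitting a temperature series against several candidate predictors and comparing the residuals:

```python
import math
import random

def rss(xs, ys):
    # Least-squares fit ys ~ a*xs + b; return the residual sum of squares.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

rng = random.Random(1)
t = [float(i) for i in range(161)]                        # years since 1850
co2 = [285.0 * math.exp(4.0e-4 * x ** 1.5) for x in t]    # made-up CO2-like curve
temp = [0.8 * math.log(c / 285.0) + rng.gauss(0, 0.03) for c in co2]

for name, xs in [("linear trend", t),
                 ("quadratic", [x ** 2 for x in t]),
                 ("log CO2", [math.log(c) for c in co2])]:
    print(f"{name:>12}: RSS = {rss(xs, temp):.3f}")
```

Because the synthetic series was built from the CO2 curve, the log-CO2 predictor wins by construction here; the point is only to show how the candidate fits are compared.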

Prof. Muller notes that their historical data cover a long enough period that, by analyzing them together with sunspot records, the team could essentially eliminate changes in solar activity as a cause of global warming.

… our data argues strongly that the temperature rise of the past 250 years cannot be attributed to solar changes. This conclusion is, in retrospect, not too surprising; we’ve learned from satellite measurements that solar activity changes the brightness of the sun very little.

Prof. Muller further notes, correctly, that the strong correlation observed between carbon dioxide levels and temperature increases does not prove that the latter are caused by the former.  However, any posited alternative explanation should explain the historical record at least as well if it is to be taken seriously.   (There is, in the greenhouse effect, a plausible physical / chemical mechanism by which increased carbon dioxide levels could cause warming.)

Prof. Muller also says (and I agree) that many of the claims made about global warming are “speculative, exaggerated, or just plain wrong”.  The real point, though, is that there is more and more evidence that the average temperature on Earth is rising, and that it is being caused as a by-product of human activity.


When Windows XP Expires

July 28, 2012

The “Babbage” blog at The Economist has an interesting and thought-provoking article on the continuing use of the Windows XP operating system, first introduced in September, 2001.   Something like 600 million copies of XP have been sold (mostly pre-installed on new PCs), making it the most used OS ever.   The article addresses the question, which has been raised before, “What comes after XP?”.

Microsoft has, of course, provided two “official” answers: Windows Vista, introduced in January 2007, and Windows 7, introduced in October 2009.  Vista was neither a commercial nor a technical success; as the Economist correspondent says, “Of Vista, the less said the better”.  I have a laptop which came with Vista pre-installed.  As has been my practice for about a decade, I set it up for dual boot with Linux.  I’d estimate that it has spent about 98% of its life running Linux, and that more than 90% of its time under Windows has gone to installing patches and updates.  Vista had some good underlying ideas, particularly with regard to improving security; but those ideas were all too often implemented in a very clunky, user-antagonistic way.   In contrast, although I have not, personally, spent any significant time using Windows 7, the consensus among the knowledgeable Windows 7 users I know is that it is a much better effort than Vista.

So what’s the problem, then?   Many users, seeing the negative reaction to Vista, decided to stick with XP.   Although they might think about moving to Windows 7, given its much more positive reception, this isn’t entirely straightforward.  Upgrading from Vista to 7 is pretty straightforward, but upgrading from XP directly to 7 is considerably trickier.  For enterprise customers, who have large numbers of machines to upgrade, Microsoft offers assistance (for a price, naturally and reasonably enough), but small operations still using XP have a less clear upgrade path.  This is one reason that it is only now that Windows 7, almost three years after its introduction, is about to surpass Windows XP in the number of licensed copies sold.

The upgrade conundrum is made trickier by two things.  The first is the (relatively) imminent end of support for Windows XP.   Microsoft’s current statement is that Windows XP, Service Pack 3, will cease to receive bug fixes or security updates as of April 8, 2014.  Given the historical record of PC operating systems with respect to security issues, only a cockeyed optimist would regard continuing to use XP much beyond that point as prudent, especially since Microsoft’s own historical record on PC security is hardly reassuring.

The second complication concerns the lifetime of Windows 7 itself, and what is to follow it.  Microsoft has said that the next version of Windows, Windows 8, will be released this autumn.  There’s nothing problematic about that per se; but Windows 8 is, especially from the average user’s point of view, a very different animal from Windows 7.   It has a new user interface, Metro, which is dramatically different from the traditional (since Windows 95) Windows look; Metro, instead, is heavily oriented towards touchscreen devices.

For the past 17 years, Microsoft has taught a generation of PC users how to navigate around their computers intuitively, by using a mouse and keyboard to scroll through drop-down menus and then click on the application they want to run. Microsoft will now ditch all that in favour of a start screen comprising a mosaic of brightly coloured tiles, which serve both as short-cuts to favourite applications and as widgets for reporting data from programs that are already running.

Microsoft’s featuring this interface is perfectly understandable; in the competition to supply a mobile device OS, it is running third, after Apple’s iOS and Google’s Android.  The feedback I’ve heard from those who have tried out Windows 8 and its interface is that Metro is a cool design, although there are some glitches.  And Microsoft desperately needs to get on board with the mobile device market, which historically has eluded its grasp.

Today, the fast-growing business of portable computing is dominated by devices that use either Apple’s iOS or Google’s Android operating system. Thus, with Microsoft’s operating systems installed on around 90% of all desktops, laptops and notebooks, every tablet bought to replace a PC means one less copy of Windows is sold. The latest projections have tablets outselling PCs within a year or so. Hence the urgency at the software giant’s Redmond headquarters.

But, because the user interface of Windows 8 is significantly different from that of its predecessors (going back almost two decades), it is likely to meet with some lack of understanding, if not hostility, from ordinary users.  The interface works well on a touch-screen device, but (at least in the preliminary version I’ve seen) has no provision to revert to a more traditional Windows screen, with a Start button.

There’s the rub. With Windows 8 optimised for portable devices with touchscreens, it becomes a pain in the proverbial for people trying to do real work using a keyboard and mouse on a PC. If, for instance, an application or tool being sought does not have a tile of its own on the start screen, the user has to hunt for it by typing its name into a search box. That quickly becomes the kind of chore PC users really hate.

Another wrinkle has to do with Windows 8 on mobile devices, many of which use ARM processors, which are more frugal consumers of battery power than Intel chips.  It seems that Microsoft’s current requirements for such machines to be labeled “Windows 8 compatible” may prevent the user from installing any alternative software.  Microsoft seems to long for the “walled garden” approach, in which the manufacturer controls, as Apple does, what software can be run on the machine.

If you use a traditional laptop or desktop PC, and are still running Windows XP, I would strongly suggest that you start to consider what comes next.  Here are some possible choices:

  • You can try out the preview edition of Windows 8, and see if it will work for you.  (An ISO installation image can be downloaded here.)  “You”, in this context, means yourself and any other users whose machines you maintain.  Even if the new version works like a charm, if your users stage a rebellion against the new interface, you’re going to have problems.
  • You can, as the “Babbage” correspondent suggests, start stockpiling copies of Windows 7, which presumably will be supported longer than Windows XP, and which does have a conventional user interface.
  • You can stick with XP, and hope that Windows 8 bombs in the marketplace, forcing Microsoft either to extend XP support, bring out a new version (Windows 9?) with better support for the classical interface, or both. I really do not recommend this option.

I also really urge you not to delay thinking about this for too long.  OS upgrades are, admittedly, (to use The Economist’s expression) a pain in the proverbial; but having to do one under the gun, because the users are mutinying, or because necessary applications are failing, is a lot less fun.


A Cool Development

July 22, 2012

Summer here in the Washington DC metro area is often hot and humid; and this summer, at least so far, is no exception.  Of the 22 days in July so far, 11 have had daytime temperatures of 95 F (35 C) or higher, as measured at Dulles airport (IAD).   Local legend has it that, at one time, British diplomats assigned to Washington were given extra pay for living and working in tropical conditions.  So it’s probably good that a recent post on the “Babbage” technology blog at The Economist reminds us that things could be a lot worse.

It was just a bit more than 110 years ago, on July 17, 1902, that Willis Carrier, an employee of the Buffalo Forge Company,  finalized his design of the first modern air conditioner, to be installed at Sackett & Wilhelms,  a printing firm in Brooklyn NY.   The printers wanted it mainly for humidity control, not cooling; varying humidity levels can make a mess of paper, as anyone who has fetched in his or her daily newspaper on a rainy day will know.  The humidity-induced changes in the paper stock wrought havoc with color printing especially, since the same sheet needed to pass through the presses multiple times. The new device was a success, and other customers soon appeared.

A drug firm and a silk mill swiftly followed Sackett & Wilhelms in adopting Carrier’s device. A host of other companies in different industries, including Gillette’s safety-razor factory where humidity caused corrosion, converted soon after. In 1915 the Carrier Corporation was founded. It exists to this day as a division of United Technologies, an industrial conglomerate.

Carrier’s design incorporated the basic elements (evaporator, compressor, condenser) that are used today in air conditioners, refrigerators, and heat pumps.   (The Wikipedia article on “Heat Pump and Refrigeration Cycle” provides a good overview.)   One significant design choice is what to use as a refrigerant.
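
The thermodynamic ceiling on any such cycle’s efficiency is set by the Carnot limit.  A quick back-of-the-envelope calculation (my own illustration, not from the article) shows why an air conditioner can move several times more heat than the electrical energy it consumes:

```python
# Ideal (Carnot) coefficient of performance for a cooling cycle: the heat
# moved per unit of work, bounded by the absolute temperatures involved.
def carnot_cop_cooling(t_cold_c, t_hot_c):
    t_cold = t_cold_c + 273.15          # convert Celsius to kelvin
    t_hot = t_hot_c + 273.15
    return t_cold / (t_hot - t_cold)

# E.g. holding a room at 22 C while rejecting heat to 35 C outdoor air:
print(f"COP upper bound: {carnot_cop_cooling(22, 35):.1f}")
```

Real vapor-compression units fall well short of this ideal bound, but the same principle explains why a heat pump can deliver more heating than resistive electric heat for the same power draw.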

In early systems, carbon dioxide (CO2) was often used, but it fell out of favor because high pressure is required to liquefy it from a gas, requiring strong (and therefore expensive) plumbing.   New compounds were developed as substitutes, particularly chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs), which have better thermodynamic properties.   (Freon is a DuPont trademark for a range of related refrigerants of this type.)  These refrigerants came under fire beginning in the 1970s, when it was discovered that they could act to deplete the ozone layer in the Earth’s atmosphere.  A new class of refrigerants, hydrofluorocarbons (HFCs), was developed and gradually adopted; however, although the HFCs do not attack the ozone layer, they are very potent greenhouse gases (~10,000 times as potent as carbon dioxide).

One effect of all this has been a re-examination of an old choice: carbon dioxide as a refrigerant.  Modern manufacturing and construction techniques have made building suitable high-pressure systems less problematic.  CO2 is non-toxic, and there is certainly plenty of it available.   Some commercial units are already using it.

John Mandyck, a vice-president of modern-day Carrier, says the company has already begun rolling out its first CO2-based products. They extract the gas from the air, making them carbon-neutral and easy to replenish in the event of a leak.

There are other methods being tried to improve the efficiency and environmental friendliness of air conditioning, of course.  Two look back to ideas that predate Carrier’s: using ice (made at night when power is cheap) to cool air, and using evaporative cooling.   With more steamy weather in the current forecast, I’m glad we have it, however it works.


UK Research Councils Announce Open-Access Policy

July 22, 2012

If I have seen further it is by standing on the shoulders of giants.
— Sir Isaac Newton

Back in December of last year, I posted a note about the British government’s policy decision that all publicly-funded research should be made available online, free of charge.  Now, according to a report at Nature’s “News Blog”, the Research Councils UK (RCUK), a group of seven government-funded agencies that provide research grants, have announced a new open-access policy (press release), which will apply to all research that they fund, wholly or in part, beginning in April, 2013.

This is a significant step forward, because the new policy is not just a statement of principle, but has quite specific requirements for future research publications.  There are two ways in which the requirements can be satisfied.

Science journals have two ways of complying with the policy. They can allow the final peer-reviewed version of a paper to be put into an online repository within six months. Alternatively, publishers may charge authors to make research papers open-access up front.

The RCUK are big enough (they collectively spend about £2.8 billion, or $4.4 billion, on research grants every year) to have a significant influence on how the system works.

Apparently for historical reasons, which I have not managed to track down, the first option (up to six months’ embargo) is sometimes called the “green” option; the second (pay up front) is, similarly, called the “gold” option.  RCUK has said that it will make annual block grants available to institutions to support the “gold”, pay in advance, option.  Also noteworthy is the new policy’s requirement that papers with pre-paid open access be published under a Creative Commons license: specifically, the CC-BY license.  (Creative Commons licensing is, in broad terms, open-source licensing for documents.  This blog is published under a Creative Commons license — see the “Legal Stuff” sidebar for details.)

The “green” (temporary embargo) option is similar to the policy of the US National Institutes of Health (NIH), although the NIH allows an embargo of up to  twelve months.   The Wellcome Trust, a major UK health charity, also has a similar policy.

Clearly the new policy is motivated by, and has the support of, the UK government.  The Department for Business Innovation & Skills (BIS)† has also announced its support for open access; in particular, it accepted the main recommendations of the Finch Group, a task force on open access headed by Prof. Dame Janet Finch, OBE.

I’m a big fan of the open access movement.  I can see no justification at all for charging citizens (i.e., taxpayers) to look at research results that they paid for in the first instance.  Even putting aside this argument from principles of equity, a cornerstone of the scientific method is exposing results to widespread scrutiny, so that errors can be detected, and so that others can build on the work that has been done.

——
† I cannot help thinking that the BIS name is unfortunate.  My feelings are perhaps colored by my experience at an early job.  The company had an “Office of the Future” department.  I wished, more than once, that we could get to the “office of the present” as a starter.
