Windows DLL Hijacking

August 31, 2010

Recently, a new security vulnerability affecting Microsoft Windows has been reported that potentially provides a new vector for remote attacks.  The vulnerability has to do with how Windows searches for and loads Dynamic Link Libraries [DLLs], which are used by virtually all Windows applications.  (DLLs are similar to shared libraries in Unix/Linux.)   The SANS Internet Storm Center has a good diary entry reviewing the issue, and Microsoft has issued a Security Advisory (2269637), which contains suggested mitigation steps and work-arounds.

The nature of this vulnerability is a little out of the ordinary, although it has aspects in common with the LNK vulnerability we saw earlier this summer, which Microsoft patched at the beginning of August.   It is a result of the way a basic mechanism of Windows works, so it is hard to say exactly which applications might be vulnerable.   The key really is how careful the application’s developers were when building their distribution package.

To explain why this is so, I’ll have to discuss briefly what DLLs are, and how Windows uses them.  Like Unix, Linux, OS/2, and other relatively modern systems, Windows contains a large number of program libraries, which supply software modules that can be shared among applications.  For example, a routine that displays a dialog box to the user would be a logical candidate for inclusion in a library.  In these systems, furthermore, the libraries are dynamic, meaning that the routines that they contain can be loaded and linked at run time, rather than in a separate, static linking step.  Also, especially in virtual memory systems, this facilitates having only one copy of the routine resident in memory, even if it is being used by multiple applications.

So, essentially, DLLs provide pieces of code that can be loaded and executed at run time, as part of an application.  That description should suffice to make any security-minded person fairly nervous.  Obviously, this is a mechanism that has to be controlled, so that arbitrary code cannot be so inserted — and this is the kernel of the problem.  Some “known DLLs”, typically for core system functions, are specified in a Windows registry key, and the system knows where they are.  There are two problems with the way Windows searches for any other specified DLL at run time.
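
(As an aside, if you are curious which DLLs are on the “known” list on your own machine, you can peek at the registry directly.  Here is a minimal Python sketch, assuming the list lives in the usual KnownDLLs key; it only reads, so ordinary user privileges are enough.)

    import winreg

    # Peek at the "known DLLs" list (assuming the usual KnownDLLs key location).
    KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\KnownDLLs"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        _, value_count, _ = winreg.QueryInfoKey(key)   # (subkeys, values, last write)
        for i in range(value_count):
            name, data, _type = winreg.EnumValue(key, i)
            print(f"{name} = {data}")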

The first place Windows looks for a DLL is always the directory from which the application was loaded.  This is done so that applications can override a “standard” DLL for their own use.  Beyond that, though, there are two cases, depending on the version of Windows and a registry setting discussed below.  Since Windows XP SP2, the default search order has been:

  1. The directory from which the application loaded.
  2. The system directory.
  3. The 16-bit system directory.
  4. The Windows directory.
  5. The current directory.
  6. The directories that are listed in the PATH environment variable.

The first matching DLL found is used.  (Microsoft has a good article on MSDN explaining all this in more detail.)
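
To make the order concrete, here is a small Python sketch that simply simulates the list above for a hypothetical example.dll.  The directory names are placeholders, and the real loader (LoadLibrary) applies additional rules, such as the known-DLLs list, that are not modeled here.

    import os

    def find_dll(dll_name, app_dir, system_dirs, windows_dir, cwd, path_dirs):
        # Simulate the post-XP/SP2 search order described above.  The real
        # loader (LoadLibrary) applies extra rules, e.g. for "known DLLs".
        search_order = [app_dir] + system_dirs + [windows_dir, cwd] + path_dirs
        for directory in search_order:
            candidate = os.path.join(directory, dll_name)
            if os.path.isfile(candidate):
                return candidate
        return None

    # Hypothetical example: which copy of example.dll would win?
    print(find_dll(
        "example.dll",
        app_dir=r"C:\Program Files\SomeApp",
        system_dirs=[r"C:\Windows\System32", r"C:\Windows\System"],
        windows_dir=r"C:\Windows",
        cwd=os.getcwd(),
        path_dirs=os.environ.get("PATH", "").split(os.pathsep)))

The point to notice is that anything earlier in the list shadows anything later, which is exactly what an attacker wants to exploit.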

However, previous versions of Windows used a different order, in which the current directory was searched immediately after the application directory (that is, it was number 2 on the list).  Newer versions of Windows can be made to revert to the old behavior, according to the setting of the SafeDllSearchMode registry key.  If the current directory is searched before the system directories, anyone who can write to the current directory can substitute a malicious DLL for the standard one.  (Ask any Unix or Linux sysadmin why root should never have ‘.’ in its PATH.)
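
If you want to check which mode a particular machine is using, the setting can be read from the registry.  A quick Python sketch, assuming the value lives under the Session Manager key as Microsoft documents, and that an absent value means the newer default applies:

    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager"

    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _type = winreg.QueryValueEx(key, "SafeDllSearchMode")
            print("SafeDllSearchMode =", value)   # 1 = newer (safe) order, 0 = legacy order
    except FileNotFoundError:
        # Value absent: on XP SP2 and later the newer order is the default.
        print("SafeDllSearchMode not set; the default search order applies")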

The second problem is more subtle, and harder to detect.  Because applications can specify alternative paths to be checked for DLLs, and because the directories in the PATH are searched, an attacker might be able to put a malicious version of an application’s DLL in the PATH, or on a Windows share.
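
One rough way to gauge your exposure on this second point is to look for directories in your PATH that an ordinary user can write to.  A minimal sketch (note that on Windows os.access checks only the read-only attribute, not ACLs, so treat any hits as leads to investigate rather than proof):

    import os

    # Flag PATH entries the current user appears able to write to.
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        if directory and os.path.isdir(directory) and os.access(directory, os.W_OK):
            print("writable PATH entry:", directory)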

The existence and severity of this vulnerability depends on the application, and a general Windows patch is not really possible.  Microsoft has, however, released a tool that allows you, via a registry key setting, to selectively disable parts of the DLL search process, either for specific applications, or on a system-wide basis.
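
If my reading of the Microsoft article is correct, the tool works by honoring a new CWDIllegalInDllSearch registry value, which can be set system-wide or per application.  Here is a sketch of how you might check whether it has been set on a given machine; the notepad.exe entry is just a hypothetical example of a per-application setting.

    import winreg

    def cwd_dll_setting(subkey):
        # Return the CWDIllegalInDllSearch value under the given HKLM subkey, if any.
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
                value, _type = winreg.QueryValueEx(key, "CWDIllegalInDllSearch")
                return value
        except FileNotFoundError:
            return None

    # System-wide setting (only meaningful once the update from KB 2264107 is installed).
    print(cwd_dll_setting(r"SYSTEM\CurrentControlSet\Control\Session Manager"))

    # Hypothetical per-application setting, here for notepad.exe.
    print(cwd_dll_setting(
        r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\notepad.exe"))

The meanings of the individual values are spelled out in the Microsoft article; I will not try to reproduce them here.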

So far, there have been sample exploits published for Windows Mail and Microsoft Office applications.  It is hard to predict what the ultimate impact of this will be.  For now, if you run Windows systems, I recommend getting a copy of the new tool from Microsoft, and testing its installation on a non-critical system.  In theory, installing the tool just enables the new registry settings; it shouldn’t actually change any behavior without further action.  But I think it’s a good idea to verify this on your own system, since if you find you need this tool, you will probably need it in a hurry.

Update Wednesday, 1 September, 12:08 EDT

Brian Krebs has a good overview of this vulnerability at his Krebs on Security blog.  As he points out, Microsoft has updated the support article [2264107]  on the tool I mentioned above to include more information, and also to provide a “one-click” FixIt tool that will make a mitigating registry change for you.  It is important to realize that the FixIt tool requires the original tool / patch to be installed first — that is what enables the relevant registry entry in the first place.  So if you want to install the “FixIt”, two steps are necessary:

  • Scroll down the page in the support article [2264107] to the section called “Update Information”.  Download the patch for your version of Windows, and install it.  You will most likely have to reboot your machine.
  • Go back to the support article, or the security blog article mentioned below, click the “FixIt for Me” link, and follow the instructions.

Microsoft also has a new article on its Security Research and Defense blog giving more information about how the exploit would work, from a user’s perspective.  The article also has a link to the FixIt tool.

Update Thursday, 2 September, 20:20 EDT

The US-CERT [Computer Emergency Readiness Team] Web site has an updated Vulnerability Note (# 707943), which contains their summary of this issue.  It also contains, in the section “Vendor Information”, a list of vendors who have acknowledged that their software may be affected; this may be useful if you have concerns about a specific application.


Virginia’s Systems Still Hosed

August 30, 2010

I had occasion to visit the Virginia Department of Motor Vehicles [DMV] this morning.  Although, contrary to popular mythology, all of the staff I encountered there were very polite and helpful, I observed that at least some of their computer systems, run by VITA [Virginia Information Technologies Agency], are still not working, since people who wanted to apply for or renew their driving licenses were being told that it would not be possible today.  This was confirmed by a DMV press release issued yesterday.  (I did check before I went to ensure that my transaction could be handled, as indeed it was.)

Apparently VITA has still not recovered from the outage that started last Wednesday afternoon and brought down a significant proportion of their “high reliability” infrastructure.   So their service is still not restored, on the sixth calendar day (fifth business day for the DMV) since the original equipment failure.   I worked as an IT Director at a couple of different investment banking firms, and I can assure you that if we had experienced a six-hour failure, I would have received an immediate field promotion to Former IT Director.

But people whose licenses are expiring, or have already expired, don’t have to worry.  Owing to a particularly stupid piece of legislation, they will be able to enjoy a special inconvenience when (if?) the system ever is back in service.

If your driver’s license or ID card has an expiration date of August 25 through 30 and you must renew in person at a DMV office, when service is restored you will need to bring your birth certificate, passport, or other document that confirms you are a U.S. citizen or legally authorized by the federal government to be in the country. This requirement is Virginia law and cannot be waived by DMV.

I’m reminded of the time when I moved back to the US from the UK, and had to get a Virginia driving license.  I made the initial mistake of assuming my existing New York driver’s license was still valid; I was wrong.  I had a perfectly valid English driving license, but the DMV didn’t want to know about that.  So, like a real rookie, I had to take a road test again — which was a joke — and I had to supply documents proving my identity and legal residence.

The state was gracious enough to accept my US passport as one of the documents, but the expired New York license was no good — who knows who I might have become once I was no longer under the legal imprimatur of New York?  I didn’t have an official copy of my birth certificate.  After looking at the list of acceptable documents, I got a certificate from the public school system confirming that I had graduated from a Virginia high school a few decades earlier, and got my license.  That was really just bad luck, though; if only I had had a certificate from my parole officer, that would have done nicely.

Update Monday, 30 August, 16:30 EDT

There’s an article at Computerworld about this, too.   According to a status update at the VITA Web site, dated Sunday, August 29, at 10 PM (the most recent update there):

According to the manufacturer of the storage system, the events that led to the outage appear to be unprecedented. The manufacturer reports that the system and its underlying technology have an exemplary history of reliability, industry-leading data availability of more than 99.999% and no similar failure in one billion hours of run time.

The manufacturer says it’s OK?  Well, then that’s all right.
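
For what it is worth, the arithmetic behind a “five nines” claim is easy to check; a quick sketch, using my own rough estimate of the outage length rather than any official figure:

    HOURS_PER_YEAR = 365 * 24

    availability = 0.99999                 # the vendor's "five nines" claim
    allowed_minutes = HOURS_PER_YEAR * (1 - availability) * 60
    print(f"Five nines allows about {allowed_minutes:.1f} minutes of downtime per year")

    outage_days = 6                        # my rough estimate of the VITA outage so far
    print(f"Actual outage so far: roughly {outage_days * 24 * 60:,} minutes")

Roughly five minutes a year of allowed downtime, against several thousand minutes of actual downtime so far; draw your own conclusions.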


Adenovirus Structure Described

August 29, 2010

Perhaps relatively few people would immediately know what the term adenovirus refers to, but most of us are probably familiar with the effects of members of this family of viruses, which are among the causes of the common cold.  Although it is rare that these microbes cause permanent damage or death in people with normally-functioning immune systems, they have certainly been a source of human misery for a long time.

The Medical Daily news site has a report on some new research by scientists at the Scripps Research Institute who, in a paper [abstract] published in the August 27 issue of Science, have presented the first detailed description of the structure of an adenovirus, down to atomic scale.  The imaging of the crystalline virus was done using X-ray diffraction, giving a resolution of 3.5 angstroms.   (One angstrom, abbreviated Å, is one-tenth of a nanometer, or 1 × 10⁻¹⁰ meter.)   The virus particles analyzed have a mass of approximately 150 megadaltons, or roughly 150 million times the mass of a single hydrogen atom, and contain nearly 1 million amino acids.  This is the largest such particle analyzed to date.

The team began to work on determining the molecular structure of the virus in 1998; the project turned out to take much longer than expected.   One of their major hurdles was getting the virus into a form which could be crystallized.  They developed a variant form of the virus to this end, but eventually also had to use robotic crystallization, which can work with much smaller samples of solution than usual (samples on the order of 50 nanoliters were used).  The work also relied on a synchrotron X-ray source, the Advanced Photon Source 23-ID-D beamline at the Argonne National Laboratory, to achieve the necessary resolution.

This really is a significant accomplishment.  Of course, elucidating the structure of the virus may some day help in developing treatments for infections caused by the virus, which would delight cold sufferers everywhere.  However, a more important possible benefit relates to the development of genetic therapies.  One of the techniques used in this area is the insertion of the genetic material into a suitably benign virus, which then can infect the target cells.  Researchers in the field have been interested in the adenovirus family because the virus is fairly hardy and can infect a variety of cell types.   A better understanding of the virus’s structure might be of great value in making the therapy more effective, while minimizing side effects.


Will Greener Lighting Save Energy?

August 29, 2010

I feel reasonably sure that we all, by now, have heard some of the urging to reduce our energy consumption, and thereby indirectly help reduce emissions of carbon dioxide.  One of the steps that has been widely recommended, at least here in the US, is to substitute more efficient light sources (such as compact fluorescent lamps) for our traditional incandescent light bulbs.  (Most of the energy used by incandescent bulbs is given off as heat.)  There has also been an expectation (I wrote about it earlier) that further development of light-emitting diodes (LEDs) could give us an even more energy-efficient source of light.    There has been a more or less common but unspoken assumption that people would just switch to the new lighting technologies, thereby saving energy, with nothing else changing.

Now this is a somewhat curious assumption to make from an economic point of view.  If these new lighting devices save energy, that will manifest itself as a lower cost per unit of light obtained.  (Of course, one must account for the total cost of light production, including the purchase of the device, but at least the possibility of net savings exists.)   For most goods that people buy, a drop in the price per unit will tend to produce an increase in the number of units consumed, other things being equal.

This week’s edition of The Economist has an article reporting on a new analysis [abstract] published this week in the Journal of Physics D by a group of scientists at Sandia National Laboratories [full paper PDF free download for 1 month].  (The PhysOrg.com site also has an article on this.)  The authors examine the history of lighting technology improvements, and find that better, and cheaper, lighting technology has generally produced an increased demand for lighting.  As The Economist puts it,

The light perceived by the human eye is measured in units called lumen-hours. This is about the amount produced by burning a candle for an hour. In 1700 a typical Briton consumed 580 lumen-hours in the course of a year, from candles, wood and oil. Today, burning electric lights, he uses about 46 megalumen-hours—almost 100,000 times as much. Better technology has stimulated demand, resulting in more energy being purchased for conversion into light.

If you have ever tried to manage after dark by candlelight when the power is out, you will probably have gained some appreciation that the artificial light levels we are used to today are rather higher than those expected a few generations ago.

The paper itself is a very good piece of work.  It looks at how lighting consumption has changed as new technologies for producing artificial light have been introduced.   Interestingly, using data from a collection of studies, the authors find that the share of per-capita gross domestic product spent on artificial light, world-wide, has remained nearly constant, at about 0.71%.   As the article points out, this is not a perverse result: people don’t do this because they are gluttons, but because the greater availability of useful lighting makes them more productive and brings other benefits, such as being able to read at night.

The authors also develop a model to forecast what the net effect of introducing LED lighting might be.  Using the assumption from current technology forecasts that, by 2030, LEDs will be about three times as efficient as current compact fluorescent lamps, their model predicts that per capita artificial light consumption will increase by about 10× over the same period.  But they also point out some possible ways in which this increase might be mitigated; for example, since LEDs are solid-state electronic devices, it should be possible to control both the amount and color temperature of the delivered light in a much more precise and localized way.
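
To see roughly how a large efficiency gain can coexist with a large increase in consumption, here is a back-of-the-envelope Python sketch of the constant-expenditure idea.  It is a crude stand-in for the paper’s model, and the inputs are purely illustrative, not the authors’ own numbers.

    def light_consumption(gdp_per_capita, cost_per_mlmh, fraction_spent=0.0071):
        # Per-capita light consumption (megalumen-hours per year) if a fixed
        # fraction of GDP goes to artificial light, as the historical data suggest.
        return fraction_spent * gdp_per_capita / cost_per_mlmh

    # Purely illustrative inputs, not the paper's own numbers.
    today = light_consumption(gdp_per_capita=10_000, cost_per_mlmh=2.0)

    # By 2030: suppose LEDs cut the cost of light by ~3x and GDP per capita grows ~3x.
    future = light_consumption(gdp_per_capita=30_000, cost_per_mlmh=2.0 / 3)

    print(f"relative increase in light consumed: about {future / today:.0f}x")

With those made-up inputs the model lands in the same ballpark as the paper’s roughly tenfold forecast, which is really all the sketch is meant to show.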

The paper is a good example of trying to think through all the implications of a technology change; it is well worth a read.


More Techno-Mishaps

August 27, 2010

I’ve written before about some of the risks involved when people become too dependent on their technological gadgets.  Sometimes the results are mostly amusing, as with that Swedish couple who, having mis-typed the name of their destination into their GPS device, ended up in the Northern Italian town of Carpi rather than at the Isle of Capri.

Sometimes, though, the results can be a little more serious, as a recent article in the New York Times points out.  Some visitors become so engrossed in playing with their technological toys that they fail to pay attention to the physical world around them.

A French teenager was injured after plunging 75 feet this month from the South Rim of the Grand Canyon when he backed up while taking pictures.

In other cases, their faith in their gadgets, such as GPS devices and cellphones, leads them to discount the risks of setting out into the wilderness in a state of ignorance.

“Because of having that electronic device, people have an expectation that they can do something stupid and be rescued,” said Jackie Skaggs, spokeswoman for Grand Teton National Park in Wyoming.

“Every once in a while we get a call from someone who has gone to the top of a peak, the weather has turned and they are confused about how to get down and they want someone to personally escort them,” Ms. Skaggs said. “The answer is that you are up there for the night.”

One lost hiker called the ranger station on his cellphone, and asked if they could bring him some hot chocolate.

Going on back-country trips can be a wonderful experience, but it can also be dangerous for the uninitiated, who do not sufficiently appreciate the degree to which Mother Nature can be a bitch.

In an era when most people experience the wild mostly through television shows that may push the boundaries of appropriateness for entertainment, rangers say people can wildly miscalculate the risks of their antics.

So, if you want to go on a wilderness trip, make sure you have essential supplies, like food, water, and maps.  Take your GPS and cellphone, by all means; but take someone along who knows what he’s doing, too.


Virginia Systems Outage

August 27, 2010

I wrote here last fall about some of the problems that the Commonwealth of Virginia has been having with its computer systems.  Several years ago, the state entered into a ten-year, $2.3 billion contract with Northrop Grumman to modernize and run all the state’s computer systems and networks.  At the time, many state agencies were experiencing frequent system outages, apparently because the new systems had been designed without sufficient redundancy in their communication links.

It seems that the project has still not managed to put its problems behind it.  According to an Associated Press article carried by the Hampton Roads PilotOnline, the Virginia Information Technologies Agency [VITA] is currently trying to recover from a major system outage.  The problem apparently began at one of VITA’s data centers, outside of Richmond, with what is described as the failure of a “memory card” (I am not sure what sort of device they mean by that).  Ostensibly, the system was designed with backup hardware and high-availability capability, but evidently it did not work too well.

The system was built with redundancies and backup storage. It was hailed as being able to suffer a failure to one part but continue uninterrupted service because standby parts or systems would take over. But when the memory card failed Wednesday, a fallback that attempted to shoulder the load began reporting multiple errors, Nixon [Sam Nixon, the state’s chief information officer] said.

The failure affected at least two dozen state agencies, including the Department of Taxation, and the Department of Motor Vehicles, which as of this morning was unable to process driver’s license applications at any of its 74 offices across the state.

The agency hoped to have most systems operational by sometime today, but said that getting all functions back online might take until Monday.

I have worked on the design and implementation of highly reliable systems.   I will not claim it is always an easy job, but I know it can be done.  Mr. Nixon said that failures of the type of memory card at fault were rare; he went on to say,

“This is supposed to be the best system you can buy, and it’s never supposed to fail, but this one did,” he said.

Whether or not it is the best system one can buy is open to question; but only an idiot thinks that there is any system that never fails — one should remember that never is a very long time.

