Those of us who are involved with the design and use of computer systems spend a fair amount of time — for many of us, more than we’d like — worrying about and trying to prevent software defects. One important class of these defects is security vulnerabilities. They always seem to be with us (Microsoft, for example, has its “Patch Tuesday” every month), and the Bad Guys are always coming up with new ways to exploit them. This becomes a bit depressing after a while, especially when one knows that some of the attack techniques have been known and talked about for decades. The venerable buffer overflow attack is still being used, for example, despite the fact that it has been known and studied at least since the very first Internet worm (the Morris worm) employed it back in 1988.
Now Technology Review has a report on research conducted by a team led by MIT computer science Prof. Martin Rinard, which aims to automate the process of fixing software defects:
In work presented this month at the ACM Symposium on Operating Systems Principles in Big Sky, MT, a group of MIT researchers, led by Rinard and Michael Ernst, who is now an associate professor at the University of Washington, developed software that can find and fix certain types of software bugs within a matter of minutes.
The software, called ClearView, works by observing the behavior of a running program over a period of time, and deducing from that “invariants” of the program’s proper functioning: that is, rules that the program obeys when it is working. It uses error-detection logic to identify violations of the rules, which may be caused by an attacker attempting to exploit a vulnerability, and generates potential patches to the executable program that might prevent the erroneous behavior.
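To make the idea of learned “invariants” concrete, here is a minimal sketch in Python of dynamic invariant inference in the spirit of this approach. Everything here is illustrative — the class name, the buffer-length example, and the training values are my assumptions, not ClearView’s actual internals: the tool learns a range a variable stays within during normal runs, then flags a later observation that falls outside it.

```python
# Hypothetical sketch of dynamic invariant inference.
# All names and values are illustrative, not taken from ClearView.

class RangeInvariant:
    """Learns a rule of the form 'lo <= value <= hi' from observed runs."""

    def __init__(self):
        self.lo = None
        self.hi = None

    def observe(self, value):
        # Training phase: widen the range to cover every value seen
        # while the program is working correctly.
        self.lo = value if self.lo is None else min(self.lo, value)
        self.hi = value if self.hi is None else max(self.hi, value)

    def holds(self, value):
        # Monitoring phase: a violation suggests anomalous behavior,
        # possibly an attacker exploiting a vulnerability.
        return self.lo is not None and self.lo <= value <= self.hi


# Example: watch a buffer-length variable during normal executions.
inv = RangeInvariant()
for observed_len in [12, 64, 7, 128, 33]:
    inv.observe(observed_len)

print(inv.holds(100))   # inside the learned range: no alarm
print(inv.holds(5000))  # outside it: the detector would fire
```

A real system infers many invariants of many shapes at once and must tolerate noise; this sketch shows only the core observe-then-check cycle.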
ClearView analyzes these possibilities to decide which are most likely to work, then installs the top candidates and tests their effectiveness. If additional rules are violated, or if a patch causes the system to crash, ClearView rejects it and tries another.
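The select, install, and evaluate loop just described can be sketched as follows. This is again a hypothetical illustration: the `Patch` and `RunResult` types, the scoring field, and the simulated runner are stand-ins I have invented, not ClearView’s real machinery.

```python
from dataclasses import dataclass

# Hypothetical sketch of a best-first patch-evaluation loop.
# Types and names are illustrative stand-ins, not ClearView's API.

@dataclass
class Patch:
    name: str
    score: float  # estimated likelihood that this candidate works

@dataclass
class RunResult:
    crashed: bool = False
    invariant_violations: int = 0

def choose_patch(candidates, run_with_patch):
    """Install candidates best-first; reject any that crash or violate
    further invariants, and keep the first one that survives testing."""
    for patch in sorted(candidates, key=lambda p: p.score, reverse=True):
        result = run_with_patch(patch)
        if result.crashed or result.invariant_violations:
            continue          # reject this patch and try the next one
        return patch          # patch holds up: leave it installed
    return None               # every candidate failed: give up

# Simulated evaluation: only the 'clamp-length' candidate behaves well.
def fake_runner(patch):
    if patch.name == "clamp-length":
        return RunResult()
    return RunResult(crashed=True)

chosen = choose_patch(
    [Patch("skip-call", 0.9), Patch("clamp-length", 0.7)],
    fake_runner,
)
print(chosen.name)  # the top-ranked candidate crashed, so the next is kept
```

The `None` return corresponds to the case mentioned below where the auto-correction mechanism is stumped and has to give up.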
All of this is done by working directly on the binary executable; the source code is not required. And it is done without direct human intervention, although of course there are some cases in which the auto-correction mechanism will be stumped and have to give up.
Initial testing produced some fairly impressive results, at least as far as exploit prevention was concerned:
To test the system, the researchers installed ClearView on a group of computers running Firefox and hired an independent team to attack the Web browser. The hostile team used 10 different attack methods, each of which involved injecting some malicious code into Firefox. ClearView successfully blocked all of the would-be attacks by detecting misbehavior and terminating the application before the attack could have its intended effect.
In seven of the ten cases, the automatic patching logic came up with fixes for the underlying problem, typically within about five minutes. In no case did ClearView produce an erroneous patch, one that had negative side effects.
This is an extremely interesting bit of work. I think it has considerable promise for mitigating the ill effects of common avenues for software attacks (such as buffer overflows), especially since, by keeping an attacked system running, it can defeat the basic objective of a denial-of-service attack. Prof. Rinard has ambitious goals for the project:
Rinard says that ClearView could be used to fix programs without requiring the cooperation of the company that made the software, or to repair programs that are no longer being maintained.
This may be feasible, if the need is to provide an alternative to source-based software patches. On the other hand, problems with so-called “legacy” software are in many cases semantic or logical errors; it’s less clear to me that Rinard’s approach can help much for them. Nonetheless, this is a valuable piece of work that might just reduce the rate at which we all acquire more gray hair.