Virus Mutations

August 14, 2009

Since the early days of the PC, it has been standard advice to new users that one should install and keep up-to-date an anti-virus program on the computer – especially if it is a Windows PC.  The anti-virus software works by scanning files (for example, when they are opened, executed, or saved) and looking for clues that indicate potentially malicious software (often called malware).  The most common technique for doing this uses a database of malware signatures provided by the vendor; these are small patterns that appear in the executable code or other parts of the malware.  In one way, this is a good approach: it gives a clear result, and false positives are relatively rare.  On the other hand, it has the obvious disadvantage that it cannot possibly detect malware that the anti-virus vendor has never seen before.  As the use of anti-virus programs in general, and signature-based detection in particular, spread, an “arms race” developed between the vendors of anti-virus software and the creators of malware.
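The core of signature-based detection can be sketched in a few lines.  The signatures and names below are entirely made up for illustration – real products use databases of millions of patterns and far more sophisticated matching – but the basic idea is just a substring search:

```python
# A minimal sketch of signature-based scanning.  The "database" here is
# hypothetical; real vendors ship millions of signatures and use more
# sophisticated matching than a plain substring search.

SIGNATURES = {
    b"\xde\xad\xbe\xef\x13\x37": "Example.Virus.A",
    b"EVIL_PAYLOAD_MARKER": "Example.Worm.B",
}

def scan(data):
    """Return the names of any known signatures found in the data."""
    return [name for pattern, name in SIGNATURES.items() if pattern in data]

clean = b"just an ordinary document"
infected = b"header" + b"EVIL_PAYLOAD_MARKER" + b"trailer"

print(scan(clean))     # []
print(scan(infected))  # ['Example.Worm.B']
```

A file that contains none of the known patterns scans clean – which is exactly why a brand-new piece of malware sails through until the vendor adds a signature for it.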

The “Unsafe Bits” blog at Technology Review has a report, based on work done by Panda Security, a Spanish anti-virus vendor, that shows one reason why the good guys seem to be losing.   Malware writers have for some time been using tactics to mask or vary the appearance of their code, so that it will be less likely to match an anti-malware signature.  The task of keeping track of these changes is getting harder; for one thing, there are more of them:

On Wednesday, the company [Panda] announced that the quantity of malicious software seen by its customers has skyrocketed recently, with the firm now processing some 37,000 samples per day. In 2008, Panda saw 22,000 new samples every day, on average.

And the malware creators are getting better at changing their products more rapidly.  (Even if a particular virus writer doesn’t know how to do this very well, he can obtain a toolkit on the Internet to help him.)

Panda documented the churn by noting that 52 percent of samples are only seen in a single 24-hour period. Another 19 percent do not last more than two days. Within three days, 80 percent of all malware disappears from the Internet.
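A toy example shows why this churn defeats fixed byte signatures.  This is not how real malware packers work in detail – it is a deliberately harmless illustration – but even a trivial XOR encoding of a payload changes every byte, so a signature matched against the old bytes no longer fires, and each new key produces a new-looking “variant”:

```python
# Toy illustration (not real malware) of signature evasion: XOR-encoding
# a payload with a fresh key changes every byte, so a fixed byte
# signature stops matching, yet the original is trivially recoverable.

PAYLOAD = b"EVIL_PAYLOAD_MARKER"   # the hypothetical signature a scanner knows

def xor_encode(data, key):
    """XOR every byte with a single-byte key (its own inverse)."""
    return bytes(b ^ key for b in data)

variant1 = xor_encode(PAYLOAD, 0x41)
variant2 = xor_encode(PAYLOAD, 0x42)

print(PAYLOAD in variant1)                     # False: signature no longer matches
print(variant1 == variant2)                    # False: each "mutation" looks new
print(xor_encode(variant1, 0x41) == PAYLOAD)   # True: decoding restores the payload
```

Automated repacking along these lines is one way a single piece of malware can appear as thousands of distinct samples, each alive for only a day or two.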

Given this rate of change, even the daily updates that anti-virus vendors have been using for some time are probably not enough.  Of course, most vendors now use various detection heuristics in addition to signature matching, but these have their own problems.
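One such heuristic – chosen here purely for illustration, not as any particular vendor’s method – flags files with unusually high byte entropy, since packed or encrypted executables tend to look almost random.  It also shows where the problems come from: plenty of legitimate compressed files have high entropy too, so heuristics trade missed detections for false positives:

```python
# One illustrative heuristic (an assumption for this sketch, not any
# vendor's actual method): packed/encrypted code has high byte entropy,
# so very high entropy is a weak "suspicious" signal.

import math
from collections import Counter

def byte_entropy(data):
    """Shannon entropy of the byte distribution, in bits per byte (0.0-8.0)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

plain = b"the quick brown fox jumps over the lazy dog " * 20
random_like = bytes(range(256)) * 4   # stands in for packed/encrypted data

print(round(byte_entropy(plain), 2))        # well below 8.0: English text
print(round(byte_entropy(random_like), 2))  # 8.0, the maximum
```

A real engine would combine many such weak signals; any single one, applied alone, would drown analysts in false alarms.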

I still recommend that Windows users obtain a decent anti-virus program and keep it up to date.  But having an effective firewall, in hardware or software, keeping up to date with software security patches, and avoiding software known to be particularly prone to malware attacks (such as Internet Explorer version 6) are probably more important.

Safer Software

August 14, 2009

Anyone who has used a computer for any length of time is aware of the truth of the aphorism that all non-trivial software contains bugs.  Our current procedures for developing software leave a lot to be desired.  To cite a remark attributed to Gerald Weinberg,

If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.

There have been numerous brave attempts to improve this situation, and to make software engineering as a discipline resemble other areas in engineering in some respect besides name.  (One of the earliest and best of these is The Mythical Man-Month, by Fred Brooks.)   But basically the way software is developed is to build it, then test it in order to see if it breaks.

There has been some work done in the area of provably-correct software: that is, software that can be shown, with formal logic, to correctly implement a particular specification.  This is a useful exercise in thinking about the problem, and its most important consequence to date has probably been to help people realize how hard the problem really is.  For example, coming up with an adequately specific and detailed specification is not in itself a trivial task.

However, The Engineer, a UK magazine for engineers, has an interesting article on a project done at NICTA, a national technology research center in Australia.   A team from NICTA, along with some researchers from the University of New South Wales, has managed to complete a formal correctness proof of the L4 operating system microkernel for embedded devices.  The specification of the system is written in Haskell, a functional programming language.  The actual code of the microkernel (about 7500 lines of C and assembler) is then verified against the specification.   The proof is not simple, involving as it does more than 10,000 intermediate proofs.  But it is a noteworthy achievement, since it is much larger than any proof completed before.
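The NICTA result is a machine-checked mathematical proof, which is far beyond anything that fits in a blog post; but the underlying idea – a clean functional specification against which a lower-level implementation is checked – can at least be gestured at.  The sketch below (in Python, with an invented “pick the highest-priority ready thread” operation) compares spec and implementation exhaustively on a tiny domain.  That is testing, not proof, but the spec/implementation split is the same:

```python
# Sketch of the spec-vs-implementation idea.  The operation and domain
# are hypothetical; the real project proves, rather than tests, that the
# C code refines its functional (Haskell) specification.

from itertools import product
from typing import Optional

def spec_pick(ready) -> Optional[int]:
    """Functional specification: the highest priority, or None if none ready."""
    return max(ready) if ready else None

def impl_pick(ready) -> Optional[int]:
    """Imperative implementation, as the low-level code might be written."""
    best = None
    for prio in ready:
        if best is None or prio > best:
            best = prio
    return best

# Exhaustively compare spec and implementation on every list of length
# <= 3 over a tiny priority domain {0, 1, 2}.
for n in range(4):
    for ready in product(range(3), repeat=n):
        assert impl_pick(list(ready)) == spec_pick(list(ready))

print("implementation agrees with specification on the whole test domain")
```

The leap the NICTA team made is replacing “on the whole test domain” with “for all possible inputs”, established once and for all by a formal proof rather than by enumeration.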

‘It is hard to comment on this achievement without resorting to clichés,’ said Prof Lawrence Paulson at Cambridge University’s Computer Laboratory. ‘Proving the correctness of 7,500 lines of C code in an operating system’s kernel is a unique achievement, which should eventually lead to software that meets currently unimaginable standards of reliability.’

One benefit of the formal technique is that it can determine that certain kinds of problems (e.g., buffer overflows) are not possible.  Given how many well-known, simple errors like this crop up every day, having another way of looking for them is a very attractive idea.

A technical paper describing this research is scheduled to be presented at the 22nd ACM Symposium on Operating Systems Principles. More information on the project is available at the NICTA Web site.

Although I very much doubt that this signals the end of bugs in software, it should, if properly used, give us another tool to help squash them.
