I’ve written here a couple of times about the risk of an adversary inserting malicious code into a PC’s firmware, either in the BIOS or elsewhere, perhaps on a network interface card. The risk is not just from intentionally malicious code; there is also a potential problem with parts that are not genuine, but cheap “knock-offs”, the hardware equivalent of a fake Louis Vuitton bag. One of the reasons these risks are considered so serious is that, from the usual computer security viewpoint, which is focused (for good reason!) on software, such exploits are effectively invisible, appearing to be part of the machine’s hardware. The issue has generated considerable concern with respect to defense systems, since so many contemporary weapons systems depend on electronic components; and, as in civilian life, many of these components are manufactured in other countries, especially China. On the other hand, some skepticism has also been expressed about the practicality of such an attack.
The Technology Review has a report on a presentation at last week’s Black Hat US security conference, in which a French hacker, Jonathan Brossard, demonstrated a practical attack of this kind that would work on a wide range of current PCs.
At the Black Hat security conference in Las Vegas last week, Jonathan Brossard demonstrated software that can be hidden deep inside the hardware of a PC, creating a back door that would allow secret remote access over the Internet. His secret entrance can’t even be closed by switching a PC’s hard disk or reinstalling its operating system.
The exploit, which is called Rakshasa, is quite cleverly designed to minimize its chances of being detected. The modified firmware on the PC contains only a minimal bootstrap: just enough code to bring in the rest of the package.
When a PC with Rakshasa installed is switched on, the software looks for an Internet connection to fetch the small amount of code it needs to compromise the computer.
This means that, if there is no Internet connection available, the exploit will not function; but it also makes Rakshasa stealthy, essentially invisible to most malware detection methods, since it leaves no “footprints” in the file system or in the disk’s boot record. It means, too, that new attack functions can be added to the retrieved malware at any time.
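To get a feel for why that is, consider what a conventional integrity check actually looks at. Here is a minimal sketch, in Python, of the kind of boot-record check a disk-oriented tool might perform; the device path and baseline hash are illustrative, and reading a raw disk device requires root. Since Rakshasa never writes to the disk at all, a check like this would come back clean:

```python
# Sketch of a boot-record integrity check, the sort of thing a
# disk-oriented anti-malware tool relies on. Device path and
# baseline hash are illustrative only.
import hashlib

DISK = "/dev/sda"            # illustrative; typical first disk on Linux
KNOWN_GOOD_SHA256 = "..."    # baseline recorded when the machine was clean

with open(DISK, "rb") as disk:
    mbr = disk.read(512)     # the classic MBR is the first 512-byte sector

if mbr[510:512] != b"\x55\xaa":          # standard MBR boot signature
    print("no MBR boot signature found")
elif hashlib.sha256(mbr).hexdigest() == KNOWN_GOOD_SHA256:
    print("boot record unchanged since baseline (firmware still unchecked!)")
else:
    print("boot record differs from baseline")
```

The same logic applies to scanning the file system: against a payload that lives only in firmware and RAM, there is simply nothing on the disk to find.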
The code Rakshasa fetches is used to disable a series of security controls that limit what changes low-level code can make to the high-level operating system and memory of a computer. Then, as the computer’s operating system is booted up, Rakshasa uses the powers it has granted itself to inject code into key parts of the operating system. Such code can be used to disable user controls, or steal passwords and other data to send back to the person controlling Rakshasa.
The response of at least one manufacturer was, sadly and predictably, an attempt at spin control.
The attack can work on PCs with any kind of processor, but many of the standard features of PC motherboards originated with Intel. Suzy Greenberg, a spokeswoman for that company, said in an e-mail that Brossard’s paper was “largely theoretical,” since it did not specify how an attacker would insert Rakshasa onto a system, and did not take into account that many new BIOS chips have cryptographically verified code that would prevent it from working.
The response also, to a considerable extent, misses the point. A supplier of chips or firmware, or a PC manufacturer, could easily install something like Rakshasa. I presume that the “new BIOS chips” Ms. Greenberg refers to are those implementing the UEFI Secure Boot feature; however, as I’ve discussed, it’s likely that most new PCs with this feature will have some means of bypassing it, so that alternative software can be installed. It is not difficult to imagine a “social engineering” attack that would persuade users to install a firmware “upgrade”. Once that is done, the likelihood that any current anti-malware tools would discover anything amiss is very low.
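For what it’s worth, the “cryptographically verified code” idea is sound as far as it goes: before an update is flashed, the image is checked against a signature made with the vendor’s key. The sketch below, using Python’s cryptography package and assuming an RSA vendor key with PKCS#1 v1.5 signatures over SHA-256 (the file names are invented), shows the shape of such a check:

```python
# Sketch of vendor-signature verification for a firmware image.
# Assumes an RSA key and PKCS#1 v1.5 / SHA-256 signatures; the
# file names are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

with open("vendor_pubkey.pem", "rb") as f:
    vendor_key = serialization.load_pem_public_key(f.read())

with open("firmware_update.bin", "rb") as f:
    image = f.read()
with open("firmware_update.sig", "rb") as f:
    signature = f.read()

try:
    vendor_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
    print("image is signed by the vendor's key; OK to flash")
except InvalidSignature:
    print("signature check failed; refusing to flash")
```

The verification itself is straightforward; the weakness is in the surrounding process. If the platform will flash an unsigned image once the user agrees to bypass the check, the protection is only as strong as the user’s judgment.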
Software flaws are difficult to find, and hardware defects are even more elusive. In general, the lower the level of a bug (the closer to the “bare metal”), the harder it is to find, a point Ken Thompson made many years ago in “Reflections on Trusting Trust”, the lecture he gave on accepting the 1983 Turing Award from the Association for Computing Machinery:
No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.
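Thompson’s scenario is easier to grasp with a toy. The fragment below is my own illustration, not Thompson’s code: a “compiler” reduced to a source-transforming function, with invented trigger strings and an invented backdoor. Even if every line of the compiler’s published source were inspected and found clean, a compromised running copy could still apply the two rules shown here:

```python
# A toy rendering of the "trusting trust" attack. Everything here is
# invented for illustration; the real attack lived in a C compiler binary.

BACKDOOR = '\nif password == "open-sesame": grant_access()  # injected\n'

def compile_source(source: str) -> str:
    """Pretend compiler: normally just the identity transform on source."""
    if "def check_login" in source:
        # Rule 1: recognize the login program and splice in a backdoor.
        return source + BACKDOOR
    if "def compile_source" in source:
        # Rule 2: recognize the compiler's own source and re-insert both
        # rules into the output, so a clean-looking source tree still
        # yields a compromised compiler on every rebuild.
        return source + "\n# (rules 1 and 2 re-inserted here)\n"
    return source  # every other program compiles normally
```

The toy leaves out the hard part, the quine-like trick of reproducing the rules exactly, but the moral survives: the attack is invisible at the source level, which is the level at which nearly all code review happens.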
It is certainly true that putting together a successful attack of this kind is a considerably more challenging project than constructing an MS Word macro virus. But, as Mr. Brossard has demonstrated, it is by no means impossible, especially for an attacker with significant resources who sees a large potential payoff.