By this time, the problems that Toyota has had in dealing with the apparent sudden acceleration of some of its cars are rather old news. The company has probably scored one notable success: providing the basis for a business-school case study on how not to handle a public relations problem. Despite the recall of many vehicles, a definitive cause of the problem has not, as far as I know, been identified. This doesn’t particularly surprise me, since I have suspected since the early reports that the problem might be software-related; and, as I have written a couple of times, bugs in complex software systems can be very hard to find.
I thought of the Toyota case recently when I saw an article at Ars Technica describing a very interesting security research paper that is due to be presented at the IEEE Symposium on Security and Privacy, which starts tomorrow in Oakland, California. As I mentioned in another post on the Toyota problems, modern automobiles have largely replaced direct mechanical or electro-mechanical controls with computerized electronic systems. The microprocessors, or Electronic Control Units [ECUs], that implement these systems are linked together in one or more networks (typically using a protocol called Controller Area Network [CAN]), and collectively run a very large code base — some estimates put it in the millions of lines.
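To give a sense of just how bare-bones the CAN protocol is, here is a minimal sketch, in Python, of how a classic CAN data frame is packed using the Linux SocketCAN `struct can_frame` layout: an arbitration ID, a length byte, and at most eight data bytes. Note what is absent — there is no sender address and no authentication field, which is part of why the researchers found access so unrestricted.

```python
import struct

# Linux SocketCAN "struct can_frame" layout: 32-bit CAN ID,
# 1-byte data length code (DLC), 3 padding bytes, 8 data bytes.
CAN_FRAME_FMT = "<IB3x8s"

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN data frame.

    Nothing here identifies or authenticates the sender:
    any node on the bus may transmit any ID it likes.
    """
    if len(data) > 8:
        raise ValueError("classic CAN carries at most 8 data bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_can_frame(frame: bytes):
    """Recover (can_id, data) from a packed frame."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, data[:dlc]

frame = pack_can_frame(0x7DF, bytes([0x02, 0x01, 0x0D]))
print(len(frame))               # 16 (the fixed SocketCAN frame size)
print(unpack_can_frame(frame))
```

This is only a sketch of the frame format, not of the researchers' tooling; but the point stands on its own — the protocol's only structure is an ID and a handful of bytes.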
The new paper will be presented by researchers from the Center for Embedded Automotive Systems Security [CAESS], a research collaboration between the University of California, San Diego, and the University of Washington. The researchers obtained two late-model vehicles (they do not identify the make or model) and examined the security of their electronic control systems both in the laboratory and in actual operation. (The operational tests were carried out at an abandoned airfield for safety’s sake.) They did not find it difficult to gain access to the control systems:
Their attacks used physical access to the federally mandated On-Board Diagnostics (OBD-II) port, typically located under the dashboard. This provided access to another piece of federally mandated equipment, the Controller Area Network (CAN) bus. With this access, they could control the various Electronic Control Units (ECUs) located throughout the vehicle, with scant few restrictions.
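To make concrete what access through the OBD-II port looks like: diagnostic tools talk to ECUs by broadcasting standardized request frames on the CAN bus. Here is a sketch of the standard OBD-II "current data" request for vehicle speed (mode 0x01, PID 0x0D, per the SAE J1979 standard); the helper names are my own, and this illustrates the protocol rather than reproducing the researchers' code.

```python
OBD_REQUEST_ID = 0x7DF     # broadcast CAN ID that diagnostic-capable ECUs listen on
MODE_CURRENT_DATA = 0x01
PID_VEHICLE_SPEED = 0x0D   # response payload carries speed in km/h

def build_obd_request(mode: int, pid: int) -> bytes:
    """Build the 8-byte payload of an OBD-II request frame.

    Byte 0 is the count of meaningful bytes that follow;
    the remainder of the frame is padding.
    """
    return bytes([0x02, mode, pid]).ljust(8, b"\x00")

def parse_speed_response(payload: bytes) -> int:
    """Decode an ECU reply of the form [len, mode|0x40, pid, speed, ...]."""
    if payload[1] != (MODE_CURRENT_DATA | 0x40) or payload[2] != PID_VEHICLE_SPEED:
        raise ValueError("not a vehicle-speed response")
    return payload[3]  # speed in km/h

req = build_obd_request(MODE_CURRENT_DATA, PID_VEHICLE_SPEED)
# A responding ECU would answer with something like:
reply = bytes([0x03, 0x41, 0x0D, 0x32, 0x00, 0x00, 0x00, 0x00])
print(parse_speed_response(reply))  # 50 (km/h)
```

Nothing in this exchange requires the requester to prove it is a legitimate diagnostic tool, which is exactly the gap the researchers exploited.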
The relevant standards require very little in the way of security precautions, and these are not always implemented correctly. The research team was able to develop a number of successful attacks.
Once the researchers had gained access, they developed a number of attacks against their target vehicles, and then tested many of them while the cars were being driven around an old airstrip. Successful attacks ranged from the annoying—switching on the wipers and radio, making the heater run full blast, or chilling the car with the air conditioning—to the downright dangerous. In particular, the brakes could be disabled. The ignition key could then be locked into place, preventing the driver from turning the car off.
About the only function they were not able to control was the steering, and they note that even this may well be possible in some high-end cars, which are starting to appear with self-parking systems. (Having lived in cities most of my adult life, I tend to look on these as crutches for the incompetent.) They were even able to load new firmware onto the control system while the vehicle was operating.
What they tested was harmless—turning on the wipers when the car reached 20 mph—but the possibilities were enormous: for example, the ECU could wait until the car was going at 80 mph, and then disable all the brakes. They could also program in the ability to reboot and reset the ECU, so their hacked firmware would be removed from the system, leaving no trace of what they had done.
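The demonstration firmware amounts to a trigger: watch the speed reports on the bus and fire an action once a threshold is crossed. A toy sketch of that logic follows — all names are mine, the "action" is deliberately abstract, and this is illustrative rather than the researchers' actual code.

```python
def speed_trigger(threshold_mph: float):
    """Return a monitor that fires exactly once, the first time the
    observed speed reaches the threshold -- mimicking the harmless
    'wipers at 20 mph' demonstration described in the paper."""
    fired = False

    def observe(speed_mph: float) -> bool:
        nonlocal fired
        if not fired and speed_mph >= threshold_mph:
            fired = True
            return True   # here the demo firmware would act (e.g. switch on the wipers)
        return False

    return observe

monitor = speed_trigger(20)
readings = [5, 12, 19, 21, 25]
events = [monitor(s) for s in readings]
print(events)  # [False, False, False, True, False]
```

The one-shot structure also shows why such an implant is easy to hide: the trigger sits dormant and does nothing observable until its condition is met.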
The researchers are careful to state in their paper (PDF available here) that they did not attempt to construct a threat model for the cars’ systems, or conduct a full risk analysis. Rather, they view their work as a first step to identify what attacks might be possible, and to determine, at least to some extent, how resilient the control systems are in the face of such attacks — the answer, unfortunately, is “not very”. Their attacks generally do require physical access to the vehicle, but with the advent and growing popularity of telematics systems like GM’s OnStar®, this may give a false sense of security.
Security problems in software systems are generally at least as hard to find as other bugs, and the track record of software developers in getting security right on the first try is not encouraging. I hope that this kind of research will spur the auto manufacturers to address the question of security before they are forced to do so by a rash of security failures.