Biometrics and Security

The “Babbage” blog at The Economist has a good article this week about the current state of biometric identification techniques.   As the article points out, the public perception of biometrics as a sort of security “silver bullet” is somewhat at odds with the reality that they rarely work as well in practice as they do on CSI.

THANKS to gangster movies, cop shows and spy thrillers, people have come to think of fingerprints and other biometric means of identifying evildoers as being completely foolproof. In reality, they are not and never have been, and few engineers who design such screening tools have ever claimed them to be so. Yet the myth has persisted among the public at large and officialdom in particular.

I’ve written here before about some of the problems with fingerprints, one of the older biometric technologies, and with DNA analysis, currently the “gold standard” of biometric identification.

The article goes on to outline some of the things that can cause identifications based on biometrics to go wrong.  First, the scientific understanding of the issues involved is not, in general, entirely satisfactory.  We lack understanding, for example, of how biological changes, such as age, disease, or stress, might affect particular biometric characteristics.  In some cases, as with DNA, we may not have a solid handle on what the probabilities of a random match actually are.  Second, the sensors and other equipment used to take biometric measurements may be subject to errors due to poor design, mis-calibration, or environmental factors.
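To get a feel for why that last point matters, here is a rough back-of-the-envelope sketch.  The figures are purely illustrative assumptions of mine, not estimates from the article or from any real database, but they show how even a seemingly tiny random-match probability starts to produce coincidental hits once you trawl a large database.

```python
# Illustrative only: how a small random-match probability interacts with a
# large database search.  Both numbers below are assumptions, not real figures.

random_match_probability = 1e-6    # assumed chance two unrelated samples match
database_size = 10_000_000         # assumed number of profiles searched

expected_coincidental_hits = random_match_probability * database_size

# Probability of at least one coincidental hit, assuming independent profiles
p_at_least_one = 1 - (1 - random_match_probability) ** database_size

print(f"Expected coincidental matches: {expected_coincidental_hits:.1f}")
print(f"Chance of at least one coincidental match: {p_at_least_one:.1%}")
```

A “one in a million” match probability sounds reassuring, but in a ten-million-record trawl it implies about ten coincidental matches, which is why getting those probabilities right matters so much.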

The biggest risk of error, though, comes from the fact that we are relying not just on the measurement of someone’s fingerprints or iris patterns, but on the total system used to implement the biometric identification or authentication.  “Babbage” reminds us of a stark recent example of this: a US lawyer who was, for a time, falsely accused of being part of the 2004 terrorist bombing attacks in Madrid.

The eye-opener was the arrest of Brandon Mayfield, an American attorney practicing family law in Oregon, for the terrorist bombing of the Madrid subway in 2004 that killed 191 people. In the paranoia of the time, Mr Mayfield had become a suspect because he had married a woman of Egyptian descent and had converted to Islam. A court found the fingerprint retrieved from a bag of explosives left at the scene, which the Federal Bureau of Investigation (FBI) had “100% verified” as belonging to Mr Mayfield, to be only a partial match—and then not for the finger in question.

The fingerprint in question turned out to be from an entirely different person, an Algerian national.

Some of the current or proposed uses of biometrics for mass screening purposes suffer from the same “base rate fallacy” that affects medical testing.  As the article correctly points out, since terrorists are rare in the general population, even a very accurate test is likely to produce a significant number of false positives (that is, innocent people wrongly labeled as suspects), which will create a new class of problems.
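It is worth actually working through the arithmetic, because the result tends to surprise people.  The numbers below are illustrative assumptions of mine, not figures from the article or the NRC report, but the conclusion is not very sensitive to them.

```python
# Back-of-the-envelope illustration of the base rate fallacy.  All of the
# numbers below are assumptions chosen for illustration, not figures from
# the Economist article or from any real screening system.

population = 50_000_000        # hypothetical number of people screened
real_suspects = 100            # hypothetical number of actual "positives"
true_positive_rate = 0.99      # screening catches 99% of real suspects
false_positive_rate = 0.001    # screening wrongly flags 0.1% of innocent people

true_alarms = real_suspects * true_positive_rate
false_alarms = (population - real_suspects) * false_positive_rate

print(f"Real suspects flagged:   {true_alarms:,.0f}")
print(f"Innocent people flagged: {false_alarms:,.0f}")
print(f"Chance that a flagged person is a real suspect: "
      f"{true_alarms / (true_alarms + false_alarms):.2%}")
```

Even with a test that catches 99% of real suspects and wrongly flags only 0.1% of innocent people, roughly 500 innocent people are flagged for every genuine suspect, so the overwhelming majority of alarms are false ones.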

The article also mentions a five-year study of biometrics, Biometric Recognition: Challenges and Opportunities, carried out by the National Research Council in the US, and published by the National Academies Press.  (You can read the report online at the link above, or download a PDF copy if you register at the site.)

Finally, there is one point, not mentioned in the article, that I think is very important to remember, especially for those of us in the technology business.  The ability to measure a biometric is useful for security only if there is a way to compare that measurement with one from an “authentic” specimen.  Signatures, for example, are one of the oldest biometric security devices, and not particularly high-tech ones; yet there would not be much point in signing the checks you write if the bank did not have a sample of your signature on file.

To take a more modern example, the output from a fingerprint or iris scanner has to be compared with measurements stored in a database somewhere.  Getting a fancier biometric measurement will not help much if someone can subvert the communications channel between the biometric reader and the database, or can corrupt the database itself.   If the biometric sensor is physically separate from the database, one is then essentially checking, not the biometric itself, but the image data as transmitted across the network.  Potentially, someone who could steal a copy of the database, or parts of it, without detection, could generate a bogus “biometric” that would appear perfectly valid when presented to the database.
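As a very rough sketch of what that comparison step looks like, consider the toy code below.  It is purely illustrative: it assumes the scanner reduces an image to a fixed-length feature vector, and the similarity measure, threshold, and “database” are all invented for the example; real systems use proprietary template formats and far more sophisticated matching.

```python
import math

MATCH_THRESHOLD = 0.92   # hypothetical similarity cut-off, invented for this sketch

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in the range [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def authenticate(claimed_user, scanned_vector, template_db):
    """Accept or reject by comparing the transmitted scan with the stored template.

    Everything this function sees -- the feature vector arriving over the
    network and the template pulled from the database -- could be forged or
    corrupted without ever touching the sensor itself.
    """
    template = template_db.get(claimed_user)
    if template is None:
        return False
    return cosine_similarity(scanned_vector, template) >= MATCH_THRESHOLD

# Toy usage: the "database" is just a dict of enrolled feature vectors.
template_db = {"alice": [0.12, 0.87, 0.45, 0.33]}
print(authenticate("alice", [0.11, 0.88, 0.44, 0.35], template_db))  # True
print(authenticate("alice", [0.90, 0.10, 0.05, 0.70], template_db))  # False
```

The point of the sketch is where the decision gets made: not at the finger or the eye, but in a function that trusts whatever feature vector arrives over the wire and whatever template comes back from the database.  Compromise either one, and the system is defeated without the sensor ever being fooled.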

Biometric technology is certainly a useful security tool, but it is not the answer to all our security problems.

 
