Fixing Fingerprint Flaws

December 28, 2011

I’ve written here before about some of the problems with fingerprints, one of the older biometric technologies; with DNA analysis, currently the “gold standard” of biometric identification; and about some of the issues involved with biometrics in general.  These techniques, of course, always work brilliantly in the movies and on TV shows like CSI; but, as we are frequently reminded, the real world can be a messier place.  It should be obvious, at least, that biometric evidence is necessarily statistical in nature; it is no more possible to prove that fingerprints are unique than it is to prove that no two snowflakes are alike.  There have been miscarriages of justice, and near misses, because this fundamental principle was not understood, or just ignored.

A recent article in New Scientist reports on further evidence of problems with fingerprint forensics.  The Scottish Government set up the Fingerprint Inquiry, chaired by The Rt Hon Sir Anthony Campbell, under the Inquiries Act 2005, to look into the fingerprint analysis used in the case of HM Advocate v McKie.  Shirley McKie was a police detective involved in a murder investigation who was subsequently tried for perjury based on a fingerprint found at the crime scene; her trial included conflicting expert testimony on whether the fingerprint in question matched McKie’s.  She was found not guilty by the jury.  (More background on the case is here.)

The Fingerprint Inquiry report was published on December 14, and provides a comprehensive history of the case, an examination of current practices with respect to fingerprint evidence, and a set of recommendations for improvements.

The report, published on 14 December, concludes that human error was to blame and voices serious concerns about how fingerprint analysts report matches. It recommends that they no longer report conclusions with 100 per cent certainty, and develop a process for analysing complex, partial or smudged prints involving at least three independent examiners who fully document their findings.

The New Scientist article also reports the findings of two other studies that looked at possible biases in the customary analysis of fingerprint evidence.  In the first, a team led by Itiel Dror of University College London tested whether fingerprint analysts’ results for crime-scene prints were affected if they saw suspects’ fingerprints at the time of analysis.  It should not come as a huge surprise that there was a difference.  The same team also examined how analysts checked potential matches provided by an automated fingerprint identification system [AFIS].  They found that candidates appearing earlier on the AFIS-generated lists were more likely to be identified as matches by the examiners, and that changing the order of entries on the AFIS list could change which “match” was selected.  This research suggests that the process by which fingerprint evidence is analyzed, including the order and context in which prints are presented, can materially affect examiners’ conclusions.

In addition to the recommendations of the Fingerprint Inquiry, he [Dror] says examiners should always analyse crime-scene prints and document their findings before seeing a suspect’s print, and should have no access to other contextual evidence.

Despite the impression of scientific rigor and infallibility one may get from watching TV cop shows, fingerprint evidence, in addition to resting on a statistical assertion, is collected and analyzed by people, and is therefore subject to the same kinds of errors that people in other fields make as a matter of course.  If we want the administration of justice to be as fair as possible, we need to make sure that message is understood and reflected in forensic practice.
