Experimental results of fingerprint comparison validity and reliability: A review and critical analysis |
| |
Affiliation: | |
| |
Abstract: | Our purpose in this article is to determine whether the results of the published experiments on the accuracy and reliability of fingerprint comparison can be generalized to fingerprint laboratory casework, and/or to document the error rate of the Analysis–Comparison–Evaluation (ACE) method. We review the existing 13 published experiments on fingerprint comparison accuracy and reliability. These studies comprise the entire corpus of experimental research published on the accuracy of fingerprint comparisons since criminal courts first admitted forensic fingerprint evidence about 120 years ago. We start with the two studies by Ulery, Hicklin, Buscaglia and Roberts (2011, 2012), because they are recent, large, designed specifically to provide estimates of the accuracy and reliability of fingerprint comparisons, and intended to respond to the criticisms cited in the National Academy of Sciences Report (2009). Following the two Ulery et al. studies, we review and evaluate the other eleven experiments, considering problems that are unique to each. We then evaluate the 13 experiments for the problems common to all or most of them, especially with respect to the generalizability of their results to laboratory casework. Overall, we conclude that the experimental designs employed deviated from casework procedures in critical ways that preclude generalization of the results to casework. 
The experiments asked examiner-subjects to carry out their comparisons using different responses from those employed in casework; the experiments presented the comparisons in formats that differed from casework; the experiments enlisted highly trained examiners as experimental subjects rather than subjects drawn randomly from among all fingerprint examiners; the experiments did not use fingerprint test items known to be comparable in type, and especially in difficulty, to those encountered in casework; and the experiments did not require examiners to use the ACE method, nor was that method defined, controlled, or tested in these experiments. Until there is significant progress in defining and measuring the difficulty of fingerprint test materials, and until the steps to be followed in the ACE method are defined and measurable, we conclude that new experiments patterned on these existing experiments cannot inform the fingerprint profession or the courts about casework accuracy and errors. |
| |
Keywords: | |
Indexed in databases including ScienceDirect.
|