
NCJRS Abstract


NCJ Number: 148185
Title: MEASURING AFIS MATCHER ACCURACY
Journal: Police Chief  Volume: 61  Issue: 4  Dated: April 1994  Pages: 147-151
Author(s): M K Sparrow
Date Published: 1994
Page Count: 5
Sponsoring Agency: National Institute of Justice
Rockville, MD 20849
Sale Source: National Institute of Justice
NCJRS paper reproduction
Box 6000, Dept F
Rockville, MD 20849
United States of America
Type: Report (Technical Assistance)
Format: Article
Language: English
Country: United States of America
Annotation: The author discusses how potential buyers should evaluate the accuracy of automated fingerprint identification systems (AFIS).
Abstract: Buyers--cities, States, and nations--typically make AFIS decisions based on political, sole-source, or pragmatic concerns rather than on an informed understanding of system performance. The most crucial attribute of an AFIS, accuracy, is also the most difficult to measure; other factors such as price, space requirements, user friendliness, and compatibility are much easier to assess. Buyers should not settle for vendor claims of "99.9-percent accuracy." It is also important to interpret other users' claims of satisfaction with their systems in context; for one thing, those users have an interest in defending their past buying decisions. These and other important considerations are discussed: ten-print tests using the same cards twice; generating artificial latent images from rolled prints; overestimating the ten-print comparison problem; using "dabs" as latents; advance submission of test sets; failure to correct for size of database; use of score-dependent measures; use of multiple thresholds; latent test sets based upon previous performance; minimum total error; mismatch score distribution percentiles; and ranked lists. Figure, 3 endnotes
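Two of the pitfalls the abstract lists, failure to correct for size of database and choosing an operating point by minimum total error, can be illustrated numerically. The sketch below is not from the article; the function names, score values, and the 0.001 per-comparison false-match rate are invented for illustration under the usual assumption that comparisons against database entries are independent.

```python
# Illustrative sketch of two accuracy-measurement pitfalls (hypothetical
# numbers; not the article's own data or method).

def database_false_match_rate(single_fmr: float, db_size: int) -> float:
    """Probability of at least one false match when one search print is
    compared against db_size independent database entries."""
    return 1.0 - (1.0 - single_fmr) ** db_size

def minimum_total_error(genuine_scores, impostor_scores, thresholds):
    """Return the threshold that minimizes the sum of the false-non-match
    rate (genuine pairs scoring below threshold) and the false-match rate
    (impostor pairs scoring at or above threshold)."""
    best_t, best_err = None, None
    for t in thresholds:
        fnmr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        fmr = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        if best_err is None or fnmr + fmr < best_err:
            best_t, best_err = t, fnmr + fmr
    return best_t, best_err

# "99.9-percent accuracy" per comparison (false-match rate 0.001) is a
# very different claim when searched against a 500,000-print database:
print(database_false_match_rate(0.001, 500_000))  # effectively 1.0

# Minimum-total-error threshold selection over toy score distributions:
genuine = [0.7, 0.8, 0.9]
impostor = [0.1, 0.2, 0.3]
print(minimum_total_error(genuine, impostor, [0.25, 0.5, 0.75]))
```

The point of the first calculation is the one the article presses: a single-comparison error rate means little until it is scaled to the size of the database being searched.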
Main Term(s): Computers
Index Term(s): Automated fingerprint processing; Police
To cite this abstract, use the following link:
http://www.ncjrs.gov/App/publications/abstract.aspx?ID=148185
