NCJRS Virtual Library


Report of the Ad Hoc Committee on Validated Techniques

NCJ Number
238105
Journal
Polygraph Volume: 40 Issue: 4 Dated: 2011 Pages: 203-305
Author(s)
Mike Gougler; Raymond Nelson; Mark Handler; Donald Krapohl; Pam Shaw; Leonard Bierman
Date Published
2011
Length
103 pages
Annotation
This report presents the results of a meta-analytic survey of criterion accuracy of validated polygraph techniques.
Abstract
Meta-analytic methods were used to calculate the effect size of validated psychophysiological detection of deception (PDD) techniques, expressed in terms of criterion accuracy, and Monte Carlo methods were used to calculate statistical confidence intervals. Results were summarized for 45 samples from experiments and surveys, comprising 11,737 scored results provided by 295 scorers for 3,723 examinations: 6,109 scores of 2,015 confirmed deceptive examinations and 5,628 scores of 1,708 confirmed truthful examinations. Fourteen PDD techniques were each supported by a minimum of two published studies that satisfied the qualitative and quantitative requirements for inclusion in the meta-analysis. Results for the individual studies, and for the different PDD techniques, were compared using multivariate analytic methods.

Two studies produced outlier results that are not accounted for by the available evidence and are not generalizable. Excluding those outliers, there were no significant differences in criterion accuracy between any of the PDD techniques supported by the selected studies. Comparison question techniques intended for event-specific (single-issue) diagnostic testing, in which the criterion variance of multiple relevant questions is assumed to be non-independent, produced an aggregated decision accuracy rate of .890 (.829 - .951) with a combined inconclusive rate of .110 (.047 - .173). Comparison question PDD techniques designed to be interpreted with the assumption of independence of the criterion variance of multiple relevant questions (multiple-issue and multiple-facet exams) produced an aggregated decision accuracy rate of .850 (.773 - .926) with a combined inconclusive rate of .125 (.068 - .183). The combination of all validated PDD techniques, excluding outlier results, produced a decision accuracy of .869 (.798 - .940) with an inconclusive rate of .128 (.068 - .187). (Published Abstract)
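The general idea of a Monte Carlo confidence interval for an accuracy rate can be illustrated with a minimal sketch. The report's actual simulation model is not described in this abstract; the code below assumes a simple Bernoulli resampling scheme and pairs the overall .869 decision accuracy with the 3,723 examinations purely for illustration. Note that because the committee's intervals also reflect between-study variability, the published interval (.798 - .940) is far wider than the pure sampling-error interval this sketch produces.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def monte_carlo_ci(p_hat, n, trials=10_000, alpha=0.05):
    """Estimate a (1 - alpha) confidence interval for an observed
    proportion p_hat over n cases by simulating Bernoulli(p_hat)
    samples (a simplification; the committee's simulation model is
    not specified in the abstract)."""
    sims = []
    for _ in range(trials):
        hits = sum(1 for _ in range(n) if random.random() < p_hat)
        sims.append(hits / n)
    sims.sort()
    lo = sims[int((alpha / 2) * trials)]
    hi = sims[int((1 - alpha / 2) * trials) - 1]
    return lo, hi

# Illustrative pairing of the aggregated .869 decision accuracy
# with the 3,723 examinations reported in the abstract.
low, high = monte_carlo_ci(0.869, 3723)
```

Under this naive model the interval hugs the point estimate (roughly .86 to .88), which underlines why the report needed a simulation that also captures heterogeneity across studies and scorers.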