
Statement Validity Assessment: Inter-Rater Reliability of Criteria-Based Content Analysis in the Mock-Crime Paradigm

NCJ Number
212256
Journal
Legal and Criminological Psychology Volume: 10 Issue: 2 Dated: September 2005 Pages: 225-245
Author(s)
Heinz Werner Gödert; Matthias Gamer; Hans-Georg Rill; Gerhard Vossel
Date Published
September 2005
Length
21 pages
Annotation
This study assessed the inter-rater reliability of criteria-based content analysis (CBCA), the main component of statement validity assessment (SVA); tested the adequacy of various statistical indexes of reliability; and analyzed CBCA's effectiveness in distinguishing between true and false statements.
Abstract
SVA is the most common technique for testing the credibility of verbal statements. It consists of a structured interview; the CBCA, which systematically assesses the content quality of the statement; and the Validity Checklist, which relates the CBCA outcome to other evidence and to factors associated with the interview. The CBCA comprises 19 criteria assumed to reflect a statement's content quality in terms of vividness, concreteness, originality, psychological coherence, and the like. The more of these criteria a statement meets, and the more explicitly it meets them, the more probable it is that the statement is credible, i.e., based on real experience. To test the reliability of CBCA across raters, three raters were trained in CBCA and then applied its criteria to transcripts of 102 statements by witnesses and suspects in a simulated theft of money. Some of the statements were based on actual experience and some were false in various ways. Rater judgments varied only slightly across transcripts, which made the weighted kappa coefficient, the product-moment correlation, and the intraclass correlation inadequate indexes of reliability. The Finn coefficient and percentage agreement, which were calculated as indexes independent of the distribution of rater judgments, were sufficiently high for 17 of the 18 assessed CBCA criteria, and across raters the CBCA differentiated significantly between truthful and fabricated accounts. 4 tables, 59 references, and an appended description of the rater training
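
For readers unfamiliar with the reliability indexes named above, the following is a minimal illustrative sketch (not the authors' code) of how percentage agreement and the Finn coefficient can be computed for a single CBCA criterion. It assumes each criterion is scored on a k-point integer scale by several raters; the exact rating scale and software used in the study are not stated in this record, so the scale, data, and function names below are hypothetical.

import numpy as np

def percentage_agreement(rater_a, rater_b):
    # Share of transcripts on which the two raters assign identical scores.
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    return float(np.mean(rater_a == rater_b))

def finn_coefficient(ratings, k):
    # Finn (1970) reliability for one criterion: 1 minus the ratio of the
    # observed within-transcript (error) variance to the variance expected
    # if ratings were uniformly distributed over the k scale points.
    ratings = np.asarray(ratings, dtype=float)   # shape: (n_transcripts, n_raters)
    ms_error = ratings.var(axis=1, ddof=1).mean()
    chance_var = (k ** 2 - 1) / 12.0
    return 1.0 - ms_error / chance_var

# Hypothetical example: three raters score five transcripts on a 5-point scale (0-4).
scores = np.array([[3, 3, 2],
                   [4, 4, 4],
                   [1, 0, 1],
                   [2, 2, 2],
                   [3, 4, 3]])
print(percentage_agreement(scores[:, 0], scores[:, 1]))  # 0.6
print(finn_coefficient(scores, k=5))                     # 0.9; near 1 when raters agree

Because the Finn coefficient benchmarks observed disagreement against chance variance rather than against between-transcript variance, it remains informative when judgments vary little across transcripts, which is what the abstract means by indexes that are independent of the distribution of rater judgments.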