
NCJRS Abstract

The document referenced below is part of the NCJRS Virtual Library collection.


NCJ Number: 212256
Title: Statement Validity Assessment: Inter-Rater Reliability of Criteria-Based Content Analysis in the Mock-Crime Paradigm
Journal: Legal and Criminological Psychology, Volume 10, Issue 2, September 2005, Pages 225-245
Author(s): Heinz Werner Godert; Matthias Gamer; Hans-Georg Rill; Gerhard Vossel
Date Published: September 2005
Page Count: 21
Type: Report (Study/Research); Test/Measurement
Format: Article
Language: English
Country: United Kingdom
Annotation: This study assessed the inter-rater reliability of criteria-based content analysis (CBCA), the main component of statement validity assessment (SVA); tested the adequacy of diverse statistical indexes of reliability; and analyzed CBCA's effectiveness in distinguishing between true and false statements.
Abstract: SVA is the most common technique for testing the credibility of verbal statements. It consists of a structured interview; the CBCA, which systematically assesses the content quality of the statement; and the Validity Checklist, which relates the CBCA outcome to other evidence and to factors associated with the interview. The CBCA comprises 19 criteria assumed to reflect a statement's content quality in terms of vividness, concreteness, originality, psychological coherence, and related features. The more numerous and the more explicit the criteria a statement meets, the more probable it is that the statement is credible, i.e., based on real experience. To test the reliability of CBCA across raters, three raters were trained in CBCA and then asked to apply its criteria to transcripts of 102 statements by witnesses and suspects in a simulated theft of money. Some of the statements were based on actual experience and some were false in various ways. Rater judgments of statement credibility varied only slightly across transcripts; this restricted variance rendered the weighted kappa coefficient, the product-moment correlation, and the intraclass correlation inadequate as indexes of reliability. The Finn coefficient and percentage agreement, which were calculated as indexes independent of the distribution of rater judgments, were sufficiently high for 17 of the 18 assessed CBCA criteria. Across raters, the CBCA differentiated significantly between truthful and fabricated accounts. 4 tables, 59 references, and appended description of the rater training
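The abstract's distinction between distribution-dependent and distribution-independent reliability indexes can be illustrated with a short sketch. The following Python code is a minimal, hypothetical illustration (the data, function names, and scale are not from the study): it computes percentage agreement between two raters and a Finn-style coefficient, which compares observed within-item rater variance to the variance expected by chance on a k-point scale.

```python
# Illustrative sketch only; not the study's actual analysis code or data.

def percent_agreement(rater1, rater2):
    """Proportion of items on which two raters give identical ratings."""
    assert len(rater1) == len(rater2)
    return sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)


def finn_coefficient(ratings, scale_points):
    """Finn-style reliability: 1 - observed within-item variance / chance variance.

    `ratings` is a list of per-item tuples (one rating per rater).
    Chance variance for a uniform k-point rating scale is (k**2 - 1) / 12.
    """
    def sample_var(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    observed = sum(sample_var(item) for item in ratings) / len(ratings)
    chance = (scale_points ** 2 - 1) / 12
    return 1 - observed / chance
```

Because this coefficient benchmarks rater disagreement against chance variance rather than against the observed spread of judgments, it remains informative when ratings cluster tightly across items, which is exactly the situation that makes kappa and correlation-based indexes degenerate.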
Main Term(s): Criminology
Index Term(s): Instrument validation; Interview and interrogation; Investigative techniques; Witness credibility