NCJRS Virtual Library


Pretend It Doesn't Work: The 'Anti-Social' Bias in the Maryland Scientific Methods Scale

NCJ Number
214375
Journal
European Journal on Criminal Policy and Research Volume: 11 Issue: 3-4 Dated: 2005 Pages: 275-296
Author(s)
Tim Hope
Date Published
2005
Length
22 pages
Annotation
This article examines the biases that the social constructs and methodological principles of the Maryland Scientific Methods Scale (SMS) induce in the evaluation of evidence on crime prevention policy interventions aimed at collective social phenomena (i.e., communities).
Abstract
This paper argues that conclusions about the effectiveness of community crime prevention interventions drawn from research designs that score highly on the Maryland Scientific Methods Scale (SMS) are likely to suffer from Type II error: community prevention initiatives are judged ineffective when in fact they may be effective. Using a case study of the problems of inference in an evaluation of the impact of the Priority Estates Project (PEP), the paper finds that the SMS may not be the most useful method of policy evaluation. The case shows that rigorous application of the methodological precepts of the SMS (the experimental paradigm) in the interpretation of results can have seriously misleading policy consequences. Such application can lead to negative conclusions about effectiveness, yet its inherent "anti-social" bias may induce Type II error regarding the desirability of social interventions to reduce crime. What undermines the protagonists' claims to political utility, moreover, may be the particular scientific interpretation encapsulated in the SMS hierarchy of values rather than the research methods themselves. The SMS was proposed as a method for ranking evaluation research studies according to three methodological criteria: control over other variables, measurement error, and statistical power to detect program effects.
References
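
The abstract's central claim turns on statistical power: with only a handful of treated and control areas, a real community-level effect will often fail to reach conventional significance. The Python sketch below is a hypothetical back-of-envelope simulation, not drawn from the article; the effect size, number of areas, and outcome variance are all assumed purely for illustration of how an underpowered area-based design produces a high Type II error rate.

    import random
    import statistics

    # Hypothetical illustration (not from the article): a toy simulation of
    # Type II error, where a real community-level crime reduction routinely
    # fails a significance test because the design is underpowered.
    # All parameter values below are assumptions chosen for illustration.

    random.seed(42)

    TRUE_EFFECT = -0.3   # assumed real reduction in a standardized crime outcome
    N_AREAS = 10         # few treated/control areas, typical of area-based designs
    Z_CRIT = 1.96        # approximate two-sided 5% threshold (z, not t)
    TRIALS = 5000

    missed = 0
    for _ in range(TRIALS):
        treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_AREAS)]
        control = [random.gauss(0.0, 1.0) for _ in range(N_AREAS)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (statistics.variance(treated) / N_AREAS
              + statistics.variance(control) / N_AREAS) ** 0.5
        if abs(diff / se) < Z_CRIT:  # verdict: "no significant effect"
            missed += 1

    print(f"Type II error rate: {missed / TRIALS:.0%}")  # roughly 90% of trials

Under these assumed parameters, roughly nine in ten simulated evaluations would declare the intervention ineffective even though the programmed effect is real, which is the kind of "pretend it doesn't work" outcome the article's title refers to.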