NCJRS Virtual Library


When Can We Conclude That Treatments or Programs "Don't Work"?

NCJ Number
214551
Journal
The Annals Volume: 587 Dated: May 2003 Pages: 31-48
Author(s)
David Weisburd; Cynthia M. Lum; Sue-Ming Yang
Date Published
May 2003
Length
18 pages
Annotation
After examining how statistically "nonsignificant" findings (those that do not reach the level of statistical significance) are reported across a large group of studies representing a broad range of criminal justice areas, this article proposes and illustrates an alternative null hypothesis statistical testing method.
Abstract
Analyses of 675 studies show that criminal justice evaluators often make a formal error when reporting statistically nonsignificant findings. Instead of concluding only that the results were not statistically significant, or that there was not enough evidence to support a treatment effect, they often mistakenly accepted the null hypothesis, i.e., concluded that the intervention had no impact or did not work. In the studies reviewed, the effect sizes reported as statistically nonsignificant were often not trivial, and the investigations seldom met accepted thresholds for statistical power. The authors recommend that criminal justice researchers be more cautious in interpreting statistically nonsignificant findings. One approach is to adopt better-defined and more rigorous reporting standards, such as those used by the American Psychological Association. The authors also advise that the conventional form of null hypothesis statistical testing does not provide a clear method for determining that an intervention does not work. Researchers should therefore define a second null hypothesis that sets a minimal threshold for program effectiveness; such a threshold would take into account the potential costs and benefits of a program and focus on the particular intervention being examined. In illustrating this approach, the authors found that more than half of the studies with no statistically significant finding showed a statistically significant result when measured against a null hypothesis that allowed for a minimal worthwhile treatment effect. A program can be said not to work only when it fails to meet such a minimal threshold of success. 6 tables, 17 notes, and 44 references
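The second-null-hypothesis idea described in the abstract can be sketched numerically. The snippet below contrasts a conventional two-sided z-test against a null of "no effect" with a one-sided test against a minimal worthwhile effect. It is a minimal sketch, not the authors' procedure: the effect size, standard error, and threshold (`d_hat`, `se`, `d_min`) are invented for illustration and do not come from the article.

```python
import math


def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def p_two_sided(d_hat: float, se: float, d_null: float = 0.0) -> float:
    """Two-sided p-value for the conventional null H0: true effect = d_null."""
    z = (d_hat - d_null) / se
    return 2.0 * (1.0 - normal_cdf(abs(z)))


def p_below_threshold(d_hat: float, se: float, d_min: float) -> float:
    """One-sided p-value for a second null H0: true effect >= d_min.

    A small p-value here would support concluding that the program falls
    short of the minimal worthwhile effect, i.e., that it 'does not work'.
    """
    z = (d_hat - d_min) / se
    return normal_cdf(z)


# Hypothetical study: standardized effect estimate 0.15, standard error 0.10,
# and a (hypothetical) minimal worthwhile effect of 0.20.
d_hat, se, d_min = 0.15, 0.10, 0.20

print(round(p_two_sided(d_hat, se), 3))               # test against "no effect"
print(round(p_below_threshold(d_hat, se, d_min), 3))  # test against the threshold
```

With these invented numbers, neither test is significant: the study cannot reject "no effect," but it also cannot reject that the program meets the minimal worthwhile effect, so concluding the program "doesn't work" would be the very error the article warns against.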