Chapter One

INTRODUCTION: THE CONGRESSIONAL MANDATE TO EVALUATE

by Lawrence W. Sherman

For over three decades, the federal government has provided assistance for local crime prevention. Most of that assistance has been used to fund operational services, such as extra police patrols. A small part of that assistance has been used to evaluate operational services, to learn what works--and what doesn't--to prevent crime. Most of the operational funding to prevent crime, both federal and local, remains unevaluated by scientific methods (Blumstein et al., 1978; Reiss and Roth, 1993).

The Congress has repeatedly stated its commitment to evaluating crime prevention programs. In the early years of local assistance under the Omnibus Crime Control and Safe Streets Act of 1968, that assistance was "probably the most evaluation-conscious of all the social programs initiated in the 1960s and 1970s" (Feeley and Sarat, 1980: 130). In 1972, the Congress amended the Act to require evaluations of the "demonstrable results" of local assistance grants. In 1988, the Congress generally limited federal assistance under the Anti-Drug Abuse Act's Byrne Grants to programs or projects of "proven effectiveness" or a "record of success" as determined by evaluations.1 But then as now, the Congressional mandate to evaluate remains unfulfilled, for reasons of funding structure and levels inherent in local assistance legislation for three decades.2

This report responds to the latest in the long line of Congressional initiatives to ensure that its local assistance funding is effective at preventing crime. It is a state-of-the-science report on what is known--and what is not--about the effectiveness of local crime prevention programs and practices. What is known helps to address the Congressional request for a scientific assessment of local programs funded by federal assistance. What is not known helps to address the underlying issue of the Congressional mandate to evaluate crime prevention, the statutory reasons why that mandate remains unfulfilled, and the scientific basis for a statutory plan to fulfill the mandate.

The report finds substantial advances in achieving the Congressional mandate in recent years. The scientific strength of the best evaluations has improved. The Department of Justice is making far greater use of evaluation results in planning and designing programs. Within the scope of severely constraining statutory limitations, the level of resources the Department of Justice has given to evaluation has increased. The 1994 Crime Act already contains piecemeal but useful precedents for a more comprehensive statutory plan to fulfill the mandate. By asking for this report, the Congress has opened the door for a major step forward in using the science of program evaluation to prevent crime more effectively. That step is a clearer definition of what "effectiveness" means, and a clearer plan for using impact evaluations to measure effectiveness.

THE MANDATE FOR THIS REPORT

In the 104th United States Congress, the Senate approved a major new approach to local assistance program evaluation. The Senate bill would have required the Attorney General to "reserve not less than two percent, but not more than three percent of the funds appropriated" for several local assistance programs to "conduct a comprehensive evaluation of the effectiveness of those programs." This would have been the first statutory plan to adopt the principle of setting aside a certain percentage of DOJ's operational funds exclusively for program evaluation--a principle often endorsed by the very operational leaders whose funds would be affected,3 and one which has been adopted for other federal agencies.

The House version of the Justice Department's Appropriations bill did not include the evaluation set-aside plan, so a Conference Committee of the two chambers reached an agreement on this point. Rather than funding evaluations of the three specific programs named in the Senate version, the Conference Committee called for a comprehensive evaluation of the effectiveness of all Justice Department funding of local assistance for crime prevention. The Committee also required that the review be completed within nine months after the enactment of the legislation.

On April 27, 1996, the 104th United States Congress enacted the Conference Report (See Exhibit 1) requiring the Attorney General to provide an independent, comprehensive and scientific evaluation of the "diverse group of programs funded by the Department of Justice to assist State and local law enforcement and communities in preventing crime."4 The evaluation was required to focus on the effectiveness of these programs, defined in three ways:

o preventing crime, with special emphasis on youth violence

o reducing risk factors for juvenile violence, including those found in

o increasing protective factors against crime and delinquency

The legislation specifically required that the evaluation employ "rigorous and scientifically recognized standards and methodologies." In order to accomplish this task, the Assistant Attorney General for the Office of Justice Programs directed the National Institute of Justice (NIJ), in coordination with the Bureau of Justice Assistance (BJA), the Office of Juvenile Justice and Delinquency Prevention (OJJDP), and the Executive Office of Weed and Seed, to issue a competitive solicitation for proposals. On June 26, 1996, the National Institute of Justice released a solicitation that began the process of building the framework for this report to achieve the mandate of the 1996 legislation.


Exhibit 1


FRAMEWORK FOR THIS REPORT

This chapter presents the broad rationale for the framework used in this report. It begins with the scientific issues in the choice of the framework, and clarifies what the report is not. It sets the stage for the review with a brief introduction to the scope and structure of federal funding of local crime prevention programs. It then returns to the basic challenge of fulfilling the mandate to evaluate as an integral part of responding to the Congressional request for this report. The detailed plan for the rest of the report is then presented in Chapter Two.

Scientific Issues in the Choice of Framework

The 1996 legislation featured four key factors guiding the choice of methods for accomplishing the evaluation mandate: its breadth, its timing, its scientific standards, and its independence. The Justice Department programs in question cover a broad and complex array of activities. The short time period for producing the report ruled out any new evaluations of crime prevention effectiveness. Thus the requirement to employ scientific methods clearly implied a synthesis of already completed scientific studies.

The reliance on existing rather than new evaluations is clearly reflected in the NIJ solicitation, which called for "an evaluation review of the effectiveness of broad crime prevention strategies and types of programmatic activity...[including] family, school, and community-based strategies and approaches, as well as law-enforcement strategies." The solicitation also defined more specifically how the evaluation was to be conducted.

The Assistant Attorney General decided to award a grant to an independent research group to accomplish this mandate. The legislation required that the review's content be "independent in nature," even if provided "directly" (by federal employees) or by independent contractors or grantees. An anonymous panel appointed by NIJ evaluated the proposals submitted in response to the solicitation. On the basis of the peer-review panel's report, the Director of the National Institute of Justice selected the University of Maryland's Department of Criminology and Criminal Justice in early August 1996 to conduct the Congressionally mandated evaluation due on January 27, 1997.

Once the University of Maryland was selected as the independent contractor, the strategic choices for accomplishing the mandate shifted to the team of six senior scientists who wrote this report. All decisions about the project were left in the hands of the Maryland criminologists, who bear sole responsibility for the work. That responsibility includes the technical choices we made about how to employ "rigorous and scientifically recognized standards and methodologies" most effectively in the limited time available to complete the report. The principal decision was how to define the scope of the report.

This report is thus a review of scientific evaluations of categories of local programs and practices that are supported by broad categories of federal funds--often by several different "programs" of funding. Using systematic procedures described in Chapter Two and the appendix, the report attempts to sort out the scientific evidence on local crime prevention programs and practices supported by DOJ. It focuses primarily on the direct evaluation of local program operations, and uses those findings selectively to support indirect and theoretical assessments of some national funding streams based on findings about their specific parts.

Direct Evaluations of Local Program Operations. What rigorous science can evaluate most reliably is the effect of a specific program operated at a local level. This report identifies over 500 studies that attempt to do just that, with varying levels of scientific rigor. In a few areas, the science is rigorous enough, the studies are numerous enough, and the findings are consistent enough for us to draw some reasonably certain and generalizable conclusions about what works, what doesn't, and what is promising at the local level of operation. Such conclusions are not yet possible for most local crime prevention strategies. That fact requires the report to address the concern that prompted the legislation mandating it: the need for far greater investment in program evaluation. But the growing OJP support for program evaluation in recent years helps to provide the raw material for the core of this report.

Indirect Evaluations of National Funding. In an effort to be as responsive to the Congress as possible, this report makes selective use of another approach to the scientific method. That approach uses evaluations of local programs to make indirect evaluations of federal funding streams. Those streams vary widely in their internal diversity, from relatively uniform programs such as the Crime Act's hiring of 100,000 police to the very diverse Local Law Enforcement Block Grants program. The extent to which it is scientifically appropriate to generalize upwards from local program evaluations to national funding streams varies as well. In general, the more homogeneous the federal funding stream, the more appropriate it is to evaluate the effectiveness of that funding based on local evaluations.

Theoretical Assessments of Unevaluated Programs. Where no rigorously scientific impact data are available on funding streams expending substantial tax dollars, the report employs theoretical analyses to provide limited assessments of the programs. Prime examples are the numerous efforts that OJP is currently making to prevent crime in the concentrated urban ghetto poverty areas producing the majority of serious youth violence in America. These programs attempt to be comprehensive in addressing the crime risk factors in those areas, which allows a comparison of the program content to the available theory and data on risk factors. The need for scientific impact assessments of these programs, however, is critical, and the theoretical assessment should be seen merely as a stopgap approach required by the current lack of measured effects.

Comprehensiveness

This report attempts to be as comprehensive as the available science allows. It is not, however, an annotated list of DOJ local assistance programs with a summary of scientific evidence relating to each one. Such an encyclopedic approach would have several limitations. It would fail to identify important issues cutting across programs. It would fail to give greater attention to the more important crime risk factors identified in the literature. Most important, it would have nothing to say about a great proportion of the specific program components of DOJ local assistance programs, given the lack of available impact evaluations.

While the report attempts some form of scientific commentary for the major DOJ prevention funding streams, it omits direct commentary on many of the smaller diverse funding categories. We attempt not to omit, however, any published program impact evaluations meeting minimal standards of scientific rigor that help show indirectly the effectiveness of the DOJ programs. Where such omissions have occurred, we anticipate that they can be corrected in a systematic effort to keep the present findings up to date in future years.

What This Report Is Not

The Congressional mandate did not require that this report include an audit of the use of Department of Justice (DOJ) funds, an evaluation of the leadership of DOJ's Office of Justice Programs (OJP) or Community Oriented Policing Services (COPS) office, or a process or descriptive evaluation of specific programs at the local level supported with DOJ funds. None of these tasks falls within the required assessment of the scientific evidence of the effectiveness of local assistance funds administered by DOJ in preventing crime and risk factors.

Not an Audit of DOJ. Congress did not require the Attorney General to provide a detailed accounting of how DOJ local assistance funds are being spent. That kind of analysis requires auditing rather than scientific methodologies; the legislation clearly indicated the use of science. Knowing exactly how much money is being spent on drug courts, for example, does not alter the conclusions that can be reached by using scientific methods to examine the available studies of the effectiveness of drug courts. The report's concern with the expenditure of DOJ funds was limited to four questions that informed a scientific assessment:

1) Does DOJ funding support this kind of crime prevention program or practice?

2) If not, does the scientific evidence suggest Congress should consider funding it?

3) Are current funds allocated in relation to scientifically established crime risk factors?

4) Have the funds been allocated in a way that permits scientific impact evaluation?

Not an Evaluation of DOJ Leadership. The term "evaluation" is often understood to mean something like a report card, reflecting on the personal effectiveness of officials directing programs. There is even a substantial scientific literature in the field of industrial psychology on personnel or performance "evaluation" systems. The legislation clearly does not call for a performance evaluation, but for an evaluation of program effectiveness. The Congressional mandate to focus on the science of the programs does not require assessments, positive or negative, of the performance of DOJ leadership. In order to standardize the focus on the evidence, the report does not even employ interviews with DOJ leadership, relying solely on analysis of legislation, written documents, and publications about the programs DOJ administers.

Not A Descriptive or Process Evaluation of DOJ Programs. The Congressional mandate clearly focuses on what scientists call "impact" evaluations, rather than "descriptive" or "process" evaluations. The distinction between the two kinds of evaluation is critical, but often misunderstood. Descriptive or process evaluations describe the nature of a program activity, usually in some detail. An impact evaluation uses scientific methods to test the theory that a program causes a given result or effect. Only an impact evaluation, therefore, can be used to assess the "effectiveness" of a program. Descriptive evaluations can provide useful data for interpreting impact results based on variations in the implementation of programs and interpretations of their effects. But they do not provide a sufficient response to the Congressional mandate.

Not a Technical "Meta-Analysis." Scientists are making increasing use of a statistical methodology called "meta-analysis," in which findings from many studies are analyzed together quantitatively. This method is important because it can produce different conclusions than a summary of findings from individual studies, largely by increasing the sample size available for analysis. There are no currently published statistical meta-analyses comparing the effectiveness of the full array of crime prevention strategies, from Head Start to prisons. There are several meta-analyses on specific crime prevention strategies included in the evidence used for this report. The Congressional requirements for rapid production of this report, however, ruled out a formal meta-analysis of the evaluation results across all crime prevention programs.
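
For readers unfamiliar with the technique, the following is a minimal sketch in Python, using entirely hypothetical effect sizes and standard errors, of the inverse-variance pooling at the heart of a simple fixed-effect meta-analysis. Weighting each study by the precision of its estimate is how the method in effect enlarges the sample available for analysis:

    from math import sqrt

    # Three hypothetical evaluations: (standardized effect size, standard error).
    studies = [(0.30, 0.15), (0.10, 0.08), (0.25, 0.20)]

    # Weight each study by the precision of its estimate (1 / variance).
    weights = [1 / se ** 2 for _, se in studies]

    # Precision-weighted average effect and its pooled standard error.
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))

    print(f"pooled effect = {pooled:.3f}, 95% CI = "
          f"({pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f})")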

Evaluating Funding Mechanisms Versus Prevention Programs

The legislation did not define DOJ crime prevention "programs" as the large general funding streams. The focus on effectiveness clearly directs the report to specific crime prevention strategies. A substantial scientific literature is available on the crime prevention effectiveness of the specific strategies. We could find no existing impact evaluation, however, of such general funding streams as the Byrne Memorial State and Local Law Enforcement Assistance Program. This fact raises several key issues: the definition of "programs," the science of varying treatments, and the barriers such variations raise to direct evaluation of internally diverse national funding streams.

Defining "Programs." A major source of confusion in policy analysis of federal crime prevention is the meaning of the word "program." The meanings vary on several dimensions. One dimension is the level of government: if the federal Byrne Program funds a neighborhood watch program in Baltimore, which one is the DOJ "program" this report should evaluate for the Congress: Byrne or Baltimore's neighborhood watch? Or should the evaluation focus fall in between those two levels of analysis, addressing what is known generally about neighborhood watch programs? This report takes the latter approach.

The meanings of the term "program" also vary with respect to the required degree of internal uniformity. Neighborhood watch "programs," for example, are fairly uniform in their content, despite some variations. A national community policing "program," in contrast, embraces a far wider range of activities and philosophies, ranging from aggressive zero tolerance enforcement campaigns "fixing broken windows" (Kelling and Coles, 1996) to outreach programs building partnerships between police and all segments of the community (Skogan, 1990).

Science and Varying Treatments. The tools of the scientific method are only as useful as the precision of the questions they answer. Medical science, for example, evaluates the effectiveness of specific treatments; it is rarely able to establish the controls needed to evaluate broad categories of funding embracing multiple or varying treatments, such as "hospitals" or even "antibiotics." Variations in treatment place major limitations on the capacity of science to reach valid conclusions about cause and effect. The scientific study of aspirin, for example, assumes that all aspirin has identical chemical components; violating that assumption in any given study clearly weakens the science of aspirin effectiveness. The same is true of crime prevention programs. The more a single program varies in its content, the less power science has to draw any conclusions about "the" program's effects (Cohen, 1977; Weisburd, 1993).

Compare a study of the effects of a sample of 5,000 men taking aspirin to a study of the same sample taking different pills selected arbitrarily from an entire pharmacy of choices. Any changes in health would be more clearly understood with the aspirin study than with the pharmacy evaluation. Even if the whole pharmacy of pills were taken only on doctor's orders, based on a professional assessment of the most appropriate pills for each patient, wrapping all of the different pills' effects into the same evaluation of effectiveness would prevent an assessment of what effect each medicine had. Science is far more effective at evaluating one kind of pill at a time than at drawing conclusions about different pills based upon a pharmacy evaluation.
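
The pharmacy analogy can be made concrete with a small simulation. The sketch below, written in Python with entirely hypothetical effect sizes, compares a study in which all 5,000 treated subjects receive the same pill to one in which each receives a pill drawn at random from a mixed set. The mixed study both dilutes the average effect and, more importantly, cannot attribute whatever effect it finds to any particular pill:

    import random
    import statistics

    random.seed(1)  # reproducible illustration

    N = 5000  # hypothetical subjects per study arm

    def observed_difference(true_effects):
        """Mean outcome difference between a treated arm, in which each
        subject's outcome is shifted by the true effect of the pill he
        received, and an untreated control arm (unit-variance noise)."""
        control = [random.gauss(0.0, 1.0) for _ in range(N)]
        treated = [random.gauss(effect, 1.0) for effect in true_effects]
        return statistics.mean(treated) - statistics.mean(control)

    # "Aspirin" study: every treated subject gets the same treatment, with
    # a single hypothetical effect size of 0.2 standard deviations.
    aspirin = [0.2] * N

    # "Pharmacy" study: each subject gets one of several treatments whose
    # hypothetical effects range from helpful to harmful.
    pharmacy_menu = [0.4, 0.2, 0.0, 0.0, -0.2]
    pharmacy = [random.choice(pharmacy_menu) for _ in range(N)]

    print("aspirin study difference :", round(observed_difference(aspirin), 3))
    print("pharmacy study difference:", round(observed_difference(pharmacy), 3))
    # The pharmacy study dilutes the average effect, and even the effect it
    # does detect cannot be attributed to any single pill.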

Direct Evaluations of National Funding Programs. Any attempt to evaluate directly an internally diverse national funding program is comparable to a pharmacy evaluation. Even if the right preventive treatments are matched to the right crime risks, a national before-and-after evaluation of a funding stream would lack vital elements of the scientific method. The lack of a control group makes it impossible to eliminate alternative theories about why national-level crime rates changed, if at all, with the introduction of a widely diverse national program like the Local Law Enforcement Block Grant. Federal funding of local crime prevention, for example, increased by over five hundred percent from 1994 to 1996, and violent crime has fallen steadily during that period. But violent crime started falling in 1992, for reasons that no criminologist can isolate scientifically. Isolating still further the effects of the increased funding in 1994 is not possible with rigorous scientific methods. Thus we could not have evaluated most national DOJ funding programs directly, even if we had been allowed several years or decades.

Implications of This Approach

The choice to start with the available science on local programs rather than the DOJ funding mechanisms has important implications. One limitation is the report's unavoidable bias towards well-researched programs. One advantage is that the report becomes a reference source for different legislative approaches to federal funding. The approach also becomes a demonstration of how unevenly evaluation science can proceed, and of the need for clear distinctions between science and policy analysis.

Bias Towards Well-Researched Programs. The report clearly emphasizes strategies that have received substantial research attention, regardless of their merits in receiving that attention. To the extent that the rigorous science has been focused on less promising crime prevention strategies, both the report and public policymaking are at a disadvantage. The alternative might have been to rely more on theoretical science and less on empirical results. The obvious danger in that course, however, is a risk of losing the objectivity required for reliable assessments. On balance, then, the decision to focus on the strongest scientific evidence seems to be the most useful and least problematic approach available.

A Reference for Diverse Approaches to Federal Funding. Letting science guide the report's organization around local programs may help the findings have more lasting value. Organizing the evidence around theories and data will provide a reference for many different possible approaches to federal funding of local programs. While the structure of federal funding changes almost annually, the results of program evaluations accumulate steadily over long time periods. While the NIJ solicitation asked for special emphasis to be placed on evaluations completed in the last five years, many of the most important evaluation results are older than that. Omitting those earlier studies from the analysis would have substantially and inappropriately altered the conclusions reached. Similarly, Congressional deliberations on crime prevention policy can benefit from a reference source organized around the basic institutional settings for local crime prevention: communities, families, schools, labor markets, specific places, police, and criminal justice.

The Uncertainty of Science. Guiding the report with available findings offers a more realistic picture of what evaluation science is able to achieve. As the U.S. Supreme Court recently concluded, hypotheses about cause and effect cannot be "proven" conclusively like a jury verdict; they can merely be falsified using a wide array of methods that are more or less likely to be accurate.5 A Nobel Laureate observes that "Scientists know that questions are not settled; rather, they are given provisional answers..."6 Science is in a constant state of double jeopardy, with repeated trials often reaching contradictory results. Fulfilling the mandate to evaluate will always result in an uneven growth of evaluation results, not permanent guidance. This report directly confronts the problems of mixed results from methods of varying scientific rigor, and attempts to develop decision rules for applying the findings to both research and program policy. These rules may have value not just for this report. They may also help advance the Congressional mandate to evaluate beyond the nonscientific concept of "proven" effectiveness to the scientific concept of "likely" effectiveness.

This problem of accurately predicting the effects of a program wherever it may be implemented is an important limitation on using evaluations in policy analysis. Generalizing results from an evaluation in one city to the effects of a program in another city is a very uncertain enterprise. We still lack good theories and research to predict when findings can be accurately generalized. Just as the Justice Department may fund different kinds of community policing programs, the same program may be very different in different places. The nature of a "drug court" may vary enormously from one judge to the next; community policing home visits may vary from friendly to intrusive; gang prevention programs may have different effects in different kinds of neighborhoods or ethnic groups. This uncertainty is best acknowledged, and addressed through ongoing evaluations even of programs with enough evidence to be judged "likely" to "work."

Science Versus Policy Analysis. The focus on scientific results should help the reader distinguish between the report's science and its policy analysis. The distinction is crucial. Even though scientific evaluation results are a key part of rational policy analysis, those results cannot automatically select the best policy. This is due not just to the scientific limitations of generalizing results from one setting to the next. Another reason is that evaluations often omit key data on cost-benefit ratios; the fact that a program is "effective" may be irrelevant if the financial or social costs are too high. This report attempts, where possible, to distinguish summaries of science from their application to policy issues using judgment and other sources of information outside the evaluation results. We expect that there will be less consensus about the policy analysis than about the scientific findings. But we also determined after extensive deliberation that recommendations based on policy analysis were a useful addition to the purely scientific summaries that form the core of the report.

The framework adopted for this report is not the only possible way to have responded to the Congressional request. There are legitimate differences of opinion about how best to use scientific methods for this kind of analysis. Some analysts have argued for a more "flexible" approach to program evaluation, with more emphasis on expert insight and less emphasis on whether a program "works" (Pawson and Tilley, 1994). Others caution against relying on evaluation results that lack rigorous measurement of program context and other data needed to assess the generalizability of results (Ekblom and Pease, 1995). Our own preference would have been to raise the cutoff point for defining "scientific" methods much higher than we actually did (see Chapter Two). On balance, however, this approach provides an acceptable compromise between the Congressional needs for information and the scientific strength of available evidence.

There are also multiple goals for the $4 billion in annual funding described in this report, which may be valuable for reasons beyond its scientifically measurable effectiveness in preventing crime. The focus on crime prevention excludes the very important goals of justice, fairness and equality under the law. That limitation is not inherent in the science of program evaluation; it is merely a function of the boundaries of the specific mandate for this report.

LOCAL CRIME PREVENTION AND THE DEPARTMENT OF JUSTICE

The policy context for this report is the current structure of local crime prevention assistance programs funded by the U.S. Department of Justice. This section provides a brief introduction to those programs. It begins with a summary of the appropriated budgets for local crime prevention in fiscal year 1996, the year the Congress requested this report. It then describes the administrative structure of the Justice Department offices administering those funds. It concludes with a brief discussion of the types of funding mechanisms Congress has created for distributing the funding, and briefly details the focus and mechanisms of the largest of the funding programs.7

Budget

Local crime prevention offices now receive more DOJ funding than at any time in American history, a larger budget than that of the FBI, the DEA, or the INS. Among all DOJ components, only the Federal Bureau of Prisons consumes a larger share of the budget. At $4 billion per year, the combined annual budget of the $1.4 billion administered by the Director of the COPS (Community-Oriented Policing Services) Office and the $2.6 billion administered by the Assistant Attorney General for OJP (the Office of Justice Programs) is more than five times the amount the Congress allocated in the peak years of the old Law Enforcement Assistance Administration.

Not all of these funds can be classified as having crime prevention purposes. The largest of these programs, the 1994 Crime Act's Title I Community Policing grants, does not even specify the prevention of youth violence as a legislative purpose of the funding, even though many observers would expect youth violence prevention to result from the program. The definition of crime prevention as an intention or a result is a major issue addressed in Chapter Two, which explains this report's rationale in using a definition focused on results. This definition thus clearly includes the 100,000 police. But even that broad definition does not include the $300 million State Criminal Alien Assistance Program, reimbursing states for housing 38,000 illegal aliens incarcerated for felony offenses, or the $31 million Public Safety Officers Benefits program for families of police slain in the line of duty. Nor does it include infrastructure programs for courts and computerization of criminal justice records, general programs of statistics, research and evaluation, services to victims of crime, the Police Corps, or general administrative costs. As Figure 1-1 shows, the major crime prevention funding programs within DOJ added up to about 85% of the $4 billion total appropriations for the two local assistance offices (OJP and COPS), or about $3.4 billion. The historical context of these appropriations levels is indicated in Figure 1-2, which shows the three-decade trends in total DOJ funding of its local crime prevention assistance offices (including services other than crime prevention).

The Department of Justice funding of local programs which may result in crime prevention is authorized under several different Acts of Congress. The Juvenile Justice and Delinquency Prevention Act is the oldest, having continued in force after the end of the Law Enforcement Assistance Administration. The Anti-Drug Abuse Act of 1988 authorized the Byrne Grants program to the states, followed by the 1994 Crime Act, which took the local prevention funding to its current historic heights. The five principal titles of the 1994 Act are Public Safety and Policing (Title I), Prisons (Title II), Crime Prevention (Title III), Violence Against Women (Title IV), and Drug Courts (Title V). While this report treats all five titles as falling within a results-based scientific definition of crime prevention, it is worth noting that the Congress has never appropriated any funds specifically labeled as "crime prevention" under Title III. Both the 1988 Anti-Drug Abuse Act and the 1996 Omnibus Appropriations Act, however, appropriated funds allowing grants to be made in a "purpose area" labeled crime prevention.

Figure 1-1

Major DOJ Crime Prevention Funding Programs


OFFICE & BUREAU              FUNDING PROGRAMS                         FY 1996         
                                                                      Funding         

Community-Oriented Policing  100,000 Local Police                     $1.4  Billion   
Services                                                                              

Office of Justice Programs                                                            

Bureau of Justice            Local Law Enforcement Block Grant        $488  Million   
Assistance                   Formula Program                                          

                             Byrne Memorial State and Local Law       $475  Million   
                             Enforcement Assistance Formula Program                   

                             Byrne Discretionary Grants Program:       $32  Million   

                             (Boys and Girls Clubs Earmark)           ($ 4 Million)   

                             (Nat'l. Crime Prevention Council         ($ 3 Million)   
                             Earmark)                                                 

                             (DARE Drug Abuse Prevention Earmark)     ($ 2 Million)   

Office of Juvenile Justice   Juvenile Justice Formula Grant Program   $70   Million   
and                                                                                   
Delinquency Prevention                                                                

                             Competitive Grants Programs              $69   Million   

Executive Office of Weed     Operation Weed and Seed                  $28   Million   
and Seed                                                                              

Violence Against Women       STOP (Services, Training, Officers, and  $130  Million   
Grants Office                Prosecution) Violence Against Women                      
                             Formula Grant Program                                    

                             Rural Domestic Violence Enforcement      $  7  Million   

                             Encourage Arrest Program                 $ 28  Million   

Corrections Program Office   Residential Substance Abuse Treatment    $ 27  Million   

                             Violent Offender Truth in Sentencing     $405  Million   
                             Prison Construction Formula Grants                       

Drug Courts Program Office   Drug Courts Competitive Grants           $ 15  Million   

Total Major Funding                                                   $3.2 Billion    



Administrative Structure

The administration of these various programs under various Acts is organized into two separate offices. One of these--the Office of Community-Oriented Policing Services--has a single large program and a single presidential appointee. The other--the Office of Justice Programs--has numerous programs ranging widely in size, managed by an Assistant Attorney General, two Deputy Assistant Attorneys General, and five Presidentially appointed directors or administrators of the following units: the Bureau of Justice Assistance (BJA), the Bureau of Justice Statistics (BJS), the National Institute of Justice (NIJ), the Office of Juvenile Justice and Delinquency Prevention (OJJDP), and the Office for Victims of Crime (OVC). In addition, several other OJP offices manage funding under separate Titles of the 1994 Crime Act: the Corrections Programs Office, the Office for Drug Courts, and the Violence Against Women Grants Office. The OJP Executive Office of Weed and Seed is supported by transfers of BJA Byrne Discretionary Grant appropriations under the 1988 Anti-Drug Abuse Act. Figure 1-1 summarizes the administrative and programmatic structure of the agencies administering the major local crime prevention programs. NIJ and BJS do not administer major local assistance grants for crime prevention purposes, although BJS does assist states in their implementation of the data systems requirements for compliance with the Brady Act. The Office for Victims of Crime is funded by fines collected by federal courts, and provides funding mostly for repairing the harm caused by crime; a few areas of potential crime prevention effects from OVC funding, such as its support for battered women's shelters, are noted in Chapter Four.

Funding Mechanisms: Formula, Discretionary, Earmarks, Competitive

The crucial point in understanding DOJ local crime prevention funding programs is the statutory plan for allocating the funding. The "funding mechanisms" of this plan vary across the different authorization Acts, and use different criteria even within each funding mechanism depending on the specific Act. Two basic types of funding mechanisms are "formula" or "block" grants versus "discretionary" grants. Many observers and grant recipients assume these labels mean that local units are entitled to their funding under formula grants, while DOJ executives decide how to administer the discretionary grants. That assumption is incorrect. There are substantial legislative requirements constraining DOJ's allocation of "discretionary" funds, and there are also various legislative requirements that grantees must satisfy in order to become eligible to receive their "formula" funding.

The so-called discretionary programs are constrained by Congress in three ways: earmarks, eligibility criteria, and competition. Earmarks are legislative directions in the Appropriations laws (as distinct from Authorization Acts) on how to spend certain portions of funds appropriated within a larger funding program, such as the $11 million earmark for Boys and Girls Clubs within the 1996 appropriation for the BJA Local Law Enforcement Block Grant Program and the $4.35 million earmark for the same organization under the Byrne Discretionary grants. Earmarks come in both "hard" and "soft" forms. Hard earmarks are written into legislation, usually with specific amounts to be spent and the specific recipient of the funding identified. Soft earmarks are based upon committee hearings and conference reports, such as the legislation for the present report, with or without specified amounts.

Eligibility criteria programs are only "discretionary" in the sense that DOJ officials must decide whether the applicants are eligible to receive the funds for which they apply. The applicants do not receive the funds unless they apply, and can demonstrate their eligibility in the application. Congress often requires, for example, that states pass certain state laws as a condition of eligibility for receiving federal funds under certain grant programs. The most famous example is perhaps the limitation of maximum state speed limits to 55 miles per hour that was for two decades an eligibility requirement for receiving federal highway construction funding. Similarly, the 1994 Crime Act makes state passage of "Truth-in-Sentencing" legislation an eligibility requirement for prison construction grants. Once DOJ has proof of program eligibility, however, the determination of how much funding the applicant receives must follow the statutory allocation plan. All those receiving funds do so on the basis of a "formula" that may be based on population, crime rates, prison overcrowding rates or other factors. In addition, certain minimum amounts are often reserved for jurisdictions of certain size irrespective of the formula, such as the requirement that half of all funding for the 100,000 police be allocated to applicants from cities of over 150,000 people. In that particular case, the allocation is made at least in part on a first-come, first-served basis.8 Thus a more accurate label for such funding mechanisms might be "discretionary eligibility formula grants."

Only ten percent of the total OJP appropriation is for competitive grants, the truly discretionary programs in which applicants must compete on the merits of issues other than simple eligibility for funding. DOJ officials usually establish different criteria appropriate for each program. Examples of criteria for these grants include innovative approaches, interagency collaboration, comprehensive targeting of crime risk factors, and potential impact of the program on the community. Examples of competitive local assistance programs include Drug Courts, Operation Weed and Seed, JUMP mentoring grants and Encourage Arrest Grants.

Formula grant programs, in contrast to discretionary programs, have no so-called "eligibility" requirements, such as the passage of state laws. The allocation of funding is independent of such tests. Formula programs can, however, require that certain paperwork be satisfactorily completed. BJA Byrne grants, for example, require that an annual plan specify how the formula-determined allocation will be spent, and that evaluations of all grants made with formula allocations be forwarded to BJA. Failure to satisfy these requirements presumably has the same effect as in "discretionary eligibility" programs, which is to block the award of the funds.

These funding mechanisms offer relatively little discretion to DOJ in its choice of program areas or sites, but offer substantial direction to the state and local grant recipients. That policy choice is central to a continuing Congressional debate. Its relevance to this report is to show the centrality of the local programs chosen by the grant recipients in determining the effectiveness of this funding. It is the local decisions on which prevention programs to adopt, and not the Congressionally mandated actions by DOJ in allocating that funding, which largely determine the effectiveness of these broad funding streams in preventing crime.

Major Funding Stream Programs

This section briefly describes the major DOJ funding stream programs listed in Figure 1-1.

COPS. This program reimburses local police agencies for up to 75% of the salary and benefits of an additional police officer for three years, up to a maximum of $75,000 per officer. It is a discretionary-eligibility-formula grant program in which funding is allocated on the basis of eligible applicant population size, with a minimum allocation requirement that 50 percent of the funds go to police departments serving cities of over 150,000 people. In addition to this "Universal Hiring Program," to which the Congress restricted appropriations in 1997, the earlier years of the program offered various competitive grant programs for domestic violence, youth firearms, anti-gang initiatives, and other special purposes.
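
The reimbursement rule just described reduces to a one-line calculation. The following is a toy illustration in Python, not DOJ's actual grant software, and the officer costs shown are hypothetical:

    def cops_federal_share(three_year_cost):
        """Federal share for one officer: 75% of three-year salary and
        benefits, capped at $75,000 (the rule described above)."""
        return min(0.75 * three_year_cost, 75_000.0)

    # A hypothetical officer costing $120,000 over three years:
    # 75% would be $90,000, so the $75,000 cap binds.
    print(cops_federal_share(120_000.0))  # -> 75000.0

    # A cheaper hypothetical officer: the percentage, not the cap, binds.
    print(cops_federal_share(80_000.0))   # -> 60000.0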

Byrne (BJA). The 1988 Anti-Drug Abuse Act established both formula and discretionary grant programs in memory of New York City Police Officer Edward Byrne, who was murdered while monitoring a crack house. The formula program awards funds to states developing plans for allocating grants, originally under 21 and now under 26 purpose areas: 1) drug demand reduction programs involving police, 2) multijurisdictional task forces against drugs, 3) domestic drug factory targeting, 4) community crime prevention, 5) anti-fencing programs, 6) white-collar and organized crime enforcement, 7) law enforcement effectiveness techniques, 8) career criminal prosecution, 9) financial investigations, 10) court effectiveness, 11) correctional effectiveness, 12) prison industries, 13) offender drug treatment, 14) victim-witness assistance, 15) drug control technology, 16) innovative enforcement, 17) public housing drug markets, 18) domestic violence, 19) evaluations of drug control programs, 20) alternatives to incarceration, 21) urban enforcement of street drug sales, 22) DWI prosecution, 23) juvenile violence prosecution, 24) gang prevention and enforcement, 25) DNA analysis, 26) death penalty litigation. While each state is eligible to receive a minimum of 0.25 percent of total appropriations, the balance is allocated on the basis of state population as a proportion of the entire U.S. All Byrne funds must be matched by a 25% commitment of non-federal funds.
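
The allocation formula just described can also be expressed as a short calculation. The following is a simplified sketch in Python with invented state populations; in particular, the base against which the 25% non-federal match is computed is an assumption of the sketch, not a statement of the statute:

    APPROPRIATION = 475_000_000  # FY 1996 Byrne formula total (Figure 1-1)

    # Invented populations for a simplified three-state example.
    populations = {"State A": 30_000_000, "State B": 10_000_000, "State C": 1_000_000}

    floor = 0.0025 * APPROPRIATION                      # 0.25% minimum per state
    balance = APPROPRIATION - floor * len(populations)  # remainder, by population
    total_pop = sum(populations.values())

    for state, pop in populations.items():
        award = floor + balance * pop / total_pop
        match = 0.25 * award  # assumed match base: 25% of the federal award
        print(f"{state}: award ${award:,.0f}, non-federal match ${match:,.0f}")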

The BJA Byrne Discretionary Grants program is heavily earmarked for initiatives such as those indicated in Figure 1-1 (e.g., Boys and Girls Clubs, DARE), as well as for programs well established in Congressional understanding, such as Weed and Seed (see below). Over 5 percent of Byrne discretionary funds ($3.1 million) went to program evaluation purposes in FY 1996, with another $3.5 million allocated to program evaluation by the States from their formula grants.

Local Law Enforcement Block Grants (BJA). This is a formula grant program that awards funds to applying local governments based on their share of their state's total Part I violent offenses (homicide, rape, robbery, aggravated assault) over the previous three years. The eight purpose areas for local expenditure of the grants are 1) police hiring, 2) police overtime, 3) police equipment and technology, 4) school security measures, 5) drug courts, 6) violent offender prosecution, 7) multijurisdictional task forces, and 8) community crime prevention programs involving police-community collaboration.

STOP Violence Against Women Block Grants (VAWGO). This is a formula grant program allocating funding to states and territories based upon population. Within each state, at least 25% of the grants must go to each of three categories: law enforcement, prosecution, and victim services. A wide range of programs fall within each of these categories, including both domestic and stranger violence against women.

Encourage Arrest Grants (VAWGO). This is a competitive program for which eligibility is determined by the passage of certain state laws concerning the arrest of suspects about whom there is probable cause to believe they have committed an act of domestic violence or a related offense. These grants are intended to encourage communities to adopt innovative, coordinated practices that foster collaboration among law enforcement officers, prosecutors, judges, and victim advocates to improve the response to domestic violence.

Operation Weed and Seed (EOWS). This is a competitive program funded by a transfer of BJA discretionary Byrne funding to the OJP Executive Office of Weed and Seed. The program consists of long-term funding to a varying number of selected cities to help them create a comprehensive program of reducing crime in small, high-crime areas. The DOJ funding operates as seed money leveraging additional federal, state, local and private resources.

Juvenile Justice Formula Grants (OJJDP). This program provides annual funding to eligible states to deinstitutionalize status offenders, to separate juveniles from adults in secure correctional facilities, jails and lockups, and to reduce the number of juveniles in secure facilities.

Prison Construction Grants (Corrections Office). This program provides funds to states to build more prison cells or to construct less expensive space for nonviolent offenders, to free space in secure facilities for more violent offenders.

Residential Correctional Drug Abuse Treatment (Corrections Office). This program funds state prison delivery of substance abuse treatment to inmates.

THE STATUTORY PLAN FOR PROGRAM IMPACT EVALUATION

In theory, one of the most effective federal crime prevention programs is the evaluation of local programs. The Attorney General's Task Force on Violent Crime called it the central role of the federal government in fighting crime, the one function that could not be financed or performed as efficiently at the local level.9 With less than one percent of local criminal justice budgets supported by the federal government (not counting the COPS program), federal funds are arguably most useful as a stimulus to innovation that makes the use of local tax dollars more effective (Dunworth et al., 1997). The three-decade-old Congressional mandate to evaluate is consistent with that premise. Its implication is that a central purpose of federal funding of operations is to provide strong evaluations.

The Congressional mandate for this report therefore includes an evaluation of the effectiveness of DOJ-funded program evaluation itself. The central question is whether those evaluations have "worked" as a federal strategy for assisting local crime prevention. The report answers that question in a different fashion from the method used to evaluate the direct local assistance funding. Rather than directly evaluating the impact of program evaluations on crime, the report indirectly examines the antecedent question of whether those evaluations have succeeded in producing published and publicly accessible scientific findings about what works to prevent crime. After presenting the scientific framework for the review in Chapter Two, the report presents the evidence for both program and evaluation effectiveness in Chapters Three through Nine. Chapter Ten then summarizes the limited evidence on local program effects, and returns to the underlying issue of how to accomplish the Congressional mandate to evaluate.

This report concludes that the current statutory plan for accomplishing that mandate is inadequate, for scientific reasons not addressed by current legislation. That inadequacy substantially limits the capacity to judge the effectiveness of the federal effort to reduce serious crime and youth violence. Part of the statutory problem is simply inadequate funding. While Figure 1-2 shows the steep rise in total federal support for local crime prevention operations, Figure 1-3 shows a rough indication of the declining proportionate support for research and evaluation: the percentage of total OJP appropriations allocated to the National Institute of Justice.


Figure 1-2


Figure 1-3


Figure 1-3 actually overstates the amount of DOJ funding allocated to program evaluations. Program evaluations are also funded by OJJDP and BJA,10 and actual NIJ expenditure in FY 1996 was $99 million rather than $30 million (due to inter-agency transfers).11 But Figure 1-3 reflects the total NIJ budget for all research, technical assistance, and dissemination purposes, as well as for program evaluation; only 27 percent ($8 million) of NIJ's FY 1996 appropriation was allocated to evaluation. The proportionate allocation of the NIJ budget to evaluation has not changed substantially over the past three decades. Thus while Figure 1-3 overstates the absolute dollars appropriated to DOJ for evaluation, it is still an accurate portrayal of the absence of statutory attention to keeping evaluation funding commensurate with operational funding.

Evaluation funding alone, however, cannot increase the strength of scientific evidence about the effects of federally funded local programs on crime. Chapter Ten documents the need for adequate scientific controls on the expenditures of program funds in ways that allow careful impact evaluation. A statutory plan earmarking a portion of operational funds for strong scientific program evaluation is the only apparent means for increasing the effectiveness of federal funding with better program evaluations. The basis for this conclusion is central to scientific thinking about crime prevention, as the next chapter shows.


NOTES

1. 42 U.S.C. 3782, Sec. 801(b)(1), (19), (20).

2. U.S. Attorney General's Task Force on Violent Crime, 1981, p. 73.

3. In 1988, for example, more than 30 big city police chiefs asked Congress to earmark ten percent of the Anti-Drug Abuse Act funds for research and evaluation. While Titles I and II of the 1994 Crime Act authorize DOJ to spend up to 3 percent of funds for assorted purposes including evaluation, there has never been a requirement to spend a percentage of operational funds exclusively on program impact evaluations demonstrating crime prevention effectiveness.

4. 104th Congress, First Session, House of Representatives, Report 104-378.

5. Daubert v. Merrell Dow Pharmaceuticals, Inc., 113 S. Ct. 2786, 125 L. Ed. 2d 469 (1993), in which the Court adopts the scientific framework offered by Karl Popper, Conjectures and Refutations: The Growth of Scientific Knowledge, 5th Ed., 1989.

6. David Baltimore, "Philosophical Differences," The New Yorker, January 27, 1997, p. 8.

7. This section is based largely upon a January 17, 1997 NIJ background memorandum from Jane Wiseman to Christy Visher, prepared at the University of Maryland's request.

8. U.S. Department of Justice, Office of Community Oriented Policing Services, COPS Facts: "COPS MORE '96," updated September 18, 1996.

9. Attorney General's Task Force on Violent Crime, Report, 1981; James Q. Wilson, "What, if Anything, Can the Federal Government Do About Crime?" Presentation in the Lecture Series on Perspectives on Crime and Justice, sponsored by the National Institute of Justice with support from the Edna McConnell Clark Foundation, December 1996.

10. Total BJA expenditures on program evaluation in FY 1996 were $6.6 million.

11. Actual NIJ expenditures on all purposes included transfers authorized by the Assistant Attorney General for the Office of Justice Programs from Crime Act appropriations of $15.6 million in FY 1995 and $51.9 million in FY 1996.


REFERENCES

Blumstein, Alfred, Jacqueline Cohen, and Daniel Nagin (eds.)

1978 Deterrence and Incapacitation: Estimating The Effects of Criminal Sanctions on Crime Rates. Washington, DC: National Academy of Sciences.

Cohen, J.

1977 Statistical Power Analysis for the Behavioral Sciences. New York: Academic Press.

Ekblom, Paul and Ken Pease

1995 Evaluating Crime Prevention. In Michael Tonry and David Farrington, eds., Building a Safer Society: Strategic Approaches to Crime Prevention. Crime and Justice, Vol. 19. Chicago: University of Chicago Press.

Feeley, Malcolm and Austin Sarat

1980 The Policy Dilemma: Federal Crime Policy and the Law Enforcement Assistance Administration. Minneapolis: University of Minnesota Press.

Kelling, George and Katharine Coles

1996 Fixing Broken Windows. New York: Free Press.

Pawson, R. and N. Tilley

1994 What Works in Evaluation Research. British Journal of Criminology 34: 291-306.

Reiss, Albert J., Jr. and Jeffrey Roth (eds.)

1993 Understanding and Preventing Violence. Washington, D.C.: National Academy of Sciences.

Skogan, Wesley

1990 Disorder and Decline. New York: Free Press.

Weisburd, David with Anthony Petrosino and Gail Mason

1993 Design Sensitivity in Criminal Justice Experiments: Reassessing the Relationship Between Sample Size and Statistical Power. In Michael Tonry and Norval Morris, eds., Crime and Justice, Vol. 17. Chicago: University of Chicago Press.

