Chapter 16 Research and Evaluation
The interdisciplinary field of victims' rights and services is continually developing. The "knowledge base" available through research and evaluation has seen tremendous advances. "Promising practices" recommendations are often developed and updated based on the research. Promising practices are significant to those working in the field and are of particular importance to victim service providers. A victim service provider must employ interventions with victims that are effective and efficient. Therefore, the provider has an ethical responsibility to acquire new skills, and update existing ones, that improve service delivery. Research provides the means of determining which interventions may have value and which may not.
This chapter reviews basic research issues and processes. Also, an extensive resource list is provided to help the reader locate materials to assist in designing and conducting research.
Upon completion of this chapter, students will understand the following concepts:
Those who fall in love with practice without science are like a sailor who enters a ship without a helm or compass, and who never can be certain whither he is going. -- Leonardo da Vinci
Research is often viewed as an esoteric topic, the sole purview of "pointy-headed intellectuals," with no practical value to victim advocates. Nothing could be further from the truth. In the criminal justice and victim advocacy fields, almost everyone has strong beliefs about a host of topics: how much crime there really is, the major causes of crime, what types of services crime victims really need, the best way to help crime victims, and whether crime victims should or should not have constitutionally protected rights. Much is known about each of these topics. Some of this information is right; some is wrong; and unfortunately, it is often very difficult to distinguish which is which.
Research is nothing more than a systematic approach that is designed to help distinguish between beliefs and opinions that are supported by empirical data versus those that have no empirical support. The assumption is that if a technique has empirical support, it has a greater likelihood of being successful or helpful than one that does not. T. H. Huxley wrote: "The great tragedy of science--the slaying of a beautiful hypothesis by an ugly fact." Among other things, science and research are there to keep practitioners from falling prey to a charming idea that can actually cause people harm.
Research is necessary because it can and does address many issues in the field that have practical significance. Often there are no research articles or reports that offer immediate solutions to problems that arise or are specific to victim service providers' needs. However, quality research can provide alternative strategies, help us understand complex and puzzling problems, and help us not to fall into the trap of a beautiful opinion that has no relevance and may even make things worse for those who need help. Research also affects policy decisions that alter the way resources are allocated. Research is often used by leaders and administrators to change almost everything--the structure of organizations, the laws that govern us, and even the dollars that support our programs.
In a highly informative and entertaining book about science designed for the lay person, McCain and Segal (1969) describe science as a game that is informed by certain attitudes and played by certain rules. They make a distinction between science-based belief systems and belief systems based on dogma, and suggest that "it is the system of data-based explanation that distinguishes science from dogma." Scientists cannot accept statements unsupported by data and have the responsibility to decide, on the basis of evidence, the best explanation for a set of facts. In contrast, dogma is based on the pronouncements by people in political, religious, social, or even criminal justice authority. McCain and Segal capture the difference between science-based and dogma-based belief systems as follows:
One way of contrasting science and dogma is to say that a scientist accepts facts as a given and belief systems as tentative, whereas a dogmatist accepts the belief systems as given; facts are irrelevant (p. 31).
Victim advocates seek to learn more about crime victims and the best ways to help them. This chapter is designed to help victim service providers better utilize what scientists, researchers, and research have to offer. Since few victim advocates aspire to be scientists or researchers, the focus of this chapter is to help victim service providers become more critical consumers of research and form mutually beneficial partnerships with researchers. First, the issue of understanding research produced by others will be discussed. This will be followed by a primer on conducting research.
A comprehensive treatment of understanding empirical research is beyond the scope of this brief chapter. However, there are a few foundational tips to keep in mind about analyzing research. Victim service providers who do not feel that their current knowledge and skill level are sufficient in this area may wish to take (or re-take) a basic course in research methods and statistics. At the very least, reference should be made to text books in these areas that cover the basic terminology and techniques of empirical investigations. An extensive resource list is provided at the end of this chapter.
It is most typical to begin a research project by reviewing the work of others. This is most often found in the "Literature Review" contained in an article or report. When considering others' research, victim service providers who are less familiar with research methodology should keep the following in mind as they analyze research under consideration:
BASIC RESEARCH TERMS
When people first begin to read research reports, they often encounter terms that are new. Even if readers are generally familiar with the terms in question, these terms may have a different or more refined usage in evaluation research. Spending some time learning these terms is worthwhile since they form the language of the scientific method and are used consistently to describe the results of empirical studies. The following are basic research terms.
Variable. A variable is anything that can have more than one value, that is, it is not a fixed item or event. A variable can change or vary. If something cannot vary, it is not a variable. It is usually the case that studies involve controlled observations of variables and their interrelationships. Variables can include a wide variety of factors such as victim satisfaction, different treatment outcomes, attitudes of officials toward victims, or length of sentences.
There are two basic types of variables involved in research: dependent and independent. In general, an independent variable is something that influences or produces an effect on a dependent variable. The dependent variable, then, is one that depends on, or is influenced by, another variable. Generally speaking, an independent variable is the variable that is typically manipulated by the researcher to see what effects this manipulation has on the dependent variable. Of course, many times manipulation of variables is not possible, but the relationship between dependent and independent variables can be observed nonetheless in a naturally occurring manner (so called, naturalistic observation).
Study. A study is a very broad term covering just about all objective analyses of variables. Calling something a study does not necessarily imply it is a good one, however. Better studies comport with generally accepted rules regarding appropriate research methods as described below.
Subjects. Most typically, victim service providers will be interested in studies involving people. In such studies, the persons observed are called subjects. Subjects could be, for example, victims or survivors whose experience in the system or responses to treatment are being measured, or professionals whose service-providing activities are being evaluated.
Theoretical framework. All good studies begin with a theoretical framework, wherein researchers provide some insight into their general approach to the subject matter at hand. This is usually evident in the author's review of the literature where specific publications and research are cited and reviewed. From this, researchers develop a hypothesis.
Hypothesis. The hypothesis is an extremely important foundation upon which good research is conducted. A hypothesis is a declarative statement that typically expresses the relationship between variables. An example might be "Providing victim impact statements at sentencing significantly increases victim satisfaction with the criminal justice system regardless of sentencing outcomes."
Case study. A case study is a study containing observations about one subject. These studies are typically based on what is termed anecdotal evidence. A series of case studies typically provides preliminary evidence that something of significance is happening that may merit further study. This further study may begin with a pilot study, which is a scaled-down version of a major effort conducted for several purposes (for example, to test proposed measurement instruments, to hone the research methodology, and to see whether there is a preliminary basis for supporting the hypothesis).
Sample study. More commonly, a sample study would be employed due to the increased inferential power of such studies. A sample study is one where only some of the individuals or events of interest to the researcher are studied so as to be able to draw conclusions about the population as a whole. The sample group is usually selected or assigned with some degree of randomness. This is done so that researchers can say that the sample is representative of the population they ultimately seek to speak about. For example, a group of individuals who have survived a significant traumatic event are randomly assigned to two or more treatment groups, such as a traditional therapy group and an eye movement desensitization and reprocessing (EMDR) group, to see which group responds better to the treatment provided.
Randomized study. A randomized study is one in which subjects are assigned to different groups as randomly as possible. This may be done by flipping a coin or using a random number generator. In contrast, if the researcher decides which subjects go into which group, or if the subjects assign themselves, selection bias can cause the groups to no longer be comparable. The purpose of randomization is to represent, as best as is practicable, the entire universe of potential subjects, in this case, all crime victims. Since this is not possible, researchers attempt to assemble unbiased samples to study.
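The logic of random assignment can be sketched in a few lines of code. Everything below is hypothetical and purely illustrative: the subject IDs, the group count, and the `assign_to_groups` helper are inventions for this example, not part of any standard research toolkit.

```python
import random

def assign_to_groups(subjects, n_groups=2, seed=None):
    """Randomly assign subjects to groups so that chance,
    not researcher judgment, determines group membership."""
    rng = random.Random(seed)
    shuffled = subjects[:]        # copy so the original list is untouched
    rng.shuffle(shuffled)         # randomize the order of subjects
    groups = [[] for _ in range(n_groups)]
    for i, subject in enumerate(shuffled):
        groups[i % n_groups].append(subject)  # deal subjects out in turn
    return groups

# Hypothetical subject IDs; a real study would use actual participants.
subjects = [f"S{i:02d}" for i in range(1, 21)]
control, experimental = assign_to_groups(subjects, n_groups=2, seed=42)
print(len(control), len(experimental))  # 10 10
```

Because assignment is left to the random number generator, neither the researcher's expectations nor the subjects' preferences can introduce selection bias into the groups.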
Controlled study. In a controlled study, at least two groups are compared. The experimental group receives the intervention or treatment, and the control group does not. The hypothesis is that if the samples were selected appropriately, the experimental group would be just like the control group, except for whatever the experiment provided (the intervention, sometimes called the treatment). The rationale is that any measurable differences between the groups can be attributed to the experimental intervention.
Generalizing. Assuming good research methods and appropriate statistics are employed, the results of these studies can often be generalized to larger groups with some level of confidence. As stated above, the basic rationale for a sample study is the impracticability, cost factors, or simply the impossibility of testing all potential subjects such as testing every rape victim in the country. Therefore, some smaller group is selected for study under controlled conditions and for rigorous analysis that allows for inferences to be drawn from the sample. It is of the utmost importance that sample selection, or other methods employed, do not bias the outcomes.
Research questions--research design. The research design is based on research questions which develop from the underlying hypothesis. The research questions ask what variables can and will be manipulated and studied. A sound experimental design attempts to show a functional relationship or an interaction between two or more variables. A researcher sets out to show that changes in one variable influence or control changes in another event. For example, do restraining orders issued on stalkers reduce violence to victims? Does having a restraining order, the independent variable, result in a reduced likelihood of the stalker hurting the victim, the dependent variable? When conducting research, the function of an experimental design is to control systematically the conditions surrounding how the independent variable and the dependent variable interact. An experimental design's primary purpose is to arrange conditions in order to rule out the possibility that some other event, rather than the independent variable, may have caused the changes to the dependent variable.
Using the above example to explore the idea of a good design over one that is not, a study could be conducted on two groups of victims: those who have restraining orders on their stalkers and those who do not. To study the changes in rates of violence perpetrated by the stalkers on the victims, this experimental group design requires that the subjects in the group with restraining orders be identical to the group of subjects who do not have restraining orders. If all things are equal between the two groups with the exception of the restraining orders, it could be concluded that the differences in violence between the two groups are a function of the restraining order. If rates of violence are lower in the group with restraining orders, then it can be concluded that restraining orders help protect victims being stalked.
However, if the groups are not identical, then those features that make them different might account for the changes in violence and not the restraining order. For example, rather than assuring that the subjects are the same in both groups, the researcher just looks at those victims who have gotten restraining orders and those who have not. Suppose the researcher finds that the stalkers with a more violent history are the stalkers who are more likely to have restraining orders issued against them. Further, suppose that a violent history predicts future violent behavior. If the stalkers most likely to actually engage in violence are the ones who fall into the group with restraining orders, then it would not be surprising to find higher rates of violence and harm to victims in the group with restraining orders. The data would then suggest that having a restraining order increases the risk of violence to the victim. That would be an erroneous conclusion, however, because the research design was flawed--the groups were not the same. If instead the stalkers' histories of violence were the same across both groups, then the data might have looked much different, showing that restraining orders do indeed help protect the victim from harm.
Operational definitions. Research factors/variables must be clearly defined. For example, if the term "recidivism" is being used in a study, it should be defined, such as "committing another criminal or juvenile offense." Frequently, otherwise sound research is criticized due to lack of precision in providing the operational definition of research variables. Moreover, how these variables are measured has a great impact on the success of the study. For example, is "committing another offense" measured by arrest data, conviction data, or interviews that may pick up additional violations?
Survey. A survey reports the results of a study where data were collected by way of questionnaires or interviews. Surveys can either be observational, if no intervention or treatment occurred, or can be used as pre-test and post-test measures before and after some intervention or treatment. A pre- and post-test design is among the simplest research designs. This approach simply means that some measurement is taken of a population before the experimental intervention, and then re-taken after this intervention to see if there is any significant difference. If other factors are well controlled, these differences can be, at least in large part, attributed to the experimental intervention (the introduced independent variable).
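The arithmetic behind a pre- and post-test comparison is simple enough to sketch. The satisfaction scores below are entirely hypothetical, invented to illustrate pairing each subject's pre-test score with that same subject's post-test score:

```python
import statistics

# Hypothetical satisfaction scores (1-10) for the same ten victims,
# measured before and after some intervention.
pre  = [4, 5, 3, 6, 4, 5, 2, 6, 5, 4]
post = [6, 7, 5, 7, 6, 8, 4, 7, 6, 6]

# Paired differences: one value per subject, post minus pre.
diffs = [after - before for before, after in zip(pre, post)]
mean_change = statistics.mean(diffs)
print(f"mean change: {mean_change:+.1f}")  # mean change: +1.8
```

Whether a mean change of this size is statistically meaningful, rather than chance fluctuation, is a question for the significance tests discussed later in the chapter; the sketch only shows how the paired measurements line up.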
FUNDAMENTAL RESEARCH METHODS
Experimental Research Design. A survey's pre- and post-test approach is an experimental research design. The purpose of an experimental design is to provide controlled empirical comparisons when naturalistic observation alone is insufficient to answer the questions posed. Without experimental designs certain questions can never be reliably answered. There are many experimental designs used to control the interactions between the dependent and independent variables being studied. Most research texts describe these designs and how they can be used, such as Campbell and Stanley (1963) or Dixon, Bouma and Atkinson (1991).
Single subject designs. Of the many experimental research designs available, single subject designs have gained considerable attention in applied research over the last twenty or so years. Their advantage is that a large number of subjects is not required to conduct highly reliable and valid research; with proper training, most victim advocates in applied settings could conduct single subject design studies. These studies focus on the effects of the independent variable as it is systematically delivered to a few subjects across time, and on the variations that occur within each subject both before and after the independent variable or intervention is employed. For example, some measure of a crime victim's behavior, such as avoidance of events associated with the crime, might be taken before, during, and after treatment is delivered, often across many days or even weeks. The strength of this type of research is that it follows the effects upon individual subjects with repeated measures across a substantial period of days, weeks, and even months in some studies, so individual reactions and patterns of behavior can be assessed. Since behavior is a function of the relationship between the individual and the environment, it is not surprising that an intervention will have effects that are peculiar to each individual. Understanding these effects is important to understanding the very nature and desirability of using a particular intervention with a victim. Where the research question is actuarial in nature, large group studies are preferred; where the research question relates to how something will affect an individual, single subject experimental procedures may be more appropriate.
If a researcher wants to talk about a population, such as the entire class of individuals with PTSD, then group experimental designs based on samples are more appropriate. However, if the researcher intends to draw conclusions about an individual, a sample is not appropriate.
Correlational studies. Correlational studies look for associations between variables. A positive correlation means that the greater variable X is, the greater one can expect variable Y to be. A negative correlation, also referred to as an inverse correlation, means that the greater variable X is, the less one can expect variable Y to be. It is important to note that correlations do not prove anything absolutely so much as they suggest a relationship. It is often said that "correlation is not causation," meaning that just because two items are associated does not mean that there is a cause and effect relationship. An example of a correlational study might involve measuring victim satisfaction with the criminal justice process and looking at the relationship between this measure and the specific courthouse or prosecutor's office that handled the victim's case. The results might demonstrate that there is a relationship between victim satisfaction and particular courts or prosecutor's offices. However, this does not in itself give us any real information about causation behind these results.
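A correlation coefficient can be computed directly from its textbook definition. The sketch below is illustrative only: the data are invented to show a positive association, and `pearson_r` is a hypothetical helper written out longhand (statistical packages provide equivalent functions).

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient: ranges from -1 (perfect
    inverse association) through 0 (no association) to +1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance term: do the variables move above/below their means together?
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Scale by each variable's spread so r is unit-free.
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: hours of advocate contact vs. satisfaction score.
contact_hours = [1, 2, 3, 4, 5, 6]
satisfaction  = [3, 4, 4, 6, 7, 8]
print(round(pearson_r(contact_hours, satisfaction), 2))  # 0.98
```

Even a correlation this strong says nothing about causation: more contact hours might improve satisfaction, more satisfied victims might seek more contact, or some third variable might drive both.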
Prevalence/Incidence study. If the research in question is looking at the frequency of something at a particular point in time, this is called a prevalence study (such as the number of victims of violent crime per 100,000 people in the United States). If the study focuses on the frequency of something over a given period of time, it is called an incidence study (such as the number of violent crime victims in the last month). Often prevalence and incidence data are compared across time in what may be referred to as trend analysis, such as whether the number of violent crimes across certain years demonstrates a rising or falling trend.
Retrospective/Prospective study. A retrospective study looks to the past for information about the topic at hand. Often these studies involve reviewing archival data such as old arrest reports, etc. A prospective study is one which looks forward; a longitudinal (or longer-term) study may be prospective. For example, a longitudinal study of the recovery rates of victims exposed to different treatments that followed them into the future for several years would be prospective.
Blind study. A blind study means that the researchers and/or the subjects do not know which treatment group each subject is in. In a single-blind study, the subjects do not know but the researchers do. In a double-blind study, neither the researchers nor the subjects know which group the subjects are in; all information is coded, and the code is not broken until the end of the study. This helps avoid problems that occur when study participants and researchers deliberately or inadvertently contaminate study results.
Quantitative and qualitative research. The type of research described so far is known as quantitative research. The field, however, is now commonly divided into two approaches to collecting meaningful information: quantitative research and what has come to be known as qualitative research. Quantitative research has been the predominant and most widely accepted methodology for collecting information about the world we live in during the last 100 years or so. Qualitative research has, in the last twenty years, become recognized as a legitimate and respected approach to understanding the relationship of humans to the world around them. This is evidenced by the dramatic increase in qualitative research publications in professional journals, the inclusion of qualitative research sections in revised editions of previously quantitative-only research textbooks, and the publication of numerous books on qualitative research methodology.
While the distinctions between the two approaches are difficult to define at certain levels, qualitative research is grounded in what Mason (1996) has described as three aspects. First, it is concerned with how the social world is interpreted, understood, and experienced. Second, it is based on data-gathering methods that are sensitive and flexible to the social context rather than rigidly standardized or structured. Third, qualitative research is based on methods of collecting data that attempt to discover the richness, complexity, and depth of the event within its social context.
For example, if a researcher were concerned about the influence of victims making impact statements to the court prior to sentencing, the researcher could investigate the problem differently using the two research approaches. From a qualitative approach, the researcher would be concerned with exploring each victim's experience during the court statement. Victims might be interviewed and asked to share everything about the experience, from what they were feeling, to what thoughts they had at the time, to their own personal analysis of the experience. A quantitative researcher might explore the same event but do so with a standardized survey instrument or a set of precise questions asked exactly the same way of each victim and scored in a precise and exact manner. The quantitative researcher would likely be concerned with a statistical analysis of the data afterwards, while the qualitative researcher would look for similarities but typically would not use any statistics. The quantitative researcher would likely be concerned with reducing the data to representative numbers. The qualitative researcher would be concerned with describing each individual subject's experience in its depth and complexity. The quantitative researcher would not typically be concerned with more than a few variables, and any individual subject's data would be less important than those things that are similar across members of the group. The qualitative researcher would be concerned with a wealth of information, and the individual's experience would not be overlooked or lessened, even when commonalities between individual subjects were found; indeed, these might be seen as very instructive.
The strength of the quantitative research approach is its precision and its concern for reliable and generalizable research results. Its weakness is that it can overlook the context and the plethora of variables affecting any situation. The strength of the qualitative approach is that it attempts to understand the individual's experience from the individual's perspective. It is less likely to overlook small but important variables. Its weakness is often an inability to generalize findings to other people in other situations. However, qualitative research can explore the tapestry of experience and lead quantitative researchers to analyze, assess, and isolate variables that would never have been observed without the investigation of the qualitative researcher.
Descriptive and inferential statistics. Despite one's best efforts, it is inevitable that a discussion about research design and evaluation is likely to include some references to statistics. Often jokingly (or maybe not so lightly) referred to as "sadistics," statistics is the part of the research package that can cause the most concern to the uninitiated. However, many user-friendly statistical packages are currently available that may be loaded on most desktop PCs; often a basic understanding is enough to get the newcomer going. Indeed, only a few concepts are important to review here.
Two basic types of statistics are descriptive and inferential. Descriptive statistics describe or summarize information about a sample. Inferential statistics move beyond simple descriptions and are instructive as to what generalizations or statistical estimations can be made about the population.
The reader is no doubt already familiar with many basic descriptive statistics. Three of the most common, generally known as measures of central tendency, are the mode, median, and mean. The mode is the number, item, score, or other value that occurs most often; it is the most frequent occurrence in the sample. The median is the middle or midpoint of a distribution: the value that has 50 percent of the other values above it and 50 percent below it. The mean, perhaps the most often used measure of central tendency, is the average of the values in the distribution; it is the arithmetic or mathematical center of the distribution.
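Python's standard `statistics` module computes all three measures of central tendency directly. The sentence lengths below are hypothetical, chosen only to show how the three measures can differ on the same sample:

```python
import statistics

# Hypothetical sentence lengths (in months) from a small sample of cases.
sentences = [12, 18, 18, 24, 30, 36, 60]

print(statistics.mode(sentences))    # 18 (the most frequent value)
print(statistics.median(sentences))  # 24 (middle value: three above, three below)
print(statistics.mean(sentences))    # about 28.3 (the arithmetic average)
```

Note how the single long sentence of 60 months pulls the mean well above the median, which is one reason analysts report more than one measure when a distribution is skewed.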
There are many, many types of inferential statistics, and a full discussion is not possible here. A list of sources for obtaining more in-depth treatment can be found in Additional Resources at the end of the chapter.
Statistical significance. Statistical significance is a concept that is critical to an understanding of the generalizability of research findings. That is, how confident can one be about these findings, and how can or should these findings be used in the decision-making process? Understanding statistical outcomes is often a matter of degree of confidence in those findings, rather than an "absolute proof" versus "no proof" decision. Very often it is a matter of determining a comfort level with the "odds" that the results in question are due to the experimental manipulation (or the hypothesized naturally occurring relationship) rather than being due to some chance occurrence.
In keeping with this notion, statistical significance is expressed as a probability, the p value, which answers the question: if chance alone were operating (that is, if there were actually no effect), how likely is it that results at least this strong would occur anyway? P values are typically reported as <0.05 (less than the .05 level) or <0.01 (less than the .01 level). A value of <0.01 means that, if chance alone were at work, results this extreme would occur less than 1 percent of the time; this is considered an excellent outcome. Perhaps the most often relied upon level is <0.05, the conventional threshold for solid statistical significance. Note that a p value does not state the probability that the hypothesis is true, nor does it guarantee how often the study's results would be replicated; it states only how surprising the results would be if chance alone were responsible.
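One intuitive way to see what a p value measures is a permutation test: shuffle the group labels many times and count how often chance alone produces a difference at least as large as the one observed. The data below are hypothetical, and this sketch is only one of many ways to estimate a p value:

```python
import random

# Hypothetical satisfaction scores (1-10) for ten victims per group.
control      = [4, 5, 3, 6, 4, 5, 2, 6, 5, 4]
experimental = [6, 7, 5, 7, 6, 8, 4, 7, 6, 6]
observed = sum(experimental) / 10 - sum(control) / 10  # 1.8

# If the group labels were meaningless, how often would a random
# relabeling produce a difference in means this large or larger?
rng = random.Random(0)
pooled = control + experimental
trials, extreme = 10_000, 0
for _ in range(trials):
    rng.shuffle(pooled)  # randomly reassign all 20 scores to two groups
    diff = sum(pooled[10:]) / 10 - sum(pooled[:10]) / 10
    if diff >= observed:
        extreme += 1
p_value = extreme / trials
print(f"observed difference: {observed:.1f}, p = {p_value:.4f}")
```

A tiny fraction of the shuffles match the observed difference, so the result would be very surprising under chance alone; that fraction is the estimated p value.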
Sample size. Researchers are often unable to test the entire universe of subjects and must typically rely on smaller numbers of cases. A critical issue in both the research methodology and the power of any statistical findings is the size of the sample. Simply put, a larger sample reduces the chance that random variation will be mistaken for a real effect, while careful methods, randomization, and good variable measuring instruments guard against what are called confounding variables--the ever-present possibility that something other than what was hypothesized actually produced the outcome. Samples must also be of sufficient size to support the statistical significance and generalizability of findings. No doubt the reader is familiar with the phrase "statistically significant sample" being used in, for example, news reports that relay the results of national opinion polls. Some people may be surprised to learn that these samples are often in the low thousands, if that, and are being used to estimate the views of tens of millions of voters. The power of randomization and sizable samples, in concert with other methodological issues (such as whether or not the questions asked in the poll's questionnaire protocol are valid), combine to produce some strikingly dependable results.
EVALUATION RESEARCH METHODS
Among the most common applications of research methods in the victim services area is evaluating the effectiveness of a project or program. Indeed, evaluation research is not really a different or difficult area in and of itself. It is best thought of as simply research applied in the field or in a program setting. At its most fundamental level, evaluation research seeks to answer basic questions about whether or not the program is achieving its stated goals as measured in the research project.
There are many forms of evaluation research. Given the fact that many traditional experimental or "laboratory" research methods are not always possible in the "real world" setting of an ongoing victim program, a variety of innovative designs are utilized. Many of these are derived from Campbell and Stanley's seminal work Experimental and Quasi-Experimental Designs for Research (1963). This is a very important book to become familiar with, even on a basic level. In terms of specific evaluation research itself, there are several distinct categories. The reader will note that many of these are distinguished by what is being measured and when it is being measured.
Process/Impact or outcome evaluation. Service providers should understand certain distinctions between process evaluation, which investigates issues regarding the program's implementation, and impact or outcome evaluation, which looks more specifically at whether or not the program achieved its goals and had an effect on the issue at hand. For example, a process evaluation might look at how networks of service providers are formed and measure the number and intensity of these relationships. An outcome evaluation might focus on whether or not this networking actually helped victims in some way.
Empowerment evaluation. Empowerment evaluation is a model that is currently enjoying increased use in a wide variety of public and private settings. While maintaining the utmost independence, the evaluator's role expands to include collaborative functions within an open forum, rather than remaining solely that of expert-counselor. This approach involves both independent evaluation and the empowerment of management and program staff to continuously assure quality of services. It is also useful given the changing macro-contexts within which services are delivered. As Fetterman (1996) points out:
Empowerment evaluation is necessarily a collaborative group activity. As a result, the context changes: the assessment of a program's value or worth is not the end point of the evaluation--as is often the case in traditional evaluation--but part of an ongoing process of program improvement. This new context acknowledges a simple but often overlooked truth: that merit and worth are not static values. Populations shift, goals shift, knowledge about program practices and their value change, and external forces are highly unstable. By internalizing and institutionalizing self-evaluation processes and practices, a dynamic and responsive approach to evaluation can be developed to accommodate these shifts (Ibid., 5).
Staying current in a developing field is both exciting and demanding. By virtue of its interdisciplinary nature, the crime victim area requires attention outside the primary fields of a practitioner's training. Among the fields involved in contributing to knowledge in this area are law enforcement, criminal justice, juvenile justice, criminology, corrections, psychology, social work, sociology, counseling, family studies, human services, public administration, medicine, nursing, and education.
With the ever-increasing demands placed on service providers' time by heavy caseloads, staying current in even a single primary area is oftentimes difficult. However, there are tools that may be employed to stay current and to better ensure the quality of crime victim services and advocacy. Much of the work in culling through the research and other literature is already being done, at least to some extent, by others. Many journals are published that contain this body of work. Some are specific to a field, such as Child Maltreatment, published by the American Professional Society on the Abuse of Children, while others provide a variety of articles of interest to service providers, such as the Journal of Interpersonal Violence. One publication, Violence & Abuse Abstracts, summarizes current research throughout areas of interpersonal violence and is a good starting point to see what journals are publishing materials of interest to the service provider. Victim service providers should draw upon these resources and not expend energies to re-create this work.
Many relevant research activities may be ongoing in local colleges and universities. Victim advocates can pick up a school/course catalogue and read up on the work researchers are doing and the courses they are teaching. Victim service providers may not have taken the opportunity to reach out to learn what other victim services agencies in the area are doing, but keeping up with the latest research may reveal what others are doing in a related topic area. The following are several potential ways to work together to achieve a mutual benefit for service providers and researchers alike:
Periodicals published by professional associations or publishing houses often have articles of current relevance. These include publications that are more substantial than the typical newsletter, but perhaps are not truly academic journals. The difficulty here typically involves the time needed to review these publications and the money needed to subscribe. Although these concerns are certainly real, the benefit to victim service providers and their agencies may well justify this resource allocation. It is important to invest these limited resources in the highest pay-off areas.
Victim advocates can begin by collecting suggestions from colleagues regarding what they are reading (or wish they had the time and money to read) and add to that list by talking to the professor(s) and their graduate student(s). Addresses should be obtained for the publications, and free sample issues requested.
By visiting the library, additional publications may be identified and reviewed on a monthly or quarterly basis. To stay current across disciplines, victim service providers should look for periodicals whose editorial boards represent a broad range of the areas to be covered. Colleagues can also be drawn upon to share information informally, bringing articles of interest to one another's attention to cut down on the initial work of each participant.
The power of the on-line services should not be underestimated. Specific information about on-line research is available in Chapter 20. The amount of time that can be saved in researching topics on-line can be astounding. The only caution here is to be particularly skeptical of sources found on-line if they cannot otherwise be verified as credible by the identification of author or institution such as when addresses end with <.edu> or <.gov>. While there is excellent information to be gathered from the Internet, there is a lot of pure nonsense there too. The Internet is a very powerful tool, but it is subject to abuse and manipulation. Information and references obtained from the World Wide Web should be cross-checked.
Various government agencies provide outstanding information clearinghouses, such as the National Criminal Justice Reference Service (NCJRS); the Office for Victims of Crime Resource Center (OVCRC) is part of NCJRS. In addition, departments such as Health and Human Services, Housing and Urban Development, and Education offer similar information services. Victim service providers should register with all applicable clearinghouses to assist in identifying innovative programs and current information.
It is often noted that good experimental design is mastered by practice and not simply by being told the potential problems for which one should be on the lookout. The best way to keep up-to-date is to commit to conducting a small scale research project, or to writing a brief review article about some area of interest. Set reasonable, but strict, deadlines. Starting with the tips provided in this chapter, victim service providers should get input from a variety of sources and ask others to review and react to this work. No doubt the new researcher will be amazed at how much was already known, and a considerable array of additional material will probably be compiled. Victim service providers will learn much from an open-minded reception of methodological, content, and editorial feedback.
Victim service providers should be mindful of a few important points:
- Make sure that the reader has access to both the raw numbers and the proportional representations. Readers should not rely heavily on, for example, percentage representations without a good sense of the underlying data (which really should be made available). For example, two jurisdictions have claimed a 50 percent reduction in homicides in the same period. Jurisdiction A fell from 50 to 25, while jurisdiction B fell from 2 to 1. These may be equally significant depending upon the circumstances involved, but they represent quite different things: a substantial drop in the actual number of homicides versus the same percentage change computed from very small counts.
- When data are provided graphically (for example, in graphs that show trend lines), look to see that the graph shows the zero point on the axis and, if it does not, then see if there is a good reason for this and if it is understandable as to what the data actually represent.
- Be wary of trend data that make broad claims from either short spans of time or from two discrete points in time because manipulating the presentation of data is an easy way to limit the focus.
- Readers should be very skeptical of claims made about the greater population at large from studies that have small sample sizes because there are limitations to the strength of estimating techniques. Studies that examine a few subjects with the purpose of examining the effects of an intervention on individual subject behavior are relevant and important. Even in these studies, however, one should be very careful in generalizing results to a larger or different population.
- Victim service providers should be aware of misinterpretations that arise from mishandling proportions in population demographics. Even if group A and B seem to have the same absolute numbers of victims, if one group is many times the size of the other group, then their proportional representation should be stated in order to have a truer understanding of this phenomenon. (For example, two ethnic or racial groups may have the same number of homicide victims; however, only within the context of population proportion can these numbers be truly understood.)
- Victim service providers should be skeptical, and not take research at face value. If the author is not convincing about the findings and conclusions drawn from the study, try to articulate what is wrong with the research or how it could have been done differently.
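The points above about raw numbers versus percentages, and about population proportions, can be made concrete with a short calculation. The figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
def percent_change(before, after):
    """Percentage change between two counts."""
    return (after - before) / before * 100

def rate_per_100k(victims, population):
    """Victimization rate per 100,000 population."""
    return victims / population * 100_000

# Identical percentage drops can mask very different raw numbers:
print(percent_change(50, 25))  # -50.0 -- Jurisdiction A: 25 fewer homicides
print(percent_change(2, 1))    # -50.0 -- Jurisdiction B: 1 fewer homicide

# Identical raw counts can mask very different rates: the same 200
# victims in a group of 4,000,000 versus a group of 500,000.
print(rate_per_100k(200, 4_000_000))  # 5.0 per 100,000
print(rate_per_100k(200, 500_000))    # 40.0 per 100,000
```

Reporting both the counts and the rates, as in this sketch, is what allows a reader to judge which comparison is meaningful.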
Victim service providers must be careful not to automatically discount research simply because it does not happen to jibe with their point of view. Research should be read to learn new things as well as to confirm current beliefs. Also, remember that no study is perfect. This is particularly true in the crime victim research area, as the demands of ethical treatment of subjects and the limitations on data that can be gathered often conflict with the rigors of pure research.
As the victims' field expands, those who work in it need to keep up with an ever-increasing array of research and other published literature. It is important not to be anxious about delving into this area. Adopting the tips above will help victim service providers stay current and better ensure that their services to, and advocacy for, victims of crime will be of high quality.
Sound research should form the basis of developing sound practices that address the needs of the population of victims served. This research should be of good quality and study actual client populations in field settings whenever practical. Indeed, one's reputation, and the credibility of the field as a whole, relies to a significant degree on the field's collective ability to translate good research into quality service provision.
Research and Evaluation Self-Examination
b. Operational definitions.
c. Randomized study.
d. Sampling bias.
e. Positive correlation.
2. Explain the difference between descriptive and inferential statistics.
3. List several ways in which your program could access or minimize the cost of research and evaluation services.
4. List and discuss three "clever data manipulations" to be wary of.
5. Describe the difference between the qualitative research approach and the quantitative research approach.
Chapter 16 References
Campbell, D. T., and J. C. Stanley. 1963. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally.
Dixon, B. R., G. D. Bouma, and G. B. J. Atkinson. 1991. A Handbook of Social Science Research. Oxford: Oxford University Press.
Fetterman, D., S. Kaftarian, and A. Wandersman., eds. 1996. Empowerment Evaluation: Knowledge and Tools for Self Assessment and Accountability. Thousand Oaks, CA: Sage Publications.
Mason, J. 1996. Qualitative Researching. London: Sage Publications.
McCain, G., and E. Segal. 1969. The Game of Science. Belmont, CA: Brooks/Cole.
Chapter 16 Additional Resources
Abt, C., ed. 1976. The Evaluation of Social Programs. Beverly Hills, CA: Sage Publications.
Berg, B. L. 1998. Qualitative Research Methods for the Social Sciences, 3rd ed. Needham Heights, MA: Allyn & Bacon.
Caro, F., ed. 1977. Readings in Evaluation Research, 2nd ed. New York: Russell Sage.
Cronbach, L. & Associates. 1980. Toward Reform of Program Evaluation. San Francisco: Jossey-Bass.
Fink, A., and J. Kosecoff. 1978. An Evaluation Primer. Beverly Hills, CA: Sage Publications.
Geertz, C. 1973. "Thick Description: Toward an Interpretive Theory of Culture." In C. Geertz, The Interpretation of Culture. New York: Basic Books.
Guttentag, M., and E. Struening, eds. 1975. Handbook of Evaluation Research, vol. 2. Beverly Hills, CA: Sage Publications.
Isaac, S., and W. B. Michael. 1971. Handbook in Research and Evaluation. San Diego, CA: EDITS.
Mason, E., and W. Bramble. 1978. Understanding and Conducting Research: Applications in Education and the Behavioral Sciences. New York: McGraw-Hill.
Meyers, W. 1981. The Evaluation Enterprise. San Francisco: Jossey-Bass.
Riecken, H., and R. Boruch, eds. 1974. Social Experimentation: A Method for Planning and Evaluating Social Intervention. New York: Academic.
Reiss, A., and J. Roth. 1993. Understanding and Preventing Violence, vol. 1-4. Washington, DC: National Academy of Sciences Press.
Rossi, P., and H. Freeman. 1982. Evaluation: A Systematic Approach, 2nd ed. Beverly Hills, CA: Sage Publications.
Shortell, S., and W. Richardson. 1978. Health Program Evaluation. St. Louis, MO: C.V. Mosby.
Struening E., and M. Guttentag, eds. 1975. Handbook of Evaluation Research, vol.1. Beverly Hills, CA: Sage Publications.
Suchman, E. 1967. Evaluative Research: Principles and Practice in Public Service and Social Action Programs. New York: Russell Sage.
Weiss, C. 1972. Evaluating Action Programs: Readings in Social Action and Education. Boston: Allyn & Bacon.
Weiss, C. 1972. Evaluation Research: Methods for Assessing Program Effectiveness. Englewood Cliffs, NJ: Prentice-Hall.
American Prosecutors Research Institute. 1996. Measuring Impact: A Guide to Program Evaluation for Prosecutors. Alexandria, VA: Author.
Fink, A., and J. Kosecoff. 1977 to present. How to Evaluate Education Programs. Arlington, VA: Capitol Publications.
Tallmadge, G. October 1972. The Joint Dissemination Panel Ideabook. Mountain View, CA: RMC Research Corporation.
DESIGN AND SAMPLING
Alwin, D. 1978. "Survey Design and Analysis: Current Issues." Sage Contemporary Social Science Issues 46. Beverly Hills, CA: Sage Publications.
Jessen, R. 1978. Statistical Survey Techniques. New York: John Wiley.
Rutman, L., ed. 1977. Evaluation Research Methods: A Basic Guide. Beverly Hills, CA: Sage Publications.
Williams, B. 1978. A Sampler on Sampling. New York: John Wiley.
Cronbach, L. 1970. Essentials of Psychological Testing, 3rd ed. New York: Harper & Row.
Nunnally, J. 1978. Psychometric Theory, 2nd ed. New York: McGraw-Hill.
Whitla, D. K., ed. 1968. Handbook of Measurement and Assessment in Behavioral Sciences. Reading, MA: Addison-Wesley.
ANALYSIS OF INFORMATION
Haack, D. 1979. Statistical Literacy: A Guide to Interpretation. North Scituate, MA: Duxbury.
Johnson, A. 1977. Social Statistics Without Tears. New York: McGraw-Hill.
Vito, G. 1989. Statistical Applications in Criminal Justice. Newbury Park, CA: Sage Publications.
Journal of Traumatic Stress
Journal of Interpersonal Violence
Violence and Victims
Crime and Delinquency
Criminal Justice and Behavior