1997-98 Academy Text Supplement

Chapter 21-13


Understanding Basic Research and Evaluation

Abstract: The interdisciplinary field of victims' rights and services is continually developing. The "knowledge base" available through research and evaluation has seen tremendous advances, and "promising practice" recommendations are often developed and updated based on these studies. This is significant to those working in the field, and of particular importance to victim service providers. This chapter reviews basic research issues and processes. Also, an extensive resource list is provided to help the reader locate materials to assist in designing and conducting research.

Learning Objectives: Upon completion of this chapter, students will understand the following concepts:

1. How information about research findings can be obtained.

2. How programs can secure assistance to conduct research.

3. The definition of basic research terms and fundamental research and evaluation methods.


Why Victim Advocates Need to Know More About Research and Science

Research is often viewed as an esoteric topic that is the sole purview of "pointy-headed intellectuals" and has no practical value to victim advocates. Nothing could be further from the truth. In the criminal justice and victim advocacy fields, almost everyone has strong beliefs about a host of topics: how much crime there really is, the major causes of crime, what types of services crime victims really need, the best way to help crime victims, and whether crime victims should have constitutionally protected rights.

Much is believed about each of these topics. Unfortunately, some of these beliefs are right, some are wrong, and it is often very difficult to distinguish between the two. Research is nothing more than a systematic approach designed to distinguish beliefs and opinions that are supported by empirical data from those that have no empirical support.

In a highly informative and entertaining book about science designed for the lay person, McCain and Segal (1969) describe science as a game that is informed by certain attitudes and played by certain rules. They make a distinction between science-based belief systems and belief systems based on dogma, and suggest that "it is the system of data-based explanation that distinguishes science from dogma." Scientists cannot accept statements unsupported by data and have the responsibility to decide on the basis of evidence the best explanation for a set of facts. In contrast, dogma is based on the pronouncements by people in political, religious, social, or even criminal justice authority. McCain and Segal capture the difference between science-based and dogma-based belief systems as follows:

"One way of contrasting science and dogma is to say that a scientist accepts facts as a given and belief systems as tentative, whereas a dogmatist accepts the belief systems as given; facts are irrelevant." (p. 31)

Victim advocates seek to learn more about crime victims and the best ways to help them. This chapter is designed to help victim service providers better utilize what scientists, researchers, and research have to offer. Since few victim advocates aspire to be scientists or researchers, the focus is to help victim service providers become more critical consumers of research and form nonexploitive partnerships with researchers.

How to Make Sense of Research

A comprehensive treatment of understanding empirical research is beyond the scope of this brief chapter. However, there are a few foundational tips to keep in mind about analyzing research. Victim service providers who do not feel that their current knowledge and skill level are sufficient in this area may wish to take (or re-take) a basic course in research methods and statistics. At the very least, reference should be made to textbooks in these areas that cover the basic terminology and techniques of empirical investigations. An extensive resource list is provided at the end of this chapter. First, the issue of understanding research produced by others will be discussed. This will be followed by a primer on conducting research.

It is most typical to begin a research project by reviewing the work of others. When considering others' research, victim service providers who are less familiar with research methodology should keep the points discussed in the sections that follow in mind as they analyze the research under consideration.

Nomenclature of Research and Evaluation

As the title of this section suggests, when people first begin to read research reports, they often encounter many terms that are new. Even if readers are generally familiar with the terms in question, they may find that these terms have a different or more refined usage in evaluation research. It is worthwhile to spend some time learning these terms since they form the language of the scientific method and are used consistently to describe the results of empirical studies. Moreover, as a student of evaluation research, one should consider conducting simple studies. It is the best way to really learn about research, and frankly, it is not that difficult to do on a small scale.

Research Basics

The first, and perhaps most basic, term to know is variable. A variable can be almost anything that can have more than one value. That is, it is not a fixed item or event. A variable can change, or vary. Studies usually involve controlled observations of variables and their interrelationships. Variables can include a wide variety of factors, such as victim satisfaction, attitudes of officials toward victims, length of sentences, and so on. The next important term is study, which is a very broad term covering just about all objective analyses of variables. Calling something a study does not imply it is necessarily a good one, however.

Most typically, victim service providers will be interested in studies involving people. In such a study, the persons observed are called subjects. A subject could be a victim or survivor whose experience in the system or responses to treatment are being measured, or a professional whose service-providing activities are being evaluated.

All good studies begin with a theoretical framework, wherein the researcher provides some insight into his or her general approach to the subject matter at hand. This is usually evident in the author's review of the literature where specific publications and research are cited and reviewed. From this, the researcher develops a hypothesis. The hypothesis is an extremely important foundation upon which good research is conducted. A hypothesis is a declarative statement that typically expresses the relationship between variables. An example might be "Providing victim impact statements at sentencing significantly increases victim satisfaction with the criminal justice system regardless of sentencing outcomes."

There are many different forms of evaluation or research studies. A case study is a study containing observations about one subject reported by, for example, an anthropologist, sociologist, psychologist, or medical doctor. These studies are typically based on what is termed anecdotal evidence. A series of case studies typically provides stronger evidence that something of significance is happening and may merit further study. This further study usually begins with a pilot study, which is a scaled-down version of a major effort conducted for several purposes (for example, to test proposed measurement instruments, to hone the research methodology, and to see if there is a preliminary basis for supporting the hypothesis).

More commonly, a sample study would be employed due to the increased inferential power of such studies. A sample study is one where only some of the individuals or events of interest to the researcher are studied so as to be able to draw conclusions about the population as a whole. The sample group is usually selected or assigned with some degree of randomness. This is done so that researchers can say that the sample is representative of the population they ultimately seek to speak about. For example, a group of individuals who have survived a significant traumatic event are randomly assigned to two or more treatment groups (e.g., a traditional therapy approach and an eye movement and desensitization treatment group) to see which ones do better as a result of the treatment provided.

A randomized study is one in which subjects are assigned to different groups as randomly as possible by flipping a coin or using a random number generator. In contrast, if the researcher decides which subjects go into which group, or if the subjects assign themselves, selection bias can cause the groups to no longer be comparable.
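To make the idea concrete, the short sketch below (written in Python purely for illustration; the subject identifiers are invented) shows how a list of subjects might be shuffled into comparable groups using a random number generator rather than the researcher's judgment:

    import random

    # Hypothetical subject identifiers (illustrative only).
    subjects = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]

    random.shuffle(subjects)                 # place the subjects in a random order
    midpoint = len(subjects) // 2
    treatment_group = subjects[:midpoint]    # will receive the intervention
    control_group = subjects[midpoint:]      # will not receive the intervention

    print("Treatment group:", treatment_group)
    print("Control group:  ", control_group)

Because the assignment is left to chance, neither the researcher's preferences nor the subjects' own choices can introduce selection bias into the makeup of the two groups.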

In a controlled study, at least two groups are compared. The experimental group receives the intervention or treatment, and the control group does not. The theory is that if the samples were selected appropriately, the experimental group would be just like the control group, except for whatever the experiment provided (the intervention). The rationale is that any measurable differences between the groups can be attributed to the experimental intervention.

Assuming good research methods and appropriate statistics are employed, the results of these studies can often be generalized to larger groups with some level of confidence. The basic rationale for a sample study is the impracticability, cost factors, or simply the impossibility of testing all potential subjects (i.e., testing the entire universe of subjects). Therefore, some smaller group is selected for study under controlled conditions and rigorous analysis that allow for inferences to be drawn from the sample.

More formalized research methods involve deriving research questions from the underlying hypothesis and deciding which variables can and will be manipulated and studied. There are two basic types of variables involved in research: independent and dependent variables. An independent variable is one that influences or produces an effect on a dependent variable; the dependent variable, then, is one that depends on, or is influenced by, another variable. Typically, the independent variable is the variable manipulated by the researcher to see what effects the manipulation has on the dependent variable. Of course, many times no manipulation of variables is possible (i.e., the design is not a true experimental design), but the relationship between dependent and independent variables can nonetheless be observed as it occurs naturally. To aid in the reader's understanding, note that the dependent variable may also be thought of as the outcome variable.

The important factors involved in research must be clearly defined; these definitions are termed operational definitions. For example, if the term "recidivism" is being used in a study, it should be defined, such as "committing another criminal or juvenile offense." Frequently, otherwise sound research is criticized for a lack of precision in defining research variables.

A survey is a measure that collects data by way of questionnaires or interviews. Surveys can be observational, if no intervention or treatment occurred, or can be used as pre-test and post-test measures taken before and after some intervention or treatment. A pre- and post-test design is among the simplest research designs: some measurement is taken of a group before the experimental intervention and then re-taken afterward to see if there is any difference. If other factors are well controlled, these differences can be attributed to the experimental intervention.
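As a purely illustrative sketch (the scores below are invented, and Python is used only as a convenient calculator), the arithmetic behind a pre- and post-test comparison is simply the difference between the two measurements for each subject, summarized across the group:

    # Hypothetical satisfaction scores (0-100) measured before and after an intervention.
    pre_scores = [40, 55, 38, 62, 47]
    post_scores = [58, 61, 49, 70, 52]

    changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
    average_change = sum(changes) / len(changes)

    print("Change per subject:", changes)      # [18, 6, 11, 8, 5]
    print("Average change:", average_change)   # 9.6

Whether such an average change can fairly be attributed to the intervention depends on how well other factors were controlled, as noted above.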

The above-mentioned pre- and post-test approach is an experimental research design. Given the difficult nature of, and ethical issues involved in, using true experimental designs in the victim services area, other methods and types of studies are often required. Correlational studies look for associations between variables. A positive correlation means that the greater the value of variable one, the greater one can expect the value of variable two to be. A negative correlation, also referred to as an inverse correlation, means that the greater the value of variable one, the smaller one can expect the value of variable two to be. It is important to note that correlations do not prove anything absolutely. It is often said that "correlation is not causation," meaning that just because two items are associated does not mean that there is a cause and effect relationship between them.
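The sketch below (invented numbers, Python used only for illustration) computes the standard Pearson correlation coefficient, a value that ranges from -1 (perfect negative correlation) through 0 (no association) to +1 (perfect positive correlation):

    import math

    def pearson_r(x, y):
        # Pearson correlation coefficient for two equal-length lists of numbers.
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        covariance = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        spread_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
        spread_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
        return covariance / (spread_x * spread_y)

    # Invented data: hours of advocate contact and victim satisfaction scores.
    contact_hours = [1, 2, 3, 4, 5]
    satisfaction = [50, 55, 63, 61, 72]

    print(round(pearson_r(contact_hours, satisfaction), 2))   # about 0.95: a strong positive correlation

Even a correlation this strong would not, by itself, show that more contact hours cause higher satisfaction; some third factor could be driving both.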

If the research in question is looking at the frequency of something at a particular point in time, this is called a prevalence study (such as the number of victims of violent crime per 100,000 people in the United States). If the study focuses on the frequency of something over a given period of time, it is called an incidence study (such as the number of violent crime victims in the last month). Often prevalence and incidence data are compared across time in what may be referred to as trend analysis, e.g., does the number of violent crimes across certain years demonstrate any trends, such as rising or falling?

A retrospective study looks to the past for information about the topic at hand. Often these studies involve reviewing archival data (e.g., old arrest reports). A prospective study is one that looks forward; a longitudinal (or longer-term) study may be prospective. For example, a longitudinal study of the recovery rates of victims exposed to different treatments that followed them into the future for several years would be prospective.

A blind study means that the researchers and/or the subjects do not know which treatment group each subject is in. In a single-blind study, the subjects do not know but the researchers do. In a double-blind study, neither the researchers nor the subjects know which group the subjects are in; all information is coded, and the code is not broken until the end of the study. This helps avoid problems that occur when study participants and researchers deliberately or inadvertently contaminate study results.

A Word About Statistics

Despite one's best efforts, a discussion about research design and evaluation will inevitably include some references to statistics. Often jokingly (or maybe not so lightly) referred to as "sadistics," it is the part of the research package that can cause the most concern to the uninitiated. However, many user-friendly statistical packages are currently available that can be loaded on most desktop PCs, and often a basic understanding is enough to get the newcomer going. Indeed, only a few concepts are important to review here.

Two basic types of statistics are descriptive and inferential. Descriptive statistics, as one might surmise, describe or summarize information about a sample. Inferential statistics move beyond simple descriptions and are instructive as to what generalizations or statistical estimations can be made about the population. Inferential results are generally considered more powerful.

The reader is already familiar, no doubt, with many basic descriptive statistics. For example, there are three generally known as measures of central tendency: the mode, the median, and the mean. The mode is the number, item, score, or other value that occurs most often; it is the most frequent occurrence in the sample. The median is the middle or midpoint of a distribution; it is the value that has 50% of the other values above it and 50% below it. The mean, perhaps the most often used measure of central tendency, is the average value in the distribution, computed by adding all of the values and dividing by how many there are.
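For readers who want to see these three measures side by side, the following sketch uses Python's standard statistics module and a small set of invented values:

    import statistics

    # Invented sample of sentence lengths, in months.
    values = [12, 18, 18, 24, 36, 48, 60]

    print("Mode:  ", statistics.mode(values))     # 18 (the value that occurs most often)
    print("Median:", statistics.median(values))   # 24 (the middle value of the sorted list)
    print("Mean:  ", statistics.mean(values))     # about 30.9 (the arithmetic average)

Note how the mean is pulled upward by the few long sentences in the sample, which is why reporting more than one measure of central tendency is often informative.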

There are many, many types of inferential statistics and a full discussion is not possible here. The reader will find several good sources for obtaining a more in-depth treatment in the resource list below. One central concept does, however, warrant discussion at this point. That is the idea of statistical significance.

Statistical significance is a concept that is critical to an understanding of the generalizability of research findings. That is, how confident can one be about these findings, and how can or should these findings be used in the decision-making process? Understanding statistical outcomes is often a matter of degree of confidence in those findings, rather than an "absolute proof" versus "no proof" decision. Very often it is a matter of determining a comfort level with the "odds" that the results in question are due to the experimental manipulation (or the hypothesized naturally occurring relationship) rather than being due to some chance occurrence.

In keeping with this notion, statistical significance is expressed as a probability, called the p value: the probability of obtaining results at least as strong as those observed if chance alone were operating (that is, if the hypothesized relationship did not really exist). P values are typically reported as <.05 (less than the .05 level) or <.01 (less than the .01 level). A value of <.01 means that the probability of obtaining such results by chance alone is less than 1%, which provides strong grounds for concluding that something more than chance is at work. Perhaps the most often relied upon threshold is <.05, which is conventionally accepted as statistically significant. A p value is not a guarantee, however: it speaks to how unlikely the results would be if only chance were operating, not to the certainty that the researcher's hypothesis is correct, which is one reason replication of studies remains important.
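One way to get a feel for what "chance alone" means is a simple re-shuffling (permutation) simulation. The sketch below is purely illustrative: the scores are invented, and Python is used only as a convenient way to repeat the shuffle many times.

    import random

    # Invented satisfaction scores for a treatment group and a control group.
    treatment = [72, 68, 75, 70, 74, 69]
    control = [65, 63, 70, 61, 66, 64]

    def mean(values):
        return sum(values) / len(values)

    observed_difference = mean(treatment) - mean(control)
    combined = treatment + control

    # Shuffle the twelve scores into two arbitrary groups over and over, and count how
    # often a difference at least as large as the observed one turns up purely by chance.
    at_least_as_large = 0
    trials = 10000
    for _ in range(trials):
        random.shuffle(combined)
        difference = mean(combined[:6]) - mean(combined[6:])
        if abs(difference) >= abs(observed_difference):
            at_least_as_large += 1

    print("Observed difference:", round(observed_difference, 2))
    print("Approximate p value:", at_least_as_large / trials)

If only a small fraction of the shuffled trials produce a difference as large as the one actually observed, chance alone is an unlikely explanation, which is exactly what a small p value conveys.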

Another closely related issue is sample size. Remember, researchers are often unable to test the entire universe of subjects and must typically rely on smaller numbers of cases. A critical issue in both the research methodology and the power of any statistical findings is the size of the sample. A larger sample reduces the chance that results are simply a quirk of who happened to be included, and careful design helps guard against what are called confounding variables: the ever-present possibility that something other than what was hypothesized actually produced the outcome. Careful methods, good variable measuring instruments, and other factors all contribute to a strong research design. Samples must also be of sufficient size to support the statistical significance and generalizability of findings. No doubt the reader is familiar with the phrase "statistically significant sample" being used in, for example, news reports that relay the results of national opinion polls. Some people may be surprised to learn that these samples are often in the low thousands, if that, and are being used to estimate the views of tens of millions of voters. The power of randomization and sizable samples, in concert with other methodological issues (such as whether or not the questions asked in the poll's questionnaire are valid), combine to produce some strikingly dependable results.
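The familiar "plus or minus" figure attached to such polls comes from a standard margin-of-error formula. The sketch below (Python used only as a calculator; a 95% confidence level and a simple random sample are assumed) shows why a poll of roughly a thousand respondents can speak, within a few percentage points, for millions of voters:

    import math

    def margin_of_error(sample_size, proportion=0.5, z=1.96):
        # Approximate 95% margin of error for a proportion from a simple random sample.
        return z * math.sqrt(proportion * (1 - proportion) / sample_size)

    print(round(margin_of_error(1100) * 100, 1), "percentage points")   # about 3.0
    print(round(margin_of_error(400) * 100, 1), "percentage points")    # about 4.9

Notice that the formula depends on the size of the sample, not the size of the population, which is why samples in the low thousands can be adequate for national estimates.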

Evaluation Research Issues

Among the most common applications of research methods in the victims services area is evaluating the effectiveness of a project or program. Indeed, evaluation research is not really a different or difficult area in and of itself. It is best thought of as simply research applied to the field program setting. At its most fundamental level, evaluation research seeks to answer basic questions about whether or not the program is achieving its stated goals as measured in the research project.

There are many forms of evaluation research. Given the fact that many traditional experimental or "laboratory" research methods are not always possible in the "real world" setting of a field program, a variety of innovative designs are utilized. Many of these are derived from Campbell and Stanley's seminal work Experimental and Quasi-Experimental Designs for Research, the citation to which is found in the reference list. This is a very important book to become familiar with, even on a basic level. In terms of specific evaluation research itself, there are several distinct categories. The reader will note that many of these are distinguished by what is being measured and when it is being measured.

Service providers should understand certain distinctions here, such as the distinction between process evaluation, which investigates issues regarding the program's implementation, and impact or outcome evaluation, which looks more specifically at whether or not the program achieved its goals and had an effect on the issue at hand.

A powerful approach to evaluation called "empowerment evaluation" is currently enjoying increased use. This approach involves both independent evaluation and the training of program staff to continuously assure quality of services (Fetterman, et al., 1996). This approach is one that should be considered by any victim service program contemplating an evaluation project.

Staying on the Cutting Edge

Staying on the cutting edge in a developing field is both exciting and demanding. By virtue of its interdisciplinary nature, the crime victim area requires attention outside the primary fields of a practitioner's training. Among the fields involved in contributing to knowledge in this area are law, criminal justice, juvenile justice, corrections, psychology, social work, counseling, human services, public administration, medicine, nursing, education, divinity and others.

With the ever increasing demands placed on service providers' time by heavy caseloads, it is oftentimes difficult enough to stay current in a single primary area. However, there are tools that may be employed to stay current and to better ensure the quality of crime victim services and advocacy. Much of the work in culling through the research and other literature is already being done, at least to some extent, by others. Victim service providers should draw upon these resources and not expend energies to re-create this work. By staying current on cutting-edge research, service providers can greatly improve victim services.

Academic Institutions: A Wealth of Information

Many relevant research activities may be ongoing in local colleges and universities. Victim advocates can pick up a school or course catalogue and read up on the professors and researchers there, the work they are doing, and the courses they are teaching. These academics may not have taken the opportunity to reach out to learn what victim services agencies in the area are doing, but they may be working on related topics. There are several potential ways to work together for the mutual benefit of service providers and researchers alike.

For example, academics may be willing to review and critique a draft of an evaluation proposal. A victim services agency could also offer an undergraduate or graduate student internship that would provide the agency with quality volunteer work and provide access to the school's resources, such as on-line literature searches and academic journals.

Utilizing Periodicals

Periodicals published by professional associations or publishing houses often have articles of current relevance. These include publications that are more substantial than the typical newsletter, but perhaps are not truly academic journals. The difficulty here typically involves the time needed to review these publications and the money needed to subscribe. Although these concerns are certainly real, the benefit to victim service providers and their agencies may well justify this resource allocation. It is important to invest these limited resources in the highest pay-off areas.

Victim advocates can begin by collecting suggestions from colleagues regarding what they are reading (or wish they had the time and money to read) and add to that list by talking to local professors and graduate students. Addresses should be obtained for the publications, and sample issues can be requested.

Other publications may be reviewed on a monthly or quarterly basis by visiting the library. To stay current across disciplines, victim service providers should look for periodicals whose listed editors broadly represent the areas to be covered. Colleagues can also share information informally, bringing articles of interest to one another's attention to cut down on the initial work each participant must do.

On-Line Services

The power of the on-line services should not be underestimated. Specific information about on-line research is available in Chapter 20. However, it deserves mention here that the amount of time that can be saved in researching topics on-line is astounding. The only caution here is to be particularly skeptical of sources found on-line if they cannot otherwise be verified as credible. The Internet is a very powerful tool, but it is subject to abuse and manipulation. Information and references obtained from the World Wide Web should be cross checked.

Governmental Clearinghouses

Various government agencies provide outstanding information clearinghouses, such as the National Criminal Justice Reference Service (NCJRS); the Office for Victims of Crime Resource Center (OVCRC) is part of NCJRS. However, there are important information clearinghouses beyond the fine contributions made by the U.S. Department of Justice. Other departments, such as Health and Human Services, Housing and Urban Development, and Education, among others, offer similar assistance. Victim service providers should register with all such clearinghouses to assist in identifying innovative programs and cutting-edge information.

Experience Is the Best Teacher

It is often noted that good experimental design is mastered by practice, not by simply being told the potential problems for which one should be on the lookout. Among the best ways to keep on the cutting edge is to commit to conducting a small-scale research project, or even to writing a brief review article about some area of interest. Set reasonable, but strict, deadlines. Starting with the tips provided in this chapter, victim service providers should get input from a variety of sources and ask others to review and react to the work. No doubt the new researcher will be amazed at how much is already known, and a considerable array of additional materials will probably be compiled along the way. Victim service providers will learn much from an open-minded reception of methodological, content, and editorial feedback.

Some Final Research Points to Remember

Victim service providers also should be mindful of a few important points:

1. Make sure that the reader has access to both the raw numbers and the proportional representations. Readers should not rely heavily on, for example, percentages without a good sense of the underlying data (which really should be made available). For example, two jurisdictions have claimed a 50% reduction in homicides in the same period. Jurisdiction A fell from 50 homicides to 25, while jurisdiction B fell from 2 to 1. These may be equally significant depending upon the circumstances involved, but they represent quite different things (a drop of 25 homicides versus a drop of 1, despite identical percentage changes).

2. When data are provided graphically (for example, in graphs that show trend lines), look to see whether the graph shows the zero point on the axis. If it does not, there should be a good reason for this, and it should still be clear what the data actually represent.

3. Be wary of trend data that make broad claims from either short spans of time, or from two discrete points in time, as it is easy to manipulate the presentation of data by limiting the focus in this way.

4. Readers should be very skeptical of claims made from studies that have small sample sizes, as small samples limit the precision of estimates and the strength of any generalizations.

5. Victim service providers should be aware of misinterpretations that arise from mishandling proportions in population demographics. Even if groups A and B seem to have the same absolute numbers of victims, if one group is many times the size of the other, then their proportional representation should be stated to gain a truer understanding of the phenomenon. (For example, two ethnic or racial groups may have the same number of homicide victims; however, only within the context of population proportion can these numbers be truly understood. See the brief sketch following this list.)

6. Victim service providers should be skeptical, and not take research at face value. If the author is not convincing that the findings and conclusions drawn from the study make sense, try to articulate what is wrong with the research.
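The short sketch below (invented figures, Python used only as a calculator) illustrates points 1 and 5 above: identical percentage changes can sit on very different raw numbers, and identical raw counts can represent very different rates once population size is taken into account:

    def rate_per_100k(victims, population):
        return victims / population * 100000

    # Point 1: the same 50% drop, built on very different raw numbers.
    print((50 - 25) / 50 * 100)            # 50.0 (a drop of 25 homicides)
    print((2 - 1) / 2 * 100)               # 50.0 (a drop of a single homicide)

    # Point 5: the same raw count, very different rates given population size.
    print(rate_per_100k(100, 2000000))     # 5.0 victims per 100,000 people
    print(rate_per_100k(100, 250000))      # 40.0 victims per 100,000 people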

Victim service providers must be careful not to automatically discount research simply because it does not happen to jibe with their point of view. Research should be read to learn new things as well as to confirm current beliefs. Also, it is important to remember that no study is perfect. This is particularly true in crime victim research, where the demands of ethical treatment of subjects and the limitations on the data that can be gathered often conflict with the rigors of pure research.

As the victims' field expands, those who work in it need to keep up with an ever increasing array of research and other published literature. It is important not to be anxious about delving into this area. Adopting the tips above will help victim service providers stay on the cutting edge and better ensure that their services to, and advocacy for, victims of crime will be current and of high quality.

Sound research should form the basis of developing sound practices that address the needs of the population of victims served. This research should be of good quality and study actual client populations in field settings whenever practical. Indeed, one's reputation, and the credibility of the field as a whole, rests to a significant degree on the field's collective ability to translate good research into quality service provision.

Self Examination Chapter 21, Section 13


Understanding Basic Research and Evaluation

1) Define the following:

a) Variable

b) Operational definitions

c) Randomized study

d) Positive correlation

2) Explain the difference between descriptive and inferential statistics.

3) List several ways in which your program could access or minimize the cost of research and evaluation services.

4) List and discuss three "clever data manipulations" to be wary of.

5) What important information would one need to know about a researcher/author before fully understanding his/her work in the appropriate context?

References

General

Abt, C. (ed.). (1976). The Evaluation of Social Programs. Beverly Hills, CA: Sage.

Caro, F. (ed.). (1977). Readings in Evaluation Research, 2nd ed. New York: Russell Sage.

Cronbach, L. & Associates. (1980). Toward Reform of Program Evaluation. San Francisco: Jossey-Bass.

Fink, A. & Kosecoff, J. (1978). An Evaluation Primer. Beverly Hills, CA: Sage.

Guttentag, M. & Struening, E. (eds.). (1975). Handbook of Evaluation Research, Vol. 2. Beverly Hills, CA: Sage.

Isaac, S. & Michael, W. B. (1971). Handbook in Research and Evaluation. San Diego, CA: EDITS.

Mason, E. & Bramble, W. (1978). Understanding and Conducting Research: Applications in Education and the Behavioral Sciences. New York: McGraw-Hill.

McCain, G. & Segal, E. (1969). The Game of Science. Belmont, CA: Brooks/Cole.

Meyers, W. (1981). The Evaluation Enterprise. San Francisco: Jossey-Bass.

Riecken, H. & Boruch, R. (eds.). (1974). Social Experimentation: A Method for Planning and Evaluating Social Intervention. New York: Academic Press.

Reiss, A. & Roth, J. (1993). Understanding and Preventing Violence, Vols. 1-4. Washington, DC: National Academy Press.

Rossi, P. & Freeman, H. (1982). Evaluation: A Systematic Approach, 2nd ed. Beverly Hills, CA: Sage.

Shortell, S. & Richardson, W. (1978). Health Program Evaluation. St. Louis, MO: C.V. Mosby.

Struening E. & Guttentag, M. (eds.). (1975). Handbook of Evaluation Research, Vol.1. Beverly Hills, CA: Sage.

Suchman, E. (1967). Evaluative Research: Principles and Practice in Public Service and Social Action Programs. New York: Russell Sage.

Weiss, C. (1972). Evaluating Action Programs: Readings in Social Action and Education. Boston: Allyn & Bacon.

Weiss, C. (1972). Evaluation Research: Methods for Assessing Program Effectiveness. Englewood Cliffs, NJ: Prentice-Hall.

Evaluation

American Prosecutors Research Institute. (1996). Measuring Impact: A Guide to Program Evaluation for Prosecutors. Alexandria, VA: Author.

Fetterman, D., Kaftarian, S. & Wandersman, A. (Eds.). (1996). Empowerment Evaluation: Knowledge and Tools for Self Assessment and Accountability. Thousand Oaks, CA: Sage.

Fink, A. & Kosecoff, J. (1977 to present). How to Evaluate Education Programs. Arlington, VA: Capitol Publications.

Tallmadge, G. (1972, October). The Joint Dissemination Panel Ideabook. Mountain View, CA: RMC Research Corporation.

Design and Sampling

Alwin, D. (1978). "Survey Design and Analysis: Current Issues." Sage Contemporary Social Science Issues, 46. Beverly Hills, CA: Sage.

Campbell, D. & Stanley, J. (1963). Experimental and Quasi-experimental Designs for Research. Chicago: Rand McNally.

Jessen, R. (1978). Statistical Survey Techniques. New York: John Wiley.

Rutman, L. (ed.). (1977). Evaluation Research Methods: A Basic Guide. Beverly Hills, CA: Sage.

Williams, B. (1978). A Sampler on Sampling. New York: John Wiley.

Measurement

Cronbach, L. (1970). Essentials of Psychological Testing, 3rd ed. New York: Harper & Row.

Nunnally, J. (1978). Psychometric Theory, 2nd ed. New York: McGraw-Hill.

Whitla, D. K. (ed.). (1968). Handbook of Measurement and Assessment in Behavioral Sciences. Reading, MA: Addison-Wesley.

Analysis of Information

Haack, D. (1979). Statistical Literacy: A Guide to Interpretation. North Scituate, MA: Duxbury.

Johnson, A. (1977). Social Statistics Without Tears. New York: McGraw-Hill.

Vito, G. (1989). Statistical Applications in Criminal Justice. Newbury Park, CA: Sage.

Journals

Journal of Traumatic Stress

Journal of Interpersonal Violence

Violence and Victims

Crime and Delinquency

Criminal Justice and Behavior