The Client Satisfaction Survey

As you will recall from the last section, the Agency Survey looked at Denver's victim service network from the service provider's perspective. This section discusses the Client Satisfaction Survey, which looked at Denver's victim service network from the client's perspective. The Client Satisfaction Survey regarded crime victims as service consumers and tried to determine what their needs were and how well those needs were met.

Prior to VS2000, several client satisfaction survey instruments were already in use in Denver, including both written and telephone interview formats. Surveys from the Denver Police Department's Victim Assistance Unit, the Denver District Attorney's Victim Advocacy Program, the Denver Victims Service Center, and other community-based programs were reviewed by the Client Satisfaction Survey Committee. The committee also reviewed client satisfaction surveys from victim service agencies in other states, including those developed for the Nebraska Crime Commission's statewide needs assessment.

The committee resolved that the client survey should capture both quantitative and qualitative information in four areas: the services received, the accessibility of those services, the response of the criminal justice system (if applicable), and interagency responses. Service information would include what services were received, satisfaction with those services, and services that were needed but not received. Accessibility of the services would be determined by questions about affordability, language appropriateness, and ease of use. The criminal justice response would be measured by questions about reporting, prosecution, and the victims' satisfaction with their interactions with the police and district attorney. Finally, survey questions about referrals that had been given and used would shed light on the appropriateness of interagency response.

Before the survey could be designed, the committee had to resolve several key questions. Should the survey measure the victims' satisfaction with all the services they had received as a result of their victimization? Or, should the survey measure the victims' satisfaction only with services obtained from the agency that provided the survey to the victim? Should the survey form capture information about the services provided to victims within a given timeframe? How could the survey be disseminated to ensure maximum response? How should the survey address the wide variation in victim service programs in terms of types of victims served, types of services provided, amount of client activity, and the ways services are delivered? In other words, should a survey be tailored specifically to each separate agency? Or, could a common survey be adequate? Finally, how could the survey format and method of implementation make victims comfortable and motivated to complete the survey?

One goal of the survey was to collect information that would inform the creation of a seamless network of services for all victims of crime. Having determined that the purpose of the client survey was not to evaluate satisfaction with a specific agency, but with distinct types of services within a network of services, the committee decided that the survey should elicit satisfaction levels with all Denver-area services that the victim had received within a 12-month period.

The population sample for the distribution of the Client Satisfaction Survey was composed of victims who were currently receiving services at Denver victim service agencies. The committee chose this option instead of a random population survey for several reasons. Although any population will include victims of crime, a random sample of a population would also include many nonvictims, who were not being targeted by this survey. In addition, surveys mailed to random population groups are known to have a low return rate. To obtain enough returns, the survey would have to be mailed to a very large number of people, and this would be too costly. So, the decision was made to direct the surveys to victims currently receiving services at Denver victim service agencies.

Agencies were given several options for survey dissemination. The surveys could be given to victims by service providers as part of the exit evaluation process, as a mid-service evaluation, or at any time a provider felt it was appropriate. The committee assumed that because almost all the Denver-area service programs were involved in the VS2000 project, they would be invested in the success of the survey assessment, ensuring that completed surveys were returned. This method allowed clients to complete the surveys in-house; committee members reported that surveys taken home by clients for completion usually were not returned. Although clients were not required to complete the surveys, this approach seemed likely to produce more returns while remaining less intrusive.

The designers tried to make the survey user friendly by keeping it short (a little over three pages) and the language plain and simple, avoiding the kind of terminology and jargon that service providers understand but victims often do not.

The survey was organized into four sections. The first section asked about the crime (or crimes) of which the respondent was a victim, including the type of crime, the year and the city in which it occurred, the first agency or group contacted for help, and whether the crime was reported and prosecuted. The second section addressed overall victim satisfaction with criminal justice and victim service agencies and with service referrals. The third section contained a Service Information Table, much like the one in the Agency Survey, which measured satisfaction level, service gaps, and accessibility. The fourth section requested demographic information.

The survey measured a victim's satisfaction with the justice system and service agencies by asking, "Do you feel you were treated with dignity and respect by the police? By the district attorney's office? By the agencies that provided services for you?" The question was phrased this way to capture victims' feelings about how they were treated, rather than answers unduly colored, negatively or positively, by the outcome of their cases.

The Service Information Table listed 47 services. For each service received, victims were asked to rate their satisfaction and to answer three accessibility questions: Were services provided in their language? Were services affordable? Were services easy to get to or use? As in the Agency Survey, this table allowed several questions to be asked about each type of service in a relatively concise format.

Implementation of the Client Satisfaction Survey

The Client Satisfaction Survey was sent to 70 Denver victim service agencies. Depending on their size, most agencies were asked to return between 5 and 15 surveys within a 2-month period. This deadline was extended for several months because of the low return rate.

The survey included a cover letter stating that the survey was confidential and did not contain the victim's name. The letter also explained the purpose of the assessment, who was conducting it, who would see the results, and how they would be used. To protect confidentiality, envelopes addressed to VS2000 were provided with the surveys; after completing the survey, the respondent could seal it in the envelope for the service agency to mail. For victims who could not read, the agency could offer to conduct the survey as an oral interview, provided the victim did not object to the lack of anonymity. Surveys were also available in Spanish and in Vietnamese, the language of Denver's largest Asian population.

Eleven agencies returned 80 completed surveys. VS2000 staff disseminated additional surveys to community groups. In total, 127 surveys from 11 victim service agencies and 11 community groups were returned to VS2000 for analysis. Because of the small number of returns, survey results were tabulated by hand.

Results of the Client Satisfaction Survey

Survey analysis revealed that client satisfaction with services was high.

Overwhelmingly, the survey responses revealed that services were language appropriate, affordable, and easily accessible. The primary unmet service needs were crisis intervention at the crime scene; victim assistance at the crime scene; emergency financial assistance; victims' rights information; protection and safety services; and updates on the status of one's case. Among other findings, the survey revealed that 46 percent of respondents felt they were treated with dignity and respect by the police, 33 percent by the prosecutor (51 percent answered "not applicable"), and 61 percent by service agencies (21 percent answered "not applicable").

The results of the survey were skewed somewhat by the low return rate and the fact that only 11 agencies were represented. Some agencies were also overrepresented, even after taking size into account. For instance, one agency returned 20 surveys, which was appropriate for the number of clients it served, yet other programs with an equally large client base returned only one or two surveys.

During and after the implementation of the Client Satisfaction Survey, it became clear that the low return rate was due not to victims' reluctance but to resistance by service providers. The survey was an inconvenience to providers, requiring more paperwork, more coordination, and more time. In agencies and programs that did not already conduct periodic evaluations or assessments as part of their routine procedures, it was difficult for staff to administer the survey. Another, and perhaps the primary, cause of the providers' resistance was their feeling that the survey represented another layer of intrusion and stress for their clients. It is interesting to note, however, that in later focus groups, victims stated unequivocally that they wanted to be asked about their opinions, experiences, needs, and suggestions.

In the final analysis, the results of the Client Satisfaction Survey were less valuable than those of the Agency Survey and the Victim Focus Groups because of the low return rate and because the responses were not truly representative of the crime victims receiving services in Denver. Consequently, the Client Satisfaction Survey had less influence than the Agency Survey or the Victim Focus Groups on the development of the model.

Evaluation: Suggested Changes to the Content and Method of the Client Satisfaction Survey

In retrospect, the committee decided that changing the implementation of the Client Satisfaction Survey might result in a higher return rate and more meaningful data. Steps had been taken to "mentor" service providers throughout the survey process: committee members were assigned agencies to call periodically to ask how the implementation was proceeding, answer questions, and suggest possible methods for conducting the survey. These mentors were usually told that everything was fine and proceeding on course, yet completed surveys were not forthcoming. It became clear that communication by telephone was too removed and impersonal. The survey effort might have been more effective if VS2000 staff and committee members had met with agency staff in person to explain the survey and help with the implementation process.

As the client surveys were tabulated and analyzed, problems with the survey format were discovered. The most obvious problem was the service table, which was too complicated for many respondents to understand and led them to provide contradictory information. Many surveys had to be discarded because the contradictory responses made it impossible to determine what the respondents meant. Perhaps asking the question, "How could services be made better?" and providing a blank space for the answer would have generated more valuable information; however, the returned surveys showed that respondents, for the most part, left the other comment spaces blank.

Although the results of the Client Satisfaction Survey were disappointing, the assessment process did highlight the need for a systematic means to obtain ongoing client feedback.



OVC Bulletin, October 2000
Denver Victim Services 2000 Needs Assessment