Trial Court Performance Standards and Measurement System Implementation Manual

Series: BJA Monograph
Published: July 1997
Author: Bureau of Justice Assistance
NCJ 161567

U.S. Department of Justice
Office of Justice Programs
Bureau of Justice Assistance

To receive copies of the forms that accompany this text, please contact Jennifer Knobe, Bureau of Justice Assistance, at (202) 616-3212.

------------------------------

Foreword

Developing a common language for describing, classifying, and measuring the performance of trial courts was the goal of an 8-year effort, the Trial Court Performance Standards Project, initiated in 1987 by the National Center for State Courts and the Bureau of Justice Assistance (BJA). The Trial Court Performance Standards and Measurement System is the result of that effort. Crafted by a commission of leading trial judges, court managers, and scholars and demonstrated in trial courts across the Nation, the measurement system is an invaluable resource for enhancing a court's ability to provide fair and efficient adjudication and disposition of cases. Because many trial courts lack the resources to create a mechanism for self-evaluation, the project is critical to improving the administration of justice on the basis of universally accepted performance standards.

This publication is a detailed guide to implementing the Trial Court Performance Standards and Measurement System. It is intended for judges, court managers, lawyers, policymakers, court staff, and other professionals who will participate in the implementation process. It is our hope that every trial court in the Nation will use this guide and its companion publications to begin improving access to justice and its administration with equality, integrity, and timeliness.

Nancy E. Gist
Director

------------------------------

Preface

The Trial Court Performance Standards and Measurement System is the culmination of an 8-year initiative begun in 1987 by the Commission on Trial Court Performance Standards to develop measurable performance standards for the Nation's State trial courts. The Commission first created the Trial Court Performance Standards, which set forth standards of performance for trial courts in five performance areas:

o Access to Justice
o Expedition and Timeliness
o Equality, Fairness, and Integrity
o Independence and Accountability
o Public Trust and Confidence

The Commission's next challenge was to provide trial courts with a systematic and sound means to examine how well they achieve these performance standards. To meet this challenge, the Commission and the Trial Court Performance Standards Project staff developed a set of measures for assessing trial court performance. Twelve trial courts in Ohio, New Jersey, Virginia, and Washington subsequently tested the measures during a 4-year demonstration. In addition to the Commission and project staff, more than 100 personnel of the demonstration courts, as well as program monitors of the Bureau of Justice Assistance and the State Justice Institute, contributed to the demonstration process. This extensive and collaborative undertaking was necessary to ensure that the measures, many of which are complex and novel, are operable in court settings, do not consume unreasonable amounts of resources, and produce information courts can use readily.
The resulting measurement system is intended to be a versatile tool for self-assessment and improvement, and not a means for evaluating the performance of individuals or for drawing comparisons across courts. The measurement system attempts to balance practicality and economy with precision and scientific rigor. The 68 measures within the system accomplish this balance to varying degrees, but a workable measurement system should be viewed as evolutionary and subject to continuing development. The Commission expects that as trial courts implement the system, their experiences will inspire innovative approaches that improve both the individual measures and the system as a whole.

Although implementing the measurement system requires dedication, perseverance, and flexibility from a trial court's staff, the undertaking is critical to effective court performance. The Commission commends the Trial Court Performance Standards and Measurement System to the court community as a useful set of tools for conducting self-evaluation and for engaging in the worthy pursuit of improving public service.

------------------------------

Acknowledgments

The Bureau of Justice Assistance would like to thank the Commission on Trial Court Performance Standards for its dedication and vision in guiding the Trial Court Performance Standards to their fruition. Commission members include the following individuals:

Honorable Robert C. Murphy, Chair, Chief Judge (Retired), Court of Appeals of Maryland, Towson, Maryland
Honorable Rebecca A. Albrecht, Associate Presiding Judge, Superior Court of Arizona, Maricopa County, Phoenix, Arizona
Robert N. Baldwin, State Court Administrator, Supreme Court of Virginia, Richmond, Virginia
Carl F. Bianchi, Director of Legislative Services, Legislative Services Office, Boise, Idaho
Honorable Robert C. Broomfield, Chief Judge, U.S. District Court, District of Arizona, Phoenix, Arizona
John A. Clarke, Executive Officer/Clerk, Los Angeles Superior Court, Los Angeles, California
Judith A. Cramer, Manager, Weed & Seed Neighborhood Revitalization Program, Tampa, Florida
Honorable Anne L. Ellington, Assistant Presiding Judge, King County Superior Court, Seattle, Washington
Howard Hanson, County Clerk/Superior Court Administrator, Marin County, San Rafael, California
Robert D. Lipscher, Administrative Director (Retired), New Jersey Administrative Office of the Courts, Trenton, New Jersey
Edward B. McConnell (ex officio), President Emeritus, National Center for State Courts, Williamsburg, Virginia
Doris Marie Provine, Chair, Department of Political Science, Syracuse University, Syracuse, New York
Honorable Henry Ramsey, Jr., Dean, Howard University School of Law, Washington, D.C.
Honorable Leo M. Spellacy, Judge, Ohio Court of Appeals, Eighth District, Cleveland, Ohio
Whitfield Smith (1987-1991), Clerk of Court (Former), Superior Court for DeKalb County, Georgia, Decatur, Georgia
Honorable Fred B. Ugast, Chief Judge (Retired), Superior Court of the District of Columbia, Washington, D.C.

Bureau of Justice Assistance, U.S. Department of Justice
Marilyn Nejelski, Program Manager
Charles Hollis, Chief, Adjudication Branch

National Center for State Courts
Sally T. Hillsman, Vice President (1992-1995)
Geoff Gallas, Vice President (1987-1991)

Trial Court Performance Standards Project Staff
Pamela Casey, Director
Ingo Keilitz,* Director
Hillery Efkeman
Margaret Fonner
John Goerdt
Thomas Hafemeister*
Roger Hanson
William Hewitt
Brenda Jones*
Susan Keilitz
Fred Miller*
Beatrice Monahan*
Pamela Petrakis*
David Rottman*

*denotes former project staff

In addition to those listed above, numerous individuals at the National Center for State Courts and elsewhere gave generously of their time to assist the development and initial testing of the Trial Court Performance Standards and Measurement System, including Stevalynn Adams, David Aday, Carl Baar, Kent Batty, Richard Berk, Chuck Campbell, Joy Chapper, George Cole, Hank Daley, Michael Dann, Tom Dibble, Chris Duncan, Bill Fishback, Gene Flango, Sandy Garcia, Debbie Gause, George Gish, Gordon Griller, Mary Hogan, Cindy Huffman, Michael Jeans, Lynn Jordaans, Carl Kessler, Kay Knapp, Gerald Kuban, Monica Lee, Chris Lomvardias, Kay Loveland, Jennifer Rae Lovko, Robert Lowe, James Lynch, Barry Mahoney, Mary McCall, Craig McEwen, Jan Michaels, Barbara Meierhoefer, Tom Munsterman, Raymond Nimmer, Jessica Pearson, Mike Planet, Maryann Rondeau, Jane Raynes, Teresa Risi, Dalton Roberson, Ronald Rosenberg, Jeffrey Roth, Fred Rusillo, Hisako Sayers, Bob Tobin, Anne Walker, Patricia Wall, Steven Wasby, Joan White, Matt Williams, and Robert Williams.

We also are indebted to the many individuals in the 4 pilot States and the 12 courts for the countless hours and invaluable insights they contributed during the project's 4-year demonstration phase: in New Jersey, Robert D. Lipscher and Theodore Fetter, Administrative Office of the Courts--Atlantic County Superior Court, Burlington County Superior Court, Morris County Superior Court, Ocean County Superior Court, and Somerset County Superior Court; in Ohio, Stephan W. Stover and Ruth Ann Elmer, Supreme Court of Ohio--Meigs County Court of Common Pleas, Stark County Court of Common Pleas, and Wayne County Court of Common Pleas; in Virginia, Robert N. Baldwin and Beatrice P. Monahan, Office of the Administrator for the Courts--Fairfax County Circuit Court; and in Washington, Mary Campbell McQueen and Yvonne Pettus, Office of the Administrator for the Courts--Spokane County Superior Court, Thurston County Superior Court, and Whatcom County Superior Court.

We also gratefully acknowledge Richard Van Duizend, Deputy Director of the State Justice Institute (SJI), for his commitment to the project and SJI's financial support of many demonstration phase activities. The Commission on Trial Court Performance Standards, the National Center for State Courts, and the Bureau of Justice Assistance thank these individuals and the many individuals not named here who assisted in the Trial Court Performance Standards Project over its 8-year duration.
------------------------------

Table of Contents

Introduction
  History of the Trial Court Performance Standards and Measurement System
  Developing the System's Measurement Component
  Using the Measurement System
  Organization of This Implementation Manual

Planning to Use the Trial Court Performance Standards and Measurement System

Performance Area 1: Access to Justice
  Standard 1.1: Public Proceedings
    Measure 1.1.1: Access to Open Hearings
    Measure 1.1.2: Tracking Court Proceedings
    Measure 1.1.3: Audibility of Participants During Open Court Proceedings
  Standard 1.2: Safety, Accessibility, and Convenience
    Measure 1.2.1: Courthouse Security Audit
    Measure 1.2.2: Law Enforcement Officer Test of Courthouse Security
    Measure 1.2.3: Perceptions of Courthouse Security
    Measure 1.2.4: Court Employees' Knowledge of Emergency Procedures
    Measure 1.2.5: Access to Information by Telephone
    Measure 1.2.6: Evaluation of Accessibility and Convenience by Court Users
    Measure 1.2.7: Evaluation of Accessibility and Convenience by Observers
  Standard 1.3: Effective Participation
    Measure 1.3.1: Effective Legal Representation of Children in Child Abuse and Neglect Proceedings
    Measure 1.3.2: Evaluation of Interpreted Events by Experts
    Measure 1.3.3: Test of Basic Knowledge Required of Interpreters
    Measure 1.3.4: Assessing Non-English Language Proficiency Through Back Interpretation
    Measure 1.3.5: Participation by Persons with Disabilities
  Standard 1.4: Courtesy, Responsiveness, and Respect
    Measure 1.4.1: Court Users' Assessment of Court Personnel's Courtesy and Responsiveness
    Measure 1.4.2: Observers' Assessment of Court Personnel's Courtesy and Responsiveness
    Measure 1.4.3: Treatment of Litigants in Court
  Standard 1.5: Affordable Costs of Access
    Measure 1.5.1: Inventory of Assistance Alternatives for the Financially Disadvantaged
    Measure 1.5.2: Access to Affordable Civil Legal Assistance
    Measure 1.5.3: Barriers to Accessing Needed Court Services

Performance Area 2: Expedition and Timeliness
  Standard 2.1: Case Processing
    Measure 2.1.1: Time to Disposition
    Measure 2.1.2: Ratio of Case Dispositions to Case Filings
    Measure 2.1.3: Age of Pending Caseload
    Measure 2.1.4: Certainty of Trial Dates
  Standard 2.2: Compliance With Schedules
    Measure 2.2.1: Prompt Payment of Moneys
    Measure 2.2.2: Provision of Services
    Measure 2.2.3: Provision of Information
    Measure 2.2.4: Compliance With Reporting Schedules
  Standard 2.3: Prompt Implementation of Law and Procedure
    Measure 2.3.1: Implementation of Changes in Substantive and Procedural Law
    Measure 2.3.2: Implementation of Changes in Administrative Procedures

Performance Area 3: Equality, Fairness, and Integrity
  Standard 3.1: Fair and Reliable Judicial Process
    Measure 3.1.1: Performance in Selected Areas of Law
    Measure 3.1.2: Assessment of Court Performance in Applying the Law
  Standard 3.2: Juries
    Measure 3.2.1: Inclusiveness of Jury Source List
    Measure 3.2.2: Random Jury Selection Procedures
    Measure 3.2.3: Representativeness of Final Juror Pool
  Standard 3.3: Court Decisions and Actions
    Measure 3.3.1: Evaluations of Equality and Fairness by the Practicing Bar
    Measure 3.3.2: Evaluations of Equality and Fairness by Court Users
    Measure 3.3.3: Equality and Fairness in Sentencing
    Measure 3.3.4: Equality and Fairness in Bail Decisions
    Measure 3.3.5: Integrity of Trial Court Outcomes
  Standard 3.4: Clarity
    Measure 3.4.1: Clarity of Judgment and Sentence
    Measure 3.4.2: Clarity of Civil Judgments
    Measure 3.4.3: Experience in Interpreting Orders and Judgments
  Standard 3.5: Responsibility for Enforcement
    Measure 3.5.1: Payment of Fines, Costs, Restitution, and Other Orders by Probationers
    Measure 3.5.2: Child Support Enforcement
    Measure 3.5.3: Civil Judgment Enforcement
    Measure 3.5.4: Enforcement of Case Processing Rules and Orders
  Standard 3.6: Production and Preservation of Records
    Measure 3.6.1: Reliability of the File Control System
    Measure 3.6.2: Adequate Storage and Preservation of Physical Records
    Measure 3.6.3: Accuracy, Consistency, and Utility of the Case Docket System
    Measure 3.6.4: Case File Integrity
    Measure 3.6.5: Reliability of Document Processing
    Measure 3.6.6: Verbatim Records of Proceedings

Performance Area 4: Independence and Accountability
  Standard 4.1: Independence and Comity
    Measure 4.1.1: Perceptions of the Court's Independence and Comity
    Suggested Steering Committee Activities for Standard 4.1
  Standard 4.2: Accountability for Public Resources
    Measure 4.2.1: Adequacy of Statistical Reporting Categories for Resource Allocation
    Measure 4.2.2: Evaluation of Personnel Resource Allocation
    Measure 4.2.3: Evaluation of the Court's Financial Auditing Practices
    Suggested Steering Committee Activities for Standard 4.2
    Other Related Considerations for Standard 4.2
  Standard 4.3: Personnel Practices and Decisions
    Measure 4.3.1: Assessment of Fairness in Working Conditions
    Measure 4.3.2: Personnel Practices and Employee Morale
    Measure 4.3.3: Equal Employment Opportunity
    Suggested Steering Committee Activities for Standard 4.3
  Standard 4.4: Public Education
    Measure 4.4.1: Court and Media Relations
    Measure 4.4.2: Assessment of the Court's Media Policies and Practices
    Measure 4.4.3: Community Outreach Efforts
    Suggested Steering Committee Activities for Standard 4.4
  Standard 4.5: Response to Change
    Measure 4.5.1: Responsiveness to Past Issues
    Suggested Steering Committee Activities for Standard 4.5

Performance Area 5: Public Trust and Confidence
  Standard 5.1: Accessibility
    Measure 5.1.1: Court Employees' Perceptions of Court Performance
    Measure 5.1.2: Justice System Representatives' Perceptions of Court Performance
    Measure 5.1.3: General Public's Perceptions of Court Performance
  Standard 5.2: Expeditious, Fair, and Reliable Court Functions
  Standard 5.3: Judicial Independence and Accountability

Appendix A: Bibliography
Appendix B: Sources for Further Information
Appendix C: Forms for Implementing the Trial Court Performance Standards and Measurement System

List of Tables and Figures

Tables
  Table 1: Summary of Measures
  Table 2: Summary of Measures by Primary Data Collection Method

Figures
  Figure 1: Case Disposition Time Standards Adopted by the Conference of State Court Administrators, the Conference of Chief Justices, and the American Bar Association
  Figure 2: List of Problems Affecting Judicial Independence That Might Be Produced Using Nominal Group Technique
  Figure 3: Relationship Between Records Staff Support, Judges, and Case Categories
  Figure 4: Resource Allocation Model

List of Forms

Performance Area 1: Access to Justice
  Standard 1.1: Public Proceedings
    Form for 1.1.1: Record of Access to Courtroom
    Form for 1.1.2: Tracking Court Proceedings
    Form for 1.1.3: Courtroom Audibility Evaluation Form
  Standard 1.2: Safety, Accessibility, and Convenience
    Form for 1.2.1: National Sheriffs' Association Physical Security Checklist
    Form for 1.2.3: Survey of Courthouse Security
    Form for 1.2.4: Interview Protocol on Emergency Procedures
    Form for 1.2.5: Access to Information by Telephone Directions and Recording Sheet
    Form for 1.2.6 and 1.2.7: Accessibility and Convenience of the Court
  Standard 1.3: Effective Participation
    Form for 1.3.1: Evaluation of Legal Representation of Children in Child Abuse and Neglect Proceedings
      (a) Case Data Collection
      (b) Judge
      (c) Guardian ad litem
      (d) Caseworker
    Form for 1.3.2: Evaluation of Interpreter Services
    Form for 1.3.3: Court Interpreter Terminology, Procedure, Protocol, and Ethics Fundamentals Test
    Form for 1.3.5: Access to Courthouse Facilities by Individuals With Disabilities
  Standard 1.4: Courtesy, Responsiveness, and Respect
    Form for 1.4.1 and 1.4.2: Questionnaire for Courteous and Responsive Treatment
    Form for 1.4.3: Recording Form for the Treatment of Litigants in Court
  Standard 1.5: Affordable Costs of Access
    Form for 1.5.1: A Checklist of Court Activities To Promote Affordable Access to Justice
    Form for 1.5.2: Access to Affordable Civil Legal Assistance Illustrative Data Collection Form
    Form for 1.5.3: Refer to Form for 5.1.3

Performance Area 2: Expedition and Timeliness
  Standard 2.1: Case Processing
    Form for 2.1.1: Generic Civil/Criminal Case Data Collection Form
      (a) Generic Civil Case Data Collection Form
      (b) Code Sheet--Civil Cases
      (c) Generic Criminal Case Data Collection Form
      (d) Code Sheet--Criminal Cases
    Form for 2.1.2: Ratio of Dispositions to Filings (Clearance Rate) Worksheet
    Form for 2.1.3: Display Tables--Age of Pending Caseload
    Form for 2.1.4: Civil Jury Trial Settings--Data Collection Form
      (a) Civil Jury Trial Settings--Data Collection Form
      (b) Civil Jury Trial Continuance Rate Worksheet
  Standard 2.2: Compliance With Schedules
    Form for 2.2.2: Provision of Services Forms
      (a) Provision of Services Data Collection Form
      (b) Checklist of Services Required in ABA Standards
    Form for 2.2.3: Information Request Data Collection Form
    Form for 2.2.4: Generic List of Court Activity Reporting
      (a) Generic List of Court Activity Reporting
      (b) Data Collection Form--Compliance With Reporting Schedules
      (c) Data Summary Report for Overall Court Compliance With Reporting Schedules
  Standard 2.3: Prompt Implementation of Law and Procedure
    No Forms

Performance Area 3: Equality, Fairness, and Integrity
  Standard 3.1: Fair and Reliable Judicial Process
    Form for 3.1.2: Illustrative Questions for Measuring Court Employees' and Attorneys' Assessments of Fidelity to the Law
  Standard 3.2: Juries
    No Forms
  Standard 3.3: Court Decisions and Actions
    Form for 3.3.1: Illustrative Questionnaire Concerning the Practicing Bar's Views of the Court's Equality and Fairness
    Form for 3.3.2: Illustrative Questionnaire Concerning the Users' View of the Court's Equality and Fairness
    Form for 3.3.3: Illustrative Sentencing Data Collection Form
    Form for 3.3.4: Illustrative Bail Decision Data Collection Form
    Form for 3.3.5: Illustrative Data Collection Form for Outcomes of Criminal Appeals
  Standard 3.4: Clarity
    Form for 3.4.1: Illustrative Data Collection Form--Clarity of Judgment and Sentence
    Form for 3.4.2: Illustrative Data Collection Form--Clarity of Civil Judgments
    Form for 3.4.3: Illustrative Questionnaire Form--Experience in Interpreting Orders and Judgments
  Standard 3.5: Responsibility for Enforcement
    Form for 3.5.1:
      (a) Illustrative Data Elements for Measuring Enforcement of Probationary Orders
      (b) Illustrative Probationary Enforcement Data Collection Form
    Form for 3.5.2: Illustrative Data Elements for Measuring Enforcement of Child Support Orders/Illustrative Child Support Enforcement Data Collection Form
  Standard 3.6: Production and Preservation of Records
    Form for 3.6.1: Illustrative Data Collection Form--The Reliability of the File System
    Form for 3.6.2: Illustrative Data Collection Form--Adequate Storage and Preservation of Physical Records
    Form for 3.6.3: Illustrative Data Collection Form--Accuracy, Consistency, and Utility of the Case Docket System
    Form for 3.6.4: Illustrative Data Collection Form--Case File Integrity
    Form for 3.6.5: Illustrative Data Collection Form--Reliability of Document Processing
    Form for 3.6.6: Illustrative Questionnaire--Verbatim Records of Proceedings

Performance Area 4: Independence and Accountability
  Standard 4.1: Independence and Comity
    Form for 4.1.1: Questionnaire Regarding the Independence of the Judiciary and Intergovernmental Relationships
  Standard 4.2: Accountability for Public Resources
    Form for 4.2.3: Auditing Practices Checklist and Performance Index
  Standard 4.3: Personnel Practices and Decisions
    Form for 4.3.1: Illustrative Position Groupings and Schedule
    Form for 4.3.2: Employee Survey on Personnel Practices and Employee Morale
    Form for 4.3.3:
      (a) Illustrative Data Collection Form for Personnel Information
      (b) Illustrative Summary Statistical Report on Race and Gender Mix among Employees
  Standard 4.4: Public Education
    Form for 4.4.1: Checklist for Court Policy Governing Response to Media Inquiries
    Form for 4.4.2: Illustrative Survey Form
      (a) For Media Representatives Regarding Court and Media Relations
      (b) For Court Employees Regarding Court and Media Relations
    Form for 4.4.3: Checklist of Potential Community Outreach Efforts
      (a) Organizational Efforts
      (b) Individual Efforts
  Standard 4.5: Response to Change
    No Forms

Performance Area 5: Public Trust and Confidence
  Standard 5.1: Accessibility
    Form for 5.1.1: Court Employees' Perceptions of Court Performance
    Form for 5.1.3: Public Perceptions of Court Performance
  Standard 5.2: Expeditious, Fair, and Reliable Court Functions
    No Forms--refer to Forms for 5.1.1 and 5.1.3
  Standard 5.3: Judicial Independence and Accountability
    No Forms--refer to Forms for 5.1.1 and 5.1.3

------------------------------

Introduction

This implementation manual provides trial courts with both the rationale and detailed instructions for implementing the Trial Court Performance Standards and Measurement System. This introduction provides an overview of the development and application of that system. It is intended to help courts translate the philosophy of the system into its practical application.

History of the Trial Court Performance Standards and Measurement System

The Trial Court Performance Standards and Measurement System expresses a new philosophy and framework for defining and understanding the effectiveness of trial courts by focusing attention on performance, self-assessment, and self-improvement. The 22 standards in the system establish goals for effective court performance in five areas: access to justice; expedition and timeliness; equality, fairness, and integrity; independence and accountability; and public trust and confidence. The measurement component consists of 68 field-tested measures for evaluating how well the court is meeting these performance standards. The Trial Court Performance Standards and Measurement System is an approach to self-assessment that courts can adapt to meet their individual needs; it is neither intended nor suited for comparing performance across courts. A hallmark of the system is its emphasis on the systematic assessment of a trial court's performance as a service organization and on the application of those findings to improve performance.
This assessment applies to the court as a whole and does not include individual performance evaluations. The court is viewed as a system involving processes and tasks that are linked together and affect one another. The collective work of the court involves not only judges, but all who perform administrative court functions, including clerks of court, administrators, probation officers, and other court staff, as well as private lawyers, public defenders, prosecutors, and social service providers.

The Trial Court Performance Standards and Measurement System is the major product of the Trial Court Performance Standards Project. The National Center for State Courts (NCSC) and the Bureau of Justice Assistance (BJA), U.S. Department of Justice, initiated the project in August 1987 to develop measurable performance standards for State trial courts. The impetus for the enterprise was the recognition of the need for State trial courts to increase their capacity to provide fair and efficient adjudication and disposition of cases.

To carry out the mission of the Trial Court Performance Standards Project, the NCSC established the Commission on Trial Court Performance Standards (Commission). Composed of trial judges, court managers, and scholars, the Commission formulated, deliberated, and generated the measurement system with assistance from the project staff. The initial work of the Commission progressed over 3 years, culminating in 1990 with the publication of Trial Court Performance Standards With Commentary, which has been endorsed by the Conference of Chief Justices, the Conference of State Court Administrators, and the National Association for Court Management, and adapted by the National College of Probate Judges. During the ensuing 4 years, trial courts in Ohio, New Jersey, Virginia, and Washington applied the standards and tested the utility and feasibility of the system's measures. BJA continued funding to NCSC to coordinate and assist the demonstration phase of the Standards Project, while the State Justice Institute supported the demonstrations in the four States.

Developing the System's Measurement Component

The Trial Court Performance Standards and Measurement System provides the tools for assessing the extent to which a court meets the performance criteria set forth in the 22 performance standards. These tools consist of procedures for systematically gathering and analyzing quantitative and qualitative data and for drawing conclusions from the data to identify areas in need of attention or improvement.

Field Testing. As the measurement system evolved, 75 measures were developed, tested, and refined by the Standards Commission and Standards Project staff. Trial courts in Arizona, Michigan, and Ohio contributed to this process by serving as test sites for the draft measures. Following the research and development phase, a 4-year demonstration phase commenced in Ohio, New Jersey, Virginia, and Washington. The demonstration phase of the Standards Project has been crucial to the widespread acceptance and use of the system. Twelve trial courts in 4 States participated in the demonstration, and each of the 75 measures was tested in at least one court.[1] Most measures were tested in two or more of the courts. The 12 trial courts varied on a number of factors, including size, organization, jurisdiction, funding source, demographic and economic context, and, of course, State law and court rules.
This variation across the courts provided the opportunity to test the measures under diverse conditions and produced a rich body of information relevant to the application of the measurement system in other trial courts throughout the country. As the demonstration proceeded, the Standards Commission and Standards Project staff reviewed and revised the measures to reflect the experiences of these trial courts in implementing the measurement system. Along the way, the original 75 measures were refined to a set of 68. The outcome of this comprehensive and cooperative undertaking is the system presented in this manual.

Using the Measurement System

Purposes of Measurement. The Trial Court Performance Standards and Measurement System defines a philosophy that encourages trial courts to conduct regular self-assessments and improvements, treating them as routine court administrative activities. To this end, the system's measurement component is designed to gather information that the court can use in a variety of ways, including budgeting, case management, implementing court improvement projects, and strategic planning. The initial application of the measures aids the court in identifying areas requiring attention or potentially in need of improvement. The measures also may be used to establish benchmarks with regard to court performance on each standard the court wishes to address. Subsequently, the court can use the measures to determine whether its performance with respect to a particular standard is better, about the same, or worse than when the measures were originally applied. The information gathered through the measures also is helpful in determining whether the court's prior improvement efforts have been successful or need to be altered in some way.

Nature of the Measures. Some measures and their specific methods build on others and should be conducted in a particular sequence. For example, in Standard 2.1: Case Processing, Measures 2.1.1, Time to Disposition, and 2.1.2, Ratio of Case Dispositions to Case Filings, examine case processing times and case clearance rates. If these measures indicate that average case processing times exceed State or local standards or that clearance rates are not keeping pace with the incoming caseload, the court should proceed to Measure 2.1.3, Age of Pending Caseload, to determine whether a case backlog exists and, if so, to ascertain its nature and extent. Other measures and methods stand alone and can be applied independently. Furthermore, some measures, such as Measure 1.1.3, Audibility of Participants During Open Court Proceedings, are relatively easy to apply while others, such as Measure 3.3.3, Equality and Fairness in Sentencing, are more complex and time consuming.

Measurement Methods. The measurement system employs numerous data-gathering methods and taps diverse data sources. The data sources and collection methods used include both familiar processes, such as court and case record reviews and tallies of case filings and dispositions, as well as other social science techniques used less commonly by courts, such as systematic observations, structured interviews, surveys of various reference groups, simulations, group techniques, and public opinion polls. Just as the measurement techniques vary, different types of evaluators are employed depending on the object of the measure.
For example, volunteers conduct structured observations of court proceedings and simulations of public access to information, while court staff conduct many of the measures involving record reviews. A few measures are best carried out by consultants or court staff with expertise in areas such as data analysis.

Following this introduction, two tables provide information designed to help the reader understand the measurement system. Table 1: Summary of Measures lists the specific measures associated with each standard, the primary data collection method (how the measure is applied), the primary evaluators (who should apply the measure), and the source of data (the subject of the measurement). The 3-digit number identifying each measure in Table 1 provides a key to the measure's place in the measurement system. The first digit denotes the performance area, the second denotes the standard within the area, and the third refers to the measure associated with the standard.

Table 2: Summary of Measures by Primary Data Collection Method provides a different perspective of the measurement system. It displays the various measurement methods and lists the individual measures that employ each method. The reader can determine from Table 2 if two or more measures the court intends to implement use the same methods and sources of data. For example, Measure 1.2.3, Perceptions of Courthouse Security, and Measure 1.2.6, Evaluation of Accessibility and Convenience by Court Users, both survey court employees. Consequently, the court can economize by distributing one questionnaire to court employees to accomplish the data collection for both measures.

The measurement methods most commonly recommended in the system are summarized below. They are described more fully in this guide in the instructions for conducting the individual measures.

Court Record Reviews and Case Data Examination. Because a primary function of courts has been to make and preserve records of civil and criminal matters as well as court operations, court and case record reviews are the most traditional and familiar of the measurement methods. Thirty-two of the measures entail court case and record reviews. These reviews require staff to consult case files, docket sheets, case summary screens in automated systems, and administrative reports. Many of the record review measures are very time consuming, even for courts with automated systems (e.g., Measure 2.1.1, Time to Disposition), but only a few of the measures require knowledge of advanced analysis techniques (e.g., Measure 3.3.3, Equality and Fairness in Sentencing). Because these reviews provide primarily quantitative information, they are more objective in evaluating the court's performance than are surveys and interviews, which usually report the perceptions of the respondents. The results provide insight into areas such as caseflow and case file management practices, compliance with procedural reporting requirements, and timeliness in implementing changes in laws and procedures.

Observations and Simulations. The measurement system incorporates several measures that involve observations of court proceedings or simulations of court activities and interactions with court staff. Court personnel can perform a few of these measures, such as rating the audibility of court proceedings (Measure 1.1.3). Many of the measures, however, attempt to simulate the experiences of people who have business in the court only occasionally.
These measures require volunteer observers who are unfamiliar with the court system, court procedures, or courthouse facilities. Examples of the activities performed by volunteers include gaining entrance to court proceedings that should be open to the public (Measure 1.1.1), obtaining information about the status of scheduled proceedings (Measure 1.1.2), requesting information about the time and location of a court proceeding by telephone (Measure 1.2.5), and checking the accessibility of court facilities and services for persons with disabilities (Measure 1.3.5).

Surveys and Questionnaires. Eighteen measures incorporate the use of surveys. The surveys seek a variety of information from different court constituencies, including employees, attorneys, jurors, and the general public. Some of the information gathered is factual, such as demographic information about jurors (Measure 3.2.3). Most of the surveys, however, are designed to gauge opinions on topics such as the accessibility and convenience of court facilities and services (Measures 1.2.6 and 1.2.7) or the fairness and equality of court proceedings and actions (Measures 3.3.1 and 3.3.2).

Many of the surveys seek basic demographic information from survey respondents, such as gender, age, and relationship to the court. For some measures, this information is critical in comparing the attitudes of different groups of court users responding to the survey. However, in some instances, requesting demographic data may result in a lower response rate if respondents believe their anonymity is threatened. This is particularly true in small jurisdictions. Consequently, the court needs to weigh the value of obtaining the information against the possibility of losing some respondents.

Several of the surveys developed for the measurement system were adapted from instruments used by other organizations for similar assessment purposes. For instance, Measure 1.2.3 uses a survey for assessing courthouse security adapted from the National Crime Survey--Attitude Questionnaire and from the National Crime Survey--Basic Screening Questionnaire. Another example is Measure 5.1.3's survey for gauging public trust and confidence in the courts, which was drawn from other surveys of the public's perceptions of the justice system.[2]

Interviews. In addition to surveys, the measurement system employs interviews to gather information and opinions from court staff and court users. In some measures, the surveys and interviews are offered as alternative approaches, while in others the two are used in tandem, such as a survey followed by a focus group. Interviews are used as a primary method of data collection when a measure calls for more detailed responses than a written survey might yield, such as in assessing employees' familiarity with emergency procedures (Measure 1.2.4). More typically, interviews are used to collect background information when preparing to conduct a measure (e.g., interviewing the court records manager about case file storage procedures) or to gather followup or clarifying information after collecting data in a different format (e.g., after a records review or survey). Interviews with court employees may be the most efficient way to gather information when court policies are governed less by written documents than by unwritten practices and rules (e.g., Measure 4.4.2).

Group Techniques. Group techniques are used in five measures.
These techniques include review panels composed of knowledgeable practitioners (Measure 3.1.1, Performance in Selected Areas of Law) and more structured interactions that require a facilitator to guide the group through the activity. Examples of techniques in the latter group are Nominal Group Technique (NGT) and Ideawriting. NGT is used to generate and select among ideas and to make decisions. Ideawriting is a method for developing ideas and works well for groups that communicate well in writing. These techniques are used primarily in measures for Performance Area 4, such as Measure 4.2.1, Adequacy of Statistical Reporting Categories for Resource Allocation. Although the techniques are not difficult to implement, readers planning to use them should consult a text on group techniques such as Group Techniques for Idea Building by Carl Moore (volume 9 of the Applied Social Science Research Series from Sage Publications; see Appendix A for more information).

Organization of This Implementation Manual

The organization of this manual follows that of the measurement system. The measurement approaches associated with each standard in the five performance areas are described in separate sections. The titles, text, and commentary of the standards are followed by a brief overview of the measures, methods, and techniques associated with them. The overviews are designed to assist the reader in understanding the general approach and requirements for the measures without studying the detailed prescriptions for applying them. Specific measures and data collection forms follow each standard and measurement overview.

Applying the Measures. The description of each measure has four parts. First, an introductory section explains the measure's purpose and how it relates to its associated standard. Next, a planning/preparation section details any preparatory work that is necessary to apply the measure. For example, a measure involving a survey identifies the individuals or groups who should be included in the survey sample; a record review measure designates which case files are to be examined. This section also indicates whether certain individuals in the court should be consulted before conducting a measure or if the services of an expert are recommended to assist court staff in applying a measure. A data collection section then outlines the particular steps necessary for actually gathering the data for the measure. For instance, a survey measure details how the surveys should be distributed, and a record review measure includes a description of the data elements that will be collected from court records. A final section on data analysis and report preparation describes how the gathered data should be analyzed and often recommends how the results can be presented to court officials, others who work in the court, or other relevant audiences. In some measures, this section indicates the optimum level of performance, while for other measures the level of satisfactory performance is left to the court to determine.

Modifying the Measures. Although at first it may appear that the more complex measures cannot be implemented without some simplification, these measures should be modified only after careful deliberation. For example, an item of information should not be eliminated simply because it is hard to obtain. Instead, the court should consider how to overcome the perceived difficulties in obtaining the information, as well as the consequences of not including the information in the measurement process.
On the other hand, strict adherence to every element of a measure might stifle the development of innovative approaches. Therefore, thoughtfully conceived modifications may be undertaken as long as efforts are made to balance a measure's feasibility and utility with its scientific merit.

Planning to Use the Trial Court Performance Standards and Measurement System

The manual that follows provides both the rationale and detailed instructions for conducting each measure in the measurement system. Before a court undertakes the measurement process, however, those who will conduct the evaluation are likely to have many questions about where and how to begin, how to proceed most efficiently, and where the measurement process might lead. Unlike the explicit directions presented in the measurement system, answers to important questions such as these generally cannot be prescribed for individual courts. Each court must identify its particular needs, set its own performance goals, and determine how it can best apply the Trial Court Performance Standards and Measurement System to both guide the evaluation process and achieve the improved performance it seeks.

For guidance on these implementation issues, the reader is strongly urged to consult a companion publication, the Planning Guide for Using the Trial Court Performance Standards and Measurement System. (To order this publication, contact the BJA Clearinghouse at 1-800-688-4252.) The Planning Guide is based on the experiences of the 12 demonstration courts that tested the system and reflects the lessons they learned in the undertaking. Intended to serve as a conceptual bridge from the Trial Court Performance Standards to the measurement system, the planning guide presents an implementation model to help courts translate the standards into practical application. The guide provides direction for using the system as a planning, evaluation, and monitoring tool and addresses many of the questions and issues courts are likely to encounter when embarking on the process of self-evaluation and self-improvement.

Endnotes:

[1] In New Jersey, the five demonstration courts were the Superior Courts of Atlantic County, Burlington County, Morris County, Ocean County, and Somerset County. In Ohio, the three demonstration courts were the Common Pleas Courts of Meigs County, Stark County, and Wayne County. The demonstration court in Virginia was the Fairfax County Circuit Court. In Washington, the three demonstration courts were the Superior Courts of Spokane County, Thurston County, and Whatcom County.

[2] See, for example, Citizen's Commission to Improve Michigan Courts, Final Report and Recommendations to Improve the Efficiency and Responsiveness of Michigan Courts (Lansing, MI: Michigan Supreme Court, 1986). See also Yankelovich, Skelly, and White, Inc., The Public Image of Courts: Highlights of a National Survey of the General Public, Judges, Lawyers, and Community Leaders (Williamsburg, VA: National Center for State Courts, 1978).

------------------------------

Performance Area 1: Access to Justice

Trial courts should be open and accessible. Location, physical structure, procedures, and the responsiveness of personnel affect accessibility. Accordingly, the five standards grouped under Access to Justice require a trial court to eliminate unnecessary barriers to its services. Such barriers can be geographic, economic, and procedural. They can be caused by deficiencies in both language and knowledge of individuals participating in court proceedings.
Additionally, psychological barriers can be created by mysterious, remote, unduly complicated, and intimidating court procedures.

Overview of Standards. The intent of the first two standards is to bring the administration of justice into the open and to make it accessible. Standard 1.1 requires the trial court to conduct its business openly. To ensure that all persons with legitimate business before the court have access to its proceedings, Standard 1.2 requires the trial court to make its facilities safe, accessible, and convenient to use. Accessibility is required not only for those who are guided by an attorney but also for all litigants, jurors, victims, witnesses, and relatives of litigants. Access to trial courts is also required for many other individuals--for example, beneficiaries of decedents in probate matters, parents and guardians in juvenile cases, persons seeking information from public records held by the court, employees of agencies that regularly do business with the courts (e.g., investigators, mental health professionals, sheriff's deputies, and marshals), and the public.

Because a trial court may be accessible to most and still hinder access to some, Standard 1.3 requires the court to provide opportunities for the effective participation of all who appear before the court, including persons with linguistic difficulties and handicaps. To promote access to justice and to enhance citizen confidence and trust in the court, Standard 1.4 urges that all court personnel accord respect, courtesy, and dignity to all with whom they come into contact. Standard 1.5 recognizes that there are financial and procedural barriers to access to justice. It requires that the fees imposed and procedures established by the court be fair and reasonable. Recognizing the importance of the relationship between public records and access to justice, the standard also requires that public records be preserved and made available at a reasonable cost.

Overview of Measures. Twenty-one measures are associated with the five Access to Justice standards. Taken together, these measures provide both breadth and depth of measurement of a court's performance in offering the public access to justice. Data obtained from the measure of one standard are often relevant to assessment of performance for another standard. This is especially true of Standards 1.3 and 1.5. Standard 1.3 requires that all who appear before the court be able to participate effectively, and Standard 1.5 calls for affordable costs of access to court proceedings. Effective representation by counsel is an important implicit factor for effective participation in court proceedings. Unfortunately, the cost of legal services makes access to justice impossible for many people. Thus, measurement for Standard 1.5 requires the collection of data that are relevant for this implicit requirement of Standard 1.3.

The measures in this performance area rely on a variety of data collection methods: surveys, observations (in some measures combined with simulation), interviews, and reviews of court records and documents. Three measures call for administering surveys to individuals who are "regular users of the courthouse." The information sought from these people relates to safety and security, ease of doing business, and the courtesy and respect they experience in the courthouse.
Although each survey measure is described separately in relation to a particular standard, it may be easier and less time consuming to combine the questions for each measure into one questionnaire and survey regular users once rather than three times.

The method described most often for measuring access to justice is observation (sometimes combined with simulation). Observers systematically record what they see and hear. This structured information can then be examined quantitatively as well as qualitatively. These "see, hear, and record" measures range from concrete and objective (Was an observer able to gain entrance to a courtroom?) to subjective (Did activity taking place in a courtroom detract from the dignity of the proceedings?). There are 12 measures of this type. Although the observations could be carried out by almost anyone, the recommended approach is to use citizen volunteers who are relatively naive to the legal system and who are unfamiliar with the facilities and "customs" of the courthouse. This results in records of experiences that resemble those of ordinary citizens who have infrequent occasion to do business with the court. Furthermore, the observers chosen should optimally be representative of the jurisdictional community of the court. Representativeness is more important for some measures than others. However, because the same individuals could be asked to obtain data for all the observation measures, it may be helpful to recruit one pool of observers who vary on demographic factors. Observers may be recruited by contacting volunteer organizations, universities, senior citizen groups, and so forth.

This "volunteer observer" method has other advantages, notably its relatively low cost. The court must invest staff time to recruit volunteers, orient them to their assignments, and evaluate results. Once the recruitment and orientation are completed, however, the observers may be used to collect data for many measures described throughout the measurement process. Because the observers are relatively few in number, they offer the added advantage of being able to provide court staff with additional information during interviews following their structured assignments. A much richer, qualitative analysis results when explanations, descriptions, and suggestions can be elicited from the observers to augment what is provided on written forms, questionnaires, and checklists.

Two other measurement methods rely on data collected through interviews and examination of court records and written policy documents. Some of the measures of this type focus on case data. Measure 1.3.1, regarding effective legal representation of children in child abuse and neglect proceedings, is of this type. In this measure, court case records are examined and those involved in the cases are surveyed and interviewed to document how the guardian ad litem process actually has worked for several selected cases. Other measures focus on administrative documents. For example, Measure 1.5.1 relies on an examination of forms, brochures, and written policies to evaluate court efforts to facilitate affordable access alternatives for individuals with low incomes. Interviews with court staff also are conducted to identify and locate the relevant documents.

Finally, measures addressing the issues of court security (Measure 1.2.1) and interpreter services (Measure 1.3.2) rely on evaluation by outside experts in the respective areas.
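Before turning to the individual standards, the economizing idea noted above (combining several survey measures into one questionnaire) can be made concrete. The following sketch in Python shows one way a court might key combined questionnaire items to the measures they serve, using the system's area.standard.measure numbering convention; the question wording and measure assignments here are illustrative assumptions, not the published forms in Appendix C.

    # A sketch (not the published instrument) of keying combined
    # questionnaire items to the measures they serve. Measure IDs follow
    # the system's convention: performance area.standard.measure.
    QUESTIONS = {
        "Q1": ("How safe do you feel inside the courthouse?", {"1.2.3"}),
        "Q2": ("How convenient was it to transact your business?", {"1.2.6"}),
        "Q3": ("Were you treated with courtesy and respect by court staff?", {"1.4.1"}),
        "Q4": ("Could you easily find the courtroom or office you needed?", {"1.2.6", "1.4.1"}),
    }

    def responses_by_measure(responses):
        """Route each answer to every measure that uses its question.

        responses: one {question_id: answer} dict per respondent.
        Returns {measure_id: [(question_id, answer), ...]}.
        """
        routed = {}
        for resp in responses:
            for qid, answer in resp.items():
                _text, measure_ids = QUESTIONS[qid]
                for mid in measure_ids:
                    routed.setdefault(mid, []).append((qid, answer))
        return routed

    # Two respondents answer the combined questionnaire once, but the data
    # serve Measures 1.2.3, 1.2.6, and 1.4.1 simultaneously.
    sample = [{"Q1": 4, "Q2": 5, "Q3": 5, "Q4": 3},
              {"Q1": 2, "Q2": 4, "Q3": 4, "Q4": 4}]
    print(responses_by_measure(sample)["1.2.6"])

Routing answers this way allows a single distribution of one questionnaire to feed several measures at once.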
Standard 1.1: Public Proceedings

The trial court conducts its proceedings and other public business openly.

Commentary. This standard requires the trial court to conduct all proceedings openly, contested or uncontested, that are public by law or custom. The court must specify proceedings to which the public is denied access and ensure that the restriction is in accordance with the law and reasonable public expectations. Further, the court must ensure that its proceedings are accessible and audible to all participants, including litigants, attorneys, court personnel, and other persons in the courtroom.

Measurement Overview. The three measures for Standard 1.1 determine the degree to which a court openly conducts its business. The measures assume that a trial court meets Standard 1.1 if it (1) provides public access to its courtrooms, (2) ensures that information regarding the status of court proceedings is obtainable, and (3) ensures that judges and other court participants can be heard in open proceedings. All three measures rely on direct observations. The measures require court staff to compile some basic calendaring information. Once this information is available, each of the measures can be completed within a few days. Each of the measures can be accomplished separately, but it would be more efficient to conduct them simultaneously. Although almost anyone can serve as an observer for these measures, it is recommended, as noted in the overview of measures for Access to Justice above, that individuals who are unfamiliar with the court be recruited. The same individuals also may be used for obtaining observation data for measures related to other standards of access, particularly measures of the convenience of access, perceptions of safety, courtesy, and responsiveness of court personnel.

Measure 1.1.1: Access to Open Hearings

This measure verifies that the public has access to court proceedings that should be open to the public. The coordinator for the measure provides volunteer observers a list of scheduled court hearings and asks the observers to verify whether they can enter the courtroom in which the hearings take place.

Planning/Preparation. Preparation for this measure involves identifying at least 30 court proceedings[1] for the volunteer observers to attend. The first step is to select several days during which the observations will take place. The number of days selected will depend on:

o The court's daily volume of proceedings. If few proceedings are held each day, the observations will have to be conducted over many days or weeks.

o The variety of proceedings conducted each day. If certain matters are heard only on certain days (e.g., all or most civil and criminal motions are heard only on Mondays), then several days will be needed to observe a cross-section of proceedings.

o The number of volunteer observers available to conduct the measure. If a large number of observers are available, data could be collected across many days without asking observers to visit the courthouse repeatedly. Alternatively, if observers must collect data on a number of proceedings, it will be more convenient to do so on 1 or 2 days than to have them traveling to the courthouse across many days.

o The observers' schedules. The court may have to collect data across several days (or in just a few days) in order to accommodate the various schedules of the observers.

The measure provides an example in which five volunteers observe two proceedings each across 3 days.
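The following Python sketch illustrates this example sampling plan for a single selected day: nonpublic matters are dropped, 10 proceedings plus several backups are drawn at random, and each of five volunteers receives two proceedings that do not conflict in time. The calendar records and field names are hypothetical stand-ins for whatever the court's scheduling system produces.

    import random

    # Sketch of the Measure 1.1.1 sampling plan: per selected day, drop
    # nonpublic matters, randomly draw 10 proceedings plus a few backups,
    # and give each of 5 volunteers 2 proceedings that do not conflict.
    def draw_daily_sample(calendar, n=10, n_backup=3):
        """calendar: list of dicts with 'case', 'time', and 'closed' keys."""
        public = [p for p in calendar if not p["closed"]]
        picked = random.sample(public, min(n + n_backup, len(public)))
        return picked[:n], picked[n:]  # (observation sample, backups)

    def assign_to_volunteers(sample, volunteers, per_observer=2):
        """Assign proceedings so no volunteer has two events at the same time."""
        assignments = {v: [] for v in volunteers}
        for proc in sorted(sample, key=lambda p: p["time"]):
            for v in volunteers:
                slots = assignments[v]
                if len(slots) < per_observer and all(
                        s["time"] != proc["time"] for s in slots):
                    slots.append(proc)
                    break
        return assignments

    # Hypothetical one-day calendar; a real court would export this from
    # its scheduling system.
    calendar = [{"case": "C-%d" % i, "time": "%d:00" % (9 + i % 4),
                 "closed": i % 7 == 0} for i in range(1, 25)]
    sample, backups = draw_daily_sample(calendar)
    plan = assign_to_volunteers(sample, ["V1", "V2", "V3", "V4", "V5"])

Repeating the draw for each selected day yields the 30 or more proceedings the measure calls for.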
As noted above, the data collection process can be modified to accommodate a court's particular caseload and volunteers' schedules. Select more or fewer days as necessary. To select the 3 days, first ask court employees involved in scheduling court proceedings whether certain matters are heard only on certain days. If, for example, most short matters are heard only on Mondays, be sure to include at least one Monday in the sample.[2] The selected days should include a cross-section of the types of proceedings the court hears. If the court hears the same types of matters each day, randomly select 3 days.

Next, review the list of proceedings scheduled for each day for nonpublic proceedings. Eliminate any matters specifically noted as closed to the public. (Eliminated proceedings may be examined in connection with Standard 3.1, Measure 3.1.1, to determine whether the court's practices for closing hearings are in compliance with Federal and State case law and applicable statutes.) Randomly select 10 proceedings scheduled for each day.[3] Because some proceedings (such as trials) may be canceled before their scheduled start times, it is advisable also to select several additional proceedings as backup. On the morning of the planned observation, give each of the five volunteers two proceedings to attend. Make sure that the two proceedings are not scheduled to take place at the same time in different courtrooms.

Data Collection. An observer goes to each scheduled hearing at the designated location and time. For each event, the observer records (see Form 1.1.1, Record of Access to Courtroom) whether he or she was successful in gaining access to the proceeding. If the observer is excluded from any of the scheduled proceedings, he or she should talk with court officials and record the reasons for exclusion. If some of the proceedings with individually scheduled start times (such as trials) are canceled before the scheduled start time, additional proceedings should be chosen to replace them. Canceled proceedings that are part of a court session including many short matters do not need to be replaced. As long as the observer gains access to the courtroom where the matter was scheduled to be heard, the observer can record that the proceeding was accessible.

Data Analysis and Report Preparation. Analyzing the data involves a two-step process. If all of the court proceedings were open to the public, the court is performing well on this measure and there is no need to undertake the second step of analysis. If, on the other hand, some of the court proceedings were closed, court officials should examine the legitimacy of the explanations that were given for closing the proceedings. Were the proceedings closed according to the standards enumerated by the Supreme Court in Press-Enterprise Co. v. Superior Court?[4] These standards include:

o There is an overriding interest that would be prejudiced by open proceedings.

o The closure order is no broader than necessary to protect that interest.

o Reasonable alternatives to closure have been considered.

o The trial court has made findings on the record adequate to support closure.

The standards enumerated for closing a pretrial hearing in criminal cases are:[5]

o There is substantial probability that the defendant's right to a fair trial will be prejudiced by publicity.

o No reasonable alternatives to closure could protect the defendant's fair trial rights.
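Courts that want a uniform record of this second analysis step might encode the criteria as a simple checklist, one record per closed proceeding, as sketched below. The criteria are paraphrased from the standards listed above, the data structure is a hypothetical illustration, and the legal judgment itself rests with court officials.

    # Sketch of a uniform closure-review record per closed proceeding.
    # Criteria paraphrased from Press-Enterprise Co. v. Superior Court;
    # the checklist structure is a hypothetical illustration.
    TRIAL_CRITERIA = [
        "overriding interest prejudiced by openness",
        "closure no broader than necessary",
        "reasonable alternatives considered",
        "findings on the record support closure",
    ]

    def closure_supported(findings):
        """findings: {criterion: bool}; every criterion must be satisfied."""
        return all(findings.get(c, False) for c in TRIAL_CRITERIA)

    review = {
        "overriding interest prejudiced by openness": True,
        "closure no broader than necessary": True,
        "reasonable alternatives considered": False,  # flag for followup
        "findings on the record support closure": True,
    }
    if not closure_supported(review):
        print("Closure not clearly supported; refer to court officials.")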
If any of the proceedings were closed for reasons other than these, the court is not performing optimally on this measure. If proceedings were closed for illegitimate reasons, court officials should take steps to ensure that, in the future, the Supreme Court's standards for closing proceedings are followed.

Measure 1.1.2: Tracking Court Proceedings

This measure is a logical extension of Measure 1.1.1. If an observer has physical access to a courtroom but cannot identify which proceeding is underway, public access is compromised. The measure examines whether an observer can obtain information about the status of specific court proceedings on the court's calendar.

Planning/Preparation. This measure can use the same sample of court proceedings that was drawn for Measure 1.1.1, Access to Open Hearings. The method for selecting the sample of court proceedings is described in the planning/preparation section of Measure 1.1.1.

Data Collection. After following the data collection procedure described for Measure 1.1.1, the observer tries to determine when a specific court event will be heard. For each court event, the observer records (see Form 1.1.2, Tracking Court Proceedings) how he or she determined the status of the event (e.g., saw it take place or asked a court official) and any difficulties encountered during the process.

Data Analysis and Report Preparation. If the observers were able to identify the status of each scheduled court event, the court is performing well on this measure. If the observers were unable to determine the status of one or more court events, court officials should review the types of court events that could not be tracked. Are there any patterns in the data? For example, did most of the problems occur with court events that did not have specific start times? To improve the court's performance on this measure, court officials should examine observers' reports of the difficulties they encountered during the data collection process and their suggestions for improving the dissemination of information regarding the status of specific proceedings (e.g., provide periodic reviews of the calendar or post the calendar in the courtroom and update it as matters are heard or rescheduled).

Measure 1.1.3: Audibility of Participants During Open Court Proceedings

This is a measure of the audibility of proceedings when court is in session. Observers collect qualitative data regarding the audibility of judges, attorneys, litigants, witnesses, and other court participants during court proceedings.

Planning/Preparation. The first step in applying this measure is selecting a sample of court proceedings. A subset of the sample of court proceedings drawn for Measure 1.1.1, Access to Open Hearings, may be used for this measure. The method for selecting the sample for Measure 1.1.1 is described in the planning/preparation section of that measure. From this sample, at least five court proceedings are selected to be observed for audibility.[6] The sample should be stratified by courtroom to ensure that proceedings are observed in several different courtrooms. (Smaller courts may prefer to test all of their courtrooms.) The second step is the identification and recruitment of observers with normal hearing. The observers may be volunteers or court employees.

Data Collection. An observer with normal hearing attends each of the selected court proceedings and sits for approximately 5 minutes on each side of the courtroom's public seating.
The time spent in the courtroom may need to be extended if the observer has had the opportunity to hear only one or two of the court participants speak. After observing each court proceeding, the observer should answer the questions on Form 1.1.3, Courtroom Audibility Evaluation Form, and record any specific acoustic and human speech factors that seemed to affect audibility in the courtroom. This qualitative information will help court officials identify factors that may be contributing to poor audibility in the courtroom.

Data Analysis and Report Preparation. A report should be prepared that compiles and synthesizes the results of the observers' qualitative evaluations of the various courtrooms. If audibility is a problem, the report should address whether the problem is common across courtrooms and types of proceedings or generally limited to one courtroom or one type of proceeding. The report should be disseminated to the court administrator and other appropriate court officials. If a problem exists across all courtrooms, court officials should consider contacting a sound engineer for suggestions. Other problems may be alleviated by making minor changes in a courtroom's environment or by developing and enforcing administrative rules related to courtroom audibility.

Standard 1.2: Safety, Accessibility, and Convenience

Trial court facilities are safe, accessible, and convenient to use.

Commentary. Standard 1.2 considers three distinct aspects of court performance: the security of persons and property within the courthouse and its facilities, access to the courthouse and its facilities, and the reasonable convenience and accommodation of those unfamiliar with court facilities and proceedings. It urges a trial court to be concerned about matters such as the centrality of its location in the community that it serves, adequate parking, the availability of public transportation, the degree to which the design of the court provides a secure setting, and the internal layout of court buildings (e.g., the signs that guide visitors to key locations). Because the attitudes and behavior of trial court personnel can make (or fail to make) the courthouse safer, more accessible, and more convenient to use, Standard 1.2 pertains to the conduct of trial court personnel as well. Unusual or unexpected conditions, such as bomb threats, records destruction, employee strikes, sting operations, mass arrests, and natural disasters, challenge the routine operations of the court. Mechanisms (both internal and operated in coordination with other justice system agencies) may be required to handle emergency situations that could impede the courts and disrupt daily routines.

Measurement Overview. Measurement of performance for Standard 1.2 addresses three components: safety, accessibility, and convenience. The seven measures for Standard 1.2 utilize a variety of methods, including: (1) a formal audit of courthouse security measures carried out by an expert, (2) simulations by law enforcement personnel evaluating courthouse security, (3) facts and opinions collected from observers who role-play the occasional courthouse visitor, and (4) surveys of regular users of the courthouse and court employees.

Courthouse security is defined as "the feeling of safety combined with the measures taken to provide that feeling of safety--against personal injury, property damage, and the loss of records housed in the courthouse."[7] Four measures examine both of these aspects of courthouse security.
Measure 1.2.1 examines the physical security of the courthouse with a formal audit of security measures. Measure 1.2.2 requires that trained law enforcement officers conduct a test of courthouse security by observing and trying to breach the court's security measures. Measure 1.2.3 uses a survey to assess the general sense of safety perceived by regular users of the court. Measure 1.2.4 examines the training courthouse employees have received in responding to emergency situations.

Accessibility and convenience are addressed together in Measures 1.2.5, 1.2.6, and 1.2.7, reflecting the close connection between the concepts. Measure 1.2.5, relating to access to information by telephone, and Measure 1.2.7, relating to the accessibility and convenience of court facilities, rely on observers who simulate business transactions in the court. Measure 1.2.6 uses a survey method to obtain the opinions of regular users of the courthouse about accessibility and convenience.

The measures that utilize observers are relatively inexpensive compared to those using surveys. Through exit interviews, the "observation" measures also allow the court to gather more detailed information about the observers' experiences than would be possible through a mailed survey. In addition, as a result of their experiences, the observers may be able to provide the court with suggestions for improving the court's services to the public. As a consequence, the observers could serve as emissaries between the court and the community and provide an ongoing source of information and support for the court.

Measure 1.2.1: Courthouse Security Audit

"The general goal of a comprehensive court security policy should be to establish appropriate protection for court staff and facilities, the general public, and the judicial process as a whole."[8] Measure 1.2.1 considers the court's performance in taking precautions to reduce or eliminate threats to the public's safety in the courthouse. This measure addresses "the degree to which design features of the court provide a secure setting," mentioned in the commentary for Standard 1.2.

Planning/Preparation. An expert in court security features should be retained to help conduct the security audit. The National Sheriffs' Association can help identify available consultants. In some jurisdictions, appropriate expertise may be available from the local sheriff's department or the U.S. Marshals Service. The security consultant and security officers from the court should be provided Form 1.2.1, National Sheriffs' Association Physical Security Checklist, as a resource for drafting an audit to fit the court's building(s) and grounds. For example, some audit items will vary depending on whether the court is located within a multipurpose government building or has its own facility. While developing the checklist, the consultant and officers also should consider what would constitute a positive response for each question. That is, in some cases, a "no" response on the National Sheriffs' Association Checklist may be positive (see, for example, question 10 under "Parking Areas"). Data analysis will be simpler if responses are consistent across items, i.e., if all "yes" responses are positive. (See the section below on data analysis and report preparation.)

Data Collection. The security consultant conducts an in-person security audit, using the modified security checklist described earlier.
The court's security officers should assist the consultant in obtaining any information he or she needs to conduct the audit.

Data Analysis and Report Preparation. Simple descriptive statistics are used to analyze the results of the security audit. The number of positive responses is summed and divided by the total number of possible responses on the court's version of the security checklist. (It is important to note that if some "no" responses are positive, the total number of positive responses cannot be obtained by adding only the responses in the "yes" column.) The court's performance on this measure improves as the percentage of positive responses on the checklist increases. If the security audit indicates problems, court security officials can examine the percentage of positive responses in each of the major areas of security (e.g., parking areas, courtrooms, elevators) to determine where added precautions may be necessary.
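Because the checklist arithmetic is easy to automate, a minimal sketch follows (Python; the areas, items, and responses are hypothetical, not drawn from Form 1.2.1). It assumes each item records both the response given and the response that counts as positive, so that items where "no" is the positive answer are handled correctly.

    # Each checklist item: (security area, response given, response that
    # counts as positive for that item).
    audit_items = [
        ("Parking Areas", "yes", "yes"),
        ("Parking Areas", "no",  "no"),   # an item where "no" is positive
        ("Courtrooms",    "no",  "yes"),
        ("Elevators",     "yes", "yes"),
    ]

    def positive_rate(items):
        positives = sum(1 for _, given, positive in items if given == positive)
        return 100.0 * positives / len(items)

    print("Overall: %.1f%% positive" % positive_rate(audit_items))
    for area in sorted({a for a, _, _ in audit_items}):
        subset = [item for item in audit_items if item[0] == area]
        print("%s: %.1f%% positive" % (area, positive_rate(subset)))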
Measure 1.2.2: Law Enforcement Officer Test of Courthouse Security

This measure determines the adequacy of the court's security in protecting both the public and confidential court records. The measure should be conducted as a follow-up to Measure 1.2.1. Data are gathered by law enforcement experts through simulation exercises. The measure requires the cooperation of local law enforcement officials.

Planning/Preparation. Local law enforcement officials should be contacted and asked to help court officials conduct a security audit of the courthouse. Law enforcement officials should be informed that the security audit will involve simulations in which one or two officers, dressed in plain clothes, will attempt to breach the court's security system. The officers who conduct the simulations should not be well known to court personnel. Court staff, in consultation with the security expert retained for Measure 1.2.1, should develop simulations that target security areas in which potential weaknesses (e.g., the safety of parking areas, ease of gaining access to confidential files, or access to courtrooms and chambers during and after normal business hours) were identified during the security audit conducted for Measure 1.2.1. On a cautionary note, simulations should not be developed that place individuals in potentially dangerous situations (e.g., simulations that involve carrying a concealed weapon).

Data Collection. The officers should visit the courthouse in plain clothes. Only the court manager and judge should be aware of the officers' presence in the courthouse. The officers should "wander" through the courthouse conducting the simulations developed by court staff and note any security problems encountered. If, after conducting the simulation exercise, the officers have any questions or need additional information on specific aspects of court security, they should conduct follow-up interviews with relevant court personnel. Court officials should ensure that the officers are introduced to the appropriate staff and should encourage staff to answer the officers' questions as accurately and thoroughly as possible.

Data Analysis and Report Preparation. Once the officers have completed the simulations, they should prepare a report on the overall security status of the courthouse. The report should answer questions such as: Was the court's security system successful in protecting the public and in protecting confidential court files and records? Did the officers notice any specific security problems that the court should address? What recommendations do they have for improving court security?

Measure 1.2.3: Perceptions of Courthouse Security

The extent to which the courthouse is perceived as a safe environment is measured through the administration of a questionnaire to regular users of the court (e.g., court employees, attorneys, probation officers, and jurors). The measure requires the assistance of someone skilled in survey research methods.

Planning/Preparation. Measure 1.2.6, Evaluation of Accessibility and Convenience by Court Users, and Measure 1.4.1, Court Users' Assessment of Court Personnel's Courtesy and Responsiveness, also involve surveying regular users of the court. If these measures are being applied in the court, the coordinators for these measures may find it efficient to combine the three measures into a single survey instrument.

The first step in applying the measure is to review Form 1.2.3, Survey on Courthouse Security. The survey form consists of three sections. The first section seeks to gauge the respondents' sense of threat to their person and property while visiting the courthouse. The second section asks about actual victimizations experienced in the courthouse. The third section seeks general background information such as gender, age, and relationship to the court. The questions are adapted from the National Crime Survey--Attitude Questionnaire and the National Crime Survey--Basic Screening Questionnaire.[9] The survey form can be used as is or modified to better fit the specific characteristics of the court. For example, smaller jurisdictions conducting this measure may want to eliminate some or all of the background information questions included in the third section of the survey. Small jurisdictions may have fewer respondents, and responses to demographic questions could essentially reveal the identity of individual respondents. Courts in this situation will have to weigh the benefits of including the demographic information to allow for more detailed analysis against a possibly low response rate because anonymity cannot be guaranteed.

Data Collection. The survey form is administered to four groups of individuals who use the court on a regular basis: court employees, attorneys, probation officers,[10] and jurors. To ensure that each group is represented in the sample of survey recipients, stratified sampling should be used. A sample of at least 80 individuals[11] should be drawn from each group, for a total sample of 320 individuals.[12] Court employees should be selected from a list of employees maintained by the court's personnel office. Probation officers should be selected from personnel lists maintained by the probation department. Attorneys should be selected according to the procedure described in Measure 3.3.1. A list of individuals who served as jurors for the court during the previous 18 months should be prepared and used for obtaining the juror sample.

A questionnaire is sent to each person in all four groups. For best results, a stamped envelope with the administrator's name and address on it should be included with each questionnaire. As a means to track which surveys have been returned while still preserving the confidentiality of the respondents, each questionnaire should be accompanied by an index card with a code number that corresponds to a master list of the survey recipients. The code number should not be included on the questionnaire itself. Recipients should be instructed to return the card with their completed survey or, if they prefer, in a separate envelope.
They should also be informed that the code-numbered cards will not be used to identify the surveys, but only to determine which surveys are still outstanding and thus require a "reminder" note. The card should be destroyed once the return of the survey is recorded on the master list. To increase the response rate, those who have not returned their surveys should be sent a reminder notice after 10 days.

Data Analysis and Report Preparation. The responses for each item of the questionnaire are associated with a number code. For example, "male" is coded as 1 and "female" is coded as 2. The responses for each questionnaire are recorded using these number codes. These codes subsequently are entered into a computer file and tabulated using a statistical software program. In general, court performance depends on two factors: the number of courthouse areas rated safe by a majority of respondents and the number of crime incidents reported by respondents. As perceived safety increases and reported crime decreases, court performance on this measure improves.

Two sets of analyses are conducted. The first set examines the frequency (or percentage) of responses for each category of each item. For example, an analysis of question 1 will indicate the number of survey respondents who thought crime in the courthouse had increased, the number who thought crime had decreased, and the number who thought crime had remained about the same. If the majority of respondents thought that crime had increased in the courthouse, the court is not performing well on this measure.

The second set of analyses compares the responses on one item with the responses on other items through the use of a cross-tabulation procedure and the gamma coefficient. These analyses will help explain some of the percentages derived from the first set of analyses. For example, of those respondents who thought that crime had increased, how many had actually been victimized in the courthouse? (See Part II of the survey form.) The result of this analysis will clarify whether a response of "increased crime" is based on actual incidents of crime. (A sketch of this computation appears at the end of this measure.)

The needs of the court should dictate the dissemination and utilization of the results of this measure. Results will provide useful feedback to the Trial Court Administrator and Supervisor of Courthouse Security. If actual incidents of courthouse crime were reported, the court will need to examine its security more closely. If there is a perception that crime has increased but no incidents were reported, the court may have to better publicize its security efforts.
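For courts computing the statistic directly rather than with a statistical package, the following minimal sketch (Python; the coded responses are invented for illustration) computes Goodman and Kruskal's gamma coefficient from two lists of ordinal codes such as those described above.

    from itertools import combinations

    def gamma(x, y):
        """Goodman-Kruskal gamma for two equally long lists of ordinal codes."""
        concordant = discordant = 0
        for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
            product = (x1 - x2) * (y1 - y2)
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
            # tied pairs contribute to neither count
        total = concordant + discordant
        return (concordant - discordant) / total if total else 0.0

    # Hypothetical codes: perceived crime (1=decreased, 2=same, 3=increased)
    # and victimization (0=never victimized, 1=victimized in the courthouse).
    perceived = [3, 3, 2, 1, 3, 2, 1, 3]
    victimized = [1, 1, 0, 0, 1, 0, 0, 0]
    print("gamma = %.2f" % gamma(perceived, victimized))

A gamma near 1 would indicate that respondents who perceive increased crime tend also to be those who report actual victimization; a gamma near 0 would suggest the perception is not grounded in reported incidents.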
Measure 1.2.4: Court Employees' Knowledge of Emergency Procedures

When emergencies arise that threaten the safety of courthouse users, court employees must be knowledgeable about and prepared for correct responses. Their actions and decisions will have consequences for their safety, the safety of others, and the integrity of court records. This measure uses interviews to determine the extent to which court employees are familiar with emergency procedures.

Planning/Preparation. The first step in applying the measure is to compile a list of employees. From this list, a sample of employees will be drawn to serve as interviewees. At least 15 supervisors or managers and 15 employees should be randomly selected.[13]

Form 1.2.4, Interview Protocol on Emergency Procedures, should be reviewed and modified (e.g., change terminology, add specific questions, or modify particular questions) as necessary to better address local jurisdictional settings. For instance, questions referring to weather emergencies (see questions 3 and 14) could be specified to include those weather situations likely to occur in the locale (e.g., a flood or a blizzard). In addition, more questions may be added regarding power outages if they are a particular problem in the jurisdiction. Power outages may occur more often than some of the other emergency situations and may be particularly problematic given the widespread use of technology in both facility operations and court communications. Thus, it may be particularly important for employees to be aware of how to respond to them.

Next, court procedures for responding to each emergency situation should be reviewed. If a court does not have written procedures regarding a particular emergency, questions about that emergency should be eliminated from the protocol. Before the interviews are conducted, each interviewer should be given an orientation to the court's security procedures. The data collection phase will be shorter if several individuals are available to conduct the interviews. However, care should be taken to ensure that interview responses are scored consistently across interviewers. One method for doing this is to have each interviewer complete an interview protocol for two or three "practice" interviews and then to compare the interviewers' protocols. If discrepancies exist, the instructions for the interview protocol should be modified to increase consistency among the raters.

Data Collection. The interviews should be conducted in person, with approximately 15 minutes allocated for each interview. The date and time of each interview should be recorded as part of the data for the measure. (Results of earlier interviews can be compared with results of later interviews. If employees interviewed at a later date have a higher level of familiarity with security measures than employees interviewed earlier, it is likely that the measurement process has prompted employees to become more informed.)

Data Analysis and Report Preparation. If the court discovers during the planning/preparation stage that no written procedures exist regarding a certain type of emergency situation, the identified area requires immediate attention. For emergency situations that have written procedures, data analysis proceeds with an examination of the interview information. The interviews gather information on three topics: (1) the training provided to court personnel about security procedures, (2) the effectiveness of the training, and (3) the extent to which employees believe that improved security measures are needed.

Summary statistics are used to analyze the results for each question in Part I and Part II. A benchmark of acceptable court performance is that 75 percent or more of all employees recall being briefed on emergency procedures. Court performance improves as the length of time since employees' last briefing decreases. In addition, 75 percent of employees should know what emergency procedures are in place. The responses to individual items in Parts I, II, and III can be examined to determine the areas in which the court is performing well and the areas in which the most improvement is needed.
For example, the court may be very conscientious about preparing employees for a bomb threat but less conscientious about providing information on handling a hostage situation. A review of the individual items can help court officials determine which areas need the most attention.

Measure 1.2.5: Access to Information by Telephone

This measure involves simulating a request by a litigant or other interested person for information about the location and time of a court proceeding. A volunteer observer attempts to obtain information about the specific time and location of a court proceeding, the type of proceeding, and its case number. The observer knows only the formal name of the court, the name of the litigant, and the day on which the proceeding in question is scheduled. He or she is not knowledgeable about routine court operations.

Planning/Preparation. Five proceedings are selected from the court events sampled for Measure 1.1.1, Access to Open Hearings, and the names of the parties and the date, time, and location (i.e., courthouse, floor, and courtroom) of the scheduled events are recorded. If Measure 1.1.1 has not been conducted, five scheduled court events will have to be selected from the court's calendar. A stopwatch or watch with a second hand will be needed during the data collection phase.

Data Collection. The first step is for the observer to attempt to find the court's general telephone number in the local telephone directory using the court's official name (see Form 1.2.5, Access to Information by Telephone--Directions and Recording Sheet). If the number is not readily obtained from the local directory, the observer contacts the local directory information service. The observer notes the availability and difficulty of obtaining the court's telephone number and records the number(s) obtained on the data collection form.

Using the telephone number obtained from the directory or directory assistance, the observer calls the court to obtain the time and location of each of the five events. To improve the simulation, the telephone contacts with the court should be distributed so that the frequency of the calls will not be noteworthy. Court officials should establish this distribution. For each event, the observer notes the elapsed time before the requested information is provided, using a standard stopwatch, and notes the number of individuals with whom he or she comes into contact. This information is recorded on Form 1.2.5. If the required information cannot be obtained within 1 hour (or if it cannot be obtained at all), the observer records a maximum of 60 minutes and six contacts for that event for purposes of the aggregate summary. He or she also makes notes as appropriate.

Data Analysis and Report Preparation. Data obtained for the five events are aggregated. First, the observer summarizes the ease or difficulty of obtaining the court's telephone numbers from the telephone directories and notes the range of elapsed times for the five events. The elapsed time and number of contacts required to acquire the information are then averaged across the five events (the Telephone Information Accessibility Score). If the results of the calls vary widely, the court should separately evaluate, if possible, the circumstances of each simulation. The court may also wish to increase the number of simulations in order to achieve a more reliable average score and to better diagnose the patterns that explain extremes.
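The aggregation can be illustrated with a minimal sketch (Python; the recorded times and contact counts are invented). It applies the 60-minute and 6-contact caps for failed attempts and averages across the five events.

    # Hypothetical observations: (elapsed minutes, contacts); None means the
    # information could not be obtained at all.
    events = [(12, 2), (35, 4), (None, None), (8, 1), (22, 3)]

    MAX_MINUTES, MAX_CONTACTS = 60, 6

    minutes = [min(m, MAX_MINUTES) if m is not None else MAX_MINUTES
               for m, _ in events]
    contacts = [min(c, MAX_CONTACTS) if c is not None else MAX_CONTACTS
                for _, c in events]

    print("Average elapsed time: %.1f minutes" % (sum(minutes) / len(minutes)))
    print("Average contacts:     %.1f" % (sum(contacts) / len(contacts)))
    print("Range of times: %d to %d minutes" % (min(minutes), max(minutes)))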
Measure 1.2.6: Evaluation of Accessibility and Convenience by Court Users

The ease and convenience of conducting business with the court are measured through a survey of regular court users (i.e., court employees, attorneys, probation officers, and jurors).

Planning/Preparation. Measure 1.2.3, Perceptions of Courthouse Security, and Measure 1.4.1, Court Users' Assessment of Court Personnel's Courtesy and Responsiveness, also involve surveying regular users of the court. If these measures are also being conducted, the coordinators for these measures may find it efficient to combine the three measures into a single survey instrument.

Review Form 1.2.6-1.2.7, Accessibility and Convenience of the Court. The survey form covers three subjects related to the ease of conducting business: (1) the convenience and cost of access to the building itself, (2) signs and other help for finding the right location or service in the building, and (3) the amenities that are available to those who are in the courthouse on business. The questionnaire should be adapted, as necessary, to local conditions and for each of the four groups. For instance, smaller jurisdictions conducting this measure may want to pay particular attention to the demographic questions included on the survey (Part IV, Background). In small jurisdictions, responses to demographic questions might reveal the identity of individual respondents. If this is a potential problem, the court may find it best to eliminate some or all of the demographic questions.

Data Collection. The questionnaire is administered following the same procedures described in the data collection section of Measure 1.2.3.

Data Analysis and Report Preparation. The number and percentage of each response for each question are calculated. The percentages can then be compared across groups. For example, do jurors report more difficulty than other groups in getting to the courthouse or in conducting their business there? If so, court officials should investigate methods for improving juror access to the court and its facilities. Specific problem areas may be examined and analyzed on a situation-by-situation basis. A review of the responses from all four groups also will highlight the areas in which the court generally is performing well and those in which improvement is needed. For example, do people tend to have more difficulty getting to the courthouse or finding their way around the courthouse once they are there?

Once a court has conducted this measure, court officials should establish benchmarks for "poor," "adequate," and "good" ratings. For example, an "adequate" rating might mean that less than 25 percent of the respondents report some difficulty finding parking, and a "good" rating might mean that less than 10 percent report difficulty. Courts should strive to meet the "good" benchmark when conducting the measure in the future. Because different groups may experience different problems, the benchmarks might differ for each group.

Measure 1.2.7: Evaluation of Accessibility and Convenience by Observers

Several measures in this document require information to be collected by volunteers who are unfamiliar with court facilities and procedures. For this measure, volunteers are given a survey questionnaire on the ease of conducting business with the court at the end of their first observation day in the courthouse. The survey questionnaire is basically the same as that used in Measure 1.2.6.
Planning/Preparation. The questionnaire used for Measure 1.2.6 (Form 1.2.6-1.2.7, Accessibility and Convenience of the Court) should be reviewed and modified, as necessary, for the volunteer observer group.

Data Collection. The observers are given a questionnaire at the conclusion of their first day of simulated business in the courthouse. They are asked to return it when they make their next observation for one of the measures. This procedure should apply to all observers who visit the courthouse during the evaluation process (not just those collecting data for measures in the Access to Justice performance area).

Data Analysis and Report Preparation. The basic analyses are the same as those discussed in the data analysis section of Measure 1.2.6. In addition, the responses of the volunteer observers can be compared with those of the "regular users" who were surveyed for Measure 1.2.6. How do the percentages differ? For example, do regular users tend to report that finding a restroom or telephone is easy, while the volunteer observers report that it is difficult? If so, perhaps the directional signs in the courthouse could be improved to better accommodate the needs of a stranger. The court may also get suggestions for improving its accessibility to strangers by conducting "debriefing" interviews with the volunteer observers once they have completed the questionnaire.

Standard 1.3: Effective Participation

The trial court gives all who appear before it the opportunity to participate effectively, without undue hardship or inconvenience.

Commentary. Standard 1.3 focuses on how a trial court accommodates all participants in its proceedings--especially those who have language difficulties, mental impairments, or physical handicaps. Accommodations made by the court for impaired or handicapped individuals include the provision of interpreters for the deaf and special courtroom arrangements or equipment for blind and speech-impaired litigants.

Measurement Overview. The measures for this standard focus on four groups of people with special needs: (1) children, who require special treatment by counsel and the court in order to be represented effectively in court proceedings; (2) hearing- or speech-impaired individuals, who require the services of interpreters in order to participate effectively in court proceedings; (3) non-English-speaking individuals, who also require the services of interpreters; and (4) individuals with physical disabilities that impede their ability to get to and move around the courthouse with a reasonable degree of ease and autonomy. The five measures for this standard consider whether these four groups are given the opportunity for effective participation.

Measure 1.3.1 examines the representation provided to children in child abuse and neglect proceedings. It relies on case record, survey, and interview data. Measures 1.3.2, 1.3.3, and 1.3.4 examine interpreter services. Measure 1.3.2 examines the quality of interpreting services and the conformity of those services with interpreter standards; it relies on observation data. Measures 1.3.3 and 1.3.4 evaluate interpreters on their knowledge of basic legal and justice system terminology and concepts and on their knowledge of a language other than English. Both of these measures require administering tests to the interpreters. The final measure, 1.3.5, relies on observation data. Individuals with physical disabilities collect the data by conducting real or simulated business in the courthouse.
Measure 1.3.1: Effective Legal Representation of Children in Child Abuse and Neglect Proceedings

The Federal Child Abuse Prevention and Treatment Act of 1974 requires all States to appoint an individual to represent the interests of children involved in judicial proceedings regarding child abuse and neglect. The individual appointed for this purpose is usually called a guardian ad litem (GAL). The States employ various models for providing the services of a GAL. In some States the GAL must be an attorney, while in others a trained volunteer (most often a court-appointed special advocate) may serve as the GAL or may work in conjunction with an attorney. The models used by individual jurisdictions within a State also may vary from one another. In addition, the roles and responsibilities of GALs vary across the States, and many State statutes offer little guidance on the GAL's specific duties. In most States, however, the GAL is expected at a minimum to act as an independent investigator of the facts related to the abuse or neglect, an advocate of the child's interests, and a case monitor.[14] Proponents of the rights of children and guidelines on GAL representation recommend that the GAL perform other duties as well.[15]

This measure determines the effectiveness of legal representation of the child in child abuse and neglect proceedings. An evaluator (or court staff) reviews the State statutes and court rules relevant to the appointment and responsibilities of GALs in child abuse and neglect proceedings, compares the statutes or rules to recommended practices for GALs, and obtains data from court records and from surveys of or interviews with GALs, judges, and child protective services caseworkers.

Planning/Preparation. Planning and preparation for conducting this measure includes four steps. First, court staff review the relevant statutes, court rules and policies, and case law on the appointment of guardians ad litem and their roles and responsibilities.

Second, court staff modify the sample case data collection form (see Form 1.3.1a, Evaluation of Legal Representation of Child Abuse and Neglect Proceedings: Case Data Collection Form) and the survey forms (see Forms 1.3.1b, Judge Survey; 1.3.1c, Guardian ad litem Survey; and 1.3.1d, Caseworker Survey) to conform to the court's procedures, practice, and terminology. Forms 1.3.1a through 1.3.1d include items related to practices recommended in the literature on GAL representation and in guidelines developed in a few States.[16] Unless these items bear no relationship to local practice or are contrary to State law or court rule, they should not be eliminated, because they are an important gauge of the effectiveness of legal representation.

The third step is the selection of the case sample, which should include 20 current child abuse and neglect cases that have reached a disposition hearing and 20 current review cases that have had a review hearing. The measure requires current cases to ensure that the judges, GALs, and caseworkers have fresh memories of their experiences in the sample cases. The sample should include a broad representation of the pool of individuals who serve as GALs in the jurisdiction. Staff also should determine whether they must have approval to gain access to the case files and obtain any approval that is required.

Fourth, as staff select the case sample, they create a list of the judges, GALs, and child protective services caseworkers involved in the sample cases.
The list should match the judges, GALs, and caseworkers to the specific cases in which they were involved. These individuals will be surveyed to obtain information about GAL performance that is not available from the case record. In some instances, court staff may also need to interview the judges, GALs, and caseworkers to clarify their responses to the survey. If interviews become necessary, staff may need to request assistance in scheduling interviews with GALs and caseworkers.

Data Collection. Data collection from the case records and from the judges, GALs, and caseworkers may proceed simultaneously to reduce the time required to complete this measure. The surveys should be distributed with a cover letter from the chief or presiding judge of the division of the court that has jurisdiction over child abuse and neglect cases. The letter explains the purpose of the survey and states that all responses are and will remain confidential. Provide the name of the specific case on each of the survey forms distributed to the judge, GAL, and caseworker. In some jurisdictions, one GAL, judge, or caseworker may have been involved in several cases and therefore will receive multiple surveys. In those cases, only one of the surveys should include Part III, which calls for the respondent's general opinions about training and practice issues related to GAL representation rather than his or her views about GAL representation in a specific case. As the surveys are being prepared and distributed, court staff complete Form 1.3.1a for each of the sampled cases. As the surveys are returned, court staff should review them to determine whether calls to the respondents will be needed to clarify responses.

Data Analysis and Report Preparation. Case records: The analysis of case record data provides information on the timeliness of GAL appointments, the level of participation of GALs in court proceedings, and the degree to which GALs contribute to case dispositions. For each case, determine whether the appointment of the GAL was made within the time limit set by statute or court rule. Calculate the percentage of cases that fall within the time limit. Also calculate, across all cases, the average time (mean) in days between the appointment of the GAL and the filing of the petition, the emergency removal order, or other initial court action in the case. The quality of representation is likely to be higher when appointments are made within the time limit and shortly after the first court action taken, because the GAL will have greater opportunity to assess the child's environment and the need for placement outside the home.

Next, calculate the number and percentage of hearings in which the GAL participated. The higher the rate of GAL participation in hearings, the higher the effectiveness of representation is likely to be. To assess the level of GAL preparation, calculate the average number of required reports submitted by the GAL. To determine the extent to which GAL performance creates delays in child protection proceedings, calculate the number of continuances of hearings granted because the GAL was not prepared, the percentage of GAL reports filed on time, and the number of days past the deadline that reports were filed. The extent to which reports from involved agencies appear in the case record indicates whether adequate information is available for the GAL to review, making preparation for the case more efficient and effective.
Finally, calculate the percentage of both new and review cases in which the GAL made recommendations regarding the placement of the child. The higher the percentage of cases in which the GAL offers the court recommendations, the greater the likelihood that GALs are aggressively representing the child's interests.

Surveys: For each GAL activity, calculate the percentage of judges, GALs, and caseworkers who reported that the activity was undertaken. To calculate these percentages, sum the "x" responses for each activity across all cases and divide by the number of cases in which the activity was rated; if an activity was marked with a "0" or an "I," that case is counted in neither the numerator nor the denominator. (A sketch of this calculation appears at the end of this measure.) In addition, calculate the mean rating given by each group surveyed (judges, GALs, and caseworkers) across all cases for the items in Parts II and III of Forms 1.3.1b through 1.3.1d. The results should be calculated separately for each group so that the perceptions of the different players can be compared.

Court officials should review the average "overall ratings" in Part II first. The higher the average ratings, the better the court is performing on this measure. Is the quality of legal representation generally good (an average rating of 4 or 5), or is it considered better in some areas than in others? To determine why the ratings of the quality of representation vary, court officials can examine the responses in Parts I and III. Do GALs undertake certain tasks more frequently than others? Do they demonstrate greater competence in fulfilling particular responsibilities than in completing others? Do review cases receive adequate attention? Are some important activities in representing a case neglected? Do judges, GALs, and caseworkers believe that GALs and judges receive sufficient training? Answers to questions such as these will help court officials determine whether children are being represented properly and, if not, what kinds of improvements are needed. These improvements might include additional GAL or judicial training, clearer definition of the roles and responsibilities of the GAL, implementation of compensation policies that encourage GALs to spend more time on the case, and the development of standards of practice.
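Because the exclusion rule is easy to misapply, the following minimal sketch (Python) illustrates the calculation for a single activity. The response codes are invented for illustration, and a blank entry is assumed here to mean the respondent indicated the activity was not undertaken.

    # One respondent group's responses for a single activity, one code per
    # case. "x" = activity reported as undertaken; "" = not undertaken
    # (assumed meaning of a blank); "0" and "I" responses are excluded from
    # the calculation entirely (neither numerator nor denominator).
    responses = ["x", "", "0", "x", "I", "x", "", "x"]

    countable = [r for r in responses if r not in ("0", "I")]
    if countable:
        percentage = 100.0 * countable.count("x") / len(countable)
        print("Activity undertaken in %.0f%% of ratable cases" % percentage)
    else:
        print("No ratable cases for this activity")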
Measure 1.3.2: Evaluation of Interpreted Events by Experts

This measure involves observation and evaluation of the work of court interpreters by individuals who are skilled in foreign language and sign language interpretation.[17] The experts observe interactions in which interpreters are involved, make an assessment of the interpreter's proficiency, and record interpretation problems or violations of interpreter standards. This measure is only appropriate when courts can predict with reasonable certainty that interpreters will be used in specific locations during predictable timeframes.

Before arranging for this measure, court personnel should first inform themselves of the qualifications that the "skilled individuals" used as observers should possess. For example, if the proposed observers are very proficient in both English and the other language but not familiar with the code of professional responsibility for court interpreters, they should not be used. "Certified" professional interpreters would make the best observers. However, they are not available for many languages, nor are they available in many parts of the country.[18]

Planning/Preparation. Individuals skilled in foreign languages and sign communication should be identified and recruited to evaluate the court's interpretation services. These individuals, serving as paid professionals or as volunteers, will provide an independent viewpoint on the quality of the court's interpreter services. It is essential for this measure that the language experts understand the requirements for interpreting in court settings. If the observers are not themselves certified court interpreters, they need to be thoroughly familiarized with the professional responsibilities of court interpreters. In addition to any State or local rules governing appropriate professional conduct, the observers should be provided with the following material from Court Interpretation: Model Guides for Policy and Practice in the State Courts:[19]

o Chapter 2: Interpreting Terminology

o Chapter 6: Judges' Guide to Standards for Interpreted Proceedings

o Chapter 9: Model Code of Professional Responsibility for Interpreters in the Judiciary

Experts may be located by contacting national and State interpreter associations;[20] the State office of social services that is responsible for services to deaf or hearing-impaired individuals; universities; or community agencies that serve foreign language or handicapped citizens. Experts should be informed that what they see or hear in open court should be discussed only with court officials and that they should not attempt to intervene in any way in the cases they observe.

The next step is to select a sample of scheduled court proceedings to observe. Ideally, this sample includes both nonevidentiary and evidentiary hearings. High-volume calendars that are likely to include interpreters are good choices for observation scheduling. Examples include traffic court sessions, misdemeanor arraignment and plea dockets, and child support calendars. Felony arraignment and plea calendars should be included if possible. Observations of evidentiary hearings in which interpreters are used for witness testimony are also important to include in the sample. Pending cases should be examined to obtain a list of cases in which interpreters will be needed. When arranging for these observations, identify several proceedings that observers could attend in the same day. The key to scheduling is to ensure that if some of the scheduled proceedings are continued or delayed, other observation opportunities are available. If a court uses interpreters infrequently, this measure should not be attempted.

Data Collection. Evaluators observe short procedural hearings in their entirety, striving to achieve as much variety in languages and as many different interpreters as possible. Observations of interpreters working during witness testimony should last at least 5 minutes but no longer than 30 minutes. Using Form 1.3.2, Evaluation of Interpreter Services, the evaluator records observations regarding the quality of interpreter services. The observer first identifies the session of court and the type of proceeding observed. The specific case number, date, and time should be noted, although this may not be possible in high-volume court sessions. If different interpreters are used during a session of court, a separate form should be used for each interpreter. If one interpreter is used for several different cases, a separate form should be completed for each case.
For each interpreted session recorded on the form, the observers should rate the overall performance of the interpreter on three dimensions, as shown on the form:

o Language proficiency

o Interpreting skills

o Professional conduct

If problems with the interpreter's performance are noted during the proceeding, they should be briefly recorded on the form.

Data Analysis and Report Preparation. After the data collection is complete, the observer should prepare a brief report summarizing the observations. The report should include the following: (1) the number of individual cases that were observed, (2) the number of different interpreters that were observed, by language, and (3) a summary of the evaluation results for all of the cases observed, by language (e.g., the percentage of all observed cases in which problems were noted). A summary qualitative assessment should also be provided, informing the court of any problem areas that are severe in the observer's opinion, with examples included in the narrative.

Measure 1.3.3: Test of Basic Knowledge Required of Interpreters

Interpreters cannot adequately perform their job without knowledge of the principles of appropriate professional conduct and of basic legal and justice system terminology and concepts. Research has shown, however, that many interpreters used by the courts have not mastered these fundamentals. This measure involves administering a written test to determine whether interpreters have acquired this knowledge. Other essential job requirements--language proficiency and interpreting skills--must be measured independently. Measures 1.3.3 and 1.3.4 may be unnecessary if courts already have a valid and reliable testing process for the interpreters used in their courts, including freelance interpreters. If freelance interpreters are not tested prior to employment, use of the measure should be considered.

Planning/Preparation. Preparation for this measure involves reviewing and modifying the attached model written test (see Form 1.3.3, Court Interpreter Terminology, Procedure, Protocol, and Ethics Fundamentals Test) to ensure that it reflects local terminology and concepts. After the revisions are complete, the instrument should be pretested by giving it to at least three experienced local court personnel or practicing lawyers (for the legal and justice system terminology) and to at least two professional court interpreters (for the questions related to professional conduct). No time limit should be imposed during the pilot test. All pilot test takers should agree on which answer is correct for each test item and that there is only one correct answer. If there is disagreement, the question should be eliminated or replaced with a test item that is agreed upon by the test takers. Each test taker should also be asked to suggest cutoff scores for "excellent," "good," "acceptable," "poor," and "very poor" levels of performance on the exam. It is recommended that the test then be reviewed by at least one judge (preferably two) before the final criteria for evaluating individual test performance are set.

Data Collection. Data collection involves administering the test to all or a majority of the individuals whom the court uses as interpreters and then scoring the test using a standardized scoring guide. It is useful to prepare a scoring template to add greater speed and reliability to the scoring process.
Data Analysis and Report Preparation. Results should be analyzed using a standard statistical analysis and reporting software package, if possible. This method allows greater speed and flexibility of analysis. Every time the test is administered, the new scores should be added to the database. The analysis should include, at a minimum, a frequency report showing the number and percentage of test takers in each ranking group (i.e., "excellent," "good," etc.). It is recommended that the analysis also include frequency reports of score rankings by language and, within each language group and overall, breakdowns by years of experience and educational level.
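Courts without such a package can still produce the frequency report directly, as in the minimal sketch below (Python; the ranking labels follow the text, but the records and languages are invented for illustration).

    from collections import Counter

    # Hypothetical test records: (ranking group, language).
    records = [
        ("excellent", "Spanish"), ("good", "Spanish"), ("good", "Vietnamese"),
        ("acceptable", "Spanish"), ("poor", "ASL"), ("good", "ASL"),
    ]

    def frequency_report(rows):
        counts = Counter(rank for rank, _ in rows)
        for rank in ("excellent", "good", "acceptable", "poor", "very poor"):
            n = counts.get(rank, 0)
            print("  %-10s %2d  (%.0f%%)" % (rank, n, 100.0 * n / len(rows)))

    print("All languages:")
    frequency_report(records)
    for language in sorted({lang for _, lang in records}):
        print("%s:" % language)
        frequency_report([r for r in records if r[1] == language])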
Measure 1.3.4: Assessing Non-English Language Proficiency Through Back Interpretation

This measure allows the court to assess a person's knowledge of a language other than English. The procedure can be used for virtually any language and can be applied by an examiner who speaks only English. Measures 1.3.3 and 1.3.4 are unnecessary if courts already have a valid and reliable testing process for interpreters of the language in question, including freelance interpreters. If freelance interpreters are not tested prior to employment, the measure should be used.

Back translation is a technique in which a candidate interprets or translates English into the foreign language in question and, after the passage of time, interprets or translates her or his own foreign language version back into English. The interpreted or translated English version is then compared with the original English to determine how faithfully the original message has been preserved.

Planning/Preparation. Before undertaking this measure the court should acquire the textbook Fundamentals of Court Interpretation: Theory, Policy and Practice.[21] The textbook includes a detailed description of the proper procedure for administering and scoring the back translation exercise, including 10 sample questions and statements with underlined scoring units. The measure also requires the use of two audiotape recorders, one for playing a recorded script and one into which the interpreter records her or his interpretation of the script. A written script in English is then prepared in a form identical or similar to the script suggested in Fundamentals of Court Interpretation. The written script is read aloud into a tape recorder in the same way that an attorney would pose a question to a witness or a witness would answer a question. Between each prerecorded question or statement there must be a pause long enough for the interpreter to complete the interpretation.

To conduct the measure, the court identifies all interpreters who work regularly in the court and plans a testing schedule. The schedule should require the interpreters to report to the testing room on two separate occasions. On the first occasion the interpreter listens to a tape-recorded passage in English and interprets it aloud in the foreign language, using a second tape recorder to record the foreign language rendition. On the second occasion the interpreter listens to her or his own recorded foreign language rendition of the original script and interprets it back into English. The interval between the first occasion and the second may be as little as 1 hour. However, separating the occasions by one or several days is not only acceptable but may result in a better test, because the passage of time reduces the opportunity for the interpreter to rely on memory of the original English. Approximately 15 minutes should be allocated for each interpreter for each test session.

The final preparation step is to select one or two individuals to score the test results. These individuals should have highly developed language skills in English and be able to discern the difference between substitution of words and distortion of meaning.

Data Collection. Data collection consists of administering the test to the candidates as summarized above and as described in more detail in Fundamentals of Court Interpretation (pp. 196-199). Test raters listen to each interpreter's back-translated English version of the script and compare it to the original. The script will contain approximately 40 underlined scoring units that are used to determine the individual's score. The resulting data sources are scoring sheets, prepared for each interpreter by the test rater, showing the number of scoring units on the back translation that match the meaning of the original English script.

Data Analysis and Report Preparation. After all of the tests are scored, a listing of the scores should be prepared. The analysis should then report the summary results in terms of percentages, as shown in the following table.

Results of Back Translation for All Interpreters (Report Illustration)

Score grouping                          Number in    Percentage of
(out of 40 possible correct)            the group    test takers (n=23)

36-40 correct (90% or better)               1                4
32-35 correct (80 to 89%)                   3               13
28-31 correct (70 to 79%)                   4               17
24-27 correct (60 to 69%)                   6               26
20-23 correct (50 to 59%)                   6               26
19 or less correct (49% or below)           3               13

Research and experience with court interpreter testing suggest that the analysis of test results should examine interpreters' test scores by language group. One obvious way to do this in most States is to prepare a report that distinguishes the test results of Spanish language interpreters from those of other languages. In interpreting the results, the court's policymakers should draw their own conclusions about what is an acceptable level of performance. The mathematics speak for themselves: an interpreter who gets 20 items correct is rendering only one-half of the questions or testimony accurately, and a score of 30 correct implies that 25 percent of the "message" is changed, distorted, or lost altogether in the process of being rendered from one language to another.
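Once each interpreter's count of matched scoring units is known, a report like the illustration above can be produced mechanically. The following minimal sketch (Python; the scores are invented) maps raw scores out of 40 into the table's groupings.

    # Hypothetical raw scores: matched scoring units out of 40 per interpreter.
    scores = [38, 33, 30, 29, 26, 25, 24, 23, 22, 21, 20, 18, 34, 27, 31, 22,
              25, 26, 23, 17, 28, 35, 14]

    bands = [
        (36, 40, "36-40 correct (90% or better)"),
        (32, 35, "32-35 correct (80 to 89%)"),
        (28, 31, "28-31 correct (70 to 79%)"),
        (24, 27, "24-27 correct (60 to 69%)"),
        (20, 23, "20-23 correct (50 to 59%)"),
        (0, 19, "19 or less correct (49% or below)"),
    ]

    n = len(scores)
    for low, high, label in bands:
        count = sum(1 for s in scores if low <= s <= high)
        print("%-35s %3d  %3.0f%%" % (label, count, 100.0 * count / n))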
Next, court officials should prepare a list of routine activities that citizens engage in while using the court's services. These should include: (1) transacting business in the clerk's office, (2) appearing for jury duty, (3) observing a domestic relations calendar, (4) observing a criminal arraignments calendar, (5) observing a trial or simulating the experience of being a litigant during a trial (e.g., visit the courtroom, sit in the litigation area), (6) accessing facilities for special services such as ADR program offices, child support complaint and payment offices, and bail payment windows, and (7) using general courthouse facilities such as cafeterias, restrooms, attorney-client conference rooms, and public telephone areas. Local service agencies or advocacy associations for individuals with physical disabilities should be contacted to obtain the names of individuals who may be willing to participate in a simulation exercise. At a minimum, two individuals who use wheelchairs and two individuals with a visual impairment should be asked to visit the courthouse. Data Collection. A list of the simulation activities and a copy of Part II of Form 1.3.5 are given to each volunteer. (The information is provided verbally as well.) Each volunteer should attempt each activity and note the results of each simulation on Form 1.3.5. (Volunteers with visual impairments will need assistance in recording the results of the simulations.) Form 1.3.5 allows 15 simulations to be recorded. If more than 15 simulations are conducted, the form should be modified to accommodate the additional simulations. Each simulation should be started from outside the courthouse. Volunteers should record the length of time it takes to conduct each simulation, the ease with which the activities are accomplished, and any specific problems encountered. After the volunteers have completed the simulations, court officials should schedule a meeting with them. During the meeting, the volunteers can compare their experiences with one another and discuss possible improvements for making the court more accessible to individuals with disabilities. Problems encountered in obtaining the resources to make improvements should be described, and both court officials and the volunteers should discuss possible options for overcoming the problems. Court officials may also choose to interview regular users of the court who have disabilities (such as court employees or attorneys) to discuss problems they encounter while working in the courthouse, as well as suggestions they have for improvements. Data Analysis and Report Preparation. Court staff first review the answers to Part I of Form 1.3.5 and consider whether the court has adequately addressed the issue of access for persons with disabilities. To what extent are employees knowledgeable about policies and procedures related to accommodating persons with disabilities? Did the answers to these questions provide a favorable or unfavorable impression of the courthouse's accessibility for all persons? Next, court staff review the results of the simulation exercises. The average rating for "ease of conducting business" for all simulations is calculated. (If 4 volunteers each rated 15 simulations, the average would be based on 60 simulations.) The closer the average rating is to "1," the better the court is performing on this measure. The average length of time needed to conduct each simulation also is calculated. Did some activities take longer than others?
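Tallying the volunteers' forms lends itself to a short script. The sketch below is illustrative only: the record layout, the field names, and the 1-to-5 ease scale (1 = easiest) are assumptions, not part of Form 1.3.5.

```python
# Minimal sketch: summarizing Form 1.3.5 Part II simulation results.
# All field names and the 1-5 ease scale are hypothetical.
from collections import defaultdict
from statistics import mean

simulations = [
    # one record per volunteer per simulation
    {"volunteer": "V1", "activity": "clerk's office", "ease": 2, "minutes": 12},
    {"volunteer": "V2", "activity": "clerk's office", "ease": 4, "minutes": 25},
    {"volunteer": "V1", "activity": "jury duty check-in", "ease": 1, "minutes": 8},
    # ... remaining records
]

# Overall average ease rating (the closer to 1, the better).
print("Average ease rating:", round(mean(s["ease"] for s in simulations), 2))

# Average time per activity, to flag activities that take longer than others.
times_by_activity = defaultdict(list)
for s in simulations:
    times_by_activity[s["activity"]].append(s["minutes"])
for activity, minutes in sorted(times_by_activity.items()):
    print(f"{activity}: {mean(minutes):.1f} minutes on average (n={len(minutes)})")
```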
If some activities took longer than others, what specific problems were encountered? Does an examination of all the simulated activities reveal that some areas of the courthouse are less accessible than others? Using both the general information gathered in Part I and the more specific information gathered from the simulations in Part II, court officials should summarize where the most serious problems exist and develop an action plan (incorporating the volunteers' suggestions, if possible) for alleviating the problems. If court officials have not already done so, they should refer to the Americans with Disabilities Act for suggestions and requirements when developing their plan.

Standard 1.4: Courtesy, Responsiveness, and Respect

Judges and other trial court personnel are courteous and responsive to the public, and accord respect to all with whom they come into contact. Commentary. The intent of Standard 1.4 is to make the justice system more accommodating and less intimidating. A responsive court ensures that judicial officers and other court employees are available to meet both the routine and exceptional needs of those it serves. Requirements of the standard are particularly important in the understanding shown and assistance offered by court personnel to members of minority or disadvantaged groups and to those unfamiliar with the trial court and its procedures. In keeping with the public trust embodied in their positions, judges and other court employees should reflect by their conduct the law's respect for the dignity and value of all individuals who come before, or make inquiries of, the court. No court employee should by words or conduct demonstrate bias or prejudice based on race, religion, ethnicity, gender, sexual orientation, color, age, handicap, or political affiliation. These requirements extend to the manner in which the employees of the court treat each other. Measurement Overview. The three measures for Standard 1.4 determine whether court personnel are courteous, responsive, and respectful to one another and to various members of the public. Measure 1.4.1 uses survey data, Measure 1.4.3 relies on observation data, and Measure 1.4.2 uses both methods. The survey for Measure 1.4.1 asks regular court users and court personnel about their treatment by court personnel in general. The survey respondents for this measure are from the same groups surveyed for Measure 1.2.3 and Measure 1.2.6: attorneys, probation officers, jurors, and court employees. The survey for Measure 1.4.2 is similar to that for Measure 1.4.1 but is directed at observers unfamiliar with the court who are recruited to collect data for one or more of the other measures. After collecting data for the other measures, the observers are asked to complete a questionnaire that summarizes their overall impressions of the courtesy and responsiveness of court employees. Measure 1.4.3 relies on observation data to determine the degree of courtesy and respect shown to litigants during court proceedings. The measure requires observers to watch several court proceedings and record information on interactions among the various parties involved in the proceedings. Because each measure is directed at different groups of court users, all three measures should be undertaken to obtain the best assessment of courtesy, responsiveness, and respect. Although Measures 1.4.1 and 1.4.2 use survey data, they should not be considered interchangeable. Each measure has a different focus and methodological advantage.
Measure 1.4.1 surveys a greater number of people and thus will yield more reliable quantitative results. Because the number of respondents surveyed for Measure 1.4.2 depends on the number of observers collecting data for the court, the number of respondents will be small for most courts. Given the small number of respondents, Measure 1.4.2 offers court officials the opportunity to collect more in-depth qualitative information (e.g., for clarifying problems and obtaining suggestions for improvements) through followup interviews with respondents.

Measure 1.4.1: Court Users' Assessment of Court Personnel's Courtesy and Responsiveness

The courtesy and responsiveness of court personnel are measured through a survey of regular court users, including court employees, attorneys, probation officers, and jurors. Planning/Preparation. Measure 1.2.3, Perceptions of Courthouse Security, and Measure 1.2.6, Evaluation of Accessibility and Convenience by Court Users, also involve surveying regular users of the court. If these measures are also being conducted, the coordinators for the measures may find it efficient to combine the three measures into a single survey instrument. Review Form 1.4.1-1.4.2, Questionnaire for Courteous and Responsive Treatment. It addresses four aspects of courteousness and responsiveness: (1) the courtesy of court employees, (2) the availability of staff to answer questions, (3) the knowledge of court staff, and (4) the willingness of court staff to explain court policies and procedures to the public. The questionnaire also asks respondents to rate the degree of respect with which judges treat the public. Adapt the survey form, as necessary, to local conditions and for each of the four groups receiving the survey. For example, the questionnaire administered to court employees should be modified to ask for employees' perceptions of the public's treatment by judges and court staff. Also, as noted in surveys for other measures (e.g., Measures 1.2.3 and 1.2.6), smaller jurisdictions conducting this measure may want to eliminate some or all of the demographic questions (the background section) included on the survey. Responses to demographic questions could reveal the identity of individual respondents in some categories (e.g., attorneys) if the number of respondents is small. Thus, small courts need to weigh the benefit of more detailed analysis against the possibility of a lowered response rate. Data Collection. The questionnaire is administered following the same procedures described in the data collection section of Measure 1.2.3. Data Analysis and Report Preparation. The percentage of each response for each question is calculated. The greater the percentage of respondents rating the court a "1" or "2" on questions 1 through 12 (courtesy of staff) and questions 14 and 16 (respectfulness of judges), the better the court is performing on this measure. The percentages can also be compared across groups. For example, do jurors (compared to other groups) rate court personnel as more courteous? If so, court officials should talk with staff to determine if any of the other groups present particular problems that need to be addressed. Once a court has conducted this measure, court officials should establish benchmarks for "poor," "adequate," and "good" ratings.
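The percentage tabulation itself is mechanical. The following sketch is a hypothetical illustration, assuming one record per respondent with numeric answers coded 1 (most favorable) through 4; the layout and names are assumptions, not part of the form.

```python
# Minimal sketch: percentage of favorable ("1" or "2") responses per question
# on Form 1.4.1-1.4.2. The data layout is hypothetical.
respondents = [
    {"group": "juror",    "answers": {1: 1, 2: 2, 14: 1}},
    {"group": "attorney", "answers": {1: 3, 2: 2, 14: 2}},
    # ... remaining respondents, with all questions answered
]

questions = sorted({q for r in respondents for q in r["answers"]})
for q in questions:
    answers = [r["answers"][q] for r in respondents if q in r["answers"]]
    favorable = sum(1 for a in answers if a <= 2)
    print(f"Question {q}: {100 * favorable / len(answers):.0f}% rated 1 or 2 (n={len(answers)})")
```

The same loop, restricted to one group at a time, produces the cross-group comparison, and the resulting percentages can be measured against whatever benchmarks the court adopts.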
For example, a "good" rating might mean that at least 98 percent of the respondents agree that they were treated politely, and an "adequate" rating might mean that at least 75 percent of the respondents agree that they were treated politely. The benchmarks may differ for each item and group.

Measure 1.4.2: Observers' Assessment of Court Personnel's Courtesy and Responsiveness

As noted in Measure 1.2.7, several measures in this document require information to be collected by observers unfamiliar with the court. For this measure, the observers are given a questionnaire regarding their treatment by court personnel. The questionnaire is essentially the same as that used for Measure 1.4.1. Planning/Preparation. Review Form 1.4.1-1.4.2, Questionnaire for Courteous and Responsive Treatment. Adapt the survey form, as necessary, to local conditions and for individuals unfamiliar with the court. Because the number of observers is small and because they are likely to return the questionnaire, the survey form may include more questions and ask for more detailed information. Data Collection. The questionnaire is administered following the same procedures described in the data collection section of Measure 1.2.7. The observers should be reminded that it is the behavior they encounter or observe during the simulations that is to be rated, not that of the court officials with whom they work during the evaluation planning or debriefing process. Data Analysis and Report Preparation. The basic analyses are the same as those discussed for Measure 1.4.1. In addition, the responses of the observers can be compared with those of the "regular users" surveyed for Measure 1.4.1. How do the percentages differ? For example, do regular users tend to rate court personnel as more courteous than do the observers? If so, court officials may want to interview several of the observers to determine why they rated court personnel less favorably. Based on the information from the observers, court officials may develop training programs for court staff or plan a series of meetings to discuss general problems encountered when interacting with the public.

Measure 1.4.3: Treatment of Litigants in Court

This measure assesses the dignity and respect with which litigants are treated in court proceedings. Data are collected through observations of court proceedings. Planning/Preparation. A list of judges who will be hearing matters during the next week is obtained. (The list should include court referees, commissioners, and court ministerial personnel who perform quasi-judicial activities involving face-to-face interaction with litigants, such as child support screenings or divorce mediation.) From this list, a sample of 20 judges is selected. If a court has fewer than 20 judges, all of its judges are included in the sample. Courts may choose to inform judicial officers that this observation will be conducted within a given timeframe. During the observation itself, however, the observer should avoid drawing attention to the fact that an observation is being performed. Furthermore, observers should be recruited who will not be readily identifiable by the judicial officers (e.g., an employee of another court or of the State Administrative Office of the Courts). Each judge is observed while hearing three brief matters likely to be attended by litigants. Examples are arraignments, pleas, sentencings (criminal and juvenile), juvenile dependency (abuse/neglect, status offenses), child custody and support matters, and dissolution of marriage hearings.
Consideration also should be given to "quasi-judicial" proceedings such as child support screening or divorce mediation. If such proceedings are conducted privately (that is, not in open court), special arrangements should be made to interview the litigants or to have observers attend as a "relative" of the party. Closed proceedings should not be eliminated simply because observation and measurement pose special problems. Data Collection. Using Form 1.4.3, Recording Form for the Treatment of Litigants in Court, information is recorded regarding the degree of courtesy and individual respect shown to the litigants. The information includes whether the judge looks at and establishes eye contact with the litigants, whether the litigants are referred to by name, and whether the judge is attentive to litigants' and their attorneys' questions. The observer also records general occurrences in the courtroom that undermine the dignity and respect afforded litigants during proceedings. These occurrences include the frequency with which the judge is interrupted or distracted by other activities during the hearing, the frequency with which the judge and court employees appear confused regarding the nature of the case they are considering, and the frequency with which the judge, attorneys, or other courtroom officials exhibit bias against the litigants. Each proceeding should be observed for at least 5 minutes and not more than 30 minutes. (During a busy calendar, it may be possible to observe three litigant-attended hearings within 30 minutes.) Observers should note how much time they spend observing each proceeding. Data Analysis and Report Preparation. Percentages for each response are calculated for all cases. For example, in what percentage of cases did judges establish eye contact with the litigants and refer to the litigants by name? In what percentage of cases did courtroom activities and conversations frequently interrupt hearings? The greater the percentage of cases in which the judge treated the litigant with respect (questions 7 through 10), and the fewer the instances of disruptions and insensitive activities by individuals in the courtroom (questions 11 through 13), the better the court is performing on this measure. Court officials can also review responses to individual questions to learn where improvement may be needed most. For example, if the analyses indicate that judges generally treat litigants respectfully but that activities in the courtrooms tend to disrupt the proceedings, court officials may decide to focus on courtroom behavior. The importance of maintaining courtroom decorum could be reinforced through policy, procedures, and/or training. Court officials could also ask observers for their suggestions once the data collection phase is concluded.

Standard 1.5: Affordable Costs of Access

The costs of access to the trial court's proceedings and records--whether measured in terms of money, time, or the procedures that must be followed--are reasonable, fair, and affordable. Commentary. Litigants and others who use the services of the trial court (e.g., nonlitigants who require records kept by the courts) face three main financial barriers to effective access to the trial court: court fees, third-party expenses (e.g., deposition costs and expert witness fees), and lawyer fees.
Standard 1.5 requires that the trial court minimize its own fees for access to and participation in its proceedings and, where possible, scale its procedures and those of others under its influence or control to the reasonable requirements of matters before the court. Means to achieve this include the simplification of procedures and reduction of paperwork in uncontested matters, the use of volunteer lawyers to do pro bono work, simplified pretrial procedures, fair control of pretrial discovery, and establishment of appropriate alternatives for resolving disputes (e.g., referral services for cases that may be resolved by mediation, court-annexed arbitration, early neutral evaluation, tentative ruling procedures, or special settlement conferences). Although a trial court most readily controls its own fees, it can also reduce the overall cost of litigation by, for example, conducting telephone conferences in lieu of in-person conferences and by making it easier for citizens to handle uncontested matters (e.g., name changes, stepparent adoptions, or uncontested divorces) without legal representation. As a general rule, simple disputes should be resolved at low cost and by uncomplicated procedures. Procedural accessibility should be enhanced by clear, concise, and understandable language in instructing the parties, witnesses, and jurors about rights, responsibilities, necessary forms, hearings, and court facilities and resources. Trial courts possess the record of their own public proceedings as well as important documents generated by others (e.g., police records and laboratory analyses of evidence). These records must be available to individuals who are authorized to receive them. Standard 1.5 requires that the court maintain a reasonable balance between its actual costs in providing documents or information and what it charges users. Measurement Overview. Three measures are suggested for determining the affordability of court documents and proceedings. The measures use a variety of data collection methods, including observations, simulations, review of documents, interviews, and surveys. Measure 1.5.1 relies on a team of experts to document the court's efforts to assure affordable access. The experts gather information about the court's efforts through observations, review of documents, and interviews. Measure 1.5.2 gathers information on the ease of access to legal services for financially disadvantaged individuals. Data collectors simulate attempts by individuals with low incomes to obtain affordable legal assistance for routine legal problems. Measure 1.5.3 examines the relationship between the demand for legal services and the actual delivery of legal services. This measure is conducted in conjunction with Measure 5.1.3, General Public's Perceptions of Court Performance. Information on the reasons individuals do not access the court is gathered through a survey questionnaire. The measures for this standard are complementary. Each measure uses a different method to obtain information on the court's performance with regard to reasonable and affordable access. Taken together, the data from the three measures will give the court the best "picture" of its performance. Measure 1.5.1 provides general information on the court's efforts to ensure litigant access to affordable legal services. If a court does not have the resources to conduct all three measures, it should start with this measure.
Measure 1.5.2 provides more detailed information on the ease of accessing legal services for specific cases. Measure 1.5.3 complements these two measures by considering the issue of affordability from a broader perspective. It provides information on the general public's perception of affordable access by examining how many people avoid using the courts and why. This measure is conducted in conjunction with the telephone survey of the general public conducted for Performance Area 5, Public Trust and Confidence. Some courts may not have the resources to undertake this measure or may decide not to do so.

Measure 1.5.1: Inventory of Assistance Alternatives for the Financially Disadvantaged

This measure examines activities the court engages in to facilitate affordable access to the judicial system. A team of practitioners who work for and with the court collects information on these activities. The measure uses a variety of data collection methods, including observation, review of documents, and interviews. Planning/Preparation. A team of three individuals should be selected to collect the data. (Although a team is not essential to the measurement approach, it has the advantage of ensuring that a variety of perspectives and attitudes are taken into consideration during the evaluation.) An excellent team would include a practicing attorney, preferably with a legal services orientation; a court official; and a member of a community social service agency, all of whom are aware of the routine legal needs of financially disadvantaged individuals. Review Form 1.5.1, A Checklist of Court Activities To Promote Affordable Access to Justice. It asks for information on court policies, informational brochures, legal services, and activities that help ensure affordable access to the justice system. The form may be modified to increase its relevance to local jurisdictions. Data Collection. The data are gathered by observations, document reviews, and interviews. Data collectors should keep a record of where or from whom they obtained information for each item on the data collection form. They should also obtain samples of brochures, forms, instructional packages, and so forth that they used in completing Form 1.5.1. Data Analysis and Report Preparation. The data are analyzed in two steps. First, each member of the data collection team summarizes the results of his or her individual data collection effort by summing the number of "yes" responses for Parts I, II, III, and V. The score for Part IV is obtained by summing the number of points across all five categories for each type of legal proceeding. (The highest possible score is 90 points: 2 points for each of the five categories for each of the nine legal proceedings.) During the second step, the members of the team meet to discuss their individual findings, consider the court's performance on the measure, and, if necessary, craft an action plan for improving performance. The team begins its discussion by comparing individual scores on the checklist. What patterns emerge? Does the court perform better in some areas than in others? Is there general agreement among the team members, or are some areas more problematic for one or two team members? Team members should consult their data collection notes (i.e., where and from whom information was obtained) to determine the reasons for different evaluations. Finally, they should consider what can be done to alleviate identified problems.
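Scoring each member's checklist (the first step above) is simple arithmetic; a minimal sketch follows. The part labels mirror the description above, but the individual items and point values shown are hypothetical.

```python
# Minimal sketch: scoring one team member's Form 1.5.1 checklist.
# Items shown are placeholders; the real form defines them.
yes_no_parts = {
    "Part I":   [True, False, True],
    "Part II":  [True, True],
    "Part III": [False, True],
    "Part V":   [True, False],
}
# Part IV: 0-2 points in each of five categories for each of nine proceedings.
part_iv = {
    "uncontested divorce": [2, 1, 0, 2, 1],
    "small claims":        [1, 1, 2, 0, 0],
    # ... the remaining seven proceedings
}

for part, items in yes_no_parts.items():
    print(f"{part}: {sum(items)} 'yes' responses out of {len(items)}")

score = sum(sum(points) for points in part_iv.values())
print(f"Part IV: {score} points (maximum 90 when all nine proceedings are scored)")
```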
Following the discussion, the team should prepare a report for court officials that details areas in which the court is performing well and areas in which problems exist. For example, is the court strong in providing basic information on affordable access but weak in engaging in activities that ensure affordability? Does the court have policies and procedures regarding affordable access to justice? If so, are these policies and procedures followed? The report should also outline the team's suggestions for improving particular areas and for making the court's assistance in this area more visible to those who might need it.

Measure 1.5.2: Access to Affordable Civil Legal Assistance

This measure simulates attempts by indigent and low-income persons with routine legal problems to obtain affordable legal assistance. The data are collected by individuals who have not previously obtained legal assistance. The measure complements Measure 1.5.1, Inventory of Assistance Alternatives for the Financially Disadvantaged, and also provides information related to Standard 1.3, Effective Participation, and Standard 1.2, Safety, Accessibility, and Convenience. Planning/Preparation. At least six scenarios in which individuals with a limited income attempt to obtain legal assistance should be developed. The scenarios should be developed in consultation with professionals (e.g., attorneys) who routinely work with financially disadvantaged individuals and should be based on legal problems commonly faced by individuals with a limited income. Each scenario should include the name of the individual seeking help; his or her address and telephone number; demographic information such as race, gender, and income; personal information such as marital status and number of children; and the hypothetical reason for seeking legal help. As much as possible, each scenario should represent typical cases of low-income individuals in the community. The scenarios should also represent individuals from different geographic areas of the community, including rural areas if appropriate. Officials of agencies not connected with the court should be notified beforehand of the purpose and nature of the simulations. Notification should take place at least 15 days before the simulations are conducted. If any agency objects to the simulation, court officials should request a meeting to determine whether the measure could be modified to alleviate the agency's objections. If modification is not possible, the agency should be excluded from the simulation exercise. Data Collection. The simulations should be spaced across several days and conducted at different times of the day to reduce the risk of detection by personnel from the court or other legal/social services agencies. If possible, the simulations should be conducted by different individuals. Those conducting the simulations should memorize the individual scenarios and should dress and behave in a manner consistent with each scenario. Three simulations are conducted by telephone, and three are conducted in person. For each, the data collector should complete Form 1.5.2, Access to Affordable Civil Legal Assistance. The form is divided into two sections: telephone simulations and in-person simulations. Data collectors (those conducting the simulations) should not be given specific directions for obtaining information on legal assistance.
Each data collector is given a scenario and the name of the court and is asked to obtain information on legal assistance for the person in the scenario. For telephone simulations, the data collector begins by obtaining a phone number for the court. The data collector calls the court and requests information for obtaining legal assistance for the reason stated in the scenario. The data collector records information on each person with whom he or she speaks until the data collector has obtained the relevant information for accessing legal help (e.g., the kind of help available, how much it will cost, and the procedure for accessing the help). For simulations conducted in person, the data collector begins by trying to obtain public transportation (e.g., bus, subway, taxi) from the neighborhood noted in the scenario to the court. If public transportation is not available, the data collector should note that fact on the data collection form and drive to the court in a private car. The remainder of the simulation is identical to the telephone simulation, except that the data collector actually visits the offices or agencies to which he or she is referred. The simulation ends when the data collector understands the procedures for obtaining legal assistance; the data collector does not need to actually obtain the legal assistance. Data Analysis and Report Preparation. A report with basic statistical information should be prepared covering topics such as the number of referrals necessary for obtaining the information in each scenario and the length of time required to obtain the information for each scenario (sum the number of minutes required to find each phone number or office and the number of minutes spent during each conversation). A content analysis of responses to questions in Parts 1C and 2C should also be performed. For example, what types of problems are mentioned most frequently as obstacles in trying to obtain information on legal assistance? This summary information should be used to focus a discussion, among court officials and representatives of each agency involved in the simulation, on improving access to affordable legal assistance. The discussion should explore the reasons certain problems were encountered and suggestions for alleviating the problems. For example, if public transportation is not available from some neighborhoods to the court, court officials could decide to speak with representatives of the various public transportation companies to see if service could be expanded to those neighborhoods. If the data collectors frequently encountered rude or indifferent employees, court officials and agency representatives might suggest implementing a training program that stresses the special needs of financially disadvantaged individuals. Based on the discussions, an action plan should be developed for improving access to affordable legal assistance.

Measure 1.5.3: Barriers to Accessing Needed Court Services

This measure determines the degree to which access to court services is hindered by the cost or complexity of procedures. The measure provides information on the latent demand for court services, i.e., the number of people who need court services but, for a variety of reasons, do not access them. Data are collected in conjunction with the telephone survey in Measure 5.1.3, General Public's Perceptions of Court Performance. Planning/Preparation. Review Form 5.1.3 regarding the general public's perceptions of court performance.
Items 19 through 21 were added to the survey to obtain data for Measure 1.5.3. The items ask respondents (1) whether they have ever wanted to go to court but did not, (2) what type of case they had, and (3) what prevented them from going to court. In addition, the first two questions on the survey form inquire about respondents' previous experience with the court. These items should be reviewed and modified as necessary to incorporate terminology used by the local jurisdiction. Data Collection. The data are collected as part of the telephone survey for Measure 5.1.3. Review the description of that measure for details on the procedure. Data Analysis and Report Preparation. The data are analyzed as set forth in the description of Measure 5.1.3. The contractor conducting the telephone survey will provide the percentages of each response for each question. The higher the number of individuals who wanted to access the court but did not, the poorer the court is performing on this measure. What types of cases are most often not pursued, and what reasons are most often given for not pursuing a court case? Do the responses vary between those who have had prior experience with courts and those who have not? Responses should also be analyzed by different subgroups of the interview sample to determine if nonwhites, females, or individuals with low incomes perceive courts to be less accessible than their counterparts do. Court officials should examine the data to determine what improvements need to be made. Educational programs may be needed to correct misperceptions held by different groups.

End Notes

1. In general, the reliability of the measure's results increases with the size of the sample. During the demonstration, several courts increased the number of proceedings they investigated by sampling over an extended timeframe or asking volunteers to observe more than one proceeding.

2. A trial test of the measure using calendars from one court, for example, did not include any Monday calendars. Because of this, virtually all of the court's criminal and civil motions and other short matters, including sentencing, child support, and so forth, were excluded from the sample.

3. If the court's calendar tends to change frequently, court staff may prefer to wait until the morning of the scheduled observations before selecting the proceedings.

4. Press-Enterprise Co. v. Superior Court, 464 U.S. 501, 104 S. Ct. 819, 78 L. Ed. 2d 629 (1984).

5. Press-Enterprise Co. v. Superior Court, 478 U.S. 1, 106 S. Ct. 2735, 92 L. Ed. 2d 1 (1986).

6. This "subsample" of court proceedings should include both trials and shorter matters that are part of a busy calendar session. If the sample is too uniform (e.g., the sample consists only of trials), a stratified sample should be drawn.

7. S. S. Johnson and P. Yerawadekar, "Courthouse Security," Court Management Journal 3 (1981): 8-12.

8. National Sheriffs' Association, Court Security: A Manual of Guidelines and Procedures (Grant No. 77-DF-99-0023) (Washington, DC: Law Enforcement Assistance Administration, 1978).

9. The National Crime Survey is available from the U.S. Department of Justice, Bureau of Justice Statistics, Washington, DC.

10. Probation officers should not be considered a separate group if they are identified as court employees and are included on lists of court employees maintained by the court's personnel office.

11. If there are fewer than 80 individuals in one or more of the four groups (court employees, attorneys, probation officers, and jurors), all of the individuals in those groups should be surveyed.

12. In general, the reliability of statistical analyses increases as the size of the sample increases. Therefore, court officials should consider increasing the sample size if the court has the resources to do so. Increasing the sample will help ensure that analyses performed on subgroups of the sample (e.g., only the court employees or the jurors) yield reliable results.

13. If the court is housed in several buildings, the sample should be stratified to include a few individuals from each building.

14. U.S. Department of Health and Human Services, Final Report on the Validation and Effectiveness Study of Legal Representation Through Guardian Ad Litem (Washington, DC, 1994).

15. Two other primary responsibilities recommended in the literature are mediation among the parties to facilitate cooperative resolutions and identification of community resources and services for the child. See American Bar Association Center on Children and the Law, Standards of Practice for Lawyers Who Represent Children in Abuse and Neglect Cases (Washington, DC, 1996); National CASA Association, "Quality GAL Representation: What Every Child Deserves," The Connection 8 (1) (1992); and D. N. Duquette, Advocating for the Child in Protection Proceedings: A Handbook for Lawyers and Court Appointed Special Advocates (Lexington, MA: Lexington Books, 1990).

16. Court staff may wish to consult these sources before modifying the data collection forms. See the literature cited in endnotes 14 and 15, as well as the following State guidelines on GAL representation, as cited in U.S. Department of Health and Human Services, Final Report on the Validation and Effectiveness Study of Legal Representation Through Guardian Ad Litem 2-17 and 2-18, 1994: Colorado State Bar Guardian ad litem Committee of the Justices of the Superior Court, "Colorado Guardian ad litem Mission Statement," October 1992; "New Hampshire Guidelines for Guardians ad litem"; New York State Bar Association's Committee on Juvenile Justice and Child Welfare, "New York Law Guardian Representation Standards in Child Protective Proceedings" (Washington, DC, 1994).

17. For purposes of this measure, interpreter services include both interpretation for physically impaired individuals (e.g., deaf and hearing impaired) and for language-handicapped individuals (those who do not understand English and cannot communicate well in the court system).

18. "Certification" is a status conferred on interpreters for the deaf by the National Registry of Interpreters for the Deaf or by an equivalent State organization. For foreign language interpreters, only the Federal courts and some State court systems certify interpreters after rigorous testing. Certification should not be confused with "approval" processes granted by private interpreter firms, which may indicate only that a person has received some basic orientation to court interpreting.

19. W. Hewitt, Court Interpretation: Model Guides for Policy and Practice in the State Courts (Williamsburg, VA: National Center for State Courts, 1995).

20. For example, the National Association of Judiciary Interpreters and Translators, 531 Main Street, Suite 1603, New York, NY 10004 (212-759-4457), and the Registry of Interpreters for the Deaf, 8719 Colesville Road, Suite 310, Silver Spring, MD 20910 (301-608-0050).
21. Available for $65 from Carolina Academic Press, 700 Kent Street, Durham, NC 27701 (919-489-7486).

22. See the National Center for State Courts, The Americans with Disabilities Act: Title II Self-Evaluation (Williamsburg, VA: National Center for State Courts, 1992).

------------------------------

Performance Area 2: Expedition and Timeliness

Courts are entrusted with many duties and responsibilities that affect individuals and organizations involved with the judicial system, including litigants, jurors, attorneys, witnesses, criminal justice agencies, social service agencies, and members of the public. Untimely court action in any of these areas can have serious consequences for the persons directly concerned, the court, allied agencies, and the community at large. A trial court should meet its responsibilities to everyone affected by its actions and activities in a timely and expeditious manner--one that does not cause delay. Unnecessary delay causes injustice and hardship. It is a primary cause of diminished public trust and confidence in the court. Defining delay requires distinguishing between the amount of time that is and is not acceptable for case processing. National and statewide authorities have articulated time standards for case disposition. These standards call for case processing time to be measured beginning with arrest or issuance of a summons in a criminal case, or from the date of filing in a civil case. Overview of Standards. The three standards in this performance area draw attention not only to the prompt resolution of cases, a requirement expressed by Standard 2.1, Case Processing, but also to the expectation that all trial court functions will be expeditiously performed, a requirement of Standard 2.2, Compliance With Schedules. Standard 2.3, Prompt Implementation of Law and Procedure, emphasizes the importance of expedition and timeliness in anticipating, adapting to, and implementing mandated changes in law and procedure. Overview of Measures. The 10 measures for this area's three standards assess how promptly the court processes cases, files required reports, and implements new legal and procedural changes. Because of the diversity of activity examined under the three standards, a wide range of measurement techniques is employed. Yet, in many cases, data collection can be coordinated with other measures, and many of the measures associated with Standard 2.1 will be familiar to judges and court managers. Information from individual case files or automated records is required to complete most measures for Standard 2.1. For example, calculating the time to disposition and the age of pending cases requires access to case status and the dates of key events. Information on the number of times a case was set for trial is needed to determine the certainty of trial dates in the progression of cases through the system. To measure compliance with Standard 2.2, a variety of records maintained by the court are compared with recognized filing requirements. The records must also show a consistent pattern of completeness to satisfy these measures. Financial records, records of court-initiated services (e.g., court-appointed counsel, interpreters), and required statistical reports are considered. Recognizing that not all information flows through written channels, an information request simulation provides an opportunity for the court to assess how quickly and accurately it responds to in-person information requests from the public.
The court must not only dispense information promptly when it is requested, it must also promptly conform its operations to meet new requirements of law or procedure. Two measures for Standard 2.3 provide opportunities for reviewing records or interviewing individuals affected by these changes in order to assess the court's pattern of adopting changes based on new requirements.

Standard 2.1: Case Processing

The trial court establishes and complies with recognized guidelines for timely case processing while, at the same time, keeping current with its incoming caseload. Commentary. The American Bar Association, the Conference of Chief Justices, and the Conference of State Court Administrators have urged the adoption of time standards for expeditious caseflow management. Timely disposition is defined in terms of the elapsed time a case requires for consideration by a court, including the time reasonably required for pleadings, discovery, and other court events. Any time beyond that necessary to prepare and conclude a case constitutes delay. The requirement of timely case processing applies to trial, pretrial, and posttrial events. The court must control the time from civil case filing or criminal arrest to trial or other final disposition. Early and continuous control establishes judicial responsibility for timely disposition, identifies cases that can be settled, eliminates delay, and ensures that matters will be heard when scheduled. Court control of the trial itself will reduce delay and inconvenience to the parties, witnesses, and jurors. During and following a trial, the court must make decisions in a timely manner. Finally, ancillary and postjudgment or postdecree matters need to be handled expeditiously to minimize uncertainty and inconvenience. In addition to requiring courts to comply with nationally recognized guidelines for timely case processing, Standard 2.1 urges courts to manage their caseloads to avoid backlog. This may be accomplished, for example, by terminating inactive cases and resolving as many cases as are filed. Measurement Overview. Four measures are associated with Standard 2.1. These measures require using court records and management information to determine the court's compliance with case processing time standards and whether it is keeping up with its incoming caseload. The degree to which needed information is retrievable will affect the time, personnel, and financial commitments required to complete the evaluations. Some of the measures may be undertaken by court staff; others may require the aid of an outside department or agency to assist with analysis of the data and interpretation of the results. Measure 2.1.1 evaluates timely case processing from case filing to disposition. Based on a large sample of cases, processing times are calculated by measuring the time between filing and disposition for each case. By comparing its own processing times with recommended standards, the court examines how closely it approximates the standards. Measure 2.1.2 assesses how well a court is keeping up with incoming cases. Failure to keep up with the incoming caseload increases the pending caseload. An examination of the court's clearance rates (the ratio of disposed to filed cases) over several years will identify trends in reducing or increasing the pending caseload. Measure 2.1.3 looks at all cases awaiting disposition and determines what percentage of those cases represents a backlog. Pending cases are ranked by age and compared to case processing time standards.
The percentage of cases exceeding the standards indicates the size of the court's backlog. Measure 2.1.4 evaluates the extent to which cases are heard when scheduled. Based on court records that indicate the number of trial settings, patterns of continuances in the court can be determined. All measures should be used to obtain the most complete picture of how well a court performs with respect to the timeliness of its case processing activities. However, if available time and resources do not permit use of all measures, Measures 2.1.1 and 2.1.2 should be given priority. If the court is in compliance with local or State disposition time standards and there is no evidence of an emerging backlog, court staff might choose to omit Measures 2.1.3 and 2.1.4.

Measure 2.1.1: Time to Disposition

This measure provides information regarding the time it takes to process cases. It compares the court's processing times to local, State, or national standards and evaluates the degree of compliance with those standards. The court's case processing time is calculated from information collected on a random sample of cases disposed of during the preceding year. Planning/Preparation. This measure requires careful coordination and supervision. The investment in time and money required for completion depends to a large extent on the court's recordkeeping system. Courts with automated systems may be able to provide much of the necessary data from computer printouts. Courts with manual recordkeeping systems may need to hire, train, and supervise individuals to collect data from case files. In either case, data need to be gathered and analyzed. The first task is to identify general case categories. At a minimum, the court should measure felony and general civil case dispositions. Misdemeanor, domestic relations, juvenile, or other specialized case types may also be measured using the same methodology. However, because these types of cases may fall within the jurisdiction of limited or special jurisdiction courts, they are not referred to specifically in this discussion. A felony case is one in which a formal indictment, information, or accusation is filed against a defendant on any charge (or charges) defined as a felony by State law. Count all charges in one indictment against one defendant as one case. Count a case charged as a felony in the indictment or information as a felony case for sampling purposes, even if the defendant is convicted of a misdemeanor. Do not count a probation violation alone as a felony case. A civil case is any action under civil law other than probate, domestic relations, and small claims. Other cases that should be excluded from the civil case sample include appeals from lower courts or administrative agencies, petitions for amendment of orders or decrees, and any case type that is nonlitigious in nature (e.g., name changes, registration of foreign judgments, and transcripts of judgments). The second task is to compile a list of all cases of each type to be examined that were disposed of in the prior reporting period. (This measure is designed to correspond to the court's yearly reporting cycle. In many cases this will be a calendar year, but some courts operate on a July 1 to June 30 reporting cycle.) The cases should be identified by docket number and, if possible, by case caption.
Disposition in felony cases is defined as the date on which a diversion, judgment of guilt (guilty plea entered or verdict) or acquittal, nolle prosequi, or dismissal of the case is entered regarding all (or the last of) the charges against the defendant. For cases in which adjudication is formally withheld in anticipation of dismissal (a type of diversion), the date on which adjudication is formally withheld (the beginning of the diversion period) should be considered the disposition date. Ideally, data collectors should subtract the amount of time a defendant was unavailable because of a failure to appear that resulted in the issuance of a bench warrant or capias; that is, subtract the time from issuance of the bench warrant to the defendant's subsequent rearrest. In civil cases not concluded by trial, a case is disposed of when a final order is entered from a default or summary judgment, entry of settlement, voluntary dismissal, or dismissal for lack of prosecution. In cases concluded by trial, the date the verdict or judgment was entered can be considered the disposition date. If a trial verdict is appealed and remanded to the trial court, the case should be considered "reopened" for purposes of determining case processing time (i.e., count from the date of the remand to disposition). The following types of dispositions should be excluded: transfer or removal to another jurisdiction, interlocutory appeal, and a stay (e.g., pending bankruptcy). The next step is to select the samples of cases. If the court's automated information system can identify the case types targeted for examination and can produce a list of random numbers, docket numbers can be selected electronically. If the automated system does not have this capability or the system is manual, an interval sample (e.g., every fifth case) must be selected manually. To determine sample size, the following guide for each case type (civil, criminal, etc.) should be used:

Total Dispositions        Minimum
for the Year              Sample Size
 1,000                       280
 2,000                       325
 3,000                       345
 5,000                       360
10,000                       380

These sample sizes should provide a sampling error of ±5 percent in 95 percent of all samples.[1] Expect to reject some sampled cases because they are not the targeted case or disposition types or because key data are missing. Thus, the initial sample should include about 10 percent more cases than the required minimum sample size. After the samples have been drawn, prepare the data collection forms. Forms 2.1.1a and 2.1.1c are generic data collection forms for civil and criminal cases, respectively. Forms 2.1.1b and 2.1.1d are sample civil and criminal case code sheets. Items on these forms may require modification to reflect the terminology used in the jurisdiction (e.g., felony entries referring to "information or indictment" may need to be changed to "accusation or true bill"). The generic data collection forms capture the basic information needed to identify cases and to calculate overall case disposition time as well as time periods for intermediate case processing events. Items with an asterisk are those required to calculate the time from case filing to disposition. Additional data elements on the forms will give the court a more refined picture of its case processing situation. To examine other factors influencing timeliness in case processing, additional data elements can be added to the forms (e.g., the number of plaintiffs or defendants, the criminal defendant's custody status, and the number of days in trial).
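Where the court's automated system can export a list of docket numbers for the targeted case types, the sample can be drawn with a few lines of code. The sketch below is illustrative (the docket numbers are invented) and shows both the random and the interval approaches described above, with the 10-percent oversample included.

```python
# Minimal sketch: drawing a disposition sample for Measure 2.1.1.
# Docket numbers are invented; substitute the court's own export.
import random

dockets = [f"CV-96-{n:05d}" for n in range(1, 3001)]  # 3,000 civil dispositions

minimum = 345                  # from the sample-size table above for 3,000 dispositions
target = int(minimum * 1.10)   # oversample about 10 percent for rejected cases

# Random sample, where a list of random numbers can be produced:
random_sample = random.sample(dockets, target)

# Interval sample, for manual systems: every kth case from a random start.
k = len(dockets) // target
interval_sample = dockets[random.randrange(k)::k]

print(len(random_sample), len(interval_sample))  # each at least the target size
```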
Before data collection begins, prepare a coding manual to guide data collectors and ensure that data recording is consistent among cases and coders. For each item on the coding sheet, the manual should describe what information is to be collected, where it can be found in the data source (e.g., computer printout, case file, docket sheet), and how it is to be recorded. Review this information with the data collectors before data collection begins. Data Collection. During this step, data collectors record the appropriate case information on the data collection forms. If the data collectors use a computer printout with all the necessary data, this process may average as little as 3 or 4 minutes per case. If manual case files must be retrieved and reviewed to acquire the necessary information, data collection may average as much as 15 minutes per case. Data Analysis and Report Preparation. After gathering the data, compute the number of days from case filing (or arrest) to disposition. (The most commonly used statistical software can automatically calculate the number of days between two dates.) Summarize the results by the number and percentage of cases disposed of within the specified timeframes. Compare these results with local or State case processing time standards. If the court has not adopted time standards, or if the standards are ambiguous, compare the court's case processing time data with the time standards adopted by the American Bar Association (ABA) or by the Conference of State Court Administrators (COSCA) and the Conference of Chief Justices (CCJ), which are presented in figure 1. For example, the ABA's standards stipulate how long it should take for the 90th, 98th, and 100th percentile cases to be resolved. Consequently, they provide a convenient way to evaluate court performance. The higher the percentage of cases in compliance with the standards, the better the court's performance on this measure.

Measure 2.1.2: Ratio of Case Dispositions to Case Filings

A court must regularly monitor whether it is keeping up with its incoming caseload. A key indicator of court performance on this issue is the disposition or clearance ratio: the number of cases disposed of in a given year divided by the number of filings in the same year for identifiable case types. Courts should aspire to dispose of at least as many cases as are filed each year (i.e., to maintain a clearance ratio of 1.0 or higher). If the court is disposing of fewer cases than are filed each year, a growing backlog is inevitable. Knowledge of clearance ratios for various case categories over a period of 3 to 5 years can help pinpoint emerging problems and identify where improvements must be made. Planning/Preparation. This measure requires information on the number of cases filed and disposed of each year. It is most valuable to courts if data are available for particular case types for at least 5 years. Data Collection. The data required for this measure should be available from the clerk's office or the court manager's records. Data Analysis and Report Preparation. For each case type, divide the number of cases disposed of by the number of cases filed. The resulting ratios represent the court's annual clearance rates for those case types. (Form 2.1.2, Ratio of Dispositions to Filings Worksheet, can be used as a guide for calculating the ratios.) Perform the same calculation for the court's total caseload.
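Both calculations reduce to a few lines once the data are in hand. The sketch below, with invented dates and counts, computes elapsed days per case for Measure 2.1.1 and clearance rates for Measure 2.1.2.

```python
# Minimal sketch: elapsed time (Measure 2.1.1) and clearance rates
# (Measure 2.1.2). All dates and counts are invented.
from datetime import date

# Measure 2.1.1: days from filing (or arrest) to disposition.
cases = [
    {"docket": "CR-96-00017", "filed": date(1996, 2, 14), "disposed": date(1996, 9, 3)},
    {"docket": "CR-96-00231", "filed": date(1996, 1, 8),  "disposed": date(1996, 5, 20)},
    # ... remaining sampled cases
]
days = [(c["disposed"] - c["filed"]).days for c in cases]
within_180 = sum(1 for d in days if d <= 180)
print(f"{100 * within_180 / len(days):.0f}% of sampled cases disposed of within 180 days")

# Measure 2.1.2: clearance rate = dispositions divided by filings.
filings      = {"felony": 1200, "civil": 3400}
dispositions = {"felony": 1150, "civil": 3500}
for case_type in filings:
    print(f"{case_type}: clearance rate {dispositions[case_type] / filings[case_type]:.2f}")
```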
Display the data in a graph showing the clearance rates for both individual case types and the court's total caseload over a 5-year period (see Form 2.1.2). If a court is keeping up with its incoming caseload, all the ratios on the graph will be close to 1.0. A court that is not keeping up with its incoming caseload will plot values less than 1.0, indicating that a backlog is developing or that an existing backlog is increasing. A consistent trend of 1:1 ratios between case dispositions and case filings is evidence that a court is keeping pace with its incoming caseload. A court that is not performing well on Measure 2.1.2, as evidenced by clearance ratios well below 1.0, should examine the size and characteristics of its pending caseloads. Measure 2.1.3, Age of Pending Caseload, offers a workable procedure to address that issue.

Measure 2.1.3: Age of Pending Caseload

This measure is designed to evaluate the age of cases awaiting disposition in order to establish whether a backlog exists and, if so, to determine its magnitude. Planning/Preparation. Court personnel should first identify the best source of information on the total number of cases pending by designated case type (e.g., docket sheets, case files), as well as the means for determining the filing date of each case so that the age of particular cases can be calculated. The degree to which case type data are kept by the court will determine the number of categories to be measured (e.g., some courts may track only general civil data while others may track specific categories such as tort, contract, and property). Data Collection. The first task is to compile a list of all pending cases for each case type to be measured. This list should include, at a minimum, the case number and the filing date. Next, arrange the cases according to their filing dates, beginning with the oldest pending case. This arrangement will permit the determination of how many cases fall within specified age categories (e.g., the number of civil cases pending 360 days or more, the number of cases pending 180 days or more). Form 2.1.3, Display Tables: Age of Pending Caseload, can be used as a guide to create tables showing the age of cases in 60-day intervals for civil cases and 30-day intervals for criminal cases. Most courts with automated case records can obtain the necessary data with the help of a programmer. Courts with only manual case records have found data collection to be difficult. A court that has a large number of pending cases and inadequate case record automation might select a sample of pending cases for purposes of this analysis (see the planning/preparation section for Measure 2.1.1). Data Analysis and Report Preparation. First, determine the existence and magnitude of a backlog (defined here as the percentage of pending cases that exceed the maximum disposition time goal for the case type). Divide the number of pending cases older than a time standard by the total number of pending cases in that case type: the larger the percentage, the larger the backlog. If the court has not adopted time standards, nationally recognized disposition time standards can be used to determine the maximum allowable time for processing cases (see the data analysis and report preparation section for Measure 2.1.1). Because complex cases might require more time than suggested by these or State disposition time standards, judges should be given the opportunity to explain why some cases exceed the standards.
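A minimal sketch of the backlog calculation, assuming an invented 12-month standard and invented filing dates:

```python
# Minimal sketch: backlog as the share of pending cases older than the
# disposition time standard (Measure 2.1.3). Values are invented.
from datetime import date

standard_days = 365                 # e.g., a 12-month civil disposition standard
as_of = date(1996, 12, 31)          # date the pending list was compiled

pending_filed = [date(1995, 3, 2), date(1996, 7, 19), date(1994, 11, 5)]
ages = sorted(((as_of - filed).days for filed in pending_filed), reverse=True)

backlog = sum(1 for age in ages if age > standard_days)
print(f"{backlog} of {len(ages)} pending cases ({100 * backlog / len(ages):.0f}%) "
      f"exceed the {standard_days}-day standard")
```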
Measure 2.1.4: Certainty of Trial Dates

This measure evaluates the frequency with which cases scheduled for trial are heard when scheduled. Research has shown that a higher proportion of jury trials that start on the first scheduled trial date is correlated with a more expeditious pace of litigation.[2]

Planning/Preparation. Through interviews with the court manager, gather information on trial settings in individual cases. The most convenient and accurate source for collecting data on the number of times specific cases have been set for trial will vary from court to court (e.g., docket sheets, case summary screens in automated systems, case control cards, case files). Jury trials are of particular interest because they require a greater expenditure of resources and impose a greater burden on local citizens (jurors) than do bench trials. Evaluating the degree of jury trial date certainty, therefore, should be given a somewhat higher priority. Ideally, however, the court should evaluate trial date certainty for both bench and jury trials. A bench trial is defined as a hearing at which the parties contest the facts in the case and present evidence before a judge in open court and at which the judge renders a decision that disposes of the case. (Note: A hearing on a motion for summary judgment is not a bench trial because the parties agree on the facts; the appropriate application or interpretation of the law is the only issue at a summary judgment hearing. Nor should a default or show cause hearing be counted as a bench trial.)

Data Collection. All cases disposed of during or at the conclusion of a bench or jury trial for each case category during the previous year should be identified through automated or manual case records. If automated case records cannot identify bench or jury trial verdicts, the jury commissioner and courtroom clerks might retain records that could help identify trial cases. If current records allow you to identify only cases that started trial or only cases in which a verdict was entered, your list will still be sufficient for determining trial date certainty.

Sampling: Select separate samples of bench and jury trials. For each type of trial, if there were fewer than 100, obtain data on all trial cases. If the number of trials substantially exceeds 100, randomly sample at least 100 cases or 25 percent of all trials, whichever number is larger. (See also the planning/preparation section for Measure 2.1.1, Time to Disposition, which includes a table for determining sample size.) An interval sample (e.g., selecting every third case) can also be used. Most courts, therefore, will need to collect data on only 100 or fewer jury trials and 100 or fewer bench trials for civil cases, and about the same numbers of bench and jury trials in criminal cases (or whatever case types you examine). Page 1 of Form 2.1.4a, Civil Jury Trial Settings Data Collection Form, can be used to collect data on civil jury or bench trial date certainty. The form can be modified to collect data on any type of trial for civil or criminal cases (simply change the title of the form; for criminal cases, change item (B) to "Defendant Name"). To simplify data collection, "Number of Trial Settings" could be added as a data item to the form.
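Where the number of trials substantially exceeds 100, the random or interval selection just described can be scripted. The following is a minimal sketch in Python; the list of case numbers is hypothetical.

    import random

    # Hypothetical list of last year's civil jury trials.
    trials = [f"CV-{n:04d}" for n in range(1, 481)]   # 480 trials

    if len(trials) < 100:
        sample = trials                     # fewer than 100: take all trials
    else:
        # At least 100 cases or 25 percent of all trials, whichever is larger.
        size = max(100, round(0.25 * len(trials)))
        sample = random.sample(trials, size)

    # An interval sample (e.g., every third case) is an acceptable alternative.
    interval_sample = trials[2::3]

    print(f"random sample: {len(sample)} cases; "
          f"interval sample: {len(interval_sample)} cases")

Data Analysis and Report Preparation.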
For each type of trial, prepare a summary table showing the number of cases with one trial setting, those with two, and so on, up to the maximum number of trial settings recorded. Next, calculate the percentage of cases at each level of trial settings (1, 2, 3, and so on) appearing on the table. Finally, calculate the median and average number of trial settings. The closer the average is to one trial setting per case, the better the court's performance on this measure. Form 2.1.4b is a sample worksheet.

Standard 2.2: Compliance With Schedules

The trial court disburses funds promptly, provides reports and information according to required schedules, and responds to requests for information and other services on an established schedule that assures their effective use.

Commentary. As public institutions, trial courts have a responsibility to provide information and services to those they serve. Standard 2.2 requires that this be done in a timely and expeditious manner. The source of the information requests may be internal or external to the court. Services provided to those within the court's jurisdiction may include legal representation or mental health evaluation for criminal defendants, protective or social services for abused children, and translation services for some litigants, witnesses, or jurors. In addition to adhering to case processing time guidelines, an effective trial court establishes and abides by schedules and guidelines for activities not directly related to case management. Moreover, the court meets reasonable time schedules set by those outside the court for filing reports or providing other information stemming from court activities. When disbursement of funds is necessary, payment is made promptly. Standard 2.2 requires that, regardless of who determines the schedules, those schedules are met once established. Timely disbursement of funds held by the court is particularly important. Fines, fees, restitution, child support payments, and bonds are categories of moneys that pass through the court to their lawful recipients. Depending on the category involved and the laws of a given jurisdiction, the recipients may include funding agencies (e.g., State, county, or city), public agencies (e.g., police academies and corrections boards), and individuals (e.g., litigants or victims). In addition, courts oversee disbursement of funds from their budgets. These funds go to other branches and units of government, vendors, jurors, litigants, or witnesses. For some recipients, delayed receipt of funds may be an accounting inconvenience; for others, it may create personal hardships. Regardless of who the recipient is, when a trial court is responsible for the disbursement of funds, expeditious and timely performance is crucial.

Measurement Overview. Four measures are associated with Standard 2.2. They draw upon State and local sources of information to determine whether the court is performing key functions in a timely manner. Each measure addresses one of the four elements of the standard: (1) distribution of funds, (2) provision of reports, (3) provision of information, and (4) provision of services. The specific application of each measure will vary from court to court because the measures are tied to statute, policy, and procedure. Taken together, however, they should indicate how well a court meets the schedules established internally or externally. The most complete picture of court performance in this area will be obtained by undertaking all four measures.
However, if all four cannot be completed for budgetary or other reasons, the court should begin with Measure 2.2.1 and work from there as time and resources permit. Measure 2.2.1 examines court financial records to assess whether various types of funds are disbursed in a timely manner. All types of funds for which the court is responsible are included (e.g., those it holds in trust such as bail and bond moneys, those that pass through the system such as child support payments, and those from its operating budget such as payments to vendors and jurors). Based on a review of records indicating when payments are routinely made, the time taken to disburse funds is compared to the payment timeframes set by statutory requirements or court policy. Measure 2.2.2 evaluates how promptly the court provides various services. This measure requires tracking certain events for specific services (e.g., when the service was requested and when it was provided) and determining whether these events occurred within an acceptable time period. Measure 2.2.3 assesses how quickly the court responds to requests for information from the public. It allows the court to determine whether it is responding to such requests in an acceptable period of time and requires that data be collected through simulations. Courts should enlist outside assistance to conduct the measure. To produce results that more closely represent treatment of the general public, simulations should be conducted by individuals who are neither familiar with court operations nor known by court staff. Although direct observation might appear to be an alternative means of conducting this measure, the observation process itself would likely be so apparent that it would either intrude on the business being conducted or cause court staff behavior to change. All courts are required to file various reports with other agencies or offices at regular intervals. Measure 2.2.4 evaluates whether these reports are filed routinely in a complete and timely way. Completing the measure will require an understanding of the court's reporting obligations and a review of a number of the reports, and it may require contact with the offices or agencies receiving the reports.

Measure 2.2.1: Prompt Payment of Moneys

This measure is designed to evaluate whether a court promptly disburses moneys, including those held in trust and those due in payment for services rendered, once a determination has been made that the money should be disbursed. Courts operate in different financial environments. Some courts maintain direct control over all moneys coming into the court, while others work with a local government agency that handles disbursements for the courts. In taking this measure, the lines of authority and the degree of control the court has over the actual disbursement of funds must be considered. The measure may have to be adapted to distinguish a court's responsibility for initiating a payment from another agency's responsibility for making the payment. Regardless of who is ultimately responsible for disbursement, it is important that this task be performed promptly.

Planning/Preparation. The first step is to review court policies and procedures for disbursements of funds. Interview the court manager or the person directly responsible for the relevant court policies and procedures. Potential areas for investigation include policies governing the following activities:

o Forwarding collected child support payments or restitution moneys.
o Returning moneys held in trust by the court (e.g., bond).
o Disbursing fines and fees to government agencies.
o Paying moneys to vendors or jurors.

The needed information covers the timeframes required for these payments, the basis for each (e.g., court rule, policy, statute, or local procedure), and the mechanisms used to monitor compliance with the schedules. A determination also should be made whether annual financial audits are performed in the court and whether their results are available. A review of these audit reports will indicate whether any deficiencies in the disbursement system were recorded.

Data Collection. Examine records for each selected payment type for the 6 months prior to the time of the evaluation. If more than 100 payments of a given type were made during the period, take a random or interval (e.g., every third case) sample of 100 or 20 percent, whichever is larger. (See also the planning/preparation section for Measure 2.1.1, Time to Disposition.) Record the date payments were ordered/approved and the date payments were actually made. In addition, consider collecting data on interim events between the date a payment was approved and the date payment was made. These data may help to identify where the greatest delay (if any) occurs in the process.

Data Analysis and Report Preparation. The objective is to determine the percentage of disbursements that are made within established timeframes, once disbursement has been ordered. To accomplish this objective, construct a table that displays the amount of time required for disbursement. The table can be constructed with weekly or monthly intervals depending upon the maximum length of time allowed for disbursement. If no timeframe has been specified, the average time for disbursement of each type of payment should be computed. For example, child support disbursements can be compared to the timeframes established under the Family Support Act of 1988. In addition, the court's child support and other payment data can be compared with those of other jurisdictions or with benchmarks suggested in the literature. Compare the information gathered from disbursement records with the applicable statutory or procedural timeframe. The percentage of payments in each category that are made within the allowable timeframe should also be charted. The higher the percentage of payments within the timeframe, the better the court's performance is on the measure. Courts that have used this methodology have found it relatively easy to implement; they have also found the data to be valid and useful.

Measure 2.2.2: Provision of Services

This measure seeks information on the time required to provide services to appropriate individuals. For this measure, three types of services have been identified: (1) indigent defense services, (2) interpreter services, and (3) mental health evaluations. Others could be added or substituted to reflect the services of concern to a particular jurisdiction. A similar process could be used to assess functions such as issuing marriage licenses, handling passport applications, or processing name changes.

Planning/Preparation. This measure begins with a review of the procedures used to initiate the following services and the identification of any statutory, case law, or policy requirements that mandate a timeframe within which they must be provided: interpreter services (foreign language and/or hearing impaired), indigent defense services, and mental health evaluations.
For each service area, first identify the individual with responsibility for coordinating delivery of services. Next, identify the aggregate or individual records that are maintained concerning requests for and the provision of each service. This background information can be gathered through interviews with the court manager.

Data Collection. For each service to be evaluated, draw a sample of cases using that service. The sample for each service should contain no fewer than 100 cases or 20 percent of the cases (whichever is larger) to allow valid and reliable inferences regarding service delivery patterns. For each sample, use Form 2.2.2a, Provision of Services Data Collection Form, to gather data to measure the time required to provide the service. Examples of the data elements for three types of services include:

o Presentence investigations--date ordered, date staff assigned to investigation, date completed, and date filed with the court.
o Indigent defense counsel--date indigent defense ordered by court and date counsel was assigned.
o Criminal or mental health evaluations--date evaluation was ordered, date evaluator was designated, date evaluation was conducted, and date of report to court.

Data Analysis and Report Preparation. The basic analytical task is to compute the length of time taken to initiate service provision (e.g., from court order to assignment of counsel or designation of an evaluator); the elapsed time from court order to initial service provision; and, for services for which a report must be filed with the court (e.g., mental health evaluations, home studies), the elapsed time to file reports with the court. National standards such as the American Bar Association Standards Relating to Trial Courts and Standards for Criminal Justice (Form 2.2.2b, Checklist of Services Required in ABA Standards) or State guidelines can be used as benchmarks. For example, if the standards prescribe that services are to be provided within 10 calendar days, a measure of the court's performance is how many cases exceed the 10-day time limit. The smaller the percentage, the better the court's performance.

Measure 2.2.3: Provision of Information

This measure is designed to assess the promptness with which information is provided to members of the public. The measure involves the use of role players who request various types of information from the court. It is recommended that the court use members of the public (not court employees or attorneys), although the measure could be expanded to include role playing by "courthouse regulars." A comparison of reports from role-playing citizens and court employees would be very useful.

Planning/Preparation. First, court staff identify the types of information to be sought in the simulations. Examples of the types of information that might be included are the location where a specific case is being heard, a request to see a specific case file when only the name of one party is known, a request to have certain documents copied from the case file, and a request to know the status of a particular case (the last/next activity scheduled for the case). It should also be determined, through interviews with the court manager, whether the court has a local policy or procedure that addresses the manner or time within which information requests should be handled when made on a walk-in or phone-in basis in any court office.
For each type of information requested by a role player, the performance standard evaluators (research directors) should know in advance approximately how many minutes it should take to provide the requested information.

Second, citizens unfamiliar to the judges and court staff are recruited to be role players who request information in several offices in the courthouse. Court staff should keep in mind that this exercise measures the timeliness and accuracy of information provided in response to a request from a member of the general public, not a special response to a courthouse "regular" or to an outside "evaluator." Provide the citizen role players with a set of questions to ask or items to request, together with any background information needed to make the simulation credible (e.g., if requesting information on the next scheduled event in a criminal case, the citizen should know the defendant's name and the charges involved). The role player should not read the question when doing the simulation but rather "play the part."

An effort should be made to recruit different types of people. Courts that have tested this measure have reported difficulty in recruiting a variety of volunteers. Retired people are good candidates. However, it would be best to have volunteers of different ages, racial groups, and genders. A person's demeanor might also influence the nature and timeliness of the service provided by court staff. It is unrealistic, however, for most courts to systematically examine the influence of age, race, gender, and demeanor on the provision of services. Including demeanor as a factor could seriously complicate the analysis. The minimum expectation in each court should be that citizens of any age, race or ethnic group, or gender asking politely for information should be treated courteously and have their questions answered in a timely manner. It is therefore recommended that this measure focus primarily on role players who act politely when requesting information. After each office to be examined has been checked through a sufficient number of observations by courteous role players, the evaluators might decide to have the role players request similar information from the same offices, but to do so in a rude, impatient manner.

Although not described in the following section, an alternative technique for measuring how promptly (and courteously) court or clerk's office staff provide information is the use of an exit survey. A brief questionnaire (one page or less) is constructed and given to citizens who ask for information or assistance after they complete their business in the various court or clerk's offices. This questionnaire is an easy-to-administer and cost-effective alternative that could be used periodically to check on staff performance in this area. However, exit surveys do not allow the court to measure the accuracy of the information provided by court or clerk's office staff. Moreover, people who are unhappy about the information they receive or about the way they are treated may be more likely to fill out a questionnaire (as a means of registering their complaint) than are people who are satisfied with the service and information they receive.

Data Collection. Ideally, each office included in the study should receive at least 30 requests for information from role players.
Give the volunteer role player a data collection sheet, such as Form 2.2.3, Information Request Data Collection Form, on which to record the time required for the court staff to provide the information sought. Entries on the data sheet should be made after leaving the office in which the request is made. If the requester is referred from one office to another, the referral process and the time involved should be recorded in the special notes section of the data sheet. The simulations should be conducted several times during the day or on several days during the week to account for normal differences in work flow. On the data collection form, the volunteer role player should record the type of information requested, the office in which the request was made, the number of minutes required to obtain a response, and any notes or comments about the nature of the response or the interaction with the information provider. It is recommended that the role player rate the accuracy and completeness of the information provided. In addition, this exercise provides an opportunity to collect information relevant to Standard 1.4, Courtesy, Responsiveness, and Respect, and the role player should also rate the courteousness of the information provider.

Data Analysis and Report Preparation. Compare the results of the simulated requests with the court's stated policy or procedure for responding to requests or with the predetermined amount of time that it should have taken to provide the information. The lower the proportion of requests that exceed the prescribed time limits, the better the court's performance. If no policy or procedure prescribing time standards exists, review the results with a committee of court staff members and discuss their views on the acceptability of the documented level of performance. Evaluators should also examine ratings of the completeness and accuracy of the information provided to the role players. The court should expect to receive a high percentage of "very good" ratings for completeness and accuracy; no ratings of "unacceptable" should be received. The court should also expect a high percentage of "very good" ratings for courtesy. The court's performance on this measure can be compared with the responses to appropriate sections of the questionnaire used in Measure 1.2.6, Evaluation of Accessibility and Convenience by Court Users. Staff discussion should focus on understanding the consistencies and inconsistencies between the responses to the two measures.

Measure 2.2.4: Compliance With Reporting Schedules

This measure reviews and assesses the court's level of compliance with established reporting schedules for court activity. Reports required by the judicial system (e.g., statistical reports to the State Administrative Office of the Courts (AOC)) and by other government agencies (e.g., vital statistics or Equal Employment Opportunity Commission (EEOC) reports) are included. The data collection and evaluation methods will provide the court with information about the timeliness of overall reporting as well as of specific reports.

Planning/Preparation. First, court staff must gather specific information on the reports the court is required to file. This information can be obtained through discussions with the court manager or the person directly responsible for each report.
Form 2.2.4a, Generic List of Court Activity Reporting, is a guide to help organize these discussions and includes questions regarding reporting schedules; the statute, order, directive, or policy establishing each schedule; the name of the individual responsible for filing each report; the location of court copies of the reports; and an indication of whether requests for additional or corrected information were made after the reports were filed. Additionally, it should be determined whether regular financial or compliance audits are conducted on court records and, if so, what kinds of records are included. The first time an assessment is conducted in a State, parallel discussions should be held with the State AOC. Information sought from the State AOC will include the reports required of trial courts, their relevant reporting schedules, and the authority for reporting. Information from both local and State sources will help ensure complete coverage of reporting requirements. Compile a single list of reporting requirements from the court and State AOC lists, including information on required audits. If discrepancies appear, contact the appropriate individuals to resolve them.

Second, court staff must locate the data to be collected. Select for data collection and evaluation at least two reports from each of the reporting categories found in the guide (Form 2.2.4a). For each selected report that appears on the audit list, examine the most recent audit reports to ascertain whether they can provide some or all of the data required for this measure (see Form 2.2.4b, Compliance With Reporting Schedules, for required information). For those reports not included in the audit reports, or those for which insufficient information is provided, contact the individual responsible for filing the reports. The agency that receives the report(s) should also be contacted to determine whether staff at that agency perceive problems in the timeliness, completeness, or accuracy of the reports filed by the court. Depending on the type of report, contact the State AOC, the EEOC, an employees' labor union, or the State or county comptroller. The number of reports (and the period covered) in the evaluation sample will depend on the nature of the reports and the frequency with which they are filed. For monthly reports, review a 1-year period; for weekly reports, review a 3- to 6-month period or, alternatively, the same month over a 5-year period (e.g., all April reports for the past 5 years). For some personnel matters such as performance evaluations, reporting dates may be keyed to employment anniversary dates. In such cases, draw a sample that includes 50 evaluations or 20 percent of all evaluations submitted during the prior calendar year, whichever is larger.

Data Collection. Record the required reporting date and the actual reporting date of each report in the sample. (See Form 2.2.4b.) Was the report filed on time? Was it late? If so, by how many days? Examine all report forms to see whether all requested information was provided. For reports reflecting individual evaluations (e.g., personnel evaluations), were meaningful responses provided? Are forms individualized in a way that provides useful information for the record and to the employee? If there is a pattern of incompleteness or a lack of uniform responses across all employees, the pattern should be recorded in the comments section of the data collection form.
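Once the required and actual filing dates are recorded, the calculations described in the next section reduce to a few lines of arithmetic. The following is a minimal sketch in Python, with hypothetical dates for four monthly reports.

    from datetime import date

    # Hypothetical reports: (required filing date, actual filing date).
    reports = [(date(1996, 1, 10), date(1996, 1, 9)),
               (date(1996, 2, 10), date(1996, 2, 10)),
               (date(1996, 3, 10), date(1996, 3, 17)),
               (date(1996, 4, 10), date(1996, 4, 12))]

    days_late = [(actual - required).days for required, actual in reports]
    late = [d for d in days_late if d > 0]

    # Percentage on time: reports filed on time divided by reports reviewed.
    on_time = 100 * (len(reports) - len(late)) / len(reports)
    # Average days late: total days late divided by the number of late reports.
    average_late = sum(late) / len(late) if late else 0

    print(f"filed on time: {on_time:.0f}%")
    print(f"average days late (late reports only): {average_late:.1f}")

Data Analysis and Report Preparation.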
For each type of report reviewed, compute the percentage of reports that are filed on time by dividing the total number of reports filed on time by the total number of reports reviewed. The closer this figure is to 100 percent, the more timely the court's performance. The average number of days late for each type of report reviewed can be estimated by dividing the total number of days late by the total number of late reports. Form 2.2.4b illustrates these calculations. If a pattern of late reporting emerges on any of the data collection forms, contact the individual responsible for filing the report to determine the type of information that needed clarification and/or the reason(s) for late filing. Record both general reasons (e.g., the court's system does not capture that information; every month the court must wait for xyz information from xxx office to prepare the report) and specific explanations (e.g., "a new staff member was preparing reports at that time" or "in May, I became ill and was not able to prepare the report until the following week") in the comments section of the data collection form.

Prepare a summary report combining results from each sample's data collection sheet. (See Form 2.2.4c, Data Summary Report for Overall Court Compliance With Reporting Schedules, for an example.) The summary report should provide the following information: the name of the report, the number of reports in the sample, the number and percentage of the sample that are on time or late, and the average number of days late for that sample. The percentage of all reports sampled that are filed on time and that are filed late should be included, along with the general categories of reasons for lateness. The completeness or quality of responses ascertained from interviews and report reviews should also be mentioned. Finally, court personnel should discuss the patterns and trends of reporting timeliness and quality reflected in the summary report.

Standard 2.3: Prompt Implementation of Law and Procedure

The trial court promptly implements changes in law and procedure.

Commentary. Tradition and formality can obscure the reality that both the law and the procedures affecting court operations are subject to change. Changes in statutes, case law, and court rules affect what is done in the courts, how it is done, and those who conduct business in the courts. Trial courts must make certain that mandated changes are implemented promptly and correctly. Whether a change can be anticipated and planned for or must be responded to quickly, Standard 2.3 requires that the court not only make its own personnel aware of the changes but also notify court users of such changes to the extent practicable. It is imperative that changes mandated by statute, case law, or court rules be integrated into court operations as they become effective. Failure to do so leaves the court open to criticism for noncompliance with the law or required procedures.

Measurement Overview. The two measures for Standard 2.3 are concerned with the promptness with which a trial court implements externally mandated changes. Measure 2.3.1 examines the response to mandates found in legislation, while Measure 2.3.2 focuses on responses required by court opinions, procedural rules, or administrative orders or directives from the highest State appellate court or the State AOC. Identifying the changes to which the courts should be responding involves the collection and review of information from the State AOC.
If several courts are evaluated within the same State in one year, the information can be shared by the courts and the process need not be repeated for each trial court. If courts in the same State are evaluated during a period extending over more than one year, an update of this background information should be obtained. Use of these measures will vary considerably from State to State and from year to year within a State because they are based upon a court's response to recent changes. Even the methods involved in taking the measure will vary depending upon the nature of the change to be reviewed. For example, in some cases final court orders may need to be read to determine if required provisions are included (e.g., insurance coverage for children in divorce decrees); in others, files may need to be reviewed to determine if required forms have been filed.

Measure 2.3.1: Implementation of Changes in Substantive and Procedural Law

This measure evaluates the implementation of two current or upcoming changes in law and is designed to be administered by someone outside the court. Selection of the changes should be based on their significance and measurability. (Note: For a distinction between the legal procedures examined here and administrative procedures, see Measure 2.3.2, Implementation of Changes in Administrative Procedures.)

Planning/Preparation. First, judges or court staff must identify current or very recent changes in the law. Ask the State AOC for copies of (1) summaries of State and Federal legislation affecting the courts with effective dates occurring during the 12 months following the request and (2) new Federal regulations (e.g., regarding child support) that affect the trial courts. If letters, memoranda, or directives concerning these changes were distributed by the State AOC, copies should be gathered. Second, select the changes to be examined. Select at least two items from the information provided by the court and the State AOC. The changes selected should have clearly measurable requirements. For example, if one of the changes requires that all divorce decrees involving child support include a provision for health insurance coverage by one of the parents, the decrees issued after the effective date of that requirement could be reviewed to determine the proportion in which insurance coverage was included.

Data Collection. Data collection on compliance with required changes and the timeliness of that compliance will vary with the nature of the changes. One possible approach is to review records for the 3 months immediately following the effective date of the change to ascertain whether new forms or order provisions appear in case files. The percentage of cases in compliance can then be calculated: the higher the percentage, the better the court's performance. Additionally, or if the change is not easily quantifiable, attorneys might be interviewed to determine their perceptions of the court's promptness in implementing change in general as well as in specific instances. Evaluators should focus on three aspects of the change implementation process: (1) whether all relevant staff and judges are informed of the impending change in a timely and uniform manner, (2) whether there is a plan for implementing the change, and (3) whether the change is implemented in a timely and uniform manner. Progress toward full implementation of this plan should be checked periodically until the court is in full compliance over multiple monitoring periods.
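For changes that leave a visible trace in case files, the records review described above lends itself to a short script. The following is a minimal sketch in Python, assuming each reviewed order carries its issue date and a yes/no coding for the required provision; all values are hypothetical.

    from datetime import date, timedelta

    effective = date(1996, 1, 1)                 # effective date of the change
    window_end = effective + timedelta(days=90)  # roughly 3 months of records

    # Hypothetical decrees: (date issued, required provision present?).
    decrees = [(date(1996, 1, 15), True), (date(1996, 2, 3), False),
               (date(1996, 2, 20), True), (date(1996, 5, 1), True)]

    reviewed = [ok for issued, ok in decrees if effective <= issued <= window_end]
    compliant = sum(reviewed)

    # The higher the percentage, the better the court's performance.
    print(f"{compliant} of {len(reviewed)} decrees in compliance "
          f"({100 * compliant / len(reviewed):.0f}%)")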
Though this goes beyond the issue of timeliness, evaluators might also check whether judges or court staff consulted appropriate authorities during their planning for implementing the change to determine whether the court's interpretation of the new law is consistent with the intent of the law.

Data Analysis and Report Preparation. Compile the data collected into a report for use by the court. If one or more indicators of compliance (e.g., attorney interviews) indicate a problem with prompt implementation, the court should take whatever actions are necessary to ameliorate the problem. Another change in law should be evaluated to determine whether the lack of prompt implementation reflects the court's general operations or only its response to that particular change. Judges and court staff should discuss ways to improve low compliance rates and, where appropriate, work with the judges, court and other agency staff, or local bar to better facilitate changes mandated by legislation.

Measure 2.3.2: Implementation of Changes in Administrative Procedures

This measure requires the selection and evaluation of changes in two administrative procedures recently mandated by the highest State appellate court or the State AOC. As with Measure 2.3.1, Implementation of Changes in Substantive and Procedural Law, the selection should be based on the significance and measurability of the changes. Administrative procedures are those that affect the responsibilities of judges and court staff regarding the internal operation of courts or their relations with other agencies. Administrative procedures are usually distinguishable from legal procedures (examined in Measure 2.3.1), which govern the actions of litigants and the judge in the course of litigation.

Planning/Preparation. This measure will focus on administrative changes that are required to be implemented during the coming 12 months. The first step is to identify the changes in administrative procedure to be evaluated. Begin by asking the State AOC for copies of (1) recent decisions of the highest State appellate court that place specific new or changed performance requirements on the trial courts, (2) any recent changes in court rules that require the trial courts to change the manner in which they operate, and (3) recent directives or orders of the highest State appellate court or the State AOC requiring changes in court recordkeeping or procedures. If letters, memoranda, or directives concerning these changes were forwarded to the courts, compile copies for the evaluator. After procedural changes have been identified, select for evaluation at least two items from the information provided by the court and the State AOC. The changes selected should have clearly measurable requirements. For example, if one of the changes requires that presentence investigation (PSI) reports be filed with the court prior to sentencing for all felony convictions, files of cases in which sentencing occurred after the effective date of the change could be reviewed to determine the proportion of cases in which the PSI reports were filed prior to sentencing. Evaluators should examine: (1) whether there is a procedure whereby all people who need to know about the impending change in procedure are informed in advance of the change, (2) whether there is a plan for the timely and uniform implementation of the change, and (3) whether the change is implemented in a timely and uniform manner.

Data Collection.
Collection of data on compliance with required changes and the timeliness of that compliance will vary with the nature of the changes. One possible approach is to review records of affected cases for the 3 months immediately following the effective date of the change to ascertain whether new forms or order provisions appear in the case files. Evaluators can calculate the percentage of cases in compliance with the new procedure. Additionally, or if the change is not easily quantifiable, information about the implementation of the new procedures can be obtained through interviews with practitioners.

Data Analysis and Report Preparation. Compile the data collected into a report for use by the court. If one or more compliance indicators or measures suggest a problem with prompt implementation, the court should take whatever action is necessary to ameliorate the problem. Another change should be evaluated to determine whether the lack of prompt implementation reflects the court's general inability to implement changes effectively or whether there was a problem with the implementation of that particular change only. Court staff should discuss ways to improve low compliance rates and, where appropriate, work with the court or other agency staff to facilitate procedural changes.

End Notes

1. H. Arkin and R. Colton, Tables for Statisticians (New York: Barnes and Noble, 1963), p. 145.

2. J. Goerdt et al., Examining Court Delay: The Pace of Litigation in 26 Urban Trial Courts, 1987 (Williamsburg, VA: National Center for State Courts, 1989), pp. 32-35. See also B. Mahoney et al., Changing Times in Trial Courts: Caseflow Management and Delay Reduction in Urban Trial Courts (Williamsburg, VA: National Center for State Courts, 1988), pp. 81-82.

------------------------------

Performance Area 3: Equality, Fairness, and Integrity

Trial courts should provide due process and equal protection of the law to all who have business before them, as guaranteed by the U.S. and State constitutions. Equality and fairness demand equal justice under law. These fundamental constitutional principles have particular significance for groups who may have suffered bias or prejudice based on race, religion, ethnicity, gender, sexual orientation, color, age, handicap, or political affiliation. Integrity should characterize the nature and substance of trial court procedures and decisions, and the consequences of those decisions. The decisions and actions of a trial court should adhere to the duties and obligations imposed on the court by relevant law as well as by administrative rules, policies, and ethical and professional standards. What the trial court does and how it does it should be governed by the court's legal and administrative obligations; similarly, what occurs as a result of the court's decisions should be consistent with those decisions. Integrity refers not only to the lawfulness of court actions (e.g., compliance with constitutional rights to bail, legal representation, a jury trial, and a record of legal proceedings) but also to the results or consequences of its orders. A trial court's performance is diminished when, for example, its mechanisms and procedures for enforcing its child support orders are ineffective or nonexistent. Performance also is diminished when summonses and orders for payment of fines or restitution are routinely ignored. The court's authority and its orders should guide the actions of those under its jurisdiction both before and after a case is resolved.

Overview of Standards.
The demand for equality, fairness, and integrity is articulated by six performance standards. The first standard encompasses the all-important legal concept of due process and requires that trial courts adhere to relevant law, rules, and policy when acting in their judicial and administrative capacities. The equality and fairness afforded to litigants and disputes are determined not only by judges and court personnel but also by juries. Standard 3.2 requires that trial courts do their utmost to encourage equality, fairness, and integrity by ensuring that individuals called for jury duty are representative of the population from which they are drawn. Standard 3.3 focuses on what many consider to be the essence of justice. The standard requires that the decisions and actions of trial courts be based on legally relevant factors consistently applied in all cases. Furthermore, those decisions and actions should be based on individual attention to each case. In accordance with the call for integrity in court performance, Standard 3.4 urges trial courts to render decisions that clearly address the issues and specify how compliance with their decisions can be achieved. Clarity is a prerequisite for both compliance and enforcement. Standard 3.5 encourages trial courts to assume responsibility for the enforcement of their orders. Finally, Standard 3.6 requires the prompt and accurate preservation of trial court records. Records of court decisions and the process followed to arrive at decisions constitute, in an important sense, the law. Both the accuracy of the records and reliable access to them are fundamental to the achievement of the purposes of trial courts.

Overview of Measures. Twenty-three specific measures are associated with the six standards in Performance Area 3: Equality, Fairness, and Integrity. They are intended to provide systematic information on the many facets of this complex and important topic. For most of the individual standards, the measures use similar data elements, data gathering procedures, and methods of analysis. For example, Standard 3.6 states that "Records of all relevant court decisions are accurate and properly preserved." For five of the six measures, a common database is used to assess the integrity of the court's record management systems. The measures use some portion of the same pool of cases to examine the extent to which court records are adequately stored. Use of a joint database is called for in other standards, including Standard 3.3, which requires trial courts to "give cases individual attention, deciding them without undue disparity among like cases and only upon legally relevant factors." Measure 3.3.3, Equality and Fairness in Sentencing, and Measure 3.3.4, Equality and Fairness in Bail Decisions, rely on the same set of cases and the same methodological approach to determine whether legally irrelevant factors play a role in bail and sentencing decisions. Hence, a court that decides to undertake the measurement of a given standard will find that it can apply all of the measures within that standard in an efficient manner. The most common approach to all of the measures in this area is the analysis of case-related information. Case files are used as a primary source of data for many of the measures. In some instances, the information in the files is gathered and analyzed to assess the fairness of court decisions in areas such as bail and sentencing.
On the other hand, case-related information is also used in Standard 3.1 to determine the extent to which the court adheres to laws and procedures. Standard 3.1 states that "Trial court procedures faithfully adhere to relevant laws, procedural rules, and established policies." Here the case-related information is used as a way to verify compliance with the law. The second most common approach is the use of mail questionnaires to assess the views of key participants in the trial court process. Different measures target different sets of respondents. For example, Measure 3.1.2 seeks to determine both court employees' and attorneys' assessments of court performance in applying the law. Measure 3.3.1 targets the bar's view of the fairness of court decisions and actions. Measure 3.3.2 surveys the opinions of court users. Measure 3.6.6 examines the views of attorneys toward the adequacy of the court record when cases are appealed. Finally, the three measures related to Standard 3.2 call for an examination of court records pertaining to the selection of jurors. The lists of potential jurors are compared to other sources of information, such as census reports, to determine the inclusiveness, randomness, and representativeness of juries.

Standard 3.1: Fair and Reliable Judicial Process

Trial court procedures faithfully adhere to relevant laws, procedural rules, and established policies.

Commentary. The first standard in the performance area of Equality, Fairness, and Integrity draws on the concept of due process, including notice and a fair opportunity to be informed and heard at all stages of the judicial process. Fairness should characterize the court's compulsory process and discovery. Trial courts should respect the right to legal counsel and the rights of confrontation, cross-examination, impartial hearings, and jury trials. Standard 3.1 requires fair judicial processes through adherence to constitutional and statutory law, case precedent, court rules, and other authoritative guidelines, including policies and administrative regulations. Adherence to established law and procedures contributes to the court's ability to achieve predictability, reliability, and integrity, and to satisfy all parties. Because of its centrality to the court's purpose, Standard 3.1 overlaps with standards in the performance areas of Access to Justice and Public Trust and Confidence, which emphasize that justice should be "perceived to have been done" by those who directly experience the quality of the trial court's adjudicatory process and procedures.

Measurement Overview. Two measures are associated with this standard. They are of equal importance but involve different methodologies. Measure 3.1.1 relies on panels of knowledgeable practitioners to assess whether the court adheres to key legal requirements. The measure involves an examination of relevant documents, case files, and court records. A panel is designated for each area of law, such as civil, criminal, domestic relations, and so forth, and asked to identify 5 to 10 requirements for critical review. Measure 3.1.2 complements the panels' assessments. It requires surveying court employees and practicing attorneys to assess their views on the extent to which legal requirements are met. For both Measures 3.1.1 and 3.1.2, the greater the extent to which requirements are met, the higher the court's performance in this standard area.

Measure 3.1.1: Performance in Selected Areas of Law

Integrity is essential to court performance.
To maintain their position as independent and fair arbiters of disputes, courts must be faithful to the laws they are expected to apply. The court's integrity in upholding the law can be measured by the extent to which its actions are in accordance with the requirements specified in substantive and procedural laws. If the requirements are met, the court is performing well. Whether the court adheres to legal requirements often can be determined empirically. For example, if a court by statute must advise convicted offenders orally of their appeal rights, empirical data can be gathered by observing several adjudication or sentencing hearings to determine if the offenders actually are advised of their appeal rights. Similarly, if a statute requires that all decrees of divorce include a finding on the subject of medical insurance for children, the presence or absence of the finding is ascertainable by examining the order. A recommended approach to identifying areas of law to be examined is to organize panels on basic areas of law such as civil, criminal, juvenile, and domestic relations. The local trial court is in a position to suggest the names of relevant practitioners from the bar and other justice system agencies. The Administrative Office of the Courts (AOC) in each State may be particularly helpful in facilitating this task. The AOC can assist in organizing the panels and providing guidance for the panels' deliberations. A panel approach is suggested because the measure requires detailed knowledge in several areas of the law (e.g., criminal, juvenile, domestic, civil torts, and contracts).[1]

Planning/Preparation. When more than one jurisdiction in a State participates in an evaluation using the measurement system, a sponsoring agency (e.g., a State AOC) designates a coordinator for the measurement effort in this area. The coordinator should be capable of taking charge of a panel of professionals and leading them through a measurement process. If a single trial court in a State is using the measurement system, an individual at the local level coordinates the effort. The coordinator assembles panels of individuals knowledgeable about the State's laws and practices relating to particular types of cases. These case types will include some or all of the following: general civil, juvenile offender, juvenile dependency (neglected/abused), domestic relations and mental health, and criminal. As an example, the panel of criminal experts might include defense and prosecution attorneys, a probation official, a corrections department official, a staff member employed either by a judiciary committee of the State legislature or by the judicial council, and a trial judge. Each panel convenes to identify 5 to 10 requirements of law. The coordinator should use specific examples to help focus the discussion and the process of selecting the laws. Some States and jurisdictions have special requirements that should be considered. In Ohio, for example, the coordinator might point out that State law prohibits the use of a probation sentencing alternative for juveniles with certain offense histories. Also in Ohio, State court rules call for specific oral advisements by the judge regarding appeal rights for criminal defendants on the occasion of sentencing. In other States, orders on matters of child custody must address health insurance coverage. The following list identifies areas of law that are applicable to virtually every State. Panels should use these categories as a guide in developing their own lists of laws.
Other areas of law can be added or substituted, depending on local circumstances.

o Reviewing and deciding motions--extent to which required documentation (e.g., briefs) is provided for particular motions (e.g., summary judgment), whether the deadlines for filing motions and responses to motions are met, and whether motions are ruled on in a timely manner.
o Imposition of sanctions--extent to which sanctions are imposed when attorneys request them and the court's rule clearly states that they shall be imposed (e.g., costs are to be imposed on the losing party in a discovery-related motion).
o Enforcement of continuance policies--extent to which the court adheres to its own policy on granting extensions of time (e.g., no continuance is granted simply because all parties agree to it, or no more than two continuances are granted, except in exceptional cases).
o Required documents--for example, documents summarizing income in divorce matters.
o Enforcement--specific requirements for court activity (e.g., issuing wage withholding orders and reviewing guardianships and conservatorships).
o Jury instructions--whether, when, and how instructions to juries are given.
o Awards of costs and attorney fees--statutes requiring awards to parties.
o Juvenile detention--statutes, case law, and rules governing administration of detention services.
o Setting bail--extent to which guidelines are followed.
o Process for appointment of counsel--eligibility, timeliness of appointment, participation at critical stages.
o Required proceedings--whether, when, and how proceedings are conducted (e.g., advisement of rights and alternative dispute resolution hearings).
o Content of orders--mandated elements such as length and terms of incarceration.
o Appeal process--extent of notification of the opportunity to appeal.

The laws the panels select for this measure will likely overlap with laws of interest in other standard areas, such as expedition and timeliness and equality, fairness, and integrity. Hence, each panel should consider data collected for other measures and use them as appropriate to simplify the data collection effort.

Data Collection. The laws selected must be measurable using one of the following data collection methods:

o Records search--data collected about (1) the presence or absence of documents in case files, (2) the form of documents in case files, (3) the content of case file documents and summary records, and (4) the filing dates of documents.
o Observation of proceedings--data about whether and how required proceedings are conducted.
o Interviews with judges, court employees, and the local bar--see Measure 3.1.2 for an example of this method.

The data collection methods require the use of relatively straightforward measurement instruments. Forms that employ a series of questions can be designed for use in reviewing files, documents, and other court materials. Examples of such questions are:

o Does the case file, document, or other form contain a record of whether there was adherence to the required law or procedure?
o If not, is there evidence available elsewhere?
o If so, what does the record indicate? Was there adherence?

These questions should be applied to a sample of each type of case. It is not necessary to draw a separate sample of cases for each area of law under consideration. If areas of law address similar types of cases (e.g., civil or criminal), data may be gathered on each area from the same sample of cases.
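Because each records-search question yields one of three findings per case file (adherence, nonadherence, or no evidence either way), tallying the results for a requirement of law is straightforward. The following is a minimal sketch in Python; the finding codes are hypothetical.

    from collections import Counter

    # Hypothetical findings for one requirement of law, one code per
    # sampled case file: adhered, not_adhered, or no_evidence.
    findings = ["adhered", "adhered", "no_evidence", "not_adhered",
                "adhered", "no_evidence", "adhered", "adhered"]

    counts = Counter(findings)
    total = len(findings)

    for outcome in ("adhered", "not_adhered", "no_evidence"):
        n = counts[outcome]
        print(f"{outcome:<12} {n:3d}  ({100 * n / total:.0f}%)")

Data Analysis and Report Preparation.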
Each panel will review the results of the examination of the sampled cases and proceedings to determine (1) the percentage of cases in which there was a clear indication of whether the law's requirements were followed, (2) the percentage of cases in which there was no evidence of whether the requirements were or were not followed, (3) whether the degree of fidelity was uniform across each panel's set of laws, and (4) whether some areas of law exhibit higher fidelity than others. After addressing these questions, each panel will rate the court's performance. The panels' assessments will then be forwarded to the court, which will decide what appropriate action should be taken to improve performance.

Measure 3.1.2: Assessment of Court Performance in Applying the Law

Integrity is a matter of perception as well as of objective adherence to substantive and procedural laws. Courts should be viewed as faithfully applying the requirements of substantive and procedural laws. Practicing attorneys and employees of the court are in a position to provide useful information about the integrity of the court's procedures. This information will be collected in a survey administered to members of the bar and court employees. The attorneys and court employees will be asked whether they are aware of specific requirements of law not observed in the court. If they are, they will be asked to cite the underlying statutes or procedures as closely as they can.

Planning/Preparation. Different techniques will be used to select each set of respondents. For the purpose of this measure, "court employee" designates staff members of the clerk of court as well as persons employed by the judges. A court employee payroll list is requested from the court administrator and court clerk. The list should be annotated, using the most expedient means possible, with each employee's position and duties. In some cases, the court's personnel records may be organized in such a way that no extra effort will be required for annotation. If this is not the case, a system of codes will be provided to the court to make annotation as simple as possible. One group of code values will indicate the kinds of cases with which the employee is familiar; another will indicate the kinds of duties the employee performs. For example, classification of duties will include activities such as courtroom services, document processing at the public counter, recording documents in docket records, managing records, working with judgments and judgment dockets, scheduling matters for calendars of court sessions, supervising probation, preparing pretrial release evaluations and reports, conducting presentence investigations, and handling cashier duties, cash bookkeeping or accounting, jury management, and so forth. The court employee list will be used to identify employee respondents for the survey. Part-time employees and maintenance staff should be excluded from the sample group. Of those remaining, all employees should receive the survey. (If some groups of employees performing similar duties are very large, a sample of these groups could be drawn rather than sending the questionnaire to all of the employees in each group.) For each division of the court included in the evaluation (e.g., general civil, criminal, juvenile, domestic, and so forth), docket entries for cases filed during the previous year are sampled, and a list of attorney names for all of the sampled cases is compiled.
Attorneys who practice often in the court are likely to appear on the list more than once, and this frequency should be noted. As a result, if the court chooses, it can sample according to the attorneys' degrees of practice. Data Collection. For examples of possible questions to be used in data collection, refer to Form 3.1.2, Illustrative Questions for Measuring Court Employees' and Attorneys' Assessments of Fidelity to the Law. The questionnaires for court employees will be distributed via interoffice mail. The questionnaires for attorneys should be mailed with a cover letter signed by the president of the local bar association and the presiding judge of the local court. Questionnaires will be returned in sealed envelopes coded to match the "master" list of individuals to whom questionnaires were given. Because the respondents are employees of the court and attorneys who regularly practice before the court, confidentiality is important, especially in smaller jurisdictions where it may be relatively easy to determine the identity of respondents to questionnaires. Care must be taken to avoid identifying respondents by their handwriting or how they return the questionnaires. Individuals who return a questionnaire should be noted on the master list. When 10 days have elapsed from the date respondents received the questionnaire, a reminder should be sent to those individuals who have not yet returned a questionnaire. A second notice should be sent after another 10 days have elapsed. At any time after 30 days, the data may be tabulated. Data Analysis and Report Preparation. The laws listed by attorneys and employees should be compiled, noting the frequency of each citation. The percentage of respondents who list one, two, or three laws also should be calculated. For the purpose of measuring court performance, the larger the percentage of respondents who believe that legal requirements are not being followed, the lower the level of performance on this measure. The results of this measure should be submitted, if possible, to the panels described in Measure 3.1.1. The panels should comment on the results and assist in drafting the evaluation report to the court. Standard 3.2: Juries Jury lists are representative of the jurisdiction from which they are drawn. Commentary. Courts cannot guarantee that juries will always reach decisions that are fair and equitable. Nor can courts guarantee that the group of individuals chosen through voir dire is representative of the community from which it was chosen. Courts can, however, provide a significant measure of fairness and equality by ensuring that the methods employed to compile source lists and to draw the venire provide jurors who are representative of the total adult population of the jurisdiction. Thus, all individuals qualified to serve on a jury should have equal opportunities to participate, and all parties and the public should be confident that jurors are drawn from a representative pool. Standard 3.2 parallels the American Bar Association's Standards Relating to Juror Use and Management (1993). These standards emphasize that "the opportunity for jury service should not be denied or limited on the basis of race, national origin, gender, age, religious belief, income, occupation, or any other factor that discriminates against a cognizable group in the jurisdiction" served by the court.
Procedures designed to achieve representativeness include combining regularly maintained lists of registered voters and licensed drivers and using random selection procedures at each step of the jury selection process. Measurement Overview. As noted in the introduction to this performance area, courts cannot guarantee that juries reach equitable decisions. Nor can they guarantee that the individuals chosen through voir dire to sit at trial are representative of the community from which they were chosen. Courts can, however, provide a significant measure of fairness and equality by ensuring that the methods employed to compile source lists and to draw the venire produce pools that are representative of the total adult population of the jurisdiction. Thus, all those individuals qualified to serve on a jury should have equal opportunity to be considered and selected. This will help ensure that all parties and the public are confident that jurors are drawn from a representative pool. Standard 3.2 parallels the emphasis on broad participation in and representation on juries found in the standards on juror use and management that have been adopted by the major national court organizations, including the American Bar Association (ABA), and by many of the States.[2] These standards emphasize that jury duty should not be denied or limited on the basis of any factor discriminating against a "cognizable group" in the jurisdiction served by the court. Such a group can be "an economic, occupational, social, religious, racial, political, or geographic group in the community such as physicians, blacks, Protestants, or welfare recipients." Procedures designed to achieve representativeness in juries are included in ABA Standard 2. This standard encourages maximizing the representativeness and inclusiveness of the jury source list by combining regularly maintained lists of residents, if any single list is found lacking. ABA Standard 3 encourages the use of random selection procedures at each step of the jury selection process. There are three measures associated with Standard 3.2: Measures 3.2.1, 3.2.2, and 3.2.3. These measures focus on jury representativeness, considered by many courts to be the most crucial indicator of quality. However, the measures are presented here in a sequence that parallels the developmental nature of the jury selection process--compilation of the source list, design and application of random selection procedures, and selection of the juror pool. Measure 3.2.1 focuses on the inclusiveness of the source list. Inclusiveness is measured by comparing the number of names on the source list with the number of age-eligible persons in the population of the jurisdiction.[3] If the census or other demographic source indicates that the jurisdiction contains 100,000 persons over the age of 17 (assuming the statutory minimum age is 18) and the source list is the voter list containing 80,000 names, the inclusiveness of the voter list is 80 percent. Though high inclusiveness does not guarantee representativeness, it makes reasonable representativeness likely. Theoretically, if inclusiveness is 100 percent, representativeness is achieved. Inclusiveness is an excellent first measure because it is subject to straightforward calculation and because it provides the first indication of compliance with this standard. It is possible that a small list with low inclusiveness could represent the population, particularly if the population is very homogeneous.
However, many persons would not be available to be selected for jury service if the inclusiveness were low. In the interest of equality and fairness and the desirability of broad citizen participation, the inclusiveness should be as great as possible. The greater the inclusiveness, the greater the sharing of the responsibility and burden of jury service. Thus, inclusiveness has a dimension beyond representativeness, that of citizen participation in the administration of justice. Measure 3.2.2 focuses on the use of random selection procedures. For years the jury system was marked by the appearance of individuals hand-selected from certain strata of the population. Discrimination, intentional or not, was usually the result. Verdicts reflected the community standards of these strata, and the viewpoints of juries rarely reflected those of the entire community. It was only in 1975 that the U.S. Supreme Court held that women could not be excluded simply because they are women.[4] Although the previous measure emphasizes the use of a broadly inclusive list, the advantages of such a list are lost if the selection of names from it is not random. The ABA standards call for randomness at each stage of the juror selection process while recognizing that certain practices are nonrandom but nonetheless permissible. Employing these standards eliminates all other nonrandom procedures. The permitted nonrandom procedures given in Standard 3 of Standards Relating to Juror Use and Management are as follows: o To exclude persons ineligible for service. The inability to communicate in English or the existence of a felony conviction are nonrandom within the population, but exclusion of these persons is permitted. o To excuse or defer prospective jurors. An excuse based on individual or community hardship or a postponement to permit persons to serve who would otherwise be excused is nonrandom but permitted within statutory or case law limits. o To remove prospective jurors based on challenge for cause or if challenged peremptorily. These discretionary practices, if established by statute or rule, are permitted. o To provide all prospective jurors with an opportunity to be called for jury service and assigned to a panel. In this practice, all persons reporting for jury service are randomly assigned to a panel for voir dire before anyone is assigned a second time. The result is the best possible representativeness, although it is not a purely random selection. Measures of randomness can be complex. The method proposed here is based on careful observation rather than on statistical tests. Observed departures from randomness beyond reasonable expectation place the burden on court staff to explain the reasons for the unexpected outcomes. Finally, Measure 3.2.3 focuses on the representativeness of the final juror pool. Representativeness of the pool or venire of prospective jurors is measured by the degree to which the persons in the pool or panel represent, by some demographic category, the population in the jurisdiction. Typical categories are race, ethnic origin, age, gender, occupation, and education. Representativeness is the means by which courts usually assess the selection, qualification, and summoning processes of the jury system, although standards for permissible deviation from representativeness have not been established. Both inclusiveness and representativeness use the total population within the statutory age limits as the basis of comparison.
The total community is the basis, whether drawing on the constitutional mandate of "an impartial jury of the State and District" or on the case law mandate of Duren v. Missouri, in which the U.S. Supreme Court defined the test for denial of a fair cross-section.[5] The census provides the best measure of the total community. Although some local data sources may be available, the following discussions of measuring inclusiveness and representativeness are based on census data. Measure 3.2.1: Inclusiveness of Jury Source List This measure compares the number of names on a court's juror source list(s) with the number of age-eligible persons in the jurisdiction's population. The more closely the numbers match, the better the court is performing on this measure. Planning/Preparation. The U.S. Department of Commerce, Bureau of the Census, publishes the population statistics of all counties in the County and City Data Book.[6] More detailed data for each State can be found in General Population Characteristics, also available from the Bureau of the Census.[7] (The latter is PC 80-1-BXX, where XX is the State volume number.) This volume contains the age, race, gender, and national origin composition of each county as reported by the census every 10 years. Extrapolations of these data for the period between the census years are prepared by the Bureau of the Census and by local units of government such as planning commissions. In addition, a Census Data Center in each State (usually at one of the major universities) provides access to and assistance with census data and other statistics or data sources. The eligible population for these calculations is citizens 18 years old and over, or whatever age stratification is defined for the State. Excuses granted for the elderly do not reduce the eligible population unless individuals over a certain age are prohibited from serving on jury duty. Courts may wish to adjust the eligible population by excluding those who are hospitalized or incarcerated, or who are nonresidents (e.g., military personnel). However, these adjustments are usually beyond the accuracy of the measurement. Data Collection. The source list may be one or more combined lists from which names are selected. Typical lists are the voters list, the drivers list, the merged voters and drivers list, or other single or merged lists. The size of the source list is determined by summing the number of names on the list(s). Inclusiveness is measured by dividing the size of the list by the size of the eligible population. For instance, if the size of the source list is 1,439,066 and the size of the eligible population is 1,541,050, the calculation of inclusiveness would be .9338, or 93.4 percent. Data Analysis and Report Preparation. An absolute standard of inclusiveness has not been adopted. ABA's Standards Relating to Juror Use and Management states that courts should determine inclusiveness and evaluate whether improvement is needed. The national rate of voter registration and the percentage of drivers (including only those persons over 18 years of age), 64.3 and 86.6 percent, respectively, in 1986, suggest some guidelines.[8] A problem with all lists is the inclusion of noneligible persons, which gives a false sense of inclusiveness. The drivers list may include out-of-State residents or persons under the age of 18.
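The inclusiveness calculation itself is simple enough to automate. The following is a minimal sketch using the illustrative figures above; in practice, the list size would come from the court's own jury records and the eligible population from census data.

    # Minimal sketch: inclusiveness of a jury source list.
    # Figures are the illustrative ones from the text.
    source_list_size = 1439066      # names on the (possibly merged) source list
    eligible_population = 1541050   # persons of statutory jury age in the jurisdiction

    inclusiveness = source_list_size / eligible_population
    print(f"Inclusiveness: {inclusiveness:.4f} ({100 * inclusiveness:.1f} percent)")
    # Prints: Inclusiveness: 0.9338 (93.4 percent)

Because of the noneligible persons just mentioned, the computed figure should be treated as an upper bound on true inclusiveness.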
Merged lists may contain persons who appear on both original lists and were not recognized as duplicates.[9] Inclusiveness in excess of 100 percent is often seen with merged lists because of these situations. The extent of the inflation can be estimated from the response to the first mailing sent to the names selected and from those found to be disqualified for reasons such as noncitizenship or nonresidence. The level of undeliverables is also a measure of how up-to-date and inclusive the list may be. High levels of nonresponse should be pursued not only to establish system integrity (i.e., are citizens recalcitrant or simply not there?) but to further refine the inclusiveness measure. Although these problems call the inclusiveness measure into question, merged lists do attest to a good-faith effort to broaden coverage to the maximum extent possible. A standard of 85 percent inclusiveness has been suggested for any list, which would require a good single source list or the merging of several lists.[10] Although an 85 percent inclusive list could completely exclude a minority that constituted 15 percent of the population, such a result is highly unlikely. This measurement of inclusiveness is considered a useful first indication of jury list adequacy. Comparisons by county within a State, the trend over years, or the change when new lists are compiled can provide a valuable benchmark for understanding the jury system. Courts with inclusiveness values less than 85 percent should examine their levels of representativeness as discussed in Measure 3.2.3. Measure 3.2.2: Random Jury Selection Procedures This measure determines whether a court is using random selection to select prospective jurors from the juror source list(s). Data are obtained by comparing actual prospective juror panels with those that would be expected if random selection were used. Planning/Preparation. Although courts may say that all names are considered for selection, some statutes, rules, or jury plans specify that strata be observed. In such cases, courts draw names to represent each stratum equally or to represent each stratum according to some ratio. For instance, some jurisdictions draw by district strata so that the number of names selected from each district is in proportion to the population of the district as compared with the population of the whole jurisdiction. If the source list equally represented each district, a random selection would equally represent each district, to within a small margin of error. These stratified selections are intended to overcome any unequal representation in the source list or lists. However, before applying such techniques, courts should ensure that they are allowed by some authority. Measures of randomness can be very complex.[11] For this measurement, it is recommended that courts compare several observations with expected values. Although occasional deviations from expected values are consistent with randomness, persistent patterns of unexpected results require investigation. For instance: o A panel of 30 prospective jurors, all male, is expected to occur once in every billion panels. One occurrence is reason for great amazement; two occurrences should provoke great concern. o Although the alphabet has never been shown to produce a bias, a group of prospective jurors in alphabetical order, or representing only a portion of the alphabet, raises questions of inclusiveness or discretion.
o A potential jury pool consisting of more than one individual with the same last name or the same address can be expected to occur occasionally but should be checked if it occurs regularly. o The same people often are called for jury service year after year or several times within the same year. Repeat selections are expected. If 10 percent of the list is selected each year, 1 percent will be selected in 2 successive years and .1 percent will be selected in 3 successive years. Values greater than this need to be investigated. Data Collection. This measure is conducted by examining the list of persons reporting for jury service. These persons may be the entire pool of prospective jurors or, if persons are brought to the court in panels, a number of panels could be examined. Several hundred names should be adequate for these examinations. If suspicious patterns are found, persons reporting at other court or jury terms should be examined. If the patterns persist, problems clearly exist. If the patterns are related to the date of service, problems likewise may exist. Patterns to examine are: o Alphabetical distribution. Half of the last names should be grouped A through K. Deviations of more than a few percent should be investigated by examining the alphabetical distribution of the source list or lists. o Alphabetical inclusiveness. The last names of those serving should represent the entire alphabet. Omissions of the top or bottom of the alphabet should be examined because such omissions would indicate that the whole list was not used. Panels of persons whose last names contain only a portion of the alphabet are probably being called in via a recording that identifies individuals to report by last name. This practice should be replaced with one that uses random numbers to select individuals. o Geographical distribution. To the extent possible, the panels or pool of prospective jurors should represent the entire jurisdiction. Lack of representation for a distinct area of the jurisdiction could indicate that a geographical listing such as the voter list is being used sequentially rather than by a random selection from the entire list. Data Analysis and Report Preparation. Nonrandom results are usually the result of the use rather than the generation of random numbers. The problem is in how these numbers are used to select names. If nonrandom results are discovered, detailed discussions with those making selections (i.e., data processing or court staff) are needed. Factors to examine include the following (a short sketch of the selection method follows this list): o Are the same key factors used for each selection?[12] If a random start/fixed interval method is used, the start number must be randomly selected in the range from one to the interval number. (If 100 names are desired from a list of 1,000 names, the interval is 10. If "2" is randomly selected, the names at positions 2, 12, 22, 32, etc., in the list are selected.) o If a computer random number generator is used, are the input numbers or seeds changed each time the program is run? o Are names held out or passed over due to permanent exemptions or prior service? If these names represent more than a few percent of the source list, this could be the cause of the problem. o Are the lists or files thought to be random actually sequential lists or files by alphabet or geography? Voter registration lists are often geographically separated by precinct, ward, or district. Lists ordered by voter registration number may have an age order, with older citizens having lower voter numbers. o Do the selected names represent the same list? If a printout of the voters list or merged lists contains the same number of pages of "A's" and "W's," the selected names should have equal numbers of "A's" and "W's." The same list could be counted by ZIP Code, and the distribution of those selected should match the distribution of the source list. That is, if 10 percent of the names on the source list have ZIP Code 22180, about 10 percent of those selected should have that ZIP Code.
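The following is a minimal sketch of the random start/fixed interval method described in the first factor above; the list of names is a placeholder, and all identifiers are illustrative.

    # Minimal sketch: random start/fixed interval selection from a source list.
    import random

    def random_start_fixed_interval(source_list, n_needed):
        """Select n_needed names: random start in 1..interval, then every interval-th name."""
        interval = len(source_list) // n_needed
        start = random.randint(1, interval)           # new random start for each drawing
        return source_list[start - 1::interval][:n_needed]

    # Example from the text: 100 names from a list of 1,000 gives interval 10;
    # a random start of 2 selects the names at positions 2, 12, 22, 32, and so on.
    names = [f"name_{i}" for i in range(1, 1001)]     # placeholder source list
    panel = random_start_fixed_interval(names, 100)
    print(len(panel), panel[:4])

The same program structure can be extended to the checks above, for example by comparing the proportion of selected names in each ZIP Code with the corresponding proportion in the source list.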
The lack of proper numbers for certain demographic groups (e.g., young or black) probably is due to the shortcomings of the source list rather than a problem of randomness. This lack of representation is the topic of the next measure. Measure 3.2.3: Representativeness of Final Juror Pool This measure considers the representativeness of the final juror pool. It involves collecting demographic data by questionnaire on all persons reporting for jury duty during a specified period of time. The questionnaire data are compared to the demographic characteristics of the jurisdiction's population to determine the extent of representativeness. Planning/Preparation. The census publications or data obtained for Measure 3.2.1 also should contain the demographic data needed for this measurement. The assistance of an individual familiar with demographic studies would be helpful for this measure. A local college or university likely has a faculty member with such qualifications. The added credibility brought to the examination by such a person could prove helpful if the jury system is ever challenged for selecting prospective jurors who are not representative of the population. Data Collection. A questionnaire should be distributed to all persons serving, whether they are selected as a trial juror or not. Questionnaires should be used for several days or weeks scattered over a month. At least 200 questionnaires should be used. Excellent response rates can be obtained by asking people to complete the questionnaires before they leave the court. However, this necessitates using a short, quickly completed form. Data Analysis and Report Preparation. Analysis consists of comparing the demographic characteristics of the population (obtained during the planning/preparation stage) to the tabulated data obtained from the jurors. If the population is 30 percent black and the tabulated data indicate that 30 percent of those reporting to the courthouse are black, those reporting perfectly represent the population and there is no disparity between the population and prospective jurors for this particular demographic characteristic. Unfortunately, a difference or disparity usually exists. Two measures of disparity are generally used to gauge the difference between the pool or panels (often called the venire) and the population: absolute disparity and comparative disparity.[13] These measures are defined as follows: o Absolute disparity: This index measures representativeness as the difference between the proportions of the population and of the pool of prospective jurors that are in the category of interest. If the 18 and over population is 30 percent black and the venire is 20 percent, the absolute disparity is 10 percent, or the difference between these two numbers. A criticism of this measure is that it is not sensitive to the relative size of the disparity.
That is, a venire that contained no blacks drawn from a population that is 10 percent black would have the same absolute disparity as the 30 percent/20 percent disparity mentioned above. The former situation is much more serious than the latter, which many courts would find acceptable. o Comparative disparity: This measure compensates for the limitation in the concept of absolute disparity by relating the disparity to the size of the underrepresented group in the population. Using an example similar to the one above, if the venire is 20 percent black and the population is 30 percent, the comparative disparity is [(30-20)/30 X 100] or 33 percent. A venire with the same absolute disparity (one that contained no blacks in a community that is 10 percent black) would produce a comparative disparity of 100 percent [(10-0)/10 X 100]. Thus, the comparative disparity more properly reflects the difference in these two situations. Comparative disparity is the percentage by which the probability of serving is reduced for people in the category being examined. (Note that this underrepresentation is positive while an overrepresentation is negative--a point that often causes confusion.) Kairys et al., while admitting that no clear standard values exist based on case law, suggest a maximum comparative disparity of 15 percent.[14] An article surveying California case law as of 1987 cites absolute disparity as low as 1.8 percent and comparative disparity of 43 percent and above as significant.[15] The significance of the results should be based on all of the following: o The findings of the State's appellate courts in representativeness challenges. o The level of the disparity (great disparities require greater actions by the court). o The alternatives available through other lists and the feasibility of merging or using these lists. Finally, regardless of the exact numerical degree of disparity, there is a need to determine how and why the final juror pool is unrepresentative. What are the likely reasons for the disparity? Are out-of-date, invalid, or unreliable sources being relied on in the selection process? Does the unrepresentativeness arise from different attrition rates for jurors from different social groups? What policy changes might be necessary to remedy the situation? By addressing these questions, the court can use Measure 3.2.3 for basic self-improvement.
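Both disparity indices reduce to one-line formulas. The following minimal sketch reproduces the illustrative figures above; in practice, the population and venire percentages would come from the census data and the juror questionnaires.

    # Minimal sketch: absolute and comparative disparity.
    def absolute_disparity(population_pct, venire_pct):
        """Difference, in percentage points, between population and venire shares."""
        return population_pct - venire_pct

    def comparative_disparity(population_pct, venire_pct):
        """Percentage by which the chance of serving is reduced for the group."""
        return (population_pct - venire_pct) / population_pct * 100

    print(absolute_disparity(30, 20))       # 10 (percentage points)
    print(comparative_disparity(30, 20))    # 33.3... percent
    print(comparative_disparity(10, 0))     # 100.0 percent

As noted above, underrepresentation yields positive values and overrepresentation negative values under this sign convention.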
Standard 3.3: Court Decisions and Actions Trial courts give individual attention to cases, deciding them without undue disparity among like cases, and upon legally relevant factors. Commentary. Standard 3.3 requires that litigants receive individual attention without variation due to judge assignment or legally irrelevant characteristics of the parties, such as race, religion, ethnicity, gender, sexual orientation, color, age, handicap, or political affiliation. Persons similarly situated (e.g., criminal defendants faced with or found guilty of similar offenses and having similar criminal histories) should receive similar treatment. The standard further requires that court decisions and actions be in proper proportion to the nature and magnitude of the case and to the characteristics of the parties. Variations should not be predictable due to legally irrelevant factors, nor should the outcome of a case depend on which judge within a court presides over a hearing or trial. The standard refers to all decisions, including sentences in criminal cases, the conditions of bail, the amount of child support ordered, the appointment of legal counsel, and court-supervised alternatives to formal litigation. Measurement Overview. One of the most fundamental problems confronting a democratic society is discrimination on the basis of race, ethnicity, gender, religion, or any other factor. The undesirable nature of discriminatory conduct becomes truly odious when the source of the conduct is a governmental institution. Hence, not surprisingly, the performance of courts is scrutinized closely for the presence of discriminatory policies, procedures, and practices. Virtually every State court system has tried to identify whether it is contributing to discrimination, where discrimination occurs, and what can and should be done to eliminate it. The formation of racial, ethnic, and gender bias commissions is a recent and prominent example of these concerns.[16] The purpose of many of the commissions is to determine the extent of perceived bias in the courts among the citizenry, to evaluate the reality of that bias, and to recommend ways to remedy both the perceptions and any actual biases discovered during the inquiry. In doing this work, these commissions have drawn attention to the problem and have heightened the consciousness of State judicial leaders, prompting them to remove bias where it exists. Discrimination and bias are antithetical to underlying legal and constitutional principles and thus are crucial to eliminate. Standard 3.3 reiterates these principles by asserting that the court is to treat every case with individual attention in a consistent manner on the basis of legally relevant factors. Because the topic of bias is extremely sensitive, courts will want to measure their performance in this area very carefully. Courts will want to know that measures of fairness, equality, and integrity are valid and that conclusions concerning their performance are not open to misinterpretation. However, determining the scope, location, and magnitude of bias requires considerable court resources. The more precise the desired conclusions about the court's policies and practices pertaining to the race, gender, ethnicity, or age of courtroom participants, the more time-consuming and costly the methods the evaluation process requires. As a result, courts should begin with the simplest approach to determining court performance with regard to bias and move on to more complex measures as the court desires or requires more precise or complex answers. Courts with limited experience in the area of fairness, equality, and integrity may want to begin by compiling information, literature, and readily available data. A court may choose to limit its compilation to specific topics it considers most relevant or to materials that discuss issues in similarly sized and situated jurisdictions.[17] Has the topic ever been investigated in this court? How broad in scope and how detailed were these studies? Did they cover the treatment of litigants, witnesses, and jurors in both civil and criminal cases? For a court already familiar with the general topics of equality and fairness, the initial approach might be to focus on the opinions of experts, court users, and the community. Following the model of many bias commissions, the court may convene focus groups to reveal attitudes toward the court from various points of view.
This opinion gathering should have an agenda that structures the discussion. For instance, invited participants might be limited to representatives of selected groups or the topic might be limited to a specific aspect of the legal process such as sentencing criminal defendants. Court organizers should emphasize that the discussion is about general opinions toward the court and should not focus on any particular person or case. Following the exercise, the court can then evaluate issues such as: Is the court commonly viewed positively or negatively? Or, is the general outlook one in which bias is thought to be an exception and limited to particular circumstances? A knowledge of the literature and data about bias and an awareness of the opinions toward the court will be useful. If the general picture reveals areas of potential problems, the court can decide whether to pursue a more systematic inquiry into the possible sources of bias and discrimination. Even if the general picture is almost entirely favorable, the court may decide to confirm this view with more systematic information. The gathering of more specific and detailed information demands more resources, time, and skills to complete. As explained in detail in the following pages, the implementation of quantitative measures requires more complex methodologies (e.g., inquiry into individual case files, data manipulation, or a systematic survey of a random group of individuals) than those required by the first two approaches. Measures 3.3.1, Evaluations of Equality and Fairness by the Practicing Bar, and 3.3.2, Evaluations of Equality and Fairness by Court Users, focus on the views of practicing attorneys and court users toward the decisions and actions of courts through a survey of a random sample of these individuals. Measure 3.3.3, Equality and Fairness in Sentencing, focuses on the extent to which legally relevant factors account for the court's sentencing decisions in criminal cases. To document whether any perceived problems exist, a statistical approach is described. This approach, however, is likely to require technical assistance from the research community. Measure 3.3.4, Equality and Fairness in Bail Decisions, focuses on the extent to which legally relevant factors account for the court's bail decisions in criminal cases. Systematic information is gathered to answer this question through a review of closed case files. Finally, Measure 3.3.5, Integrity of Trial Court Outcomes, examines the integrity of court decisions and actions as indicated by the outcomes of civil and criminal appeals. The measures described for Standard 3.3 are challenging. The reason for this complexity, however, is to ensure that any findings regarding the presence or absence of bias are valid. These measures may be beyond the scope of some courts' available expertise and resources. Other courts may choose to implement only one or two of the suggested measures based on their own resources. Measure 3.3.1: Evaluations of Equality and Fairness by the Practicing Bar The purpose of this measure is to ascertain the practicing bar's perceptions of the equality and fairness of the court's decisions and actions. Members of the bar who appear in court will be asked, through a survey questionnaire, to assess the fairness and equality of the court's actions and decisions. 
A consensus among them that the court provides attention to litigants, produces similar outcomes among like cases, and relies upon legally relevant factors in making decisions will be another indication that the court complies with Standard 3.3. Planning/Preparation. The first step is to construct a set of questions that measure the extent to which attorneys believe that the court is treating individuals fairly and equally. Questions can be drawn from both previous polls of judicial performance[18] and basic research studies.[19] These two bodies of literature have been consulted to design a form for use by the court. (See Form 3.3.1, Illustrative Questionnaire Concerning the Practicing Bar's Views of the Court's Equality and Fairness.) The questionnaire that follows is divided into four sections. Section I seeks to establish the experience of attorneys with the courts. For example, attorneys who have had many cases heard before the court (question 1) may have different responses than attorneys who have had only a few cases heard. Section II focuses on the views of attorneys regarding whether the court's decisions are affected by characteristics of litigants or attorneys. Following Standard 3.3, the court should not be affected by legally irrelevant factors such as the gender or race of the attorneys or the litigants (questions 4, 7). Attorney views on court practices also can be gauged by asking them if the court shows favoritism (question 5) or antagonism (question 6) to any of the participants. Because there are many possible situations in which the court might demonstrate such undesirable practices, an open-ended question (question 8) is included to describe those situations. The answers to the questions in Section II will most likely determine the answers to the questions in Section III, which asks attorneys for their overall judgments concerning fairness and equality (questions 9 and 10). Finally, Section IV seeks to establish a profile of the attorneys. This information is helpful for comparing the responses between different categories of attorneys (e.g., male versus female). Data Collection. This step involves asking members of the bar to complete the questionnaire. Because there are many attorneys who have no direct contact with the court, a portion of them will not return the questionnaire. Hence, a preferred method is to send questionnaires only to those attorneys who have appeared before the court at least once during the past year. Names of these persons may be obtained by a canvass of dockets during the period. This approach has the advantage of identifying in advance attorneys who are heavy, medium, and light users of court resources. A court may wish to target one set of users or to sample attorneys in proportion to usage ratios. For both methods, however, followup mailings of reminder postcards should be used to ensure a good response rate. Data Analysis and Report Preparation. Most responses on the survey instrument are associated with a specific number code (e.g., "strongly agree" equals 1). For each survey form that is returned, attorney responses are recorded by entering these number codes into a computer file and then tabulated using a computer software program. For the first analyses, each question should be examined to determine whether the attorneys consider the court to be a source of unfair or unequal decisions. For example, what percentage of the attorneys believe that the court sets higher bail for particular racial/ethnic groups (question 7)?
That is, how many respondents circled options 1 and 2? In general, the higher the percentage of attorneys that agree that the court acts without bias, the more the court meets Standard 3.3. That principle should guide the interpretation of individual questions. For example, if at least a majority of the respondents circle options 1 and 2 in question 10, it appears that the court, in general, is performing positively on this indicator. Conclusions, however, should not be drawn without first analyzing the responses of various subgroups of respondents. These analyses are important for determining whether the opinions of some groups are underrepresented. For example, if most respondents are white males, the general analyses will reflect the opinions of this group. If white males do not see the favoritism or hostility experienced or perceived by other groups, the general analyses will not give the whole picture. It is important, then, to determine how the responses of other groups compare with general responses. Finally, the responses to different questions can be examined in relationship to one another. Specifically, what issues explain the attorneys' overall reactions (questions 9 and 10)? As an illustration, it may be the case that the more a respondent believes that the court does not sentence defendants of particular racial/ethnic groups more severely (question 7d), the more likely he or she is to agree that the court is fair (question 10). In considering such relationships, questions 4 through 8 can be regarded as potential criteria for determining attorneys' reactions regarding fairness and equality in the court.[20] It is important to note that this measure examines perceived bias among practicing attorneys. It does not consider the accuracy of those perceptions. It is up to the court to determine the level at which the perception of bias by practicing attorneys is sufficient to warrant further action. Measure 3.3.2: Evaluations of Equality and Fairness by Court Users All individuals (litigants, jurors, witnesses, and victims) who are involved in a court case form impressions of the way they and others are treated in the courthouse. Even members of the public who only observe the court proceedings form impressions. This measure is designed to collect information about their impressions of the court's ability to provide fair and equal treatment. Planning/Preparation. The first step is to construct a set of questions that measure the extent to which court users believe the court is treating individuals fairly. Many of the questions can be drawn from previous polls of judicial performance[21] and basic research studies.[22] These bodies of literature have been consulted to design two forms that can be used to gather information on the experience and perceptions of two groups of court users: (1) a courtroom group consisting of civil and criminal jurors, witnesses, and litigants involved in court proceedings; and (2) an administrative group consisting of persons coming to court to pay a fine, meet with a probation officer, or check a court record. (Please refer to Form 3.3.2, Illustrative Questionnaire Concerning the Users' View of the Court's Equality and Fairness, for an example of the questionnaire.) The questionnaires are divided into three sections. Section I asks each respondent to comment on his or her general views of court policies, procedures, and practices. Section II asks each respondent to comment on his or her experiences.
Section III asks for information on the respondent and the nature of his or her contact with the court. This information will provide a profile of the respondents that may help to explain their answers. Data Collection. Administration of the questionnaire is different for each group. The distribution strategy for each group is presented next. o Courtroom group: Lists of civil and criminal case jurors, witnesses, and litigants who have been involved in court proceedings during the past year are compiled. A questionnaire is mailed to each individual on the list. o Administrative group: Employees of each administrative office or section of the court distribute a questionnaire to each individual with whom they have contact. Employees ask each respondent to complete the questionnaire and return it in the envelope provided. Questionnaires should be distributed for a specific time period to ensure that a sizable number have been given out. Data Analysis and Report Preparation. Most responses on the survey instrument are associated with a specific number code (e.g., "strongly agree" equals 1). Responses are recorded by entering these number codes into a computer file and then tabulated using a computer software program. Analysis is conducted in two steps. First, each question should be examined to determine whether the respondent considers the court to be a source of unfair or unequal decisions. In general, the higher the percentage of court users that agree that the court acts without bias, the more the court meets Standard 3.3. Conclusions should not be drawn, however, without first analyzing the responses of various subgroups of respondents. These analyses are important for determining whether the opinions of some groups are underrepresented. For example, if most of the respondents are white males, the general analyses will reflect the opinions of this group. If white males do not see the favoritism or hostility experienced or perceived by other groups, the general analyses will not give the whole picture. It is important, then, to determine how the responses of other groups compare with general responses. The responses to different questions also can be examined in relationship to one another. Does the respondent's personal experience correlate with his or her views of how social groups are treated? For example, do those individuals who feel they were treated on the basis of their race (options 1 and 2 in question 2a) tend to see the court favoring or showing hostility toward a particular racial/ethnic group?[23] It is important to note that this measure examines perceived bias and not the accuracy of the perceptions. It is up to the court to determine the level at which perceived bias among court users warrants further attention. Measure 3.3.3: Equality and Fairness in Sentencing One application of Standard 3.3 is sentencing in criminal cases. Because the imposition of criminal sanctions deprives individuals of their liberty, the fairness of the process and corresponding outcomes is an important topic for the measurement of court performance. In fact, some courts might regard fairness in sentencing to be among the most critically important goals that it should strive to meet. However, fairness in sentencing is understandably very difficult to measure.[24] Even the most refined measurement will produce results more suggestive than definitive, which is not astonishing given the difficulty of sentencing for trial judges. 
Just as the trial judge must weigh, balance, and take into account many factors, the court researcher must identify, measure, and interpret the effects of many complex factors, including some that are difficult to express on a precise scale of measurement. Hence, trial courts take on a very daunting task by attempting to measure fairness in sentencing. Why? Because of the sensitive nature of conclusions about fairness, a court will want to know that the conclusions are valid to the greatest extent possible. However, sound conclusions require a rigorous methodology, which requires a substantial commitment of time, quantitative skills, and resources. Thus, without intending to deter courts from applying this measure, honesty requires acknowledging the labor-intensive aspect of the measurement process necessary to reach the kind of conclusions the court is likely to want to draw. (Note: The same point applies equally to the measurement of fairness in bail decisions, Measure 3.3.4.) What does fairness in sentencing mean? According to Standard 3.3, "trial courts give individual attention to cases, deciding them without undue disparity among like cases and only upon legally relevant factors." Translated into more operational terms, the standard is saying that the imposition of punishment should not be based on a defendant's race or gender. For example, African-Americans should not receive longer sentences than non-African-Americans simply because they are African-American. Different sentences should be the product of differences in criminal backgrounds, offense severity, circumstances surrounding the offense, and other legally relevant factors. Finally, while equality and fairness are positive standards, they are observed in the negative. Courts are urged to be equal and fair in their treatment, but their performance is measured in terms of outcomes that are not supposed to occur--inequality, disparity, and inconsistency. Planning/Preparation. Courts should consider four steps in planning to undertake the measure. First, some familiarity with the literature on sentencing might prove useful. The most comprehensive volume, Research on Sentencing: The Search for Reform, is published by the National Academy of Sciences and available in most public and college libraries. The volume is written from the researcher's perspective, however, and contains some articles of a technical nature. A complementary article, "Racial Discrimination" by Rose Matsui Ochi, which appeared in 1985 in The Judges' Journal, illustrates how research results are interpreted and used by practitioners who seek to eliminate bias in sentencing. This article is also useful because it references additional readings on the topic that are readily available. A second step is for the court to examine its capacity for conducting a rigorous measurement process. If the court lacks a staff person skilled in quantitative analysis, it might find it helpful to ask a staff member of a State sentencing commission, State AOC, or local university for guidance in designing a plan of data collection, analysis, and interpretation. A third step is to set some boundaries on the scope of the measurement process. Despite the fact that researchers construct very complex quantitative models of sentencing, the proposed measure is intended to help a court assess itself and not necessarily to advance the state of knowledge.
Hence, it permits the court to limit the scope and detail of its inquiry without sacrificing the validity of the results. As an example, the court needs to decide what aspect of sentencing is of greatest importance. Is it more important to determine fairness in the types of sentences that defendants receive (e.g., incarceration versus probation) or in the length of sentences imposed (e.g., are men incarcerated for longer periods of time than women)? Are both aspects equally important? Finally, before applying the measure, the court should discuss how it plans to interpret the results. The results will be in the form of numbers called coefficients that are based on the application of quantitative techniques to information gathered from individual case files. There will be a coefficient for each legally relevant factor (e.g., prior record and offense committed by the offender) and each legally irrelevant, or extralegal, factor (e.g., race of offender). The coefficient measures the impact of a particular factor, controlling for the effects of all other factors. If the legally irrelevant factors are not influencing outcomes, the numerical values of their respective coefficients will not be statistically different from zero. For example, knowing that an offender is a man will not predict the sentence any better than knowing that the offender is a woman. Additionally, the coefficients of all legally relevant factors should be significantly larger than those of irrelevant factors. If they are, one reasonably can draw the conclusion that there is limited bias in sentencing and that sentencing is primarily a product of legally relevant factors. If the court knows what to look for in advance, it will be more prepared to interpret and use the results both internally for self-improvement and for presentation to interested groups outside the court. Defining the Data Elements. Although the exact delineation of legally relevant and legally irrelevant factors may vary somewhat across States because of differences in substantive and procedural law, some distinctions likely will be valid in almost all situations. For the purposes of demonstrating the utility of the measure, therefore, it is assumed that legally relevant factors include offense seriousness, quality of the evidence, prior criminal record, and current legal status. Irrelevant factors include demographic, socioeconomic, and social stability attributes, and case processing attributes.[25] Based on that assumption, a court meeting Standard 3.3 has sentencing outcomes that can be explained more on the basis of those legally relevant factors than on factors deemed irrelevant. In addition to identifying a set of determinants of sentencing outcomes, the initial measurement step involves specifying the outcomes of sentencing. Two related outcomes are especially important: o In/Out Decision. Is the offender sentenced to a term of institutional incarceration? Or is the offender given some alternative such as probation, restitution, community service, or fine? o Length of Sentence. How long is the period of institutionalized incarceration? The first outcome distinguishes between convicted offenders who are sentenced to prison or jail and those who are given a sentence outside these institutions. The second outcome focuses on the length of the sentence in years, months, or days imposed on individuals sentenced to jail or prison.
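Before turning to the individual factors, it may help to see the shape of the data this measure requires. The following minimal sketch codes a single hypothetical case; every field name and value is an illustrative assumption to be adapted to the court's own data collection form.

    # Minimal sketch: one coded case record for the sentencing analysis.
    # All field names and values are hypothetical.
    case = {
        # Sentencing outcomes
        "incarcerated": 1,              # in/out decision: 1 = prison or jail, 0 = alternative
        "sentence_months": 36,          # length of sentence, if incarcerated
        # Legally relevant factors
        "offense_severity_rank": 7,     # rank on a court-defined severity scale
        "prior_felony_convictions": 2,
        "on_probation_or_parole": 0,
        # Legally irrelevant factors
        "age_years": 29,
        "male": 1,
        "nonwhite": 0,
        "judge": "A",                   # judges identified only by letter
    }

One such record per sampled conviction, assembled into a table, becomes the input to the statistical analyses described under Data Analysis and Report Preparation below.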
Legally relevant factors: Concerning the seriousness of the offense, a basic judgment must be made either to focus on a broad range of offenses or to isolate particular offense categories (e.g., robbery, burglary). The first option is to consider a large set of offenses and to rank them according to severity (e.g., homicide, robbery, rape, assault, weapons, drug sale, drug possession, burglary, forgery, and theft).[26] Additionally, other indicators may be used to gauge more specific degrees of severity, such as the use of a dangerous weapon, the extent of injury to the victim, the amount of property taken, and whether the offender was a principal or accessory to the offense. Although some version of the first approach is highly recommended, a second option is to focus on selected offenses separately. If particular offenses are deemed of such importance to the court and the community that they merit special attention, this approach may be appropriate. However, this option lacks the representativeness of the first option, which encompasses the full range of offenses. Hence, some version of the first option is generally recommended. The quality of the evidence is extremely difficult to measure and may be known fully only by the participants involved in each individual case. As a result, retrospective reliance on case records for information only approximates the complete and correct picture of the strength of the evidence. Possible indicators include the number of prosecution witnesses, the number of expert witnesses, the number of exhibits, the submission of laboratory tests, and so forth. A limitation of these indicators, of course, is that they relate primarily to the few cases that go to trial. Prior criminal history is usually information presented to the court from State law enforcement records. Although some law enforcement information systems are more detailed than others, criminal history generally is measured in terms of the number of prior adult felony convictions, the elapsed time since the last conviction, whether the last conviction was for the same offense as the current charge, and the current legal status of the individual at the time of arrest (e.g., on parole or probation). Legally irrelevant factors: Demographic, socioeconomic, and social stability factors are a combination of quantitative indicators such as age (years), income (earned income per month), and education (number of years) and categories such as gender (male versus female), race (white versus nonwhite), employment status (employed versus unemployed), and marital status (married versus nonmarried). The case processing characteristics are all categorical. Pretrial release status may be divided among those offenders on bail, those detained at least part of the time between arrest and final disposition, and those detained all of the time. Disposition similarly can be separated among those offenders who pled guilty, those convicted by a bench trial, and those convicted by a jury trial. A final factor is the judge presiding over a sentencing decision. Each judge need only be identified by an alphabetic character (e.g., Judge A, Judge B, Judge C, and so forth). The measure is intended to determine whether any judge has an influence on sentencing that is greater than that of generally accepted legal factors. Sentencing outcomes involve the distinction between institutional incarceration and some alternative to incarceration. This distinction captures the in/out decision.
For the length of the sentence, a standardized measure is the percentage of the statutory maximum imposed in the actual sentence. Because some sentences may involve a range, the minimum of the sentence imposed should be used in calculating the percentage. This standardization permits different offenses to be compared despite their differences in severity. Data Collection. In most jurisdictions, virtually all of the factors and sentencing outcomes can be measured using information contained in presentence investigation reports and closed court case records. A court can use these sources by drawing a random sample of approximately 1,000 closed cases and selecting from that pool those cases in which a conviction was obtained by guilty plea or trial. (The remaining cases should not be discarded because they can be used as part of the data set for Measure 3.3.4, Equality and Fairness in Bail Decisions.) Of this pool, 70 percent are likely to involve some sort of conviction, which means that approximately 700 cases can be used to examine the factors associated with the in/out sentencing decision. Of these cases, approximately half will involve a sentence of institutional incarceration, providing the basis for assessing the factors associated with the length of the sentence. The measurement of sentencing factors and sentencing outcomes described above needs to be translated into a more specific and detailed form prior to the review of court case records and presentence investigation reports. A data collection form should be constructed for the purpose of applying the sorts of indices suggested for the different factors. (Please see Form 3.3.3, Illustrative Sentencing Data Collection Form.) Data Analysis and Report Preparation. The question of whether legally relevant factors are more powerful predictors of sentencing outcomes than are irrelevant factors is addressed by the use of statistical models. These models are available in many statistical software programs that are likely to be familiar to sentencing commission staff, court researchers in a State administrative office, or university professors. One or more of these individuals will likely know how to use an appropriate software program to analyze the data collected on the data collection form. Specifically, the expert will know what particular quantitative techniques should be applied to determine the independent impact of each legally relevant and irrelevant factor on the two types of sentencing decisions. In the case of the in/out decision, an appropriate technique is logit analysis. Logit analysis is designed to indicate the independent effects of various factors on different categories (e.g., a sentence of institutional incarceration versus one of nonincarceration). The numbers generated by logit analysis include coefficients for each factor. The sign (+ or -) of the coefficient indicates whether there is a positive (e.g., the more serious the offense, the more likely the sentence will involve incarceration) or inverse (e.g., the longer the length of time since the last conviction, the less likely the sentence will involve incarceration) relationship between each factor and the outcome. A comparison of the magnitude of the coefficients will indicate the relative importance of each factor in determining whether an offender is sentenced to prison as opposed to some alternative sentence. The issue of the length of sentences for incarceration is examined appropriately through the use of regression analysis.
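As a minimal sketch of the logit technique just described, the fragment below fits a logit model to hypothetical case records like the one illustrated earlier. The statsmodels package is an assumed choice; any statistical program with a logit routine would serve, and the simulated data are for illustration only.

    # Minimal sketch: logit model of the in/out sentencing decision.
    # Hypothetical, simulated data; statsmodels is an assumed package choice.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 700                                   # cases ending in conviction
    severity = rng.integers(1, 11, n)         # legally relevant factor
    priors = rng.integers(0, 6, n)            # legally relevant factor
    nonwhite = rng.integers(0, 2, n)          # legally irrelevant factor
    # Simulated outcome driven only by the legally relevant factors:
    p = 1 / (1 + np.exp(-(-4 + 0.4 * severity + 0.5 * priors)))
    incarcerated = (rng.random(n) < p).astype(int)

    X = sm.add_constant(np.column_stack([severity, priors, nonwhite]))
    result = sm.Logit(incarcerated, X).fit(disp=0)
    print(result.summary(xname=["const", "severity", "priors", "nonwhite"]))

In such output, a court performing well on this measure would expect the coefficient on the legally irrelevant factor to be statistically indistinguishable from zero. The regression analysis of sentence length, described next, follows the same pattern with an ordinary least squares model in place of the logit model.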
Regression analysis is designed to indicate the independent effects of factors on an interval measure such as the number of months to be served. Similar to the logit analysis, coefficients are generated by regression analysis. They indicate if there is a positive (e.g., the older the offender, the longer the sentence) or inverse (e.g., the higher the offender's level of education, the shorter the sentence) relationship between each factor and the length of the sentence.[27] The coefficients bear upon the central purpose of the measure in two ways. First, if the legally irrelevant factors are not influencing outcomes, the coefficients associated with them should not be statistically different from zero.[28] Second, the coefficients of all legally relevant factors should be significantly larger than those of irrelevant factors. Looking at the coefficients associated with the different factors, the court can begin to assess their implications for fairness in sentencing. Do the results signal that legally irrelevant factors are having undue influence on the likelihood of incarceration or the length of sentences? Or do the results signal that irrelevant factors fail to account for the court's decisions to sentence offenders to prison or the length of prison sentences? In sum, do the results indicate that sentencing decisions are the product primarily of legally relevant factors and that irrelevant factors are of limited significance? Depending on what the results indicate, the court can use the information as a guide to reviewing its sentencing policies, practices, and procedures. The results might suggest the need for special training programs for newly appointed judges, especially those who come from private civil practice backgrounds. Or, the results might suggest the need for a courtwide training program on current developments in substantive and procedural criminal law. Measure 3.3.4: Equality and Fairness in Bail Decisions The purpose of this measure is to provide information to the court concerning the nature of the factors associated with bail, bond, and release on recognizance decisions.[29] In making these decisions, a court should focus on factors permitted by law. One way to measure the court's reliance on appropriate factors is to determine whether differences in bail decisions are linked more to factors recognized in law or to extra-legal factors such as the defendant's race or gender, the judge assigned to the case, or the geographic location of the court. According to Standard 3.3, the greater the degree to which the differences in the bail status of defendants are consistent with factors permitted by law, the better the court is performing on this measure. The remainder of this discussion outlines a step-by-step procedure that courts can use to measure and assess factors associated with bail decisions.[30] Planning/Preparation. The initial step is to identify the factors permitted by law to shape the court's bail decisions. Because States have different bail guidelines, the list of factors will differ somewhat across jurisdictions. However, most courts use a core set of factors in deciding whether to release the defendant on recognizance and in setting the dollar amount of the required surety bond if the defendant is not released. Legally relevant factors are as follows: o Defendant's Criminal and Court History (1) Prior record--Does the defendant have prior felony convictions? If so, how many and for what offenses?
The notion is that it is rational for the court to set stiffer bond requirements for a more extensive prior record, especially if the defendant has recent convictions for the same offense. Some States may incorporate this rationale explicitly into bail guidelines by limiting the release of "dangerous offenders." Finally, did the defendant intimidate witnesses while on release for prior offenses? Such behavior also is grounds for imposing a more restrictive bond. (2) Prior court appearances--Has the defendant missed prior court appearances? How many times? Did the defendant leave the area on those occasions? Because a rationale of bail is to ensure court appearance, previous failures to appear also are reasonable grounds for imposing a more restrictive bond. (3) Current legal status--Is the defendant on parole or probation? Are there outstanding warrants? Parole or probation violations are considered sound reasons for imposing stricter bond conditions. Similarly, an outstanding warrant justifies stricter bond conditions. o Current Offense Is the defendant charged with a violent offense? Was there alleged bodily harm caused to a victim? What is the length of the sentence on conviction of the charged offense? It is often deemed appropriate to place more constraints on individuals who are believed either to pose serious threats to the community or to face the possibility of severe sanctions. o Community Ties Is the defendant a resident of the jurisdiction? For how long? With whom does the defendant live? Do family members live in the area? Is the defendant employed? What is the defendant's monthly income? Individuals with close ties to the community are considered likely to appear in court when required and are, therefore, regarded as appropriate candidates for release on recognizance or low surety bonds. o Defendant's Character Is the defendant currently using drugs? Could the defendant's mental or physical condition be impaired by detention? Defendants free of drugs or likely to suffer under detention should receive less restrictive bonds. To determine whether legally irrelevant factors affect bail decisions, data also must be collected on these factors. Legally irrelevant factors include: o Demographic Characteristics These characteristics include race and gender. o Legal Counsel Was counsel available to the defendant? If so, when? What type of attorney represented the defendant? o Judge Assigned to the Bail Hearing Each judge need only be identified by an alphabetical character (e.g., Judge A, Judge B, Judge C, and so forth). The measure is included to see if the composite effect of judge identity is greater than the effect of legally relevant factors. In addition to identifying possible determinants of bail decisions, the decisions themselves need to be outlined. Three of the most fundamental issues are as follows: o Is the defendant released on recognizance? Because of limited incomes, many defendants cannot post even modest surety bonds. For these defendants, release on recognizance may be the only avenue to pretrial release. It is important, therefore, to see the relative frequency with which the court decides to use this option.[31] o If the decision is made not to release the defendant on recognizance, what is the amount of the surety bond? o If a different bail decision is made after the first appearance, should the initial or subsequent decisions be counted?
That is, if a surety bond is set but the defendant is later released on recognizance, should the defendant be considered to be released on recognizance? If the amount of the surety bond is lowered or raised at a later proceeding, which figure should be recorded? One approach to this question is to record bail status at the initial appearance separately from the decision in place at 15 or 30 days after the first appearance. This strategy captures more of the legal process without elevating one decision over another. Data Collection. The information necessary for this measure is available in closed court case records and the records of local bail agencies, pretrial release organizations, or probation departments. The process of selecting cases for analysis involves drawing a sample of 1,000 closed court cases and tracing those cases back to the other organization's files. (The sample of cases can be drawn from the pool of cases used for Measure 3.3.3, Equality and Fairness in Sentencing.) Measurement of bail decision determinants consists of a combination of quantitative scales and classification schemes. An illustrative data collection form is offered as a way of measuring the factors that determine bail decisions and the decisions themselves. (See Form 3.3.4, Illustrative Bail Decision Data Collection Form.) Generally, the factors included on the data collection form are the same as those used by researchers in the field.[32] However, in some jurisdictions, the court may never receive information on specific aspects of the defendant's community ties, character, or socioeconomic status. Instead, these factors may be taken into account by a bail agency that recommends bail decisions to the court. If this is the situation, the bail agency's recommendation should be considered a surrogate for those factors.[33] Data Analysis and Report Preparation. Quantitative techniques can be applied to the data and each data element assessed for its effect on bail decisions after taking into account the influence of all other factors. The results of the analyses will tell the court whether and to what extent each legally irrelevant factor influences bail decisions. Results also will tell the court whether and to what extent legally relevant factors are more influential in decisionmaking than legally irrelevant ones. The two types of bail decisions require different types of analyses: logit analysis and regression analysis. They are discussed subsequently. In the case of the decision to release or not to release on recognizance, an appropriate technique is logit analysis. The basic results of logit analysis are numbers, called coefficients. A coefficient is associated with each factor. The sign (+ or -) of the coefficient indicates whether there is a positive relationship (e.g., the longer the defendant has lived in the community, the more likely his or her release on recognizance) or inverse relationship (e.g., the greater the number of past failures to appear, the less likely his or her release on recognizance) between each factor and the bail decision. A comparison of the magnitude of the coefficients will indicate the relative importance of each factor in predicting the likelihood of a defendant being released on recognizance. The issue of what factors predict the amount of surety bonds is examined by another technique called regression analysis. Regression analysis is designed to indicate the independent effects of variables on an interval measure such as the dollar amount of bonds.
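As a brief aside before turning to the regression step: logit coefficients can also be translated into predicted probabilities, which are often easier for nonstatisticians to interpret. The sketch below is a minimal, hypothetical illustration, assuming the statsmodels package and a coded data frame of bail cases (here called bail) with illustrative column names.

    # Reading a bail logit model through predicted probabilities.
    import statsmodels.api as sm

    bail_predictors = ["prior_felonies", "failures_to_appear",
                       "violent_charge", "years_in_community",
                       "employed", "nonwhite", "male"]
    X = sm.add_constant(bail[bail_predictors])
    ror_result = sm.Logit(bail["released_on_recognizance"], X).fit()

    # Compare two hypothetical defendants who differ only on a legally
    # irrelevant factor (race), holding all other factors at their means.
    profile = X.mean().to_frame().T
    white, nonwhite = profile.copy(), profile.copy()
    white["nonwhite"] = 0
    nonwhite["nonwhite"] = 1
    print("P(release | white):   ", ror_result.predict(white)[0])
    print("P(release | nonwhite):", ror_result.predict(nonwhite)[0])

A large gap between the two probabilities would flag the factor for closer review; the same comparison can be repeated for any factor of interest.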
As in logit analysis, regression analysis generates a coefficient for each factor. The sign (+ or -) of a coefficient indicates if there is a positive (e.g., the greater the number of prior felony convictions, the larger the amount of the bond) or inverse (e.g., if the offender has family members in the community, the lower the bail amount) relationship between each factor and the bail amount. The coefficients bear upon the central purpose of the measure in two ways. First, if legally irrelevant factors are not influencing decisions, the coefficients associated with them should not be significantly different from zero.[34] Second, the coefficients of all legally relevant factors should be significantly larger than those of irrelevant factors. An inspection of the coefficients should address these issues. After the data have been gathered and analyzed, a key task is to present the results to the court. Do the results make sense? For example, should the court be concerned if the results indicate that having family members in the community decreases a defendant's chances of personal recognizance? What does it mean if the presence of family members decreases the average surety bond by a certain amount? In addition to reviewing the intuitive soundness of the results, the court should assess their implications for court performance. Do the results signal that irrelevant factors are not having undue influence? Or, do the results confirm that irrelevant factors have emerged as unacceptably powerful predictors of the court's release decisions? Finally, the court must decide what to do with the results. Regardless of the level of performance, what should be done? What sort of action is appropriate to improve performance? For example, is a courtwide review of bail policies, procedures, and practices needed? Would special training programs for newly appointed judges and programs on current developments in substantive and procedural criminal law help? Although the court must make its own judgments as to what is necessary and desirable, the empirical evidence should inform the making of that judgment. Measure 3.3.5: Integrity of Trial Court Outcomes Measures 3.3.3 and 3.3.4 address adherence to laws or procedures, which can be ascertained explicitly and objectively. A complementary approach, which looks at adherence more broadly, involves the examination of appeals taken from trial court judgments. The analysis of the outcomes of appeals in terms of affirmance and reversal patterns will uncover where problems (i.e., reversible errors) may exist and point to areas where trial court performance can and should be improved. Do problems more frequently arise in particular areas of civil law such as property and commercial litigation and not in other areas such as torts? Are problems more common in appeals taken from certain trial court proceedings such as pretrial motions and not from nonjury trials? Or, are problems associated with particular issues? For example, in criminal appeals, how often are suppression issues successful on appeal? Information on the nature and rate of reversals will enable individual trial courts to identify where problem areas exist.[35] It will also be useful in identifying problem areas for all trial courts within a State and in examining performance over time.[36] A step-by-step procedure for examining the decisions of first-level appeals courts is described next.[37] Planning/Preparation.
An examination of appeal outcomes should include all subject areas (e.g., civil and criminal). However, if a jurisdiction does not have the resources to examine all outcomes, first attention should focus on criminal cases because of the constitutional nature of the issues involved. The number of cases to be examined will depend on the scope of the inquiry. In an examination of only civil or criminal appeals, for example, 250 to 300 appeals resolved on the merits will be sufficient in each category to see broad patterns. The information to be collected from each appeal may include: o The area of law or criminal offense (e.g., for civil appeals: tort, commercial/contract, domestic, property; for criminal appeals: homicide, other crimes of violence, property crimes). o The trial court proceeding (e.g., jury trial, nonjury trial, pretrial motion, agency review). o The nature of each issue raised on appeal and its outcome. o The outcome of the appeal. Additional information may be included as measures of case complexity (e.g., severity of the sentence, number of parties, or type of counsel). An example data collection form is presented as Form 3.3.5, Illustrative Outcomes Data Collection Form for Criminal Appeals. Data Collection. The information needed to conduct this measure is available in the case records of the appeals court, although different sources may have to be checked. The docket should be the first source consulted to identify the appeals (e.g., trial court, subject matter, and resolved on merits). The docket also may be a source of other information (e.g., type of counsel). The court's decision document/opinion is a key source of information on the issues raised and their treatment. The notice of appeal or docketing statement is a useful source for background information (e.g., sentence in a criminal appeal) that may not be provided in the decision document. Finally, it may be necessary to check the briefs if the court does not file a written decision or if the decision does not identify the issues the court considered. Data Analysis and Report Preparation. A variety of basic analyses can shed light on the pattern of appeal outcomes and the frequency and distribution of error. For example: o The relative frequency of appeals by subject matter, by underlying trial court proceeding, and by other measures of case complexity. o The relative frequency of outcomes by subject matter, by underlying trial court proceeding, and by other measures of case complexity. o The relative frequency of issues raised by issue disposition. These tabulations help jurisdictions determine whether, and the extent to which, cases involving certain areas of law, raising particular issues, and being resolved in particular trial court proceedings are more likely to pose problems for trial judges than are other appeals. The analysis can be expanded to include other questions of interest. In addition to the quantitative analyses, a qualitative examination of the circumstances surrounding the errors can be undertaken. From a qualitative perspective, it is important to know whether an error occurred because of one of three basic circumstances: (1) the error arose in a new area of law or litigation, (2) the error resulted from the misinterpretation or misapplication of applicable law, or (3) the error was caused by a failure to follow established or appropriate procedures.
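The quantitative tabulations listed above lend themselves to simple scripting. A minimal sketch, assuming the pandas library and a data file built from Form 3.3.5-style records; the file, column, and category names are illustrative only.

    # Tabulating appeal outcomes; names are hypothetical.
    import pandas as pd

    appeals = pd.read_csv("criminal_appeals_sample.csv")

    # Relative frequency of outcomes by area of offense (row percentages).
    print(pd.crosstab(appeals["offense_area"], appeals["outcome"],
                      normalize="index"))

    # Reversal rate by underlying trial court proceeding.
    is_reversed = (appeals["outcome"] == "reversed")
    print(is_reversed.groupby(appeals["trial_proceeding"]).mean())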
The results should be reviewed to identify areas of difficulty for trial courts that need improvement. For example, if the relative frequency of error is strongly related to the area of law/offense, the trial court proceeding, or other measures, the court should focus its corrective measures (e.g., educational programs) on such areas. A disproportionate "error rate" for particular issues would also indicate the need for educational attention. This analysis can be used over time both to identify areas in need of corrective measures and to indirectly measure the effectiveness of such programs. In addition, the use of a common data collection system and a common set of data elements across jurisdictions can highlight the existence of alternatives. For example, a jurisdiction that has a high incidence of error on jury instructions can and should learn how and why other jurisdictions have fewer instruction errors. Finally, the collection and analysis of information on the outcomes of appeals should provide trial courts with a concrete starting point for establishing the acceptable and unacceptable frequency of reversible error. Because most estimates of reversal patterns are based on impressions and personal observations, the data can help courts construct meaningful standards that combine both the frequency of errors and the circumstances under which the errors occur. Standard 3.4: Clarity The trial court renders decisions that unambiguously address the issues presented to it and clearly indicate how compliance can be achieved. Commentary. An order or decision that sets forth consequences or articulates rights but fails to tie the actual consequences resulting from the decision to the antecedent issues breaks the connection required for reliable review and enforcement. A decision that is not clearly communicated poses problems both for the parties and for judges who may be called upon to interpret or apply it. Standard 3.4 requires that it be clear how compliance with court orders and judgments is to be achieved. Dispositions for each charge or count in a criminal complaint, for example, should be easy to discern, and terms of punishment and sentence should be associated clearly with each count upon which a conviction is returned. Noncompliance with court pronouncements and subsequent difficulties of enforcement sometimes occur because orders are not stated in terms that are readily understood and capable of being monitored. An order that requires a minimum payment per month on a restitution obligation, for example, is clearer and more enforceable than an order that establishes an obligation but sets no time frame for completion. Decisions in civil cases, especially those unraveling tangled webs of multiple claims and parties, also should clearly connect each issue and its consequences. Measurement Overview. Three measures are associated with this standard. Two of them require examination of case records, and the third requires a survey of court officials and other individuals who have occasion in their work to read and interpret the terms of court orders and judgments. Measure 3.4.1 focuses on criminal cases, Measure 3.4.2 concerns civil judgments, and Measure 3.4.3 applies to both civil and criminal cases. Measure 3.4.1: Clarity of Judgment and Sentence The purpose of this measure is to determine how well the court communicates the terms and conditions of criminal sentences. Sentences deprive individuals of liberty.
Consequently, courts should not contribute to incorrect applications of punishment. Courts should state sentences clearly and precisely enough in orders that correctional officials, probation officers, and others know how to administer them. This measure requires the random selection of 50 criminal cases in which the defendant was found guilty (the measure, with few adjustments, is also suitable for application to juvenile delinquency or offender cases). Specific information about the details of the judgment and sentence that are key indicators of clarity of the court's orders can be recorded on data collection forms and analyzed quantitatively. Planning/Preparation. From a pool of criminal cases disposed within a recent 6- to 12-month period in which the defendant was found guilty, at least 50 cases are selected at random. To select the cases, the total number of cases on the list must first be determined. For example, the court may have disposed of 1,220 criminal cases in a 6-month period in which the defendant pled guilty or was found guilty after a trial. This total number of cases is then divided by 50, the total number of cases desired in the sample. In the example, this results in 24.4 (round to 24). Next, a number between 1 and 10 is randomly selected and used to identify the initial case for inclusion in the sample. For example, if the random start number is 6, the first case selected is the sixth case on the list. Thereafter, every 24th case is selected. In the example, case numbers 6, 30, 54, 78, and so forth on the list would be selected until the sample comprises 50 cases. The source of the data for this measure will be the documents wherein the findings of the court and the judgment and sentence are set forth. Data Collection. Based on an examination of the findings from judgment and sentencing documents, a data collection sheet for each case is used to record information concerning the following issues: (1) Is it clear from the findings what each charge was and how each was disposed? (2) Is a distinct sentence articulated for each charge for which the defendant was convicted? (3) Is it clear whether sentences are to run consecutively or concurrently? (4) If financial conditions are imposed, is there an unambiguous payment schedule? (5) If there is a finding of joint and several responsibility among multiple defendants, is it clear from the order what will count as noncompliance? For a sample of a data collection sheet see Form 3.4.1, Illustrative Data Collection Form/Clarity of Judgment and Sentence. Data Analysis and Report Preparation. Because criminal sanctions deprive offenders of their liberty, trial courts must achieve the highest level of clarity in stating the terms of sentences to those correctional and probation officials who must administer them and to other judges who may in a subsequent probation revocation hearing be required to determine whether noncompliance has occurred. There are at least three interrelated indicators of performance in this regard. First, no less than 99 percent of all findings should state each of the charges. Second, no less than 99 percent of all judgments should state each of the offenses at conviction. Third, no less than 99 percent of all judgments should indicate whether convictions of multiple offenses should run concurrently or consecutively. 
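Returning to the case selection step under Planning/Preparation, the interval sampling arithmetic is easily scripted. The sketch below is a minimal illustration; only the 1,220-case total, the sample size of 50, and the 1-to-10 random start come from the discussion above, and the rest is illustrative.

    # Systematic random sampling of 50 cases from a list of 1,220.
    import random

    case_list = list(range(1, 1221))   # stand-in for the court's case list
    interval = len(case_list) // 50    # 1,220 / 50 = 24.4, taken as 24
    start = random.randint(1, 10)      # random start between 1 and 10

    # Select the starting case and every 24th case thereafter until the
    # sample comprises 50 cases (e.g., cases 6, 30, 54, 78, ...).
    sample = [case_list[i]
              for i in range(start - 1, len(case_list), interval)][:50]
    print(len(sample), sample[:4])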
Moreover, completeness and clarity in the specification of the financial and joint-responsibility conditions covered by items 4 and 5 above are essential if court personnel are to monitor and enforce those conditions and to disburse funds collected on behalf of victims or the State consistent with the court's intent. Where court orders do involve these conditions, high standards of clarity also should be maintained, although incomplete or ambiguous orders relating to such specific terms may be of lesser consequence than those related to the basic charge, adjudication, and sentencing facts. Measure 3.4.2: Clarity of Civil Judgments This measure evaluates how well the court states the final action taken in the adjudication of a civil dispute. Integrity demands that courts clearly state the terms and conditions of obligations decided by a trial or a court-approved settlement. The measure parallels Measure 3.4.1 but differs in the types of cases selected for review, in the details of the data collection form, and in the addition of a step to determine the clarity of injunctive or declaratory orders/judgments. Planning/Preparation. From a pool of civil cases disposed within a recent 6- to 12-month period, at least 50 civil cases are selected at random in the same manner as described for Measure 3.4.1. (The greater the number of cases included in the sample, the greater the measure's reliability.) In selecting the cases for which data are to be collected, care must be taken to ensure that they are cases for which a judgment was entered by the court (i.e., the sample should result in at least 50 cases that do not include orders of dismissal or cases disposed by reference to a settlement agreement, the terms of which are not incorporated into a judgment of the court). The sample should be stratified to include at least 10 cases in which the order/judgment involves injunctive or declaratory relief (or approximately 20 percent if a sample of more than 50 cases is used). The source of the data for this measure will be documents in which the terms of the order and judgment are set forth. The last order/judgment should be examined in complex multiparty cases that had final adjudication among some but not all of the claims or parties prior to the adjudication and judgment that is dispositive of the entire case. For example, if there are two orders/judgments found in the case file, one dated January 1, 1990, and one entered July 1, 1990, the latter is the order/judgment from which data will be collected. Data Collection. Based on an examination of the findings and judgment documents, a data collection sheet for each case is used to record information concerning the following issues: (1) If money judgments are involved, is it clear who is the judgment debtor and who is the judgment creditor with respect to each claim? (2) Is it clear how and when money judgments are to be paid? (3) If there are special conditions of the order, are they clearly set forth (e.g., is the order's intention clear with respect to the calculation of both pre- and postjudgment interest, if any)? (4) If the order/judgment includes declaratory or injunctive relief, are its terms clear? (5) Is it clear from the order/judgment whether it is dispositive of the case in its entirety (i.e., does it state that there are no more unresolved claims among parties to the suit)? For a sample data collection sheet, please refer to Form 3.4.2, Illustrative Data Collection Form for Clarity of Civil Judgments. Data Analysis and Report Preparation.
Because civil judgments are the basis for restoring or compensating for financial loss, enjoying rights in property, or establishing the creditworthiness of judgment debtors and creditors, trial courts must achieve the highest level of clarity in stating the terms of judgments. Confusion in acting on the terms of judgments must be avoided, especially as a cause for postjudgment litigation initiated merely to clarify the meaning of the order/judgment. Should postjudgment action be required to enforce a civil judgment, the official(s) responsible for enforcement should clearly understand what the court intended when the facts were adjudicated and the rights and duties of the parties were established. There are several interrelated indicators of performance in this regard. First, orders/judgments on claims for monetary relief should clearly state each of the creditor/debtor relationships. Second, all money judgments should include a specific dollar amount. Third, the basis for computing any judgment interest should be stated unambiguously and incorporated into the terms of the order/judgment. Fourth, in complex cases, the judgment that is dispositive of all issues and claims should be identified as such. Finally, the details of injunctive or declaratory relief should be spelled out in such a way that no further authority is required to determine if the conditions of the order/judgment have been met. Good performance is indicated by meeting these requirements in 100 percent of all money judgments for the first two factors, while a lesser standard is tolerable for the third and fourth factors. Statistical methods are inappropriate for the fifth factor--the clarity of conditions of declaratory or injunctive relief--because it is inherently more qualitative. Orders/judgments that include injunctive or declaratory relief are identified on the data collection form. (See question 12.) A copy of the order or judgment for these cases should be distributed to three to five experienced civil attorneys along with a supplemental evaluation form. (See Form 3.4.2, Illustrative Data Collection Form for Clarity of Civil Judgments.) The attorneys should review the orders/judgments and evaluate each for clarity using a Likert scale, commenting on any aspects of the order/judgment they find particularly problematic. The rating scores should then be averaged. An average score of greater than 2.5 indicates that the terms of the order/judgment lacked an acceptable degree of clarity. For each of the orders that lack clarity, court personnel should examine the attorney comments and discuss them with judges and, if necessary, attorneys to determine whether and what systematic efforts could be undertaken in the court to make improvements. Measure 3.4.3: Experience in Interpreting Orders and Judgments This measure complements the previous two measures by looking at clarity from a different perspective. While Measures 3.4.1 and 3.4.2 provide quantitative data concerning details of orders that are presumed to be necessary, this measure assesses the extent to which lack of clarity is seen as problematic by various individuals who are called on to read, interpret, and enforce orders. The measure allows the court to discern whether problems in the clarity of orders are related more to certain types of judgments or relief and to explore how these might be addressed. Planning/Preparation. The first step for completing this measure is to compile a list of individuals who regularly read and interpret court orders.
The list should include up to 10 individuals in each of the following groups: judges, probation officers, criminal and civil attorneys, clerk's office staff who regularly record terms of judgments, and employees of title companies or other private agencies who regularly search judgment dockets. If a court employs more than 10 persons in any of these categories, a random selection of at least 10 from each group is desirable. A total of 50 individuals should be included. Data Collection. Form 3.4.3, Illustrative Questionnaire Form/Experience in Interpreting Orders and Judgments, can be used to collect data through a telephone or in-person interview of the individuals identified above.[38] The data collection form contains questions to determine the respondents' views about clarity of orders in general as well as questions that target specific areas in which clarity may be a problem. The specific questions are broken into two parts, covering criminal and civil cases, respectively. The data collection form instructs interviewers to skip those sections that are not relevant to the respondent's experiences. Data Analysis and Report Preparation. Responses to the questions can be averaged to obtain an overall indication of the respondents' views. The greater the number of individuals who state that they never or rarely have difficulties understanding court orders, the better the court is performing on this measure. The distribution of the responses also needs to be examined because different spreads in the responses indicate different problems. For example, a survey in which 41 of the respondents say they never experience problems with clarity of court orders but 9 say they often do calls for a different interpretation than a survey reporting that 15 say they never and 35 say they almost never have problems, although the average scores would be nearly the same. More importantly, however, the responses of different groups of respondents should be compared to discern whether some groups experience problems that are not experienced by others. Judges who are called upon to rule on alleged violations of orders in criminal cases may experience different problems more frequently than do probation officers or lawyers; judgment clerks may experience problems frequently that attorneys do not encounter at all. Finally, responses to the questions in Parts II and III should be examined to determine whether there are particular practices that regularly cause problems; these practices may indicate areas that can be improved systematically. When patterns of responses indicate that one group of respondents has problems that others do not, or that some types of orders consistently cause problems, followup interviews can be used to determine why the orders are written as they are. Moreover, such interviews can be used to determine the extent of agreement among judges or members of the bar about the meaning or application of specific judgment conditions and to elicit suggestions for how to resolve them (development of pattern language or education of attorneys, for example). Standard 3.5: Responsibility for Enforcement The trial court takes appropriate responsibility for the enforcement of its orders. Commentary. Courts should not direct that certain actions be taken or be prohibited and then allow those bound by their orders to honor them more in the breach than in the observance. Standard 3.5 encourages a trial court to ensure that its orders are enforced.
The integrity of the dispute resolution process is reflected in the degree to which parties adhere to awards and settlements arising out of it. Noncompliance may indicate miscommunication, misunderstanding, misrepresentation, or lack of respect for or confidence in the courts. Obviously, a trial court cannot assume responsibility for the enforcement of all of its decisions and orders. Court responsibility for enforcement and compliance varies from jurisdiction to jurisdiction, program to program, case to case, and event to event. It is common and proper in some civil matters for a trial court to remain passive with respect to judgment satisfaction until called on to enforce the judgment. Nevertheless, no court should be unaware of or unresponsive to realities that cause its orders to be ignored. For example, patterns of systematic failures to pay child support and to fulfill interim criminal sentences are contrary to the purpose of the courts, undermine the rule of law, and diminish public trust and confidence in the courts. Monitoring and enforcing proper procedures and interim orders while cases are pending are within the scope of this standard. Standard 3.5 applies also to those circumstances when a court relies upon administrative and quasi-judicial processes to screen and divert cases by using differentiated case management strategies and alternative dispute resolution. Noncompliance remains an issue when the trial court sponsors such programs or is involved in ratifying the decisions that arise out of them. Measurement Overview. This standard requires the court to "take responsibility" for enforcement of its orders. The extent of a court's involvement in the administration of systems for monitoring compliance with court orders and initiating enforcement action varies widely from State to State and, in some States, varies from jurisdiction to jurisdiction. For many kinds of orders, the structure of the law removes the court a significant distance from the system of enforcement. In the detailed measures that follow, therefore, court performance is not measured simply by the level of compliance by those to whom orders are directed. The goal is to first establish and evaluate the context for enforcement and then examine indicators of how the court "takes responsibility" within that context. Although some of the measures do call for statistical analysis of compliance rates, this analysis is only valid for performance evaluation when understood against the contextual background. When measures for this standard employ quantitative measures of compliance, terms of orders involving money judgments are used almost exclusively. Terms of money judgments are relatively unambiguous, and monitoring is possible and relatively free of evidentiary issues. Measures 3.5.1, 3.5.2, 3.5.3, and 3.5.4 focus on the extent to which particular types of court orders and policies are followed. Measure 3.5.1 considers probationary orders; Measure 3.5.2 considers child support orders; Measure 3.5.3 considers civil judgments; and Measure 3.5.4 considers case processing rules and orders. The methodological approach used for all of them is the same. It calls for the collection, analysis, and interpretation of pertinent data from closed case files. Illustrative data elements, data collection forms, and methods of analysis are provided. Generally speaking, the greater the extent that orders are followed, the higher the court's performance.
Finally, an important contextual variable surrounding each of the measures is the agency responsible for administering the enforcement process. Is probation administered by the court or by an executive agency? Similarly, is child support enforced by the court, an executive agency, or a private agency? Courts should look at their own operations and options for enforcement when enforcement is their exclusive responsibility. On the other hand, the court should work with public and private agencies to identify reasons for less than complete enforcement when enforcement is not the court's exclusive responsibility. Measure 3.5.1: Payment of Fines, Costs, Restitution, and Other Orders by Probationers This measure uses summary statistics about compliance with monetary penalties to complement the evaluation of court activities related to enforcement. Relevant data include the amount of money ordered, the amount of money paid, and when money is paid. Analysis will indicate the amount of money paid as a percentage of what was ordered. Planning/Preparation. An illustrative set of data elements is provided on Form 3.5.1, Illustrative Data Elements for Measuring Enforcement of Probationary Orders. These data can be obtained by separate examination of the order and sentence document and the payment bookkeeping records. In many cases, a bookkeeping record may contain all required data. A sample of cases will be drawn from the source best suited to capture cases with monetary penalties and cases older than the typical term of probation or cases that have been "closed" on the bookkeeping records due to termination of probation or payment in full. The sample should not be taken directly from bookkeeping records alone, unless there is evidence that a bookkeeping record is created for all cases in which an order includes monetary sanctions. It is possible, for example, that the bookkeeping agency only creates a record when a payment is made. Sampling from that source would not be representative of all cases. Data Collection. Data are collected on coded forms. For an example, refer to Form 3.5.1, Illustrative Data Elements for Measuring Enforcement of Probationary Orders. Data Analysis and Report Preparation. Data analysis will include reports showing averages for total penalty amounts imposed and percentages of amounts collected. The data collected will also allow analysis in subgroups related to total amounts ordered and how long it took for payment. Review of the summarized data will yield information about compliance rates. In addition, the court will be able to look at the statistics and determine how the total amount imposed relates to percentage of payment, whether the total amount imposed has an important relationship to how long it takes to pay, and whether time to payment is related to the time allotted for payment. Where possible, comparisons should be made among jurisdictions within a State, as well as with available compliance rate data found in the literature. Measure 3.5.2: Child Support Enforcement This measure is similar to Measure 3.5.1. However, its focus is on child support orders rather than probationary orders. Planning/Preparation. Illustrative data elements are provided on Form 3.5.2, Illustrative Data Elements for Measuring Enforcement of Child Support Orders. Data of this type can be obtained by examining the order and the payment bookkeeping records separately. In many cases, a bookkeeping record may contain all required data.
Sampling must be from court case disposition records, unless it is demonstrated that records of the bookkeeping agency include all court cases and do not include cases for which enforcement jurisdiction is not with the court. If court case disposition records are used, the sampling technique must allow for cases in which no child support is ordered. The sample should be taken from cases in which a divorce, dissolution, or paternity establishment was entered at least 18 months prior to the sample date, and no more than 36 months prior to the sample date. This restriction will allow adequate time for a payment pattern to develop and for enforcement action to be taken, and it will exclude cases that are so old that they have little relevance to contemporary policy and practice. The sample should include 300 cases. Data Collection. Data are collected on coded forms. For an example, please refer to Form 3.5.2, Illustrative Data Elements for Measuring Enforcement of Child Support Orders. The data related to the status of enforcement actions taken may prove problematic to collect. However, an effort should be made to collect them. If problems are encountered, they should be described. Specifically, the reasons why particular data elements are not available should be noted. These reasons may have a bearing on the enforcement capacity of the responsible agency. Data Analysis and Report Preparation. Analysis involves computing summary statistics to describe the amounts ordered and paid, regularity of payment, and enforcement responses. All States are required to collect and report to the Federal Government information on the volume of Title IV-D child support cases, the amounts of money collected, and other related information. This information should be obtained for each jurisdiction in the State and used to assist in the evaluation of the data for the court. The information can be obtained from the State's official Title IV-D agency, usually a division of the State's health and welfare organization. The summary results returned to the court will allow it to see the trends in compliance as well as in enforcement by the responsible agency. If it proves difficult to document the enforcement status of the cases, a description of the reasons for the difficulty may suggest changes in practices that would improve the monitoring capability of the system. Summary results may be compared with information obtained from the State's official Title IV-D agency, as previously described. Results also may be compared with data published for all States by the Federal Office of Child Support Enforcement in its annual statistical report. These comparisons should be focused on States in which the respective roles of the court and other agencies are similar. Although such comparisons should be cautiously approached and their significance interpreted in the most tentative fashion, they may suggest benchmarks for performance. Measure 3.5.3: Civil Judgment Enforcement This measure is similar to Measure 3.5.1. In addition to collecting data from case files, it involves collecting interview data. Planning/Preparation. Samples will be taken from new cases added to the court judgment dockets for a period of at least 6 months prior to the sample date and not more than 12 months prior to the sample date. (Terminology among courts for "judgment docket" may vary; the source to use is that maintained by law to identify judgment debtors and creditors.)
The sample should include all cases with money judgments that were payable before the date of the sample. At least 150 cases should be included. Further work on this measure is needed to consider whether it is appropriate to distinguish certain types of civil money judgments from others. If so, the sample should be taken in a way that ensures sufficient numbers of each type. Data Collection. The basic data to be collected include the following: judgment amounts, judgment satisfaction, evidence of enforcement actions, type of enforcement action, and type of legal representation. A data collection form, which includes these data elements, should be created. When the judgment docket shows no evidence of a satisfaction filed, interviews will be required of the judgment creditor or the creditor's attorney. The purpose of the interviews is to verify whether the judgment is satisfied; if not, what action was taken; if none, why not. If the judgment docket does not contain the information necessary to locate the creditor or creditor's attorney, that information should be obtained from the case record cross-referenced by the judgment docket. Because the sampled cases will be very recent, address and telephone information should be current for most cases. Data Analysis and Report Preparation. Data analysis should be undertaken to determine (1) the number of judgments for which a record of satisfaction is recorded, (2) the number of judgments for which an interview was required to determine the judgment status and what the status was, and (3) the total number of satisfied and unsatisfied judgments. These figures can then be broken down into subcategories depending on whether the parties had legal representation. It may or may not be possible to use statistical methods to summarize results for two other variables: the number and type of enforcement actions taken and the reasons for not taking enforcement action in cases where judgments were not satisfied. If these variables cannot be analyzed statistically, they should be analyzed qualitatively. Statistical summaries will provide information to the court on what happens to the civil judgments it enters. Qualitative information will provide some insight into reasons why judgment enforcement action is not taken. Measure 3.5.4: Enforcement of Case Processing Rules and Orders This measure addresses the court's performance in enforcing its own rules and orders. For this measure, one area of court activity--caseflow management--has been selected because policies on caseflow are predictably found in most trial courts. More specifically, the measure focuses on rules governing continuance of trial settings. Planning/Preparation. The authority (e.g., rule, order, or administrative memorandum) and substance of the court's policies should be documented. Data Collection. Data collection forms will vary depending on specific court policies. For example, some policies will require that a motion for continuance be made in writing and filed no later than a specified number of days prior to the scheduled trial. A data collection method for this kind of rule should involve an examination of sampled case files to determine: (1) whether such a document is found, and (2) whether it was filed in a timely manner. Other rules may simply state that each party may be granted one continuance upon request and that other continuances will be granted only for "good cause shown."
In such cases, data collection would involve sampling summary records or case files and counting the number of continuances associated with each. Data Analysis and Report Preparation. The structure for data analysis will be determined by the type of court policy in effect and the data collection methods used for evaluating whether the policy is followed. For the first example described above, tables could be generated to show the total number of continuances that occurred for the cases sampled and the percentage of cases in which motions were filed as per the policy. For the second example, in which the court policy calls for simple counts of the number of continuances associated with each case, tables could be generated to show the percentage of all cases that had specific numbers of continuances. The way in which the results of the analysis will be interpreted will depend on the type of policy and the corresponding data collection method and analysis. In some instances the results may be returned to the court in purely descriptive form. In other instances, a standard may be established prior to data collection and summary results compared to that standard. For example, if continuances are examined, an excellent score might be one in which more than two continuances occurred for 5 percent or fewer of the cases, and an unacceptable score might be one in which more than two continuances occurred for 25 percent or more of the cases. Standard 3.6: Production and Preservation of Records Records of all relevant court decisions and actions are accurate and properly preserved. Commentary. Equality, fairness, and integrity in trial courts depend in substantial measure upon the accuracy, availability, and accessibility of records. Standard 3.6 requires that trial courts preserve an accurate record of their proceedings, decisions, orders, and judgments. Relevant court records include indexes, dockets, and various registers of court actions maintained for the purposes of inquiry into the existence, nature, and history of actions at law. Also included are the documents associated with cases that make up official case files as well as the verbatim records of proceedings. Preservation of the case record entails the full range of responsible records management practices. Because records may affect the rights and duties of individuals for generations, their protection and preservation over time are vital. Record systems must ensure that the location of case records is always known, whether the case is active and in frequent circulation, inactive, or in archive status. Inaccuracy, obscurity, loss, or untimely availability of court records seriously compromises court integrity and subverts the judicial process. Measurement Overview. All of the measures for this standard recommend the use of descriptive statistics, such as averages and percentages, as the basis for evaluation. Particular scores have been identified as acceptable and unacceptable levels of performance for some of the measures. For other measures, criteria of acceptable performance can be formulated from the informed judgments of trial court personnel. Moreover, the criteria can be refined by comparing the results from different courts. The comparative data will help establish norms and standards of very high or very low performance. Finally, Measures 3.6.1 to 3.6.4 rely on essentially the same database of cases. That is, the cases selected for Measure 3.6.1 can be used for the other measures and vice versa. 
In fact, Measures 3.6.1 to 3.6.4 require some of the same data elements and can be implemented in a relatively efficient manner. Measure 3.6.1: Reliability of the File Control System Information in court case files affects the interests and constitutional rights of litigants, which the court is expected to protect. As a result, one indicator of integrity is the extent to which the files can be retrieved on request. More specifically, the timeliness of retrieval is an indicator of the court's degree of integrity. This measure tests whether the file control system is adequate to permit timely retrieval of individual case files, which contain legal papers but not necessarily exhibits, tapes of proceedings, or a court reporter's notes. The adequacy of the system is tested for each type of case file management and storage system, such as the systems for managing cases that are pending, cases that are closed but not removed to offsite storage, and cases that are closed and in offsite storage areas, including those in alternative storage media.[39] Implementing this measure requires an understanding of the file control systems used by the court. This information can be obtained through discussions with the person or persons responsible for court records. Visual inspection of the record storage areas and verification of the file control system should then be carried out to confirm the information gained from the discussion. Planning/Preparation. A random sample of pending cases, closed and onsite cases, and closed and offsite cases should be selected from each category of cases: criminal, civil, domestic relations, and juvenile. To minimize the effects of highly unusual recordkeeping for a few peculiar cases, the size of the samples should be no less than 50 cases. Data Collection. A form should be designed to record basic information on each case. The information should include the location of the file and the time it takes to find the file, including files that are in circulation. For this measure, locating the file means that the data collector must see the file. For example, it is not sufficient for a file to be listed as "in circulation." For an example form, please refer to Form 3.6.1, Illustrative Data Collection Form: The Reliability of the File System. Information gathered from the search for files can be used to address two basic questions. First, what percentage of the files can be located? Second, how long on average does it take to locate the files? These questions should be addressed for each of the four categories of cases. In addition, it is useful to determine if the age of the cases is associated with particular problems. Data Analysis and Report Preparation. Standards for the number of pending cases, closed and onsite cases, and closed and offsite cases that can be located should be uniform, although the time required to locate them may vary. For all types of cases, an acceptable level of performance is the ability to locate 99 percent or more of the files. A superior level of performance is 99.5 percent or higher. Concerning pending and onsite files, an acceptable level of performance is the ability to locate 90 percent or more of the files within 10 minutes. For offsite files, acceptable performance is the ability to locate 90 percent or more of the files within one working day.
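A minimal sketch of scoring the retrieval data against these levels, assuming the pandas library and a file of Form 3.6.1-style records; all field names and category codes are illustrative.

    # Scoring file retrieval results against the performance levels above.
    import pandas as pd

    files = pd.read_csv("file_retrieval_sample.csv")  # one row per sampled case

    pct_located = 100 * files["located"].mean()       # located coded 1/0
    onsite = files[files["storage"].isin(["pending", "closed_onsite"])]
    pct_fast = 100 * (onsite["minutes_to_locate"] <= 10).mean()
    offsite = files[files["storage"] == "closed_offsite"]
    pct_day = 100 * (offsite["days_to_locate"] <= 1).mean()

    print(f"Located: {pct_located:.1f}% (acceptable 99%, superior 99.5%)")
    print(f"Onsite within 10 minutes: {pct_fast:.1f}% (acceptable 90%)")
    print(f"Offsite within 1 working day: {pct_day:.1f}% (acceptable 90%)")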
The information gathered for this measure can be used to determine whether problems exist for a few, some, most, or almost all cases in terms of their location or the time required to retrieve files. The information will also reveal whether problems vary by case category or by the age of the case. Finally, the court can use the information to identify what file systems need corrective action. Is there a need to ensure that files are stored in proper order within a particular file system? Do the procedures regulating the circulation of files need to be clarified or tightened? Do file systems require a thorough review in order to prevent the loss of files? Measure 3.6.2: Adequate Storage and Preservation of Physical Records This measure assesses whether the court's records management system preserves information about closed cases consistent with State law and sound records management principles. Information concerning relevant laws and principles can be obtained through discussions with the person or persons responsible for court records. The purpose of the discussion is to determine what files must be preserved, for how long, and what, if any, regulations must be observed regarding the media used for storage. The discussion should determine if the requirements for storage are based on a records retention schedule, informal practices, or a combination of both. Planning/Preparation. To prepare for this measure, courts can use the two sets of closed cases proposed for Measure 3.6.1, Reliability of the File Control System. These sets include cases both on site and in storage. Data Collection. Please refer to Form 3.6.2, Illustrative Data Collection Form: Adequate Storage and Preservation of Physical Records, for an example to be reviewed and modified as necessary. Note that items 1, 2, 3, and 4 on Form 3.6.2 also appear on Form 3.6.1. As a result, a court can combine the two forms if it chooses to apply both measures. Data Analysis and Report Preparation. A summary of the information from individual data collection forms can be used to address several key questions: Can files be located, are they in their proper location, are they stored in their proper form, and is the required information preserved? Measure 3.6.3: Accuracy, Consistency, and Utility of the Case Docket System This measure tests whether the case docket system conforms to State law and serves the purposes for which it is intended. The basic objective of a docket system is to provide a summary of each case history, the names of the parties involved, and the documents filed in that case. Planning/Preparation. This measure involves the inspection of individual entries in the case docket system. The cases to be examined can be the same samples of criminal, civil, domestic relations, and juvenile cases used in Measure 3.6.1, Reliability of the File Control System. Additionally, the file for each case should be obtained to verify the completeness of the docket system. Data Collection. Review of the individual cases is intended to answer basic questions concerning the adequacy of the docket system. This review can be carried out by comparing the entries in the docket system with the information contained in the case files. Are all the cases in the system? Are all the entries per case clear and understandable or are some unreadable or unintelligible? Please refer to Form 3.6.3, Illustrative Data Collection Form: Accuracy, Consistency, and Utility of the Case Docket System, for an example.
Note that items 1, 2, and 4 on Form 3.6.3 are found on Forms 3.6.1 and 3.6.2. Hence, a court can combine the three forms if it chooses to apply all three measures.

Data Analysis and Report Preparation. In an acceptable docket system, no more than 1 percent of the cases should be missing and no more than 5 percent should have missing, illegible, or unintelligible entries.

Measure 3.6.4: Case File Integrity

The purpose of this measure is to determine the integrity of case files. Are there clear procedures for selecting which documents to place in a file? How closely do files adhere to those procedures? The measure relies on case file data.

Planning/Preparation. This measure involves a close inspection of individual case files. It can use the same set of cases proposed in Measure 3.6.1, Reliability of the File Control System. A discussion with the judges and the person or persons responsible for case records management should indicate what documents should be in the files (e.g., the pleadings, answer, motions, and judgment) and how they should be organized.

Data Collection. To develop a data collection form, information should be gathered on the condition and contents of selected case files. One way of verifying the integrity of the case files is to compare them with the entries in the case docket system. Refer to Form 3.6.4, Illustrative Data Collection Form for Case File Integrity, for an example. Note that items 1 and 2 on Form 3.6.4 are also found on Forms 3.6.1, 3.6.2, and 3.6.3. Hence, a court can combine the four forms if it chooses to apply all four measures.

Data Analysis and Report Preparation. A summary of the information gathered from the examination of individual case files can be used to address basic questions of performance. An acceptable level of performance is the ability to locate 99 percent of the files (see Measure 3.6.1), with documents missing from no more than 5 percent of the files. Additionally, the information obtained for this measure can be used to suggest areas of improvement. Are there particular types of cases that have a higher percentage of missing documents? Do the files generally conform to procedures governing the order in which documents should appear?

Measure 3.6.5: Reliability of Document Processing

The purpose of this measure is to determine how well the court handles the flow of legal documents from the time that they are executed or filed until they are placed in the individual case file. Are the documents processed within expected timeframes, or do bottlenecks impede document flow? The measure involves recording data from case file documents.

Planning/Preparation. Discussions with court officials will indicate the nature of the system for handling documents from the point when a paper is filed at the clerk of court's office counter or when a judge executes an order in court or chambers. The design of the data collection form will reflect the level of measurement detail the court chooses to pursue. Refer to Form 3.6.5, Illustrative Data Collection Form for Reliability of Document Processing. It represents an approach that would apply to most courts.

Data Collection. Data should be collected for documents related to the following categories of cases: criminal, civil, domestic relations, and juvenile. Depending on the volume of paperwork processed by the clerk of court's office in a day, one or more days should be chosen for data collection. The days should be selected to avoid abnormal conditions (unusually high or low volume or special projects in the court). On one of these days, samples should be taken from the place where papers await distribution to case file jackets.
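Because the criteria described next are expressed in working days, the elapsed intervals should exclude weekends (and, where applicable, court holidays). One way to compute them is sketched below using numpy's busday_count function; the dates are fabricated for illustration.

    import numpy as np

    # One (date filed/stamped or signed, date sampled awaiting distribution)
    # pair per document, keyed in from Form 3.6.5.
    docs = [("2024-03-01", "2024-03-06"),
            ("2024-03-01", "2024-03-18"),
            ("2024-03-04", "2024-03-08")]

    elapsed = [int(np.busday_count(start, end)) for start, end in docs]
    within_5 = 100.0 * sum(d <= 5 for d in elapsed) / len(elapsed)
    within_10 = 100.0 * sum(d <= 10 for d in elapsed) / len(elapsed)

    print(elapsed)                                      # working days per document
    print(f"{within_5:.0f}% within 5 working days")     # criterion: 90 percent or more
    print(f"{within_10:.0f}% within 10 working days")   # criterion: 100 percent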
Data Analysis and Report Preparation. The information obtained from the data collection form includes the date an order is executed, the time the document is filed/stamped, and the date the sample was taken. An analysis of the average and the range of processing times will reveal how well the court is meeting its objectives for document processing. Are all documents processed expeditiously? Do documents for particular types of cases take longer than is desirable?

There are two interrelated criteria of acceptable performance for this measure. First, 90 percent or more of all documents should be processed within 5 working days from the date that they are filed/stamped at the clerk of court's office counter or the date that they are ordered/signed by the judge. Second, 100 percent of the documents should be processed within 10 working days.

Measure 3.6.6: Verbatim Records of Proceedings

This measure gauges attorneys' views on the integrity of records of court proceedings. Attorneys who have brought cases on appeal are in a position to know whether records of the trial court proceedings are incomplete or difficult to understand. Because attorneys need records of proceedings to prepare briefs, they are concerned about the quality of electronic audio or video recording as well as the traditional written transcript. For this reason, positive opinions by attorneys indicate positive court performance. This measure relies on questionnaire data.

Planning/Preparation. A random sample of notices of appeal filed with the trial court should be selected. The appropriate appellate court should be contacted to determine the names and addresses of the appellant's and appellee's attorneys.

Data Collection. A questionnaire should be designed to solicit the views of attorneys concerning the quality of the record. An example is Form 3.6.6, Illustrative Questionnaire: Verbatim Records of Proceedings.

Data Analysis and Report Preparation. The information from the responses can be summarized in terms of the kinds of problems that arise, the seriousness of the problems, and the degree of effort required to resolve them. Do the problems concern missing information? Is the recorded information unintelligible? Do the problems suggest a momentary lapse in the performance of recording equipment or a court reporter, or do they suggest a persistent problem? Additionally, the location of problems can be identified. For example, do problems arise more in civil than in criminal appeals, or are jury trials in both types of cases the predominant source of problems?

An acceptable level of performance is less than 10 percent of the attorneys expressing problems with the quality of proceeding records. Another indication of acceptable performance is 5 percent or less of the cases requiring formal settlement resolution of the problems. In the event that the court's performance is unacceptable, the survey information will suggest areas for improvement. What kinds of problems warrant attention? What sorts of proceedings need to be monitored more carefully to ensure an adequate record? What procedures can be introduced to prevent problems from occurring?
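Tallying the questionnaire responses against these criteria, and breaking the problem rate down by type of appeal, takes only a few lines once the responses are keyed in. The following sketch uses fabricated responses; the field names and coding are assumptions for illustration.

    from collections import defaultdict

    # One (type of appeal, attorney reported a problem with the record)
    # pair per returned questionnaire.
    responses = [("civil", False), ("civil", True), ("criminal", False),
                 ("criminal", False), ("civil", False), ("criminal", True)]

    overall = 100.0 * sum(problem for _, problem in responses) / len(responses)
    print(f"overall problem rate: {overall:.0f}%")   # acceptable: under 10 percent

    by_type = defaultdict(list)
    for kind, problem in responses:
        by_type[kind].append(problem)
    for kind, flags in sorted(by_type.items()):
        print(f"{kind}: {100.0 * sum(flags) / len(flags):.0f}% of {len(flags)} report problems")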
End Notes

1. An alternative approach to using panels of experts is to preselect areas of law that apply to all courts in all States. However, it is impossible to specify in advance what laws and procedures are of interest and apply in measurable detail to every State unless they are restricted to Federal constitutional law or congressional legislative requirements.

2. A publication, Standards Relating to Juror Use and Management, is available from the American Bar Association, 750 North Lake Shore Drive, Chicago, Illinois 60611.

3. G. T. Munsterman and J. T. Munsterman, "The Search for Jury Representativeness," Justice System Journal 11 (1986):59-78.

4. Taylor v. Louisiana, 419 U.S. 522 (1975).

5. Duren v. Missouri, 439 U.S. 357 (1979).

6. Available from the Superintendent of Documents, Government Printing Office, Washington, DC 20402.

7. Most libraries have these volumes.

8. U.S. Department of Commerce, Statistical Abstract of the United States, 1988 (Washington, DC: Bureau of the Census, 1988).

9. National Institute of Law Enforcement and Criminal Justice, Multiple Lists for Juror Selection: A Case Study for San Diego Superior Court (Washington, DC: Law Enforcement Assistance Administration, U.S. Department of Justice, 1978).

10. National Center for State Courts, Methodology Manual for Jury Systems, NCSC Publication CJS-004 (Williamsburg, VA, 1981).

11. D. E. Knuth, The Art of Computer Programming, vol. 2, Seminumerical Algorithms, 2d ed. (Reading, MA: Addison-Wesley, 1981).

12. National Center for State Courts, A Supplement to the Methodology Manual for Jury Systems: Relationships to the Standards Relating to Juror Use and Management (Williamsburg, VA, 1987), pp. 10-15.

13. D. Kairys, J.B. Kadane, and J.P. Lehoczky, "Jury Representation, A Mandate for Multiple Source Lists," California Law Review 65 (1977):776-827.

14. See note 13.

15. Menaster, Spooner, and Greenberg, "Getting a Fair Cross-Section of the Community," Forum (1989):14-21.

16. Approximately 20 States at this writing have undertaken efforts to establish a racial/ethnic bias commission or task force. Similarly, nearly every State has established a gender bias commission or task force.

17. Suggested sources for the literature published in this area can be accessed in the Index to Legal Periodicals or automated databases. The Information Service at the National Center for State Courts also can provide information on articles or reports published, particularly in court-related publications. The court also may be able to access actual data on the topic from such bodies as a State sentencing commission or race and ethnic bias task force.

18. See, for example, D. Maddi, Judicial Performance Polls (Chicago: American Bar Foundation, 1977); and C. Philip, How Bar Associations Evaluate Sitting Judges (New York: Institute for Judicial Administration, 1976).

19. See, for example, T. Tyler, "What is Procedural Justice? Criteria Used by Citizens to Assess the Fairness of Legal Procedures," Law and Society Review 22 (1988):103.

20. One method of determining the association between the survey items is correlational analysis. A statistical measure called the gamma coefficient can be used to test the extent to which the response to one question is associated with the response to another question. Statistical software packages routinely provide the statistic when cross tabulations of items are requested.

21. See note 18.

22. See note 19.

23. One technique for determining the association between the survey items is correlational analysis.
A statistical measure called the gamma coefficient can be used to test the extent to which the responses to one question are associated with the responses to another question. The technique is available in most computer software packages.

24. The measure proposed outlines a statistical approach to assessing whether there is undue disparity and bias in a court's proceedings. However, it is not a complete treatment of every aspect of particular techniques and their interpretation. For this reason, the court may wish to consult outside experts when applying the measure.

25. The definition of the data elements and the proposed methods of data analysis reflect the input and advice of academic sentencing experts and former staff of the U.S. Sentencing Commission. Their opinions were solicited to achieve maximum statistical validity, although future research is likely to use even more refined methods in this growing area of research.

26. An offense severity scale can be developed by assigning numerical weights to different offenses. The U.S. Sentencing Commission has constructed such a scale.

27. For discussion of parallel applications of this technique to case processing data, see R. Flemming, P. Nardulli, and J. Eisenstein, "The Timing of Justice in Felony Trial Courts," Law & Policy 9 (1987); and M. Luskin and R. Luskin, "Why So Fast, Why So Slow: Explaining Case Processing Time," Journal of Criminal Law & Criminology 77 (1989).

28. A coefficient may be greater but not statistically greater than zero because the factor under consideration (e.g., race) does not have consistent, uniform effects on what is being measured (e.g., sentence length). However, a statistical test performed by the software will indicate whether each coefficient is significantly greater than zero.

29. In most jurisdictions the majority of defendants are released on recognizance or surety bonds. However, in some courts, cash bonds also are prominent. In this event, the court should consider what factors account for the amounts of different cash bonds.

30. The measure proposed outlines a statistical approach to assessing whether there is undue disparity and bias. However, it is not a complete treatment of every aspect of particular techniques and their interpretation. For this reason, the court may wish to consult outside experts when applying the measure.

31. The majority of defendants released on recognizance typically have nonfinancial conditions placed on them, such as third-party custody, prohibitions against returning to the scene of the crime, and restrictions on residence, travel, associations, drug and alcohol use, and weapons possession. These conditions are not crucial to determining whether legally relevant or irrelevant factors explain who is released and who is not. Hence, the court should collect data on these matters only if it seeks to pursue other research questions concerning bail decisions.

32. See, for example, J. Goldkamp and M. Gottfredson, Guidelines for the Pretrial Release Decision: Superior Court of Arizona, Maricopa County; Circuit and County Courts, Dade County; Boston Municipal Court; and Suffolk County Superior Court, Bail Guidelines Project (Philadelphia: Temple University, 1985).

33. See, for example, I. Nagel, "The Legal/Extra-Legal Controversy: Judicial Decisions in Pretrial Release," Law and Society Review 17 (1983):481.
34. A coefficient may be greater but not statistically greater than zero because the factor under consideration (e.g., race) does not have consistent, uniform effects on what is being measured (e.g., to release or not to release on recognizance). However, a statistical test performed by the software will indicate whether each coefficient is significantly greater than zero.

35. Another important question is: Are there differences in the rates of reversals across individual trial courts within the same State? To address this issue, there must be a sufficient number of appeals from each court. Because few courts generate more than 50 appeals each year, the data requirements are difficult to satisfy. Hence, as a first effort, this measure is most profitably aimed at statewide patterns or patterns within a regional appellate district.

36. For an investigation of reversible error in criminal appeals, see J. Chapper and R. Hanson, Three Papers on Understanding Reversible Error in Criminal Appeals (Williamsburg, VA: National Center for State Courts, 1979). The authors present evidence from a study of five appellate courts and discuss the implications of the results for judicial education.

37. Defining trial court error by the decisions of first-level appeals is not conclusive, of course. Trial court decisions overturned on first-level review may be reinstated by a higher court. Such subsequent review is uncommon, however. In 1987, for example, State courts of last resort granted review in only 14.1 percent of the discretionary petitions filed. As a result, first-level appeals courts are, in fact if not in law, the final arbiter for most appeals.

38. Two courts in the demonstration project administered the survey via mail rather than telephone interview. If this method is used, the court should increase the number of individuals included in the sample to ensure a sufficient number of responses.

39. These three categories may not reflect meaningful differences in records management in all courts. The main point is that the sampling and measurement should be carried out in a way that allows the court to apply the measure to each case file management and storage system.

------------------------------

Performance Area 4: Independence and Accountability

The judiciary must assert and maintain its distinctiveness as a separate branch of government. Within the organizational structure of the judicial branch of government, trial courts must establish their legal and organizational boundaries, monitor and control their operations, and account publicly for their performance. Independence and accountability permit government by law, access to justice, and the timely resolution of disputes with equality, fairness, and integrity; and they engender public trust and confidence. Courts must both control their proper functions and demonstrate respect for their coequal partners in government.

Because judicial independence protects individuals from the arbitrary use of government power and ensures the rule of law, it defines court management and legitimates its claim for respect. A trial court possessing institutional independence and accountability protects judges from unwarranted pressures. It operates in accordance with its assigned responsibilities and jurisdiction within the State judicial system. Independence is not likely to be achieved if the trial court is unwilling or unable to manage itself.
Accordingly, the trial court must establish and support effective leadership, operate effectively within the State court system, develop plans of action, obtain resources necessary to implement those plans, measure its performance accurately, and account publicly for its performance.

Overview of Standards. The five standards in the performance area of Independence and Accountability combine the principles of separation of powers and judicial independence with the need for comity and public accountability. Standard 4.1 requires the trial court to exercise authority; to manage its overall caseload and other affairs; and to realize the principles of separation of powers, interdependence of the executive, legislative, and judicial branches of government, and comity in its governmental relations. Standard 4.2 requires a trial court to seek adequate resources and to account for their use. Standard 4.3 extends the concept of equal treatment of litigants to the court's own employees by requiring every trial court to operate in accordance with personnel practices and decisions that are free of bias on the basis of race, religion, ethnicity, gender, sexual orientation, color, age, handicap, or political affiliation. Standard 4.4 requires the trial court to inform the public of its programs and activities. Finally, Standard 4.5 acknowledges that the court's organizational character and activities must allow for adjustments to emergent events, situations, and social trends.

Overview of Measures. All of the measures of independence and accountability presuppose the prior formation of a steering committee composed of judges and court managers, who plan data collection and discuss the significance of the results. Field tests of experimental measurement approaches for standards in this performance area show that performance evaluation is highly context driven. Differences in the sizes of courts, the statutory frameworks governing court funding, and the structural arrangements of essential justice system services make it very difficult to prescribe a standard set of measurement approaches. Accordingly, all of the measures for standards in independence and accountability should be preceded by the formation of a steering committee that will (1) make a threshold assessment of the utility of the measures in light of the court's interests and circumstances, (2) meet after the data are collected to discuss and consider their significance for court performance, and (3) integrate the findings into an overall review of court performance.

Field testing of the measures suggests that the data and assessments for some of the standards relate closely to inquiries and assessments for other standards. For example, results of surveys related to perceptions of the importance of independent decisionmaking in the court may have bearing on the court's performance in public education and vice versa. These standards, in turn, may be related to Standard 4.5, Response to Change.

Undertaking the measures for independence and accountability requires the following basic resources:

o A steering committee consisting of a small group of judges and nonjudicial court personnel who can meet on several occasions for sessions that range from 30 minutes to 2 hours.

o A skilled facilitator who leads group meetings and collaborative activities and is skilled in using group techniques for decisionmaking.

o Individuals to provide analytic and clerical staff support during research.
o A 2- to 6-month commitment from all participants to complete the process.

Planning/Preparation for Steering Committee Meetings. The first step in the measurement process for all standards in this area is to assign court management or planning staff to review the specific data collection techniques described for each measure. A brief summary of all of the measures should be prepared for presentation to the chief judge and members of the steering committee at an initial meeting. This summary should be a somewhat more detailed version of the following list:

Standard 4.1, Independence and Comity
  Measure 4.1.1, Perceptions of the Court's Independence and Comity

Standard 4.2, Accountability for Public Resources
  Measure 4.2.1, Adequacy of Statistical Reporting Categories for Resource Allocation
  Measure 4.2.2, Evaluation of Personnel Resource Allocation
  Measure 4.2.3, Evaluation of the Court's Financial Auditing Practices

Standard 4.3, Personnel Practices and Decisions
  Measure 4.3.1, Assessment of Fairness in Working Conditions
  Measure 4.3.2, Personnel Practices and Employee Morale
  Measure 4.3.3, Equal Employment Opportunity

Standard 4.4, Public Education
  Measure 4.4.1, Court and Media Relations
  Measure 4.4.2, Assessment of the Court's Media Policies and Practices
  Measure 4.4.3, Community Outreach Efforts

Standard 4.5, Response to Change
  Measure 4.5.1, Responsiveness to Past Issues

The second step is the selection of a facilitator who will lead the work of the steering committee during its meetings. The chief judge selects the facilitator, assisted by staff who are providing technical support during the application of the TCPSM System. Because the facilitator ensures that group meetings are conducted efficiently, he or she should be well versed in applying group techniques for analysis and decisionmaking. These skills are critical to the successful application of the measures in this performance area. Highly structured group techniques are the preferred social science research approach when the object of study resists simple, generally agreed upon problem statements or when there is no ready agreement about the meaning of the data that might be collected. Structured group techniques have the following advantages:

o They provide a way for groups to address complex, ill-defined problems.

o They provide an effective way to obtain the views of many actors affected by the problems by using their time efficiently and productively.

o They produce a solution superior to that possible with techniques designed for individuals by allowing those affected by the problem to work as a group.

o They create a commitment on the part of the actors involved to the solution produced, which is especially valuable when political consequences of action are likely.

The third step of the process is to select the members of the steering committee, which should include both judges and court management personnel. The chief judge should select five to seven individuals using the following criteria:

o Experience--Has worked in the court for a minimum of 2 years as a judge or court staff member.

o Credibility--Is well respected by peers within the court and by officials of other agencies.

o Ability to work in a group setting--Is able to work cooperatively in group settings, including the ability to work within the constraints imposed by the evaluation technique and a willingness to encourage others (especially persons of subordinate status) to express their ideas.
o Confidence--Has the ability to express and explain ideas, even if the ideas diverge from the thinking of others of superior status.

o Commitment--Has a high level of interest and willingness to spend the required time meeting with others.

After the steering committee is established, members are provided with the performance area's standards and commentary. The committee is then asked to meet several times for up to 2 hours to review written materials and data. (In no case should a meeting extend beyond 2 hours.)

The fourth step in the process is to conduct an orientation meeting of the steering committee, lasting no more than 90 minutes. The chief judge should open the meeting, reaffirm his or her support for the process, and restate the charge to members of the steering committee. The chief judge should also introduce the facilitator. The facilitator's agenda should be to:

o Introduce the subject matter of the standards and commentary.

o Explain the rationale behind a group process (why group methods are favored for research and problem solving in applied social science) and entertain general questions.

o Review the data collection methods available for standards in this area.

o Lead a group discussion to determine which standards will be the main focus of concern and which measures the court wishes to undertake.

Overview of Group Techniques. Group techniques for decisionmaking are described in detail in Group Techniques for Idea Building by Carl M. Moore.[1] Two of the techniques are briefly summarized below. Nominal Group Technique (NGT) is most useful for generating ideas. It is also an efficient method for making decisions and establishing priority among alternative action plans. Ideawriting also is useful for generating ideas but is most effective for developing ideas that already have been generated. It requires that participants be comfortable expressing themselves in writing; limited group discussion is required.

Nominal Group Technique: This technique requires completion of four activities during meetings that should last no more than 90 minutes. The description provided in Moore (pp. 22-36) should be followed closely. NGT involves four steps:

1. Individual, written generation of ideas in response to a discussion prompt that is formulated as a question. The following questions, for example, might be appropriate for the first steering committee meeting:

o Which standards of independence and accountability are you most interested in working on during this study?

o Given the resources available to us (staff expertise, time, money), will we be able to collect the data suggested in the measurement procedures?

2. Round-robin recording of ideas or opinions (e.g., rankings of preferences for study). Flip charts are used to record the ideas or opinions (discussion is not permitted at this stage).

3. Serial discussion of ideas to clarify the meaning of each idea, not to argue its merits or value.

4. Voting to select the most important ideas. Each member is asked to select the most important ideas on the list and then rank them.

Ideawriting: When relationships of "leader and follower" develop in a group or when differences in status need to be neutralized, Ideawriting may be a more useful technique. Taken from Moore (p. 49), the following steps summarize the Ideawriting process:

1. Brief orientation to the technique and presentation of the stimulus question.
2. Initial response by group members using the following instructions:

o Write down a few ideas on a pad of paper in response to a stimulus item and then place the pad in the center of the table.

o Work quickly, silently, and independently.

o Do not tear the sheet off the pad; additional sheets will need to be used by others.

3. Written interaction:

o After the pads have been placed in the center of the table, select another member's pad, read it, and briefly respond with written comments.

o Repeat this process until each member has responded to every other person's ideas.

4. Analysis and reporting:

o Analysis of the sheets can be left to the facilitator to work on after the meeting. The facilitator will report the results back to the group at a later meeting or in a memo. This is an advantage of Ideawriting--it saves committee meeting time. OR

o If immediate analysis is desired, the group discusses its products and summarizes its efforts on a single sheet of paper.

The use of the steering committee in conjunction with research efforts undertaken by court staff or consultants constitutes a process that combines fact gathering, value clarification, decisionmaking, and action. Courts that have undertaken the process during the testing of measures in this area have had to adapt the details of the process to their own circumstances. By following the procedures as closely as possible, however, courts that undertake the process will better understand the complex problems associated with the standard and will become engaged in the process of self-improvement. Regardless of the amount of time and resources that courts participating in the demonstration devoted to the measurement process and to steering committee work, the courts agreed that the process of self-examination yielded insights into the court's practices and problems and a range of ways to improve performance.

Following each standard's set of measures are activities the steering committee may want to undertake to enhance or focus its work. Whether or not these activities are useful is a decision that each committee should make in light of local circumstances.

Standard 4.1: Independence and Comity

The trial court maintains its institutional integrity and observes the principle of comity in its governmental relations.

Commentary. For a trial court to persist both in its role as preserver of legal norms and as part of a separate branch of government, it must develop and maintain its distinctive and independent status. It also must be conscious of its legal and administrative boundaries and vigilant in protecting them. Effective trial courts resist being absorbed or managed by the other branches of government. A trial court compromises its independence, for example, when it merely ratifies plea bargains, serves solely as a revenue-producing arm of government, or perfunctorily places its imprimatur on decisions made by others. Effective court management enhances independent decisionmaking by trial judges. The court must achieve independent status, however, without damaging the reciprocal relationships that it maintains with others. Trial courts are necessarily dependent upon the cooperation of other components of the justice system over which they have little or no direct authority. For example, elected clerks of court are components of the justice system, yet in some matters many function independently of trial courts. Sheriffs and process servers perform both a court-related function and a law enforcement function.
If a trial court is to attain institutional independence, it must clarify, promote, and institutionalize effective working relationships with all other components of the justice system. The boundaries and effective relationships between the trial court and other segments of the justice system must therefore be apparent both in form and practice.

Measurement Overview. This standard entails one data collection measure. Measure 4.1.1 is a survey of the opinions and perceptions of judges, court employees, and representatives of other government organizations about issues related to independence of the court and the quality of its relations with professional constituent groups and other government agencies. In addition to the survey, suggestions to enhance the work of the steering committee in considering the court's performance with respect to this standard are offered following Measure 4.1.1.

Measure 4.1.1: Perceptions of the Court's Independence and Comity

This measure uses a questionnaire (Form 4.1.1, Questionnaire Regarding the Independence of the Judiciary and Intergovernmental Relationships) to gauge perceptions of the court's independence and comity held by the steering committee, other judges and court personnel, and noncourt officials who interact with the court either on case-related matters or on administrative matters. Part I of the questionnaire addresses independence of the court, and Part II concerns organizational relationships (comity). Some commentators on the measure suggest that Part I can reasonably be eliminated from the survey of noncourt personnel because Part II provides the court with the most useful information from these individuals.

Planning/Preparation. Before undertaking the measure, the steering committee should consider recommendations from the court's research staff regarding the procedures for administering the questionnaire. Experience with experimental tests of this measure suggests that special care should be taken to emphasize to survey recipients the importance the court places on the survey and on securing responses to it. Without this emphasis, the rate of return on the surveys will be low. There is no point in undertaking this measure unless the numbers of responses will be large and diverse enough to permit meaningful analysis. (See the table of minimum and preferred numbers of survey responses on page 1 of Form 4.1.1.) Therefore, the steering committee's input into strategies to secure responses is especially important to the researchers. It may be necessary for steering committee members to initiate personal contact with representatives of target groups to encourage their cooperation. Obtaining the personal commitment of several key officials (e.g., the chairperson of the county board of supervisors, the district attorney, the sheriff and chief of police, the president of the local bar association, and directors of social services and community corrections agencies) may be required.

The steering committee should engage in a minimum of two activities for this measure:

1. Assisting in the identification of survey recipients and strategies to secure their cooperation. (This process may result in a decision not to pursue the measure.)

2. Assisting in the analysis and interpretation of the survey data and their significance.
With respect to the former task, the steering committee facilitator should structure the consideration of potential survey respondents by posing the following question: What criteria will ensure that the survey targets a broad, representative group of respondents who affect, and are affected by, our performance in maintaining independence and comity?

If the steering committee's consideration of this question suggests that obtaining cooperation from survey recipients will require more work than is warranted by the value of the data, this is itself an important finding. The result could indicate that the court has poor relationships or undeveloped lines of communication or, conversely, that the community within which the court operates is so close and open that a formal survey of this kind would be superfluous. A decision to forgo the survey should be made with careful consideration, however. Conducting the survey has advantages beyond that of the data it yields. Distributing the survey and emphasizing its importance to the court broadens awareness within and outside of the court about the value the court places on judicial independence and comity. Conducting the survey also engages the court in the process of self-improvement. Broadening awareness and expanding participation in problem identification contribute to greater comity in governmental relations.

Data Collection. The steering committee members and other designated judges and court staff first complete Part I of the survey, which probes the respondents' opinions about matters relevant to the independence of the court. The primary purpose of having the steering committee complete the survey is to allow the members to discover how much they agree or disagree about issues related to independence. Knowing what the court's own values are in this area (or the range of disagreement within the court) will make it possible to evaluate the survey responses from "outsiders" in a more meaningful and useful fashion. A second purpose of this step is to determine what items may need to be added, deleted, or revised to improve the questionnaire before it is distributed more broadly.

Local court managers know how to distribute the survey most efficiently and reliably inside the court. For employees who may have a concern about expressing their opinions to judges of the court, the surveys should be distributed, returned, and analyzed in a way that protects the anonymity of the respondents.

After initial groundwork to secure cooperation outside the court has been completed, methods for distributing the questionnaires are considered. Two possibilities are mailing the questionnaires and asking key officials representing the respondent groups to distribute them personally. The method that is most appropriate will depend on the respondent group and local circumstances. In either case, the questionnaire should be accompanied by a cover letter signed by the key official of the respondent group and by the chief judge. Regardless of the distribution method used, the research team must:

o Keep track of the number of questionnaires distributed to each respondent group and when they were received by the respondents.

o Monitor the number of questionnaires returned.

o Follow up by mail or personal communication with the designated liaison for each group to secure the return of outstanding questionnaires.

The expected return date for the questionnaires should be after a short interval (e.g., 1 week).
Giving recipients a longer time to respond before followup does not increase the response rate; it allows more time for the survey to lose priority and be misplaced or discarded. Within a week after the announced return date for the questionnaires has passed, or when the number of questionnaires being returned has fallen off sharply (whichever occurs later), followup should begin.

Data Analysis and Report Preparation. A staff analyst performs the analysis of the questionnaire data. The analysis has three components. First, each question is examined to determine the level of agreement and disagreement among the respondents about the question or statement. The mean, low, high, and standard deviation of the scores should be calculated and presented in a summary report for each question. From this analysis the court will get an overall picture of agreement on principles of judicial independence and perceptions of the court's performance in maintaining comity in its relations with others.

Second, it is instructive to compare the patterns of response for different respondent groups when there are at least five responses from a particular group. The following groupings are suggested:

o Judges.

o Court employees who are not judges.

o Law enforcement officials.

o District attorneys and public defenders.

o Private bar members.

o Social services and community corrections personnel.

o Other county government officials such as county board members, the county manager, and budget office staff.

If the court has been unable to secure questionnaire responses from at least five individuals in each group, additional combining may still prove instructive (e.g., comparing "insiders," lawyers, and "outsiders").

Finally, the analysis includes review and summary of any new issues or problems that come from the comments section of the questionnaire. The report of survey results will be submitted to members of the steering committee for consideration during one of its meetings.
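Assuming the completed questionnaires are keyed into a simple table, the first two components of the analysis reduce to routine summary statistics. The sketch below uses the pandas library; the column names and the numeric coding of responses are assumptions made for illustration, not features of Form 4.1.1.

    import pandas as pd

    # One row per respondent; q1, q2, ... hold numerically coded ratings.
    df = pd.DataFrame({
        "group": ["judge", "judge", "court staff", "law enforcement", "private bar"],
        "q1": [4, 5, 3, 2, 4],
        "q2": [5, 4, 4, 3, 3],
    })
    questions = ["q1", "q2"]

    # First component: mean, low, high, and standard deviation per question.
    print(df[questions].agg(["mean", "min", "max", "std"]).round(2))

    # Second component: compare response patterns across respondent groups
    # (in practice, report only groups with at least five responses).
    print(df.groupby("group")[questions].mean().round(2))

The third component, review of written comments, remains a manual task; the tabulated output above simply gives the steering committee a compact starting point for discussion.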
Suggested Steering Committee Activities for Standard 4.1

In addition to the roles recommended for the steering committee in Measure 4.1.1 and for oversight and evaluation of data collected for Performance Area 4, the steering committee can help evaluate court performance for Standard 4.1 in other ways. These activities are described next. The steering committee facilitator should first review the following activity descriptions and determine:

o How much time and resources will be required to integrate some or all of the activities into the committee's work.

o How best to explore the committee's interest in pursuing some or all of the activities given practical constraints on the committee's time and resources.

If any of these activities are included in the committee's agenda, they should be scheduled for completion before finalizing and distributing the survey described in Measure 4.1.1.

Part I: Readings

Review of the following publications may improve the quality of the steering committee's discussions and deliberations. The readings may also be useful for the facilitator as a way to "warm up" the committee during an organizational meeting. If any members are interested, the following publications should be made available to them:

o John C. Cratsley, Inherent Powers of the Courts (National Judicial College, 1980).

o Carl Baar, Separate but Subservient, Chapter 7 (Lexington Books, 1975).

o Russell Wheeler, "Judicial Administration and Judicial Independence," in Judicial Administration: Its Relation to Judicial Independence (National Center for State Courts, 1988), pp. 36-45.

o John M. Connors, "Inherent Power of the Courts: Management Tool or Rhetorical Weapon?" Justice System Journal 1 (1), pp. 63-72.

Part II: Values Clarification

"Judicial independence," a term with many connotations, is invoked variously in different contexts. The committee may want to explore the implications of the following factors related to "independence" and the extent to which the committee believes they may pose a threat to keeping judges' case-related decisionmaking free of inappropriate influences. Consideration and discussion of these issues will be conducted more efficiently if each committee member independently considers the following statements between meetings and frames two responses for each. The first response addresses the accuracy of the statement: Do the steering committee members believe the statement is very accurate, somewhat accurate, mostly inaccurate, or very inaccurate? The second response concerns whether the members believe that the circumstance is an important threat to independent case-related decisionmaking. Does it make decisionmaking more difficult?

1. State or county revenues have exceeded expenditure budgets in recent years.
Accuracy: Very accurate ___ Somewhat accurate ___ Mostly inaccurate ___ Very inaccurate ___
Importance: Very important ___ Somewhat important ___ Mostly unimportant ___ Very unimportant ___

2. The trial court prepares its own budget.
Accuracy: Very accurate ___ Somewhat accurate ___ Mostly inaccurate ___ Very inaccurate ___
Importance: Very important ___ Somewhat important ___ Mostly unimportant ___ Very unimportant ___

3. If the trial court prepares its own budget, it does so based on expenditure caps dictated by another agency.
Accuracy: Very accurate ___ Somewhat accurate ___ Mostly inaccurate ___ Very inaccurate ___
Importance: Very important ___ Somewhat important ___ Mostly unimportant ___ Very unimportant ___

4. Within an approved budget, the court is free to make category or line item adjustments without prior review and approval by another agency.
Accuracy: Very accurate ___ Somewhat accurate ___ Mostly inaccurate ___ Very inaccurate ___
Importance: Very important ___ Somewhat important ___ Mostly unimportant ___ Very unimportant ___

5. The court has authority to negotiate, select, and contract with vendors for purchases of supplies, equipment, and services.
Accuracy: Very accurate ___ Somewhat accurate ___ Mostly inaccurate ___ Very inaccurate ___
Importance: Very important ___ Somewhat important ___ Mostly unimportant ___ Very unimportant ___

6. The personnel classification system used in the court is developed by the court to meet its own needs.
Accuracy: Very accurate ___ Somewhat accurate ___ Mostly inaccurate ___ Very inaccurate ___
Importance: Very important ___ Somewhat important ___ Mostly unimportant ___ Very unimportant ___

7. The procedures followed in hiring new personnel are administered by a noncourt agency.
Accuracy: Very accurate ___ Somewhat accurate ___ Mostly inaccurate ___ Very inaccurate ___
Importance: Very important ___ Somewhat important ___ Mostly unimportant ___ Very unimportant ___

8. Personnel responsible for the management of official court records are under the administrative authority of the court.
Accuracy: Very accurate ___ Somewhat accurate ___ Mostly inaccurate ___ Very inaccurate ___
Importance: Very important ___ Somewhat important ___ Mostly unimportant ___ Very unimportant ___
9. Jury staff and services are under the administrative authority of the court.
Accuracy: Very accurate ___ Somewhat accurate ___ Mostly inaccurate ___ Very inaccurate ___
Importance: Very important ___ Somewhat important ___ Mostly unimportant ___ Very unimportant ___

10. The court has administrative authority over the following services that affect caseflow:

a. Bail screening
Accuracy: Very accurate ___ Somewhat accurate ___ Mostly inaccurate ___ Very inaccurate ___
Importance: Very important ___ Somewhat important ___ Mostly unimportant ___ Very unimportant ___

b. Adult probation
Accuracy: Very accurate ___ Somewhat accurate ___ Mostly inaccurate ___ Very inaccurate ___
Importance: Very important ___ Somewhat important ___ Mostly unimportant ___ Very unimportant ___

c. Juvenile probation
Accuracy: Very accurate ___ Somewhat accurate ___ Mostly inaccurate ___ Very inaccurate ___
Importance: Very important ___ Somewhat important ___ Mostly unimportant ___ Very unimportant ___

Steering committee members can complete this exercise before they come to a meeting, where the facilitator will tally the responses. For issues on which there is clear agreement, no discussion is needed. The facilitator simply reports areas of agreement to the steering committee. However, for issues on which there are outliers (e.g., five answers are on one side of the question and two are on the other) or general variation (answers lie fairly evenly on both sides of the question), it is appropriate to ask for discussion. Variation may be attributed to minor differences in the way the question is interpreted. Discussion may reveal that these variations signify only a slight divergence of opinion, or it may reveal that the variations reflect fundamentally different values. This is what is important for the court to know: organizational values (the "message" the court conveys to the community through its actions) reflect the values of the most influential individuals in the organization.

Part III: Local Concerns

The steering committee may also wish to engage in a process of consensus building regarding specific conditions they believe are problems in the jurisdiction. Figure 2 is an example of a final list of ideas developed using the NGT technique during a committee meeting in response to the following discussion prompt: What circumstances, events, or situations most threaten judicial independence in this court? (When the question is framed, the members should be encouraged not to spend time on problems that may be inherent in State-level law practices. Discussion of these problems is distracting and wastes time.)

There is no prescriptive model offered as part of this measure for how any of the activities described above should be analyzed or incorporated into a report. In one important respect, the process is both the analysis and the report: these are all values clarification exercises that may help the court sharpen the questionnaire instrument suggested for Measure 4.1.1 and, when the survey is completed, interpret the results. When the court appears to share values with the wider community regarding aspects of independence or comity but discovers discrepancies in perceived performance, there is a "problem"--the court's perceived performance is not consistent with its values. Moreover, when the court discovers fundamental differences in values, there may still be a "problem," but one of a different kind.
It may be that more public education is called for to explain why the institutional role of the court is different from that of other units of government. Such a finding is relevant to Standard 4.4, Public Education, and may suggest an area of concern for the committee. Finally, the activities suggested in Parts II and III of this section may serve to bring judges and management personnel to a greater appreciation of what the court's priority in this area should be. Where there is consensus that a problem exists and could reasonably be addressed, an action priority has been identified.

Standard 4.2: Accountability for Public Resources

The trial court responsibly seeks, uses, and accounts for its public resources.

Commentary. Effective court management requires sufficient resources to do justice and to keep costs affordable. Standard 4.2 requires that a trial court responsibly seek the resources needed to meet its judicial responsibilities, use those resources prudently (even if they are inadequate), and account for their use. Trial courts must use available resources wisely to address multiple and conflicting demands. Resource allocation to cases, categories of cases, and case processing is at the heart of trial court management. Assignment of judges and allocation of other resources must be responsive to established case processing goals and priorities, implemented effectively, and evaluated continuously.

Measurement Overview. Measures for this standard ideally address the following sets of questions:

o Seeking resources: What are the court's resources? Are they sufficient? If they are not, what action does the court take to improve them? What are the resources of other agencies that are essential to determination of cases by the court? Are court operations impaired because these resources are inadequate? If so, what action does the court take to compensate for or improve the situation?

o Using resources: How are the court's resources distributed? Are the resources allocated according to a predetermined rationale? Does the rationale reflect statutory or other priorities?

o Accounting for resources: How does the court evaluate whether its resource allocation meets intended objectives? How does the court distinguish between a shortage of resources and resources that are not used effectively or efficiently? Are resources spent according to budget objectives, policy, and law?

The State and local contexts for securing resources vary widely among courts, as do the factors that influence priorities for how those resources should be allocated. Therefore, a predetermined set of indicators that validly measure performance cannot be prescribed for all courts.[2] The three measures for this standard are designed to gather relevant data for evaluating a court's performance with respect to how the court uses its resources and how well it accounts for them.

In addition to the suggested measures, courts with a high interest in this standard should be aware that the measures are designed not only to provide useful data in their own right but also to put the court in a position to undertake some form of weighted caseload study at a later time. Weighted caseload studies are believed to be the best direct measure of the demand for court services.[3] Properly conducted weighted caseload studies, however, require careful guidance from research professionals and a substantial investment of time from court personnel.
Moreover, improperly conducted weighted caseload studies could easily result in inaccurate decisions about the demand for court services--and the way resources should be allocated to meet the demand--that could actually undermine the intent of Standard 4.2. The procedures recommended here, therefore, are consciously chosen to: (1) prepare a court for conducting weighted caseload studies at a later time without prohibitive costs, and (2) ensure that an appropriate dose of experiential common sense is applied to the collection of data about the relationships between workload and resource allocation and to the interpretation of those data once they are collected.

Measure 4.2.1 provides a way for the court to assess the adequacy and utility of its caseload statistical reporting capacity and to make improvements indicated by the measure once it is complete. Measure 4.2.2 provides a framework for bringing together information about the three critical factors that determine whether a court is allocating its resources in a prudent manner. This framework facilitates a structured inquiry, albeit a highly subjective and intuitive one. The three factors are:

o The court's case categories (how the court defines and conceptualizes its services).

o How the court's judges and operational staff are in fact organized and allocated in relation to those case categories (its management decisions).

o The information about demand the court does have (its case-filing data).

Measure 4.2.3 entails a structured review of the court's formal auditing practices (or lack of them) and identifies weaknesses in the way the court accounts for its resources that have allowed, or could allow, misappropriation of public funds.

As with all of the measures for standards in the area of independence and accountability, the measures for Standard 4.2 propose that a steering committee be convened to oversee the measurement effort and to evaluate its findings. Suggestions for how the steering committee might approach the evaluation of the data collected through the measures are provided following Measure 4.2.3.

Measure 4.2.1: Adequacy of Statistical Reporting Categories for Resource Allocation

This measure determines if the court has a statistical reporting capacity useful for assessing the relationship between the court's workload and how its resources are distributed.

Planning/Preparation. Planning for this measure involves three simple activities performed by the trial court manager. The first activity is to prepare a list of the case types found in the court for which statistical data on case filings are regularly maintained. These case types will vary from State to State, but nearly all courts maintain and report to the State certain standard case-filing statistical data broken down by major case type. This type of data also varies from county to county because some counties maintain more fine-grained case-filing statistical data than are reported to the State. The State reports are compiled by collapsing more fine-grained case-type distinctions into the State's broad categories. The case-filing statistical categories are hereafter referred to as "statistical case types" (SCTs).

The second activity is to identify a group of five to seven of the most experienced clerks and other court operations support personnel who, in the trial court manager's opinion, have the best grasp of how the court processes the types of cases it hears.
In assembling this group, trial court managers should not overlook special case-processing areas such as mental health, domestic violence protection orders, and judgment processing. The group serves as an analytic team for data collection using structured group techniques.

The third activity is to identify a group of judges (at least three and no more than seven) who are willing to devote about 1 hour in chambers and about 1 hour in a group meeting to help evaluate the adequacy of the court's statistical reporting categories. Once the two groups are identified, the trial court administrator provides each member of the two groups with a list of the court's SCTs and instructions for beginning the evaluation, as described below in the discussion of data collection.

Data Collection. Data collection consists of two stages: individual analysis of SCTs by each member and group analysis during a team meeting for each group that should last no more than 2 hours.

Individual analysis: For the individual analysis, each participant receives the list of SCTs with the following instructions, modified as appropriate to fit the court's collection of SCTs:

1. Review the list of SCTs and think about their distinctions and interrelationships. Are the distinctions among them clear? Could any case you encounter be neatly classified into one of the categories? Are there any types of cases the court handles that would not fit neatly into one of the categories? If there are, what are those case types?

2. Write down the name of the first SCT and any obvious distinctions within it (case subtypes) that come to mind. These distinctions should be related primarily to the time resources a particular case subtype requires, not to legal or other policy-related distinctions. If the case type is "traffic," for example, you may think there is an important distinction between "mandatory appearance" traffic cases and all others. You may also think of distinguishing "driving under the influence" from other "mandatory appearance" cases. If they are not already present, subtypes for "felony" and "misdemeanor" may be obvious as well. In general civil cases, you may think there are obvious distinctions between torts, contracts, and some other case subtypes. You may even want to identify subtypes among torts (e.g., auto, products liability, medical malpractice). The important "rule" for doing this analysis is to get through all of the case types in the time available and to keep the analysis to no more than five subtypes per SCT.

3. Return your completed sheets to ____________________ (name, location). We will compile a report summarizing the breakdowns provided by all judges and court staff performing this evaluation. (The report will be presented and discussed at a meeting scheduled for _______.)

The trial court manager then compiles a list of all the subtypes identified by group members, eliminating obvious duplications. Subtypes on which there was agreement by everyone or almost everyone should be shown on the report as "recommended SCTs." Others should be listed together with the frequency with which they were suggested.
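The compilation step just described is essentially a frequency count and can be scripted. In the sketch below, each participant's returned sheet has been keyed in as a list of suggested subtypes; treating "everyone or almost everyone" as all but one participant is an assumption made for illustration.

    from collections import Counter

    # One list per participant, keyed in from the returned sheets.
    sheets = [
        ["traffic/mandatory appearance", "traffic/DUI", "civil/tort"],
        ["traffic/mandatory appearance", "traffic/DUI", "civil/contract"],
        ["traffic/mandatory appearance", "civil/tort"],
    ]

    counts = Counter(subtype for sheet in sheets for subtype in sheet)
    threshold = len(sheets) - 1    # "everyone or almost everyone"

    recommended = sorted(s for s, n in counts.items() if n >= threshold)
    others = sorted((s, n) for s, n in counts.items() if n < threshold)

    print("Recommended SCTs:", recommended)
    print("Other suggestions (frequency):", others)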
During the meetings the analysis group will examine the court's SCTs, evaluate their utility for workload analysis for management purposes, and, if appropriate, make recommendations regarding how they should be modified or broken down into more fine-grained SCTs. Group members will consult their initial analyses and will have read the summary report of the collective analysis. The facilitator should prepare charts displaying the "consensus recommended" SCTs.

Data Analysis and Report Preparation. The objective of the meeting is to arrive at a consensus within no more than 2 hours about what conceptual improvements should be made to the court's SCT classifications. An "improvement" means that the SCT system will be more useful in the future for evaluating the way the court's workload is related to required resources (both bench officers and operational support staff). This report should be submitted to the steering committee for further discussion.

Measure 4.2.2: Evaluation of Personnel Resource Allocation

This measure offers a structured method for systematically gathering the opinions of a well-informed group of personnel (judges and key operational support staff) about how well the court's most important resource (people) is allocated among the categories of cases that come before the court. It seeks to inform the court about two related questions implied by the commentary to Standard 4.2:

o Are the court's resources adequate for the work that is required?

o Even if the court's resources are not adequate, are they prudently allocated so that all categories of cases are proportionately affected by the shortage?[4]

Planning/Preparation. Planning for this measure, as for Measure 4.2.1, is the responsibility of the trial court manager. If the court chooses to undertake Measure 4.2.1, some of the preparation for the two measures (such as selecting the court operations staff group and the judges group) can be combined. (See the description of these groups in Measure 4.2.1.) This measure also requires the use of case-filing statistics for case categories. Whereas Measure 4.2.1 examines only the case-filing statistical categories, this measure requires the use of actual data. These data should be collected at the most fine-grained level possible and made available in the form of routinely produced reports for the most recent 1-year period. The data need to be collected according to the model described next.

In addition to establishing the groups, the court manager must prepare a graphic model with very brief textual annotations showing the organizational structure and staffing patterns of the court in relation to the court's entire inventory of case categories and the case filings for those categories for a recent 1-year period. In some respects the model may be very similar to the court's organizational chart, but there are important differences between the recommended model and most organization charts. Typical organization charts show personnel in terms of a chain of command of individuals and their supervisors. Names of work units sometimes suggest their relationship to case categories; sometimes they do not. For this measure, it is vital that the model clearly show how bench officers and operations support personnel are allocated in relation to case categories. The model is explained further under the data collection section below.
If the relationship is inherently unclear because some personnel provide services for all or a mix of case types, it is important for the model to show this in a way that is very easy to visualize. Figure 3 illustrates the potential problem of clarity using records management staff in relation to case categories and the judges assigned to hear them. In the illustration, it is easy to see the relationship between records staff support, judges, and case categories for Court 1. The same is not true for Court 2. (The illustration should not be taken to imply that one arrangement is better or more efficient than another, only that the resource allocation is clearer in one than in the other.)

The model the court administrator develops should, to the extent possible, include all personnel resources associated with the case categories. (See Figure 3.) Typically, the administrator starts with judges, adds staff that are known to be allocated to each judge or courtroom (e.g., reporters, courtroom clerks, bailiffs, secretaries), and then examines other clerical or administrative staff that may support the work associated with that case category. These staff are usually physically separated from courtrooms and chambers or work as part of a centralized team (e.g., clerk's counter staff, bookkeeping and accounting, judgment recording, jury staff, records clerks). Only the trial court manager(s) will know how these personnel should be shown on the model.

Data Collection. Data collection involves two stages. The first stage is the construction of the model described above. A simplified but generally illustrative example of how the model should look when it is completed is shown in Figure 4. The model is distributed to each member of the two analysis groups with a request to review and become familiar with its details before the meeting. The group members are asked to make notes regarding anything they find striking in the model with respect to the way the court's resources are allocated. To focus the individual review, it is helpful to ask the group members to consider the following questions:

o Does the model in any way misrepresent the way in which the court's resources are distributed?

o Do you know why the court's resources are organized and distributed in this way? Or does the model prompt you to ask why?

o Do you agree with the reasons for organizing and distributing the court's resources in this way?

o Very importantly, do you find anything striking about what the model does not tell you about the relationship between case categories and resource allocation?

The second stage focuses on group meetings. One meeting should be held with judges and another with court operational personnel. Keeping the groups separate is recommended for three reasons: (1) it keeps group size manageable, (2) it compensates for the tendency of court staff to defer to judges during meetings and withhold opinions that might differ from those of judges, and (3) it provides an opportunity to prepare a report for the steering committee that shows how the opinions of judges and court operations personnel compare with respect to allocation of the court's personnel resources. (In some courts, especially smaller and medium-sized courts that have fewer judges and staff or whose judges and operations staff are accustomed to working closely together, this separation may not be necessary.)
The purpose of the group meetings is for the facilitator to use the model as a springboard for eliciting and discussing reactions of group members to the model and to look for reactions that appear to be shared by all or most members of the group.

Data Analysis and Report Preparation. It is possible that the group's work will require adjustments to the model. For example, structural or work assignment details within work units may come to light that are not apparent to upper management. It also is possible that changes may have occurred that are not reflected in standard reports or other data sources the court manager relied on to prepare the model. If there are no changes, the model itself serves as part of the analysis and report. The remainder of the report should be devoted to the facilitator's written presentation of (1) the consensus views of members of the two groups regarding the way that court resources are allocated and (2) what group members would like to know but cannot discern about resource allocation as a result of the modeling and group interaction process. An important part of the facilitator's responsibility in preparing the report is to highlight any striking differences between the views of judges and staff, especially if it appears that judges know the answers to questions that staff are asking, or vice versa. In summary, the report produced by this measure consists of:

o The model showing the relationships among case categories, judges, staff, and case filings.

o A summary of the consensus views of the groups.

o A summary of consensus views of the two groups about what is missing from the model.

o Highlights from a comparison of the views of the two groups.

Measure 4.2.3: Evaluation of the Court's Financial Auditing Practices

Periodic audits of financial practices are designed to reveal whether revenues and expenditures of governmental organizations are handled in accordance with law, regulation, contractual obligations, or, in some cases, policy. This measure focuses on whether the court uses formal financial auditing to prevent and detect irregularities, misfeasance, or malfeasance in its financial practices. To assess the court's auditing procedures, the following questions are examined:

o Are there internal auditing procedures?

o How frequently do internal audits occur?

o Is there an independent external audit conducted periodically to assess the effectiveness of the court's internal controls, including its internal audit procedures?

o What is the scope of the external audit? (For example, is the audit conducted on financial statements and internal controls or just on cash controls?)

o What use is made of the financial audit? Are auditors' suggestions for improvements reviewed and implemented?

Planning/Preparation. This step involves determining who will carry out the data collection and analysis. Should a trial court staff member be selected because of financial or other limitations? Or should the court consider an outside researcher to work in cooperation with court administrative staff? Because an outside researcher may produce a more thorough and objective assessment and bring special expertise to the subject, this alternative is recommended. Insiders are needed, in any case, to help secure the information necessary to conduct the assessment. Preparation for the measure includes review of Form 4.2.3, Auditing Practices Checklist and Performance Index.
It may be necessary for the researcher, in consultation with court personnel, to modify the instrument to improve its specificity and appropriateness for State and local conditions and terminology.

Data Collection. The researcher will conduct interviews to become familiar with policies that govern internal and external audits of the court's financial controls. These interviews allow the researcher to partially complete Form 4.2.3. The initial interviews, for example, will clarify whether periodic audits are performed and who performs them. Copies of audit reports or memoranda then should be collected for a 3-year period. After reviewing the audit reports, the researcher should discuss them with court financial officers and managers to determine who reviewed the audit reports and what actions were taken in response to any problems or deficiencies noted in them.

Data Analysis and Report Preparation. A performance index assigns scores to the court's use of audits. The scoring method uses points as negative indicators; a perfect score is "0." Once the researcher has completed the checklist and summed the index scores associated with it, a brief narrative should be prepared to explain checklist items that are not scored as "0." The checklist and rating form, with narrative, should be submitted to the steering committee formed to oversee and interpret the results of the evaluation, as described in the introduction to the measures of performance for independence and accountability.

Suggested Steering Committee Activities for Standard 4.2

The steering committee should receive three reports to review with the help of the facilitator. Each of the reports should include explicit or implied findings regarding performance or recommendations for changes. The focus of the steering committee review should be on prioritizing the level of policy concern the reports engender (evaluation of findings or problems) and the feasibility of taking action with respect to them (action planning).

Usefulness of Statistics for Resource Allocation Planning

Evaluation:

o Do the court's statistical case types give us a clear picture of how workload and resources are related?

o What changes in case type categories are needed, if any?

Action:

o What changes to statistical case types appear feasible in the next 12 months? 24 months?

Organization and Resource Allocation

Evaluation:

o Do the reports indicate that the court may be overstaffed in some areas and understaffed in others?

o Do there appear to be other reasons to reevaluate the way the court organizes its judicial assignments and operations staff?

o What changes should be made?

Action:

o What changes in the way personnel are assigned to case categories appear feasible in the next 12 months? 24 months?

Auditing

Evaluation:

o Do the reports indicate that changes are needed in the court's auditing procedures or fiscal controls?

o What changes should be made?

Action:

o What changes could be made in the court's auditing practices or fiscal controls in the next 12 months? 24 months?

Other Related Considerations for Standard 4.2

The measures proposed for Standard 4.2 provide no direct measurement of several issues related to performance. It is useful to review those issues, described below, during a separate meeting.

Seeking resources:

o What are the court's resources? (Was it easy or difficult for the court manager to prepare an accurate inventory of personnel? How difficult is it to make a comparative assessment of the resources applied to the court's case categories?)

o Do the court's resources, overall, appear to be sufficient? If they are not, what action can be taken to improve them? Has this evaluation been helpful in making a case for more resources?

o What resources of other agencies are essential to determination of cases by the court? Are court operations impaired because these resources are inadequate? If so, what action could be taken to compensate for or improve the situation? How does this relate to our findings with respect to judicial independence and comity (Standard 4.1)?

Using resources:

o What has been learned about how the court's resources are distributed, or about the rationale used in making those allocations?

Accounting for resources:

o Are resources allocated according to a reasonably clear and generally agreed upon set of objectives?

o How does the court distinguish between a shortage of resources and the ineffective or inefficient use of resources?

Standard 4.3: Personnel Practices and Decisions

The trial court uses fair employment practices.

Commentary. The trial court stands as an important and visible symbol of government. Equal treatment of all persons before the law is essential to the concept of justice. Extended to the court's own employees, this concept requires every trial court to operate free of bias--on the basis of race, religion, ethnicity, gender, sexual orientation, color, age, handicap, or political affiliation--in its personnel practices and decisions. Fairness in the recruitment, compensation, supervision, and development of court personnel helps ensure judicial independence, accountability, and organizational competence. Court personnel practices and decisions should establish the highest standards of personal integrity and competence among the court's employees.

Measurement Overview. Three measures are associated with this standard. Measure 4.3.1 elicits unstructured information about fairness in personnel practices directly from court employees by having them write down comments on index cards in a way that assures anonymity. The index cards can be sorted quickly into groups that express similar ideas. Measure 4.3.2 uses a confidential written survey composed of structured questions about fairness in personnel practices. Response options are presented in a scale that permits quantitative analysis of the survey results. Although Measure 4.3.1 and Measure 4.3.2 overlap, they differ sufficiently that both can be undertaken. If time and resources are scarce, however, courts should do one measure or the other, but not both; doing one measure carefully has more value than doing both in a way that is methodologically flawed. Measure 4.3.3 requires review of court records to obtain information about the race, gender, type of position, salary, and tenure of employees. These data indicate possible bias in the court's employment practices. Following Measure 4.3.3 are suggestions of approaches the steering committee may use to identify and prioritize the most striking findings of the previous three measures, arrive at some consensus about their significance, and develop a set of recommendations for action to correct any deficiencies.

Measure 4.3.1: Assessment of Fairness in Working Conditions

The procedure described in this measure offers a quick and inexpensive way to gather data about employees' assessments of the court as a fair employer. This measure is unsuitable for courts that employ fewer than 10 nonjudicial employees; it is designed for courts that employ more than 30.
Employees are divided into groups of 10 to 30, and each group is convened in a courtroom or meeting room in the building. The employees are then asked to write statements about what they believe are the strengths and weaknesses of the court's personnel practices with respect to fairness. The statements are written anonymously on plain index cards, which are deposited in collection boxes and analyzed. The method assures spontaneity of responses because it does not rely on previously prepared questions. Properly planned and carried out, the session should take no more than 30 minutes. Field tests of the measure suggest that courts should consider the following cautions before assigning the task to a research coordinator.

o Employee and supervisor cooperation must be requested from a high level. Participation by a high percentage of the court's employees in all or most units is essential for the measure to work. Someone lacking visible authority (a relatively unknown and unsupported court "planner" or "analyst," for example) cannot successfully organize and carry out this measure without visible support from the top.

o It may prove difficult to schedule the groups in a way that is not overly disruptive to the court's work priorities, preserves the anonymity of responses, and yet still allows differential analysis of responses by the kind of work employees perform. Before deciding to undertake the measure, the steering committee should agree on what compromises it is willing to make and whether there is a way to balance those practical concerns to yield useful results. For example, are employees willing to come early, stay late, or use some of their lunch break? Is the court's top management willing to have the court operate on a skeleton crew in some work units for 30 minutes a day? Would the court find the results useful if there is no way to discern whether patterns in responses are more typical of employees who do one kind of work rather than another (e.g., court reporters, records clerks, accounting personnel, probation officers)?

o Despite procedures in the demonstration sites that protected the anonymity of responses, there were indications that employees nevertheless doubted the promise of anonymity. The steps taken to assure confidentiality therefore must be visible and convincing to the employees. The skills of the person who will "proctor" the sessions should be considered. Can the person convincingly put employees at ease and make clear that the exercise is not a "test"? Will the planners use good judgment as they set up the procedure? Will the groups be so crowded in the room that they feel someone is looking over their shoulder when they write? Would it be easier to conduct the measure in the employees' work area?

Planning/Preparation. Make an initial assessment of the total number of court employees and determine how to organize them into groups for administering the procedure. The plan for establishing the groups will vary for each court depending on court size and how work units and persons performing similar duties are organized. (Under any plan, however, management and supervisory personnel should be formed into a separate group.)
Factors to consider in forming the groups include: (1) the effects on work flow when employees are away from their workstations, (2) desired minimum group size (generally no fewer than 10), and (3) the value of being able to analyze responses by organizational unit or job classification (e.g., document and records processing staff, courtroom and chambers personnel, probation department). Group size should be between 10 and 30 individuals. When the groups are formed, the goal is to balance efficiency (e.g., keeping the group size as close as possible to 30 or conducting the sessions in the employees' work area) with the ability to preserve distinctions among types of positions or divisions of the court. Form 4.3.1, Illustrative Position Groupings and Schedule, shows how one court with 150 employees might have organized groups and a schedule for conducting this measure.

The person administering the procedure should have the ability to follow directions and to make groups of employees feel at ease with the procedure. Experience in field tests of this measure in trial courts has shown that employees have questions about why the procedure is being conducted and how confidentiality will be preserved. Employees are likely to feel more at ease if the person administering the procedure is not an employee of the court. Representatives of the county personnel office or volunteers from colleges and universities could be used. The individuals overseeing the trial court performance evaluation should arrange for the selection and training of the person responsible for administering the procedure. A walkthrough of the procedure is advisable before it is administered to employees.

A supply of 3 x 5 index cards will be needed, approximately 10 for each employee. One or more rooms should be designated that are large enough to accommodate the group and close to the employees' work area (e.g., courtrooms, conference rooms, or training rooms). The closer the room is to the employees' workstations, the faster the procedure can be completed. To conduct the procedure, a schedule should be drawn up that permits groups of employees to be away from their workstations for no more than 30 minutes. The schedule should be planned to minimize disruption of court business. Groups should be scheduled at least 45 minutes apart to allow for transition time.

Data Collection. The procedure described here should be followed strictly. Employees assemble in the meeting room at the scheduled time for their group (or the proctor will go to them). The person administering the procedure explains the general purpose of the measure, which is to allow employees to evaluate and contribute to the improvement of fair working conditions in the court. He or she describes the procedure to the employees and explains that strictly following the procedure assures anonymity of responses. Employees should be invited to ask questions about the mechanics of the procedure.

Each employee is given a supply of ten 3 x 5 index cards. After the purpose of the measure is explained, the employees are asked to write down on one of the cards one striking example of fairness in the court's personnel practices. The employees should be instructed not to identify themselves on the cards in any way and are given no more than 2 minutes to complete their answers. If an employee has no comment to make in that time, it simply means that there is no important issue he or she wishes to report. Employees may turn in a blank card or none at all.
If employees appear to be concerned about anonymity, the cards should be deposited directly by each employee into a box or envelope that is passed around the room before the next segment begins. The procedure is then repeated with the proctor asking the employees to write down one striking example of unfairness in the court's personnel practices. After the second set of cards has been collected in a separate box or envelope, employees are given 10 minutes to write down any other observations they wish to make about fairness in working conditions at the court. Each thought should be written down on a different card and collected in a third box or envelope. The employees may return to their workstations whenever they choose, before or at the end of the session. For each group of employees, the three sets of cards are kept separate and labeled to facilitate analysis.

Data Analysis and Report Preparation. The three sets of cards are reviewed and analyzed by a person familiar with social research techniques and personnel issues. This individual may be a county personnel specialist, a personnel specialist employed by the State, or a research specialist at a nearby university or consulting firm. Analysis consists of grouping the comments into sets of similar observations or statements of concern and summarizing the frequency with which similar positive and negative observations occur for each employee group and for the court as a whole. Experience in the demonstration sites suggests that researchers should avoid overanalyzing and overcataloguing the responses. The analyst should identify the three to five themes occurring most frequently. Patterns, however, may vary with different groups. The analyst should look for themes that appear to run through all of the groups or that are reported by many members of the same group. The report the analyst prepares should be provided to the steering committee.

It is important that employees be informed of the evaluation's general results and be advised of the steps the court plans to take to remedy any deficiencies that were identified. Doing so will signal integrity and openness in court personnel practices as well as increase confidence among employees that fair practices are a concern taken seriously by the court. Evaluation results and corrective plans could be shared at a courtwide staff meeting, at the beginning of a courtwide social gathering, or in a letter from the chief judge. A more personal approach is recommended over a letter; it may be difficult to draft a letter that is not overly guarded or vague, particularly if some of the results are sensitive.

Measure 4.3.2: Personnel Practices and Employee Morale

This measure complements Measure 4.3.1 by obtaining employee responses to structured questions about fairness in personnel practices related to employee morale and competence. It allows a more fine-grained analysis by employee position and by issue area than does Measure 4.3.1. Measure 4.3.2 is not suitable for courts that employ fewer than 10 employees.

Planning/Preparation. Identify groups of employees who perform similar duties in the court, such as bailiffs, court reporters, counter and courtroom clerks, and calendar and probation staff. The court's management and supervisory personnel should form a separate group. Classes of employees might also be distinguished by court divisions or other relevant organizational subdivisions. These classes of employees are designated on the questionnaire forms.
(See Section VII of Form 4.3.2, Employee Survey on Personnel Practices and Employee Morale.) Courts should review Form 4.3.2 and add or modify questions to better fit their local circumstances. The questionnaire covers such subjects as recruitment, promotion, termination, salaries, and communications. A modification all courts should make is to tailor the system for coding employee position categories so that specificity of employee groups (by unit, job function, or demographic data) is balanced with the need to preserve confidentiality. One way to preserve confidentiality that is credible to employees is to have no position or demographic group code for groups with fewer than 10 members. The provisions for preserving confidentiality should be clearly explained on the questionnaire form.

Data Collection. Survey forms should be distributed to all employees in a way that ensures that they receive them. It is important that the court have an accurate count of the number of surveys distributed to employees (total and by each group). The count should not be assumed to be equivalent to the number of employees or inferred in some other way from the procedure for distribution. One method for obtaining an accurate count of surveys distributed is to ask unit supervisors to deliver them personally and request that employees initial a distribution list. If this alternative is chosen, it is important to brief supervisors personally about the importance of getting an accurate count. Another way is to enclose the survey with employee paychecks. In any case, the distribution and collection of the survey and the design of the questionnaire itself should be done in a manner that makes it obvious to employees that confidentiality is being preserved. Questionnaires should be distributed to employees with return envelopes that can be sealed, and identifiers should not be included on the questionnaire. It is important to stress again that employee groups should not be so small that employees believe their identity could be deduced. A suggested minimum is 10 members, as noted above. This should be clearly stated in Section VII of Form 4.3.2. Analysis of the results should not begin until at least 80 percent of the questionnaires are returned. It is suggested that response time be limited to no more than 24 hours and that a followup notice be sent after 2 days if 80 percent of the questionnaires have not been returned.

Data Analysis and Report Preparation. Questionnaire data are entered into a computer for analysis using statistical software. The services of an analyst who is skilled in the design and interpretation of statistical analyses are required. Analysis using summary statistical methods is useful, but for larger courts (e.g., more than 30 employees) the analyst also needs to understand and apply techniques for correlating variables (e.g., the relationship between pay levels, category of job, and attitudes about hiring or promotion). Although it is useful to correlate attitudes with factors such as gender, race, or seniority, this analysis should not be attempted if there is an appreciable risk of compromising confidentiality and the validity of the results. The report prepared by the analyst should be provided to the steering committee established for evaluating the results of the personnel measures. (See the introduction to all measures for Performance Area 4 and the overview of measures for Standard 4.3.)
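For courts that key the returned questionnaires into a simple file, the response-rate check and the group-level summaries described above can be sketched as follows. The 1-to-5 scale, the 80-percent threshold, and the fewer-than-10 suppression rule come from the text; the file layout, column names, and distribution counts are hypothetical.

    # Sketch: response-rate check and mean scores by position group.
    # File layout, column names, and distribution counts are hypothetical.
    import csv
    from collections import defaultdict

    distributed = {"clerical": 40, "courtroom": 30, "probation": 20}

    scores_by_group = defaultdict(list)
    with open("survey_responses.csv", newline="") as f:
        for row in csv.DictReader(f):          # columns: group, q1, q2, ...
            answers = [int(v) for k, v in row.items() if k.startswith("q")]
            scores_by_group[row["group"]].append(sum(answers) / len(answers))

    returned = sum(len(s) for s in scores_by_group.values())
    if returned / sum(distributed.values()) < 0.80:
        print("Fewer than 80 percent returned; send a followup notice.")

    for group, scores in scores_by_group.items():
        if len(scores) < 10:                   # suppress small groups
            print(f"{group}: fewer than 10 responses; not reported separately")
        else:
            mean = sum(scores) / len(scores)
            print(f"{group}: mean score {mean:.2f} "
                  f"({len(scores)} of {distributed[group]} returned)")

Correlation of attitudes with position, pay, or demographic factors would be layered on top of this, subject to the confidentiality caution above.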
It is important that employees be informed of the evaluation's general results and be advised of the steps the court plans to take to remedy any deficiencies it has identified. Doing so will signal integrity and openness in court personnel practices and increase confidence among employees that fair practices are a concern the court takes seriously. The results of the survey and the court's plan to take corrective action could be shared with employees at a courtwide staff meeting or social gathering or in a letter from the chief judge. The first two approaches are more personal and permit more flexibility in presentation. A written communication that is not overly guarded or vague may be difficult to draft if the results are sensitive.

Measure 4.3.3: Equal Employment Opportunity

This measure uses statistical methods to assess the court's performance as an equal opportunity employer. The proportion of major ethnic groups in the community is compared with the proportion of individuals in those groups who are employed by the court in various capacities and at various salary levels. The measure also looks at gender in the same way.

Planning/Preparation. Using the most current and complete source of local demographic data (or at a minimum the U.S. Department of Commerce, Bureau of the Census, County and City Data Book for the current year), record the percentage of adult minorities and adult women in the jurisdiction.

Data Collection. Using court payroll and personnel records, record for each employee the type of position held, salary, tenure, gender, and race. If the court's personnel records do not include all of this information, ask each unit supervisor to collect the information from the employees directly. Record the information on a data entry form similar to that shown on Form 4.3.3a, Illustrative Data Collection Form for Personnel Information, or enter it into a similarly structured computer file that can be analyzed using an appropriate statistical software package.

Data Analysis and Report Preparation. Group together similar employee position classes for the statistical analysis. Using a statistical package or manual calculations, analyze the data to produce the summary data shown on Form 4.3.3b, Illustrative Summary Statistical Report on Race and Gender Mix Among Employees. This analysis will show the percentages of employees by race, gender, average salary, and tenure in each position class as well as percentages for the court as a whole. It also will provide a comparison of representation, salaries, and tenure for each race or ethnic group. Additional grouping of position classes may be necessary to obtain sufficient numbers in a group to permit meaningful averaging of salaries and use of percentages. When groups of employees are too small for averaging and cannot be regrouped in a way that yields meaningful results, that employee class should not be shown on the summary statistical report. Instead, a separate display listing actual data for these positions should be prepared for consideration as part of the overall report. For example, showing the percentages for a group of three top management positions in a court (clerk of court, court administrator, administrator of juvenile probation services) may not be meaningful. Grouping these positions with others in the court also may not be appropriate.
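Where the Form 4.3.3a records are available in electronic form, the Form 4.3.3b summary reduces to grouped counts, percentages, and averages. A minimal sketch follows; the record layout, position classes, salaries, and jurisdiction percentages are hypothetical illustrations (the jurisdiction figures would come from the census data recorded during planning).

    # Sketch: summary statistics by position class (Form 4.3.3b).
    # Records, classes, salaries, and jurisdiction figures are hypothetical.
    employees = [
        {"class": "Clerical",   "gender": "F", "minority": True,  "salary": 24000},
        {"class": "Clerical",   "gender": "M", "minority": False, "salary": 25000},
        {"class": "Managerial", "gender": "M", "minority": False, "salary": 52000},
    ]
    jurisdiction = {"women": 0.51, "minority": 0.22}   # from census data

    by_class = {}
    for e in employees:
        by_class.setdefault(e["class"], []).append(e)

    for name, group in sorted(by_class.items()):
        n = len(group)
        pct_women = sum(e["gender"] == "F" for e in group) / n
        pct_minority = sum(e["minority"] for e in group) / n
        avg_salary = sum(e["salary"] for e in group) / n
        print(f"{name}: {n} employees, {pct_women:.0%} women "
              f"(jurisdiction {jurisdiction['women']:.0%}), "
              f"{pct_minority:.0%} minority "
              f"(jurisdiction {jurisdiction['minority']:.0%}), "
              f"average salary ${avg_salary:,.0f}")

Classes too small for meaningful percentages would be pulled out of this summary and listed separately, as directed above.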
In general, good performance for this measure is indicated if the percentages of adult women and minorities in the jurisdiction's population approximate the percentages of women and minorities employed by the court in each position class, and the average salaries of employees in each position class are similar regardless of gender and race. Results should be evaluated in terms of comparative disparities (see Measure 3.2.3, Representativeness of Final Juror Pool, for an explanation of this term) between the percentage of women or minorities in each position class that would be predicted from the demographic data and the actual percentages shown in the data. Where differences in average compensation by race or gender among comparable employee classes (adjusting for tenure of the employees) are greater than 10 percent, the reasons for the discrepancies should be systematically documented and reviewed by the court. In evaluating the significance of the results and the need for an affirmative action plan (or an improved affirmative action plan), the court should consider the rulings of the State's appellate courts in equal employment opportunity cases, the level of the disparity (greater disparities require more aggressive actions by the court), and the alternatives available to the court for implementing an improved affirmative action plan.

Suggested Steering Committee Activities for Standard 4.3

After one or more of the data collection measures for Standard 4.3 have been completed, the steering committee members should review the information presented in the reports and formulate their thoughts about what aspects of the data they find most striking. This review should be followed by consideration of what, if any, action is suggested by the data. A facilitator should assist in this effort using the structured group techniques described in the measurement overview for Performance Area 4.

The steering committee's work has three purposes. The estimated time required to complete the group activity related to each purpose ranges from 30 minutes to 1 hour. The total time required will depend on the skills of the facilitator and the preferences of the group. The committee, therefore, has the option of addressing each purpose at separate meetings or scheduling a single 2-hour session broken into three parts. The most appropriate option will depend entirely on local conditions and committee member preferences. The purposes of the review and an estimate of the time required for each activity are as follows:

o Determine the most striking aspects (good or bad) of the data produced for Measures 4.3.1 through 4.3.3: 1 hour.

o Identify key features of a plan to address perceived deficiencies: 1 hour.

o (Optional) Arrive at a consensus rating of the court's overall performance in achieving fairness in employment practices: 30 minutes.

The purposes of the work are accomplished by posing the following questions to the committee. The committee answers as a group, using the decisionmaking techniques described in the measurement overview:

1. What did you find most striking in the data and reports about fairness in the court's personnel practices? Either NGT or Ideawriting is suitable for addressing this question.

2. On a scale of 1 to 5, where 1 is very poor and 5 is very good, how would you rate the court's performance for this standard, and why? On the pads that the members are provided, each member writes a score and a brief explanation for the score.
The members then report their scores (but not the reasons) aloud to the facilitator. If there is general agreement, no further discussion is required. If the range of views is wide, some discussion may be useful. If there is a clear trend but some outliers, the reasons for the views of the outliers may be solicited. After discussion, the process is repeated to determine if the group has reached a more definitive consensus. If there are no significant disparities in the evaluation scores of the members, a consensus rating is likely to be represented by an average of all the ratings. If there are wide discrepancies that are not resolved through discussion, the evaluation rating should reflect the mean of the predominant viewpoint, with the exception(s) reported as such. The evaluation(s) and explanatory notes should be preserved as part of the documentation of the court's measurement effort. These reports will be useful when reevaluations occur in future years.

3. What action should the court take to improve its performance as a fair employer? Either NGT or Ideawriting is suitable for addressing this question. The choice of techniques depends on the size of the steering committee and members' level of comfort with expressing themselves in writing rather than orally. The outcome of this step is a prioritized list of five to seven items the group agrees are the most feasible and effective actions the court can take to improve its performance or its ability to conduct more valid or reliable assessments in the future.

It is important that employees be informed of the evaluation's results and advised of the steps the court plans to take to remedy any deficiencies it has identified. Doing so will signal integrity and openness in court personnel practices and increase confidence among employees that fair practices are a concern the court takes seriously. The results and plan for corrective action could be shared with employees at a courtwide staff meeting or social gathering or in a letter from the chief judge. One of the more personal approaches is recommended in preference to a letter; it may be difficult to draft a letter that is not overly guarded or vague if the results are sensitive.

Standard 4.4: Public Education

The trial court informs the community about its programs.

Commentary. Most citizens do not have direct contact with the courts. Information about the courts is filtered through sources such as the media, lawyers, litigants, jurors, political officeholders, and employees of other components of the justice system. Public opinion polls indicate that the public knows very little about the courts, and what is known is often at odds with reality. Standard 4.4 requires trial courts to inform and educate the public. Effective informational brochures and annual reports help the public understand and appreciate the administration of justice. Participation by court personnel in public affairs commissions also is effective. Moreover, courts can effectively educate and inform the public by including able public representatives on advisory committees, study groups, and boards.

Measurement Overview. Three measures assess how well the trial court informs the community of its programs. Measure 4.4.1 consists of a checklist of factual matters regarding the policies and practices for responding to media requests. It should be completed by the trial court manager and then summarized in a brief report for the steering committee.
Three evaluative questions are posed at the end of the checklist to guide the steering committee's consideration of the policies and practices in light of Standard 4.4. Measure 4.4.2 consists of two interview surveys, one for media representatives and one for court employees, which are designed to reveal any significant divergence between the views of employees and media representatives. Completing both interview surveys potentially provides a more complete and balanced perspective on court policy and practice than does the checklist, which primarily reflects the "view from the top." Measure 4.4.3 examines the breadth and diversity of the court's community outreach programs. It requires interviews with court officials about educational activities they engage in and inspection of educational or public information materials the court produces. Suggestions for approaches the steering committee may use to review the information gathered through these three measures are presented following Measure 4.4.3.

Measure 4.4.1: Court and Media Relations

This measure examines the court's policies or practices relating to media requests for information. Court policy is examined by the trial court manager or designee using a checklist of questions.

Planning/Preparation. Preparation for this measure requires the trial court manager to identify and collect copies of court policies that govern responses to media inquiries. If there are no written policies, it may be necessary to interview court staff who are familiar with the court's actual practices when responding to media inquiries.

Data Collection. Form 4.4.1, Checklist for Court Policy Governing Response to Media Inquiries, provides a checklist of issues that should be examined during the review. The review consists of examining written policies and conducting interviews when necessary to fill in background information or to clarify matters subject to interpretation. In courts governed less by written policy and more by unwritten practices or rules, the trial court manager should interview key judges and court staff who may be exposed to media inquiries. It also is possible that both the form and substance of policy vary among departments of the court because they field different types of inquiries. (For example, judges and their personal staff who receive case-related inquiries may routinely pass them on to the clerk of court's office where information of public record is available; the clerk's office staff in turn may either provide information from their records or refer the inquirer to the records themselves. Court probation department officials may have policies and procedures that are unique to their department.) For each item on the checklist, the source or sources of the data should be recorded (i.e., the document(s) examined or the person interviewed).

The checklist focuses on aspects of court policy governing media relations, including:

o Whether or not a policy exists.

o Whether opinions of representatives of the media were taken into account in the policy's formulation.

o How clearly the policy spells out the manner in which media inquiries are to be handled by court personnel and who should handle them.

o Whether the policy includes provisions for the court to monitor and respond to the media.

Data Analysis and Report Preparation. After the checklist is completed, a brief report is prepared covering each question and summarizing the results. In some cases a question may require only a simple "yes" or "no" answer.
The report should be crafted by the trial court manager as a simple factual report, free of evaluative judgments. The process of evaluation is left to the steering committee, of which the trial court manager should be a member. Three questions are included at the end of the checklist that steering committee members should receive with the report. Each member should complete her or his responses to the questions before meeting to consider the report.

Measure 4.4.2: Assessment of the Court's Media Policies and Practices

This measure surveys representatives of the media and court personnel to obtain information about court practices when responding to media inquiries. Designed to elicit open-ended responses, the survey is conducted in an interview format, either by telephone or in person. Relatively small numbers of interviews are required: a maximum of 20 in the largest courts and 6 to 10 in medium-sized courts (5 to 10 judges).

Planning/Preparation. Court administrators identify representatives of the media and court personnel for a survey about court practices. Representatives of the media who interact with the court regularly and court employees who routinely field or respond to media inquiries are identified. Court employees are briefed by the court manager about the process so that they will cooperate with the interviewer. If possible, two to three times as many court employees should be identified and briefed as will be interviewed so that any employee concerns about protecting anonymity can be satisfied. When this is not possible, some survey questions may need to be omitted or may not be candidly answered (e.g., a question asking the employee to evaluate the appropriateness of court policy). In all cases, the suggested survey forms (Forms 4.4.2a and 4.4.2b) should be reviewed and modified to fit specific local circumstances while preserving the survey's basic content requirements. After reviewing and revising the survey as necessary, the court should select an interviewer who is not a court employee. (See discussion of data collection below.)

Data Collection. This step need not be completed by a designated research professional; however, it is best to avoid the use of court personnel to conduct the interviews. The interview is not especially complex or demanding, and arrangements for securing an interviewer might be made with professors or graduate students at a local college. In smaller communities it may be possible to find an appropriately qualified person who teaches journalism or political science at a local high school. What is crucial is that the person be skilled at putting people at ease, understand the importance of sticking closely to the interview structure, and refrain from "leading" the interviewee while eliciting responses. The person also must understand the anonymity requirements and be able to judge how best to preserve them in summarizing the results of the surveys.

The surveys address the level of satisfaction with the court's policy, the faithfulness with which court policy is carried out, and satisfaction with actual procedure or experience. Among the topics examined are:

o Whether media representatives and court employees know the court's policy.

o Whether media representatives and court employees consider the policy to be reasonable and workable.

o Whether court responses are timely.

o Whether court responses are of high quality.

o Open-ended observations and comments about court and media interactions that are not covered in the survey questions.
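Because both instruments pose parallel questions, the tabulation called for in the next step amounts to side-by-side frequencies. A minimal sketch, assuming a hypothetical three-level coding of responses to one shared question; with so few interviews, raw counts rather than percentages are reported.

    # Sketch: side-by-side frequencies for one parallel survey question.
    # The response codes and lists below are hypothetical illustrations.
    from collections import Counter

    media_responses = ["satisfied", "satisfied", "dissatisfied", "neutral"]
    court_responses = ["satisfied", "neutral", "neutral", "satisfied"]

    media, court = Counter(media_responses), Counter(court_responses)
    print(f"{'Response':<14}{'Media':>7}{'Court':>7}")
    for code in ("satisfied", "neutral", "dissatisfied"):
        print(f"{code:<14}{media[code]:>7}{court[code]:>7}")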
Data Analysis and Report Preparation. After the survey has been completed, a summary of responses should be prepared using simple frequencies. (Percentages are not appropriate with very small numbers of cases.) Because the numbers are small, no special analysis tools are needed--a pencil and paper tabulation should be sufficient. Comments made by interviewees also might be included in the report if they do not compromise anonymity. An important requirement for report preparation is to present the data in a way that allows a ready comparison of the responses of media representatives and court personnel to parallel questions. These data will provide the court with some insight into two aspects of its performance with respect to media relations:

o The court's success in informing the media and its employees about its policies.

o The satisfaction of court and media representatives with the court's policies.

Measure 4.4.3: Community Outreach Efforts

This measure determines the extent to which the court disseminates information to the public about its purposes, operations, and programs, and compares the court's actual public education efforts against a checklist of the wide range of possible public education activities. A public information specialist or a panel of court officials may then evaluate this information in light of court problems, goals, and resources to establish action plans related to public education.

Planning/Preparation. The measure involves examining documents or program materials used for public education, interviewing judges and court staff who are involved in public education activities, and summarizing the data qualitatively for each category of public education identified on the checklist. A public information specialist and a knowledgeable court employee work together to gather and summarize data. The court employee adds efficiency and expertise to the processes of collecting documents and identifying people to interview, while the public information specialist enhances the efficiency of summarizing relevant data and putting it into perspective in terms of cost and effectiveness. An alternative approach is to have court employees complete the checklist and collect and summarize the data. The employees then consult with a public information specialist for a formative review of the completeness and relevance of the summary. If necessary, they meet with the specialist a second time for evaluative comments about how the court's actual practices compare with the range of dissemination activities available.

Data Collection. Court employees assisting with the measure first gather all public education documents (including any audio/visual media presentations) and compile a list of activities based on the checklist shown in Form 4.4.3a, Checklist of Potential Community Outreach Efforts: Organizational Efforts. The court employee and public information specialist team then review the documents and other information to determine what interviews are needed to gather more detailed information about how the documents have been used or how activities are carried out. Optimally, the public information specialist assists in scheduling the interviews and leads them. As the research team conducts interviews to identify education materials, it may also conduct interviews to assess court employees' public education and outreach activities (see Form 4.4.3b, Checklist of Potential Community Outreach Efforts: Individual Efforts).
This checklist covers activities carried out by individuals in the court on their own initiative. Additional interviews with judges, court managers, program specialists, and probation staff then are needed to supplement the information on individual activities. Each major division of the court is reviewed separately. The research team briefly summarizes the data collected from the document reviews and the interviews in qualitative terms for each checklist item. A separate report for each court division or major program is prepared. Some methods of dissemination may not be used at all in the court, some may be used very little, and some may be used extensively. Quantitative data should supplement the qualitative description where appropriate. For example, the quantity of brochures printed and distributed is relevant information, as are the frequency and duration of public service announcements or public speaking engagements.

The checklist provides an inventory of community outreach approaches used by the court (e.g., brochures, videos, public appearances, public service announcements, adult education programs, tours of facilities, posting of notices, and direct mailings). In general, the more approaches used, the more diverse the impact of the court's outreach efforts. The summary for each checklist should include:

o The number of variations of each type of community outreach approach, such as different public service announcements on television and radio and in newspapers, public transportation, public buildings, utility bills, stores, and billboards.

o The number of instances of community outreach efforts within each variation; for example, five public service announcements on television.

o The geographical, social, ethnic, or cultural distinctions that are likely to be associated with the public information activity.

Data Analysis and Report Preparation. The summary report is a valuable tool for self-evaluation by court officials, who have an implicit understanding of what the checklist summary implies given the court's public information needs and its available resources. The evaluative potential of the checklist is increased further if it is annotated with comments by a public information specialist. An even more accurate assessment of the court's performance is obtained by considering the checklist data (with or without annotations and assessment by a public information specialist) in group sessions for idea building, as described following this measure.

Suggested Steering Committee Activities for Standard 4.4

The specific questions appended to Measure 4.4.1 should be addressed by the steering committee if either Measure 4.4.1 or Measure 4.4.2 is completed. These questions help the committee determine whether the court's policies regarding responsiveness to media inquiries are consistent with its values. The data produced for one or all of the three measures for this standard are considered by the steering committee within the standard framework of facilitated group techniques for idea and consensus building. The three issues on which the steering committee's deliberations for these measures should focus are:

1. What do we want to inform the public about? What are our public information objectives? Either NGT or Ideawriting is suitable for addressing this question. The outcome of this step should be a short, prioritized list of items the group agrees are the court's most important public information objectives.

2. How are we informing the public about our programs now, and who is doing it?
With this question, the steering committee considers whether the court is informing the public about the most relevant issues and using the right people to deliver the information. The facilitator may wish to guide the steering committee's discussion by asking members to respond to the following prompt: What did you find most striking in the data and reports you read about the court's public information activities? (For example, is public information a coordinated effort? Is the effort balanced in terms of who is involved? Are the needs of elected judges balanced with the need to present information about the entire court?) Any problems with the court's performance in this area should emerge during this discussion.

3. What action should the court take to improve public information practices? Consideration of this question moves the court from performance evaluation to action planning. Here, the answers to the first question (what do we want the public to be informed about?) are weighed against practical constraints: What are the most feasible and effective steps we can take to improve the court's communications with the public?

Finally, the steering committee should include in its deliberations on this measure any findings from work done in relation to Standard 4.1, Independence and Comity, that appear to call for the court to take a more active role in educating the public about its responsibility to maintain the court's independence and about the circumstances that threaten that independence.

Standard 4.5: Response to Change

The trial court anticipates new conditions and emergent events and adjusts its operations as necessary.

Commentary. Effective trial courts are responsive to emergent public issues such as drug abuse, child and spousal abuse, AIDS, drunken driving, child support enforcement, crime and public safety, consumer rights, gender bias, and the more efficient use of fewer resources. Standard 4.5 requires trial courts to recognize and respond appropriately to such public issues. A trial court that moves deliberately in response to emergent issues is a stabilizing force in society and acts consistently with its role of maintaining the rule of law. Courts can support, tolerate, or resist societal pressures for change. In matters for which the trial court may have no direct responsibility but nonetheless may help identify problems and shape solutions, the trial court takes appropriate actions to inform responsible individuals, groups, or entities about the effects of these matters on the judiciary and about possible solutions.

Measurement Overview. One measure is associated with Standard 4.5. It attempts to determine how responsive the trial court is to changes in its environment that manifest themselves as public policy issues (e.g., gender bias, alternative dispute resolution, drunk driving, and child support). Measure 4.5.1 is a retrospective assessment of how the court has responded to public policy issues in the past. It requires that the court construct a narrative account (or case study) of its responses to selected issues. What were the issues and the responses? How timely were the responses? How effective were they? Like other measures in this area, Measure 4.5.1 presupposes that a steering committee has been formed to oversee data collection and its interpretation. Suggestions for steering committee activities are described following Measure 4.5.1.
Measure 4.5.1: Responsiveness to Past Issues

This measure determines how well the court has responded to past changes in its environment. Issues to consider might include demands for the elimination of gender bias, the introduction of alternative dispute resolution programs, and the use of special procedures with respect to individuals who have AIDS. For each issue, has the court maintained the timely flow of cases, conducted hearings, and accommodated the needs of all of the participants? Although the substantive nature of significant past issues may vary across jurisdictions, this measure provides a step-by-step approach to assessing the adequacy of the responses.

Planning/Preparation. The measure relies on a structured approach for collecting a wide array of opinions from a range of persons inside and outside the court. The central task of the measure is to organize the ideas of numerous individuals and to produce a narrative account of how the court has responded to past issues. This account is then used as a springboard for group discussion. Because judges and court managers are asked for their views concerning the nature and effectiveness of the court's past responses, the narrative account will be most objective if a person from outside the court (e.g., a private consultant or university professor) is responsible for preparing it and leading the group discussion. (Hereafter this person is referred to as the facilitator.)

In addition to the steering committee, a group of individuals should be identified to serve as knowledgeable informants for survey interviews and to participate in group discussions at critical points. These individuals should meet at least one of the following three criteria: (1) experience in managing some aspect of the court system, a justice system agency (e.g., police department, public defender's office), or another public (e.g., county executive) or private organization (e.g., bar association, Mothers Against Drunk Driving, law school) involved in the administration of justice; (2) experience in coordinating court policies, procedures, and practices with other public and private organizations (e.g., bench-bar committees, court-citizen task forces); and (3) demonstrated interest and involvement in at least one major public policy issue (e.g., the introduction of guidelines for child support or the development of alternatives to incarceration).

In consultation with the steering committee, the facilitator should prepare a comprehensive list of persons who meet these criteria and then randomly select a set of individuals from the list. The court has two options for drawing the names. The first option is a stratified sample: individuals are classified according to their institutional affiliation, and samples are drawn from each institutional subgroup. This procedure ensures that individuals from all types of institutions are represented. The second option is a nonstratified sample, in which the entire sample is drawn from an alphabetical list of all names. This procedure gives every individual the same chance of being selected regardless of organizational affiliation. For either option, the selection should be made randomly to avoid criticism that participants were invited because they are friends of the court. Finally, the number of individuals selected should be kept within a manageable range, although the specific number may vary from jurisdiction to jurisdiction.
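Both options are easy to script if the list is kept in electronic form. The Python sketch below illustrates the two draws; the roster, its affiliations, and the sample sizes are hypothetical and would be replaced by the court's own list.

    import random

    # Hypothetical roster of qualified informants: (name, institutional affiliation).
    roster = [
        ("A. Alvarez", "bar association"),
        ("B. Bryant", "police department"),
        ("C. Chen", "county executive's office"),
        ("D. Diaz", "bar association"),
        ("E. Evans", "public defender's office"),
        ("F. Flores", "police department"),
    ]

    def nonstratified_sample(roster, n):
        # Option 2: a simple random draw from the full list; every individual
        # has the same chance of selection regardless of affiliation.
        return random.sample(roster, n)

    def stratified_sample(roster, per_subgroup):
        # Option 1: classify by affiliation, then draw from each subgroup so
        # that every type of institution is represented.
        subgroups = {}
        for name, affiliation in roster:
            subgroups.setdefault(affiliation, []).append(name)
        selected = []
        for affiliation, names in subgroups.items():
            k = min(per_subgroup, len(names))  # a subgroup may be small
            selected.extend((n, affiliation) for n in random.sample(names, k))
        return selected

    print(stratified_sample(roster, per_subgroup=1))
    print(nonstratified_sample(roster, 3))

Either routine documents that the draw was random, which helps the court answer any later criticism about how participants were chosen.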
Because the individuals will participate in both an opinion survey and a group discussion, a sample of 15 to 20 individuals is appropriate for most jurisdictions.

Data Collection. The facilitator determines the three most important issues the court has responded to in the past decade by soliciting the opinions of individuals inside and outside the court. Through face-to-face interviews with the 15 to 20 individuals participating in the survey, the facilitator asks the respondents to list the three most important public policy issues that affected the fair and effective administration of justice by the court in the past 10 years. The respondents also are asked to rank each issue along five dimensions: (1) the workload demands the issue places on the court's resources, (2) the issue's degree of complexity, (3) the level of public interest in the issue, (4) the issue's need for specialized treatment and procedures, and (5) the issue's long-term impact on the court's caseload. The facilitator summarizes the responses into group rankings, based on the frequency with which each issue is mentioned. Additionally, the facilitator can put these overall rankings into perspective by summarizing the group's views of each issue along the five dimensions described above. Finally, a brief written summary of this information is provided to the steering committee and used in the next step of the measure.

After identifying (either alone or in consultation with the steering committee) the three most important issues affecting the court in the past decade, the facilitator prepares the narrative account or case study. The purpose of the narrative is to address the following basic questions for each issue: (1) When did the issue begin to emerge? (2) How and when did the court initially become aware of the issue? (3) How did the court initially respond? (4) How did the issue develop further? and (5) How did the court respond to the issue when it fully emerged? To guide the search for answers to these questions, more specific questions can be posed.

o What was the first indication that the issue was emerging? Did its emergence coincide with national or statewide trends? Which individuals or organizations were concerned with the issue? What demands did these individuals or groups make on the courts?

o When did the court become cognizant of the issue? How was the issue brought to the court's attention? How long after the issue arose did the court respond? Which individuals in the court responded to the issue?

o How did the court initially respond? What was the nature of the court's discussion of the issue? Who participated in the discussion? What alternative courses of action were considered, selected, and implemented? Did the court choose to monitor the issue's development?

o How did the issue develop? Did public interest in the issue grow over time? What was the nature of its effects on caseload volume? Did other institutions inside or outside the justice system become involved? How did the individuals initially concerned with the issue react to the court's response?

o How did the court respond to the issue after it had developed more fully? When did the court's leadership take action? What procedural and policy adjustments were made? What additional resources were required or allocated? How did the court monitor potential problems associated with the issue?
The answers to these five sets of questions provide the basis for an informative narrative account of the development of the court's responsiveness to significant issues. The facilitator gathers this information by conducting interviews with past and present presiding judges, court managers, and other individuals involved in the issue. Additionally, the facilitator should collect a variety of documents related to each issue, including memoranda, agendas, administrative documents, and news clippings. Several basic references may be consulted for more detailed information on how case studies are conducted, organized, and written.[5]

Data Analysis and Report Preparation. The analysis of the court's response to each issue consists of addressing three interrelated questions: (1) How satisfied is the court's leadership with the responses to each issue? (2) What worked and what did not? and (3) How satisfied are other individuals with the court's actions? The facilitator should interview key court officials and other individuals who were highly involved in each issue to determine their views on the adequacy of the court's response. The interviews should solicit views on whether responses were timely, comprehensive, and effective. Their purpose is to gain some sense of how well the court responded to each issue and to explore ideas on how reactions to future changes might be improved. Interview results should be incorporated into each issue's narrative account, which is then circulated back to the steering committee. This material also provides a basis for assessing how the court can improve its performance.

The information compiled by the facilitator forms the basis for a thorough review process and the development of an action plan. A recommended approach is to convene the 15 to 20 individuals initially involved in identifying the issues (see the planning/preparation section of this measure) to review each narrative account and corresponding appraisal. The facilitator should focus the discussion by posing a few central questions:

o What lessons can be learned from past responses?

o What accounts for differences between the court's view of its performance and the views of key individuals involved in each issue?

o What actions are necessary to improve the court's sensitivity to public policy issues, its mechanisms for monitoring issues, and its understanding of how issues affect court operations?

A court that is performing well will use the information from this measure and corresponding feedback to design corrective actions that improve how it responds to issues in the future. Specifically, the court should develop an action plan for determining what changes in policies, procedures, and practices should be made to improve the timeliness and effectiveness of its responses. The facilitator can help write the action plan, but substantive recommendations should be the product of the court's reactions to the review process.

Suggested Steering Committee Activities for Standard 4.5

The steering committee plays a role in this measure at two critical stages and at one optional stage. First, members of the steering committee help the facilitator assemble the list of knowledgeable informants from which the survey respondents are randomly drawn. Second, the steering committee may meet to consider the survey results and participate in a group discussion with the facilitator to select three issues for the case studies.
Third (optional), in a group augmented by "outsiders" who served as knowledgeable informants during the data collection, the steering committee interprets the case study results in light of three questions:

o What lessons can be learned from past responses?

o What accounts for differences between the court's view of its performance and the views of key individuals involved in each issue?

o Are actions necessary to improve the court's sensitivity to public policy issues, its mechanisms for monitoring issues, or its understanding of how issues affect court operations?

It is recommended that the steering committee widely circulate its action plan for improving the court's ability to respond to emerging issues. The work of the committee on this standard should be integrated and considered with the collection of data for Standard 4.1, Independence and Comity, and Standard 4.4, Public Education.

End Notes

1. C. Moore, Group Techniques for Idea Building, Applied Social Research Methods Series, vol. 9 (Beverly Hills, CA: Sage Publications, 1987).

2. For example, indicators that are without doubt relevant to "responsibly seeking resources"--such as mandates issued by a court under the doctrine of inherent powers--take meaning from contexts that are different in nearly every court jurisdiction. Is it good or bad that a court has never served the local funding authority with a mandate to pay for the cost of a service that is vital to the court's operations? In one court, the absence of mandates directed to the board of supervisors (a fact that can be established empirically) may be explained by the court's effective handling of the budget process; it does, in fact, perform well in relation to Standards 4.1 and 4.2. In another court, the same circumstance may signal that the court lacks confidence in its capacity to prove that a mandate was a reasonable exercise of inherent power; hence, that court shies away from unpalatable political controversy and risk. This court may be suffering from problems that Standards 4.1 and 4.2 encourage it to remedy. Would it be more to the point to look at the level of funding a court actually enjoys in order to measure performance with respect to Standard 4.2? Unfortunately, there is no way to do this. It would require a definition of judicial activity that is measurable across jurisdictions, and agreement on a definition has never been satisfactorily reached. It would require data based on analysis of court expenditures for comparable activities, which are not available. Comparable activity measurement would require a weighted caseload (workload) system that applies to courts generally, which does not exist today. Thus, it is not possible to determine whether a court's budget is below average, average, or above average, much less whether it is too little, adequate, or too much in absolute terms.

3. Task Force on Principles for Assessing the Adequacy of Judicial Resources, Assessing the Need for Judicial Resources: Guidelines for a New Process (Williamsburg, VA: National Center for State Courts, 1983), p. 6.

4. The underlying principle of this measure is that all of the court's cases deserve to be handled with an equal concern for quality in their disposition.
While the resources (and expertise) required to achieve that quality may be greater for some cases than for others, a disposition in a small claims matter should be just as "good" (procedurally correct, equitable, and proportionately timely) as a disposition in a $1 million medical malpractice case. The guilt of the reckless driver should be as firmly established as that of the homicide defendant.

5. See, for example, R. Yin, Case Study Research Design and Methods (Beverly Hills, CA: Sage Publications, 1984).

------------------------------

Performance Area 5: Public Trust and Confidence

Compliance with law depends, to some degree, on public respect for the court. Ideally, public trust and confidence in trial courts should stem from the direct experience of citizens with the courts. The maxim "Justice should not only be done, but should be seen to be done!" is as true today as in the past. Unfortunately, there is no guarantee that public perceptions reflect actual court performance.

Several constituencies are served by trial courts, and all should have trust and confidence in the courts. These constituencies vary by the type and extent of their contact with the courts. At the most general level is the local community, or the "general public"--the vast majority of citizens and taxpayers who seldom experience the courts directly. A second constituency served by trial courts is a community's opinion leaders (e.g., the local newspaper editor, reporters assigned to cover the court, the police chief, local and State executives and legislators, representatives of government organizations with power or influence over the courts, researchers, and members of court watch committees). A third constituency includes citizens who appear before the court as attorneys, litigants, jurors, or witnesses, or who attend proceedings as a representative, a family friend, or a victim of someone before the court. This group has direct knowledge of the routine activities of a court. The last constituency consists of judicial officers, other employees of the court system, and lawyers--both within and outside the jurisdiction of the trial court--who may have an "inside" perspective on how well the court is performing. The trust and confidence of all these constituencies are essential to trial courts.

Overview of Standards. The central question posed by the three standards in this final area is whether trial court performance--in accordance with standards in the areas of Access to Justice; Expedition and Timeliness; Equality, Fairness, and Integrity; and Independence and Accountability--actually instills public trust and confidence. Standard 5.1 requires that the trial court be perceived by the public as accessible. Standard 5.2 requires that the public believe that the trial court conducts its business in a timely, fair, and equitable manner and that its procedures and decisions have integrity. Finally, Standard 5.3 requires that the trial court be seen as independent and distinct from other branches of government at the State and local levels and that the court be seen as accountable for its public resources. Ideally, a court that meets or exceeds these performance standards is recognized by the public as doing so. In fulfilling its fundamental goal of resolving disputes justly, expeditiously, and economically, the court will not always be aligned with public opinion. Nevertheless, where performance is good and communications are effective, public trust and confidence are likely to be bolstered.
When public perception is distorted and understanding is unclear, good performance may need to be buttressed with educational programs and more effective public information. In addition, because in some instances a court may be viewed as better than it actually is, it is important for courts to rely on objective data as well as public perceptions in assessing court performance.

Overview of Measures. Performance with regard to public trust and confidence depends, in large part, on the court's performance in the other four performance areas. Thus, several of the measures in the other areas that rely on informed opinions (i.e., opinions of individuals who have had contact with the court for various reasons) are appropriate for this performance area as well. Relevant measures are listed under each standard for this area.

This performance area includes three measures that address Standards 5.1, 5.2, and 5.3. These measures are presented under Standard 5.1 and are referred to in the overviews of the other two standards. The measures are 5.1.1, Court Employees' Perceptions of Court Performance; 5.1.2, Justice System Representatives' Perceptions of Court Performance; and 5.1.3, General Public's Perceptions of Court Performance. The first measure is conducted through a mailed survey of court employees, the second through a modified focus group with representatives of the various components of the justice system, and the third via a telephone survey of the general public. Measures 5.1.1 and 5.1.2 provide the court with the most useful information for developing an action plan for improving performance in the area. The third measure, 5.1.3, provides a benchmark of the public's perception of overall performance. The benchmark will serve as a gauge for comparing the results of future surveys of public perception. However, because the general public has little firsthand information about trial courts, the results of this measure provide only limited help in developing an action plan for improvement.

A court undertaking measures in this area may find it helpful to work with professionals skilled in research design (e.g., a marketing research group or professors of research methodology). This is particularly true for the survey of the general public. The methodologist also could help court officials weigh the benefits and costs of conducting the measures and discuss alternatives that address more specific needs of the court and its community. For example, if the court is particularly concerned with the perceptions of the media, it may prefer to focus its attention and resources on that public. A small town hall meeting with members of the media to obtain their perceptions of court performance may be a better approach for gauging public trust and confidence for this court. Similarly, a court might determine, based on the results of measures in other areas, that a followup study of the perceptions of attorneys or jurors is warranted. In that case, the court, with the help of the research methodologist, might modify the survey or focus group measures to better address the population of interest. The court also could explore other options (e.g., interviews) for obtaining specific information of interest.

Finally, it is important to note that the measures in this area examine individuals' perceptions of court performance with regard to the court's administration and operation. They do not examine the extent of public agreement with individual case decisions made by the court.
Standard 5.1: Accessibility

The public perceives the trial court and the justice it delivers as accessible.

Commentary. The five standards grouped in the area of Access to Justice require the removal of barriers that interfere with access to trial court services. Standard 5.1 focuses on the perceptions of different constituencies about court accessibility. A trial court should not only be accessible to those who need its services but also be perceived as accessible by those who may need its services in the future.

Measurement Overview. Several measures from the Access to Justice performance area are useful for measuring court performance for this standard as well. They include:

o Measure 1.2.3, Perceptions of Courthouse Security. A questionnaire is mailed to regular users of the court (e.g., court employees, attorneys, probation officers, and jurors) to determine their perceptions of courthouse security.

o Measure 1.2.6, Evaluation of Accessibility and Convenience by Court Users. The ease and convenience of conducting business with the court are measured through a survey of regular court users (i.e., court employees, attorneys, probation officers, and jurors).

o Measure 1.2.7, Evaluation of Accessibility and Convenience by Observers. Volunteer observers (members of the general public collecting data for the court) are given a survey questionnaire on the ease of conducting business with the court at the end of their first observation day in the courthouse.

o Measure 1.4.1, Court Users' Assessment of Court Personnel's Courtesy and Responsiveness. The courtesy and responsiveness of court personnel are measured through a survey of regular court users (i.e., court employees, attorneys, probation officers, and jurors).

o Measure 1.4.2, Observers' Assessment of Court Personnel's Courtesy and Responsiveness. Volunteer observers are given a questionnaire regarding their treatment by court personnel.

As noted, these measures collect data from several of the court's publics: court employees, attorneys, probation officers, jurors, and members of the general public who are assisting the court with some of the measures. In addition, this standard includes three measures that gauge the court's performance with regard to all of the standards for the Public Trust and Confidence performance area.

Measure 5.1.1 examines the opinions of court personnel regarding court performance through a mailed survey. On a day-to-day basis, court employees are more familiar with the court and its activities than is any other public. They have an important perspective on how the court is performing in the various standard areas. If they are dissatisfied with the court's performance, they are not likely to convey a positive image of the court to members of the general public with whom they have contact, thus lessening the court's ability to instill public trust and confidence.

Measure 5.1.2 uses focus group interviews to obtain the opinions of various members of the justice system regarding the court's performance. Individuals who routinely interact with the court to perform their jobs (e.g., law enforcement officers, attorneys, individuals from social service agencies) are included. These individuals have the advantage of firsthand knowledge of many areas of the court's performance without being actual employees of the court. Finally, Measure 5.1.3 uses a telephone survey to obtain the general public's perception of the court's performance.
Members of the general public have little, if any, firsthand knowledge of the court and its activities. Their perceptions are based on what they read, see, and hear.

Measure 5.1.1: Court Employees' Perceptions of Court Performance

This measure asks court employees about their views of the court's performance in the other four standard areas. Data are collected by means of a questionnaire mailed to court employees.[1] The measure should be conducted by an individual experienced in survey research who is perceived as independent of the court.

Planning/Preparation. Based on the experiences of courts involved in testing the measures, the perceived confidentiality of employee responses to the questionnaire is critical to the success of the measure. Some employees may refuse to participate if they believe their responses will be read by other individuals in the court. Several steps can be taken to help ensure both the reality and the perception that responses will be confidential:

o Contact someone external to the court to conduct the measure. Some courts in the demonstration project requested that staff from their State's Administrative Office of the Courts (AOC) conduct the measure. Private consultants and university faculty also could be approached.

o Emphasize the confidentiality of survey responses in a cover letter accompanying the questionnaire. The following language was included in one court's cover letter: "Please understand that your answers will be completely confidential and no individual responses will be identified. No one in the court will see the completed questionnaires. Rather, the court will receive aggregate results once all responses have been tabulated."

o Do not place any type of code on the questionnaires for tracking who has not yet completed one. Instead, send a followup letter to all respondents thanking those who have completed the questionnaire and asking the others to complete and mail the questionnaire.

o Include with the questionnaire a stamped return envelope with the address of the external individual administering the questionnaire.

o Do not include any demographic questions on the questionnaire. This is especially critical for small courts in which only a few employees fall into the various demographic categories.

After considering the issue of confidentiality, the next step is to review Form 5.1.1, Court Employees' Perceptions of Court Performance. The questionnaire items address standards for access to justice; expedition and timeliness; equality, fairness, and integrity; and independence and accountability. Some of its content was drawn from other surveys of the public's perceptions of the justice system.[2] The questionnaire can be used as is or modified to include questions on the most salient issues facing each community.[3]

Data Collection. A questionnaire should be sent to each full-time employee listed in the court's personnel files.[4] As noted earlier, a return envelope bearing the survey administrator's address should be included with each questionnaire. Two weeks after the questionnaire is sent, a reminder letter should be sent to all employees asking them to complete the questionnaire if they have not done so already.

Data Analysis and Report Preparation. The numeric code corresponding to each response is entered into a computer file, and the percentage of responses in each category is calculated for each question using a statistical software package. Evaluations of the court are coded differently across items: agreement with one question may indicate a positive appraisal, whereas agreement with another may indicate a negative appraisal. As a result, if the response "strongly agree" is always scored as "1," a score of "1" or "2" may indicate good performance on some questions and poor performance on others. For Section II of the questionnaire, items 5, 7, 10, 12, 14, 16, and 20 are negative items. In general, the more items on which a court is well perceived (a higher percentage of "3" and "4" responses on positive items and "1" and "2" responses on negative items), the closer the court comes to meeting the standards for public trust and confidence.

The items on the questionnaire also can be analyzed by performance area. Items 2, 6, 10, 14, and 17 refer to the standards in Performance Area 1, Access to Justice; items 3, 7, and 11 refer to the standards in Performance Area 2, Expedition and Timeliness; items 4, 8, 12, 15, 18, and 20 refer to the standards in Performance Area 3, Equality, Fairness, and Integrity; and items 5, 9, 13, 16, and 19 refer to the standards in Performance Area 4, Independence and Accountability. The percentage of positive responses on the items in each of these areas can be reviewed to determine the areas in which court employees approve of the court's performance and the areas in which they consider the court's performance lacking.
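Because some items are reverse coded, the area-level tabulation is easy to get wrong by hand. The following Python sketch applies the scoring convention and item groupings described above; the form of the input data (one dictionary of item responses per completed questionnaire) is an assumption made for illustration.

    # Section II item groupings from the text; items in NEGATIVE_ITEMS are reverse coded.
    NEGATIVE_ITEMS = {5, 7, 10, 12, 14, 16, 20}

    AREAS = {
        "Access to Justice": [2, 6, 10, 14, 17],
        "Expedition and Timeliness": [3, 7, 11],
        "Equality, Fairness, and Integrity": [4, 8, 12, 15, 18, 20],
        "Independence and Accountability": [5, 9, 13, 16, 19],
    }

    def is_favorable(item, response):
        # Favorable side of the 1-4 scale: "3" or "4" on positive items,
        # "1" or "2" on negative (reverse-coded) items.
        return response in ((1, 2) if item in NEGATIVE_ITEMS else (3, 4))

    def percent_favorable_by_area(questionnaires):
        # questionnaires: list of {item_number: response_code} dictionaries.
        results = {}
        for area, items in AREAS.items():
            answered = favorable = 0
            for answers in questionnaires:
                for item in items:
                    if item in answers:  # skip unanswered items
                        answered += 1
                        favorable += is_favorable(item, answers[item])
            results[area] = round(100.0 * favorable / answered, 1) if answered else None
        return results

    # Example with two fabricated questionnaires:
    sample = [{2: 4, 3: 2, 5: 1, 7: 2}, {2: 3, 3: 3, 5: 4, 7: 1}]
    print(percent_favorable_by_area(sample))

Areas with the lowest percentages flag where employees consider the court's performance lacking.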
Measure 5.1.2: Justice System Representatives' Perceptions of Court Performance

This measure relies on modified focus group interviews to obtain the observations of representatives of other components of the justice system regarding the court's performance.[5]

Planning/Preparation. This measure involves conducting facilitated group discussions with 8 to 12 individuals per group.[6] Each group should include representatives from other components of the justice system (e.g., law enforcement personnel, corrections, the local bar) as well as agencies that regularly interact with the court (e.g., child protective services). During the demonstration of the measures, the courts discovered that it was better to have individuals with the same level of knowledge participate in each group. That is, it is better to have individuals who are very familiar with the court participate in one session and those who are less familiar participate in a separate session.[7]

An interview guide should include questions on the court's performance regarding accessibility; expedition and timeliness; equality, fairness, and integrity; and independence and accountability. The guide should be prepared with the help of a professional group moderator, who should also conduct the group sessions and draft a report. To find a moderator, contact consumer research firms, universities (the marketing, psychology, or sociology departments are the best places to begin the inquiry), and local psychologists. Once a moderator has been recruited, he or she should meet with court representatives to determine the objectives for the focus group sessions and draft a preliminary outline of topics that will be covered during the interviews.

The first step in recruiting participants for the focus groups is to compile a list of agencies that must interact with the court on a regular basis to do their work (e.g., law enforcement, prosecutor's office, public defender's office, social service agencies, probation office, corrections). The next step is to identify individuals from these organizations who could serve as representatives.
Directors and managers would be appropriate, as would individuals who routinely interact with the court. Judges and other court employees should identify individuals they see on a regular basis to expand the list of possible candidates. The identified individuals should be contacted by telephone or letter to determine their willingness and availability to participate in a focus group discussion. Individuals should be screened to determine whether they are friends of other prospective participants and whether they have strong views about the court system (e.g., they have a relative in court administration). Participants are selected randomly from the individuals who meet the screening requirements of availability and neutrality. If three group sessions are planned, 24 to 36 individuals are selected because each group should have 8 to 12 participants. The individuals should be placed in groups according to their familiarity with the court, which will help ensure that discussions are not dominated by two or three individuals who have more knowledge of court procedures and activities. Each group should also include representatives from a variety of agencies (e.g., all law enforcement officials should not be in one group).
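A simple way to honor both constraints (grouping by familiarity while spreading agencies across each group) is to sort each familiarity tier by agency, shuffle within agencies, and deal participants out round-robin. The Python sketch below is one illustration; the participant records and tier labels are hypothetical.

    import random
    from collections import defaultdict

    # Hypothetical screened participants: (name, agency, familiarity tier).
    participants = [
        ("G. Grant", "law enforcement", "high"),
        ("H. Hale", "prosecutor's office", "high"),
        ("I. Ibarra", "probation office", "high"),
        ("J. Jones", "law enforcement", "low"),
        ("K. Kim", "social services", "low"),
        ("L. Lopez", "public defender's office", "low"),
    ]

    def assign_groups(participants, groups_per_tier):
        # Separate participants by familiarity tier so each session brings
        # together individuals with a comparable level of knowledge.
        tiers = defaultdict(list)
        for person in participants:
            tiers[person[2]].append(person)
        sessions = {}
        for tier, members in tiers.items():
            # Shuffle, then stable-sort by agency: a round-robin deal then
            # spreads each agency's representatives across the tier's groups.
            random.shuffle(members)
            members.sort(key=lambda p: p[1])
            groups = [[] for _ in range(groups_per_tier)]
            for i, person in enumerate(members):
                groups[i % groups_per_tier].append(person)
            sessions[tier] = groups
        return sessions

    print(assign_groups(participants, groups_per_tier=1))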
A conference room is the best setting for a focus group interview, although focus groups also can be conducted in a home, hotel, or community meeting room. A neutral setting away from the courthouse is best. Participants should be arranged around a rectangular table with the moderator sitting at the head of the table.

Based on the experiences of the courts participating in the demonstration project, the confidentiality of participants' remarks during the group sessions is necessary for the successful application of this measure. Individuals from the court should not be present during the sessions and should not watch or listen to the discussions from a separate room (which is a typical setup for focus group sessions).[8] It is important that someone take notes during the session and that an audiotape be made as a backup. The moderator should stress to participants that the notes and tapes will be used only in the preparation of the report and that the report will identify themes rather than individual comments. If taping the session becomes a problem, the taping should be stopped.

Using the objectives and preliminary outline developed earlier, the moderator and court representative should determine the specific topics that will be addressed and the approximate amount of time that will be spent on each topic during the interview session. The resulting guide serves as a reference for the moderator to ensure that all important topics are discussed during the session.

Data Collection. The same moderator should conduct all three group sessions. Discussions should flow freely, but each should proceed along roughly the following format:

o Introduction: The moderator should greet all of the participants, introduce himself or herself, introduce the participants to each other, and briefly describe the objective for the group session. Additionally, the moderator should establish an approximate duration for the session and describe the session's rules (e.g., one person talks at a time, no smoking).

o Warmup: Each participant should be given an opportunity to discuss briefly his or her affiliation with the court process. The moderator should begin the discussion with a broad topic. For example, during the demonstration of this measure, one court conducted a brief brainstorming session to identify users of the courts and their expectations of the court system.

o Main discussion: After allowing some discussion of the warmup questions, the moderator should guide the group into a discussion of the specific topics listed in the moderator guide. To maximize the utility of the focus group session, it is essential that only the moderator guide the discussion. This need to keep the discussion focused is the principal reason for requiring an experienced moderator.

o Wrapup and closing: At the end of the session, the moderator should recap what he or she perceived to be the discussion's major points or conclusions to verify the accuracy and importance of those points.

Data Analysis and Report Preparation. Following the three sessions, the moderator should prepare a short report that covers the following topics: (1) the reasons the focus groups were conducted and the types of information sought, (2) a description of the groups (e.g., the types of individuals included, the size of the groups), (3) themes that emerged during the discussions, and (4) recommendations and conclusions that were developed during each group and as a result of all three groups.

Measure 5.1.3: General Public's Perceptions of Court Performance

This measure is designed to solicit the opinions of the general public by means of a telephone survey.[9] The survey includes questions concerning the court's performance in each of the other four standard areas.

Planning/Preparation. Application of this measure requires the court to contract with a consulting firm that regularly conducts telephone surveys. The experience of courts in the demonstration project indicates that the measure is likely to be unsuccessful if attempted inhouse. To obtain a valid sample and ensure reliable results, a professional research/marketing firm is needed.

The first step is to select the contractor who will conduct the measure. The court may wish to release a request for proposals (RFP) to obtain bids from relevant organizations. One court in the demonstration project asked for the following in its RFP: (1) the contractor's experience in conducting similar surveys; (2) the qualifications and experience of key personnel assigned to the project and their resumes; (3) a description of the telephone facility to be used and the relationship between the contractor and the facility; (4) a description of the sampling frame, how the sample will be drawn from the sampling frame, and the estimated sampling error; (5) the work schedule and timeframe for completion of the project; and (6) the proposed budget. In addition, the court specified the following responsibilities for the contractor:

o Complete 1,000 interviews by telephone with county residents who are at least 18 years of age. A draft of the survey is provided in an attachment as a guide to survey length and as a means of determining the amount of phone time needed to complete each survey. The final survey may differ from the attached draft, but not significantly.

o Finalize the survey instrument and pretest it.

o Select the sampling frame to be used and draw an appropriate sample from this list.

o Conduct all telephone surveys from an inhouse facility or through supervised staff at a calling facility that is used regularly by the contractor.

o Encode and clean all data collected via the survey instrument for computer analysis.
o Prepare frequency tabulations by demographic characteristics for all survey items, including, at a minimum, numbers and percentages by response category.

o Provide all data in (specify software)-readable format on 3.5-inch diskettes.

o Provide all documentation needed to analyze the data.

o Print all survey forms.

o Cover all long-distance charges incurred in conducting the survey.

o Provide a written description of the methodology used, estimates of the sampling error, limitations of the research, and a copy of the final survey instrument used.

Once the contractor has been selected, the coordinator for the measure reviews Form 5.1.3, Public Perceptions of Court Performance, with the contractor to determine what modifications might be necessary to increase its relevance for the court's jurisdiction. Form 5.1.3 includes questions associated with each of the four standard areas.[10] As part of their review, the coordinator and contractor should consider what the court wants to learn as a result of the survey. Is the court interested in learning the public's perceptions of specific areas of court performance, regardless of the public's actual experience with the court? Or does the court want to know the perceptions of more informed members of the public who have had some contact with the court? If the latter is desired, the coordinator should instruct the contractor to use question 2 as a screening question: if a respondent has had no contact with the court, the interviewer should skip questions 4-18.

Data Collection. The contractor trains interviewers with regard to the questionnaire to ensure standardization in the data collection process. The contractor then conducts the telephone interviews with the sample drawn according to the specifications in the contractor's approved proposal.

Data Analysis and Report Preparation. The contractor ensures the data are entered into a computer file and checked for accuracy. The contractor then analyzes the data and prepares a report, which should include the percentage of each response for each question and highlight the areas in which the court is perceived as performing well and those in which improvement is needed. Responses by subgroups of respondents (i.e., by age, education, gender, income, previous contact with the court, and race/ethnicity) also can be reviewed for discernible patterns.

Standard 5.2: Expeditious, Fair, and Reliable Court Functions

The public has trust and confidence that basic trial court functions are conducted expeditiously and fairly and that court decisions have integrity.

Commentary. As part of effective court performance, Standard 5.2 requires a trial court to instill in the public trust and confidence that basic court functions are conducted in accordance with the standards in the areas of Expedition and Timeliness and Equality, Fairness, and Integrity.

Measurement Overview. In addition to Measures 5.1.1, 5.1.2, and 5.1.3, described in the previous section, two measures from Performance Area 3, Equality, Fairness, and Integrity, are useful indicators as well. They are:

o Measure 3.3.1, Evaluations of Equality and Fairness by the Practicing Bar. This measure ascertains the practicing bar's perceptions of the equality and fairness of the court's decisions and actions. Members of the bar who appear in court are asked to assess the fairness and equality of the court's actions and decisions through a survey questionnaire.

o Measure 3.3.2, Evaluation of Equality and Fairness by Court Users.
All individuals (litigants, jurors, witnesses, and victims) who are involved in a court case form impressions of the way they and others are treated in the courthouse. This measure collects information about their impressions as they relate to factors indicative of fair and equal treatment.

Standard 5.3: Judicial Independence and Accountability

The public perceives the trial court as independent, not unduly influenced by other components of government, and accountable.

Commentary. The policies and procedures of the trial court, and the nature and consequences of interactions of the trial court with other branches of government, affect the perception of the court as an independent and distinct branch of government. A trial court that establishes and respects its role as part of an independent branch of government and diligently works to define its relationships with the other branches presents a favorable public image. Obviously, the opinions of community leaders and representatives of other branches of government are important to perceptions of the court's institutional independence and integrity. Perceptions of other constituencies (e.g., those of court employees) about court relationships with other government agencies, its accountability, and its role within the community also should not be overlooked as important contributions to a view of the court as both an independent and an accountable institution.

Measurement Overview. In addition to Measures 5.1.1, 5.1.2, and 5.1.3, described under Standard 5.1, four measures for Performance Area 4, Independence and Accountability, are useful to review as well. They are:

o Measure 4.1.1, Perceptions of the Court's Independence and Comity. This measure uses a survey to evaluate the court's performance in achieving institutional integrity and comity in intergovernmental relations. Opinions about issues related to the independence of the court and the quality of its relations with professional constituent groups and other government agencies are sought from judges, court employees, and representatives of other government organizations who interact with the court.

o Measure 4.3.1, Assessment of Fairness in Working Conditions. This measure elicits unstructured information about fairness in personnel practices directly from court employees.

o Measure 4.3.2, Personnel Practices and Employee Morale. This measure uses a mailed survey questionnaire to obtain employee responses to questions about fairness and personnel practices related to employee morale and competence.

o Measure 4.4.2, Assessment of the Court's Media Policies and Practices. This measure provides data about whether the court's policies and practices for responding to media inquiries are well understood by both court employees and media representatives and are satisfactory to both groups. It involves conducting two sets of surveys (one for media representatives and one for court employees) in an open-ended interview format.

End Notes

1. Although the survey addresses the perceptions of court employees, the instrument can be modified easily to address the perceptions of other publics such as attorneys, jurors, and litigants. Depending on the size of the population under study, the court may want to include some demographic questions on the modified instrument.
2. See, for example, Citizens' Commission to Improve Michigan Courts, Final Report and Recommendations to Improve the Efficiency and Responsiveness of Michigan Courts (Lansing, MI: Michigan Supreme Court, 1986); GMA Research Corporation, Washington State Judicial Survey (Olympia, WA: Office of the Administrator for the Courts, State of Washington, 1988); and Yankelovich, Skelly, and White, Inc., The Public Image of Courts: Highlights of a National Survey of the General Public, Judges, Lawyers and Community Leaders (Williamsburg, VA: National Center for State Courts, 1978).

3. If a court is interested in its performance in only one or two standard areas, the questionnaire can be modified by adding several questions in the areas of interest and eliminating questions from the other areas. The results of this survey will be more reliable with regard to public perception of court performance in the specific areas, but they will be less reliable with regard to public perception of the court's overall performance. As drafted, the instrument includes one question for each of the 19 standards in the first four performance areas.

4. Although a sample of employees could be drawn, there is value in soliciting everyone's opinions unless cost is a major consideration. In a few very large courts, the number of employees may exceed 1,000. For these courts, a systematic sample of employees should be selected.

5. The focus group sessions will not include many of the technical aspects of traditional focus group sessions, such as video recording and observation of the group through a one-way mirror. The cost of conducting these sessions will therefore be considerably less than for typical focus group sessions. The primary expense will be for the services of a professional moderator/facilitator.

6. The description of this measure relies on the work of D. Morgan, Focus Groups as Qualitative Research (Beverly Hills, CA: Sage Publications, 1988). For more information on focus groups, see R.A. Krueger, Focus Groups: A Practical Guide for Applied Research (Beverly Hills, CA: Sage Publications, 1988).

7. Group decisionmaking software may provide an alternative to conducting sessions in person. The software provides a forum for discussions while ensuring the anonymity of participants' comments.

8. The coordinator for the measure may be included if the individual is seen as neutral (e.g., someone from the AOC or from an outside institution such as a university).

9. Although the survey addresses the perceptions of the general public, the instrument can be modified easily to address the perceptions of other publics such as attorneys, jurors, and litigants.

10. Items 23 to 29 of the questionnaire are relevant to Measure 1.5.3. The items seek information on the kinds of people who do not access the courts and the reasons they do not. See Measure 1.5.3 in Performance Area 1, Access to Justice, for more information.

------------------------------

Appendix A: Bibliography

American Bar Association. Standards Relating to Juror Use and Management. Chicago, IL: American Bar Association, 1983.

Arkin, H., and R. Colton. Tables for Statisticians. New York: Barnes and Noble, 1963.

Belasco, J.A., and R.C. Stayer. Flight of the Buffalo: Soaring to Excellence, Learning To Let Employees Lead. New York: Warner Books, 1993.

Blankenship, M.B., J.B. Spargar, and W.R. Janikowski. "Accountability v. Independence: Myths of Judicial Selection." Criminal Justice Policy Review 6 (1) (1992): 69-79.

Bureau of the Census. Statistical Abstract of the United States, 1988. Washington, DC: U.S. Department of Commerce, 1989.
Bureau of Justice Assistance. Planning Guide for Using the Trial Court Performance Standards and Measurement System. Washington, DC: U.S. Department of Justice, 1997.

Bureau of Justice Assistance. Trial Court Performance Standards and Measurement System With Commentary. Washington, DC: U.S. Department of Justice, 1997.

Bureau of Justice Assistance. Trial Court Performance Standards and Measurement System (Program Brief). Washington, DC: U.S. Department of Justice, 1997.

Bureau of Justice Assistance. Trial Court Performance Standards and Measurement System Implementation Manual. Washington, DC: U.S. Department of Justice, 1997.

Chapper, J., and R. Hanson. Three Papers on Understanding Reversible Error in Criminal Appeals. Williamsburg, VA: National Center for State Courts, 1979.

Citizens' Commission to Improve Michigan Courts. Final Report and Recommendations to Improve the Efficiency and Responsiveness of Michigan Courts. Lansing, MI: Michigan Supreme Court, 1986.

Clynch, E., and D.W. Neubauer. "Trial Courts as Organizations: A Critique and Synthesis." In Administration and Management of Criminal Justice Organizations: A Book of Readings, Stan Stojkovic et al. (eds.), 2d ed. Prospect Heights, IL: Waveland Press, Inc., 1994.

Cooper, C.S. Expedited Drug Case Management. Washington, DC: U.S. Department of Justice, Office of Justice Programs, Bureau of Justice Assistance, 1994.

Covey, S.R. Principle-Centered Leadership. New York: Summit Books, 1991.

Duren v. Missouri, 439 U.S. 357 (1979).

Ellickson, P., and J. Petersilia. Implementing New Ideas in Criminal Justice (R-2929-NIJ). Santa Monica, CA: RAND Corporation, 1983.

Flemming, R., P. Nardulli, and J. Eisenstein. "The Timing of Justice in Felony Trial Courts." Law and Policy 9 (2) (April 1987): 179-206.

Gallas, G., and E.C. Gallas. "Court Management Past, Present, and Future: A Comment on Lawson and Howard." Justice System Journal 15 (2) (1991): 605-616.

GMA Research Corporation. Washington State Judicial Survey. Olympia, WA: Office of the Administrator for the Courts, State of Washington, 1988.

Goerdt, J.A., et al. Examining Court Delay: The Pace of Litigation in 26 Urban Trial Courts, 1987. Williamsburg, VA: National Center for State Courts, 1989.

Goerdt, J.A., C. Lomvardias, and G. Gallas. Reexamining the Pace of Litigation in 39 Urban Trial Courts. Washington, DC: U.S. Department of Justice, Bureau of Justice Assistance, 1991.

Goldkamp, J., and M. Gottfredson. Guidelines for the Pretrial Release Decision: Superior Court of Arizona, Maricopa County; Circuit and County Courts, Dade County; Boston Municipal Court; and Suffolk County Superior Court. Bail Guidelines Project. Philadelphia, PA: Temple University, 1985.

Goodman, M.L. "Effective Case Monitoring and Timely Dispositions: The Experience of One California Court." Judicature 76 (5) (February-March 1993): 254-257.

Gray, E.B. "Day in the Life of a Multi-Door Courthouse." Negotiation Journal 9 (3) (July 1993): 215-221.

Hardenbergh, D. "Planning and Design Considerations for Trial Courtrooms." State Court Journal 14 (4) (Fall 1990): 32-38.

Headley-Edwards, N., and D.A. Ryan. Comprehensive Adjudication of Drug Arrestees (CADA) Project, 1988-1990. San Jose, CA: Santa Clara County Office of the County Executive, 1990.

Hewitt, W. Court Interpretation: Model Guides for Policy and Practice in the State Courts. Williamsburg, VA: National Center for State Courts, 1995.
"Expedited Drug Case Management Programs: Some Lessons in Case Management Reform." Justice System Journal 17 (1) (1994): 19-40. Jacoby, J.E., E.C. Ratledge, and H.P. Gramckow. Expedited Drug Case Management Programs: Issues for Program Development, Executive Summary. Washington, DC: U.S. Department of Justice, National Institute of Justice, 1992. Johnson, S.S., and P. Yerawadekar. "Courthouse Security." Court Management Journal 3 (1981): 8-12. Kairys, D., J.B. Kadane, and J.P. Lehoczky. "Jury Representation, A Mandate for Multiple Source Lists." California Law Review 65 (1977): 776-827. Kiely, T.J. "Managing Change: Why Reengineering Projects Fail." Harvard Business Review 73 (2) (1995). King County Department of Public Safety, Seattle, WA. King County Department of Public Safety 1989 Annual Report, 1990. Knuth, D.J. The Art of Computer Programming, Vol. 2, Semi-Numerical Algorithms. Reading, MA: Addison-Wesley Publishing Company, 1969. Kotter, J.P. "Why Transformation Efforts Fail." Harvard Business Review 73 (2) (1995): 59-67. Krueger, R.A. Focus Groups: A Practical Guide for Applied Research. Beverly Hills, CA: Sage Publications, 1988. Luskin, M., and R. Luskin. "Why So Fast, Why So Slow: Explaining Case Processing Time." Journal of Criminal Law and Criminology 77 (1) (Spring 1986): 190-214. MacCoun, R.J., and T.R. Tyler. "Basis of Citizens' Perceptions of the Criminal Jury: Procedural Fairness, Accuracy, and Efficiency." Law and Human Behavior 12 (3) (September 1988): 333-352. Maddi, D. Judicial Performance Polls. Chicago: American Bar Foundation, 1977. Mahoney, B., et al. Changing Times in Trial Courts: Caseflow Management and Delay Reduction in Urban Trial Courts. Williamsburg, VA: National Center for State Courts, 1988. Martin, J.A. Approach to Long-Range Strategic Planning for the Courts. Alexandria, VA: State Justice Institute, 1992. Menaster, Spooner, and Greenberg. "Getting a Fair Cross-Section of the Community." Forum (1989): 14-21. Moore, C. Group Techniques for Idea Building. Applied Social Research Methods Series, Vol. 9. Beverly Hills, CA: Sage Publications, 1987. Morgan, D. Focus Groups as Qualitative Research. Beverly Hills, CA: Sage Publications, 1988. Munsterman, G.T., and J.T. Munsterman. "The Search for Jury Representativeness." Justice System Journal 11(1986): 59-78. Nagel, I. "The Legal/Extra-Legal Controversy: Judicial Decisions in Pretrial Release." Law and Society Review 17 (1983): 481. National Center for State Courts. The Americans with Disabilities Act: Title II Self-Evaluation. Williamsburg, VA, 1992. National Center for State Courts. Methodology Manual for Jury Systems. NCSC Publication CJS-004. Williamsburg, VA, 1981. National Center for State Courts. A Supplement to the Methodology Manual for Jury Systems: Relationships to the Standards Relating to Juror Use and Management. Williamsburg, VA, 1987. National Institute of Law Enforcement and Criminal Justice. Multiple Lists for Juror Selection: A Case Study for San Diego Superior Court. Washington, DC: Law Enforcement Assistance Administration, U.S. Department of Justice, 1978. National Sheriffs' Association. Court Security: A Manual of Guidelines and Procedures. Washington, DC: Law Enforcement Assistance Administration, U.S. Department of Justice, 1978. Osborne, D., and T. Gaebler. Reinventing Government: How the Entrepreneurial Spirit Is Transforming the Public Sector. Reading, MA: Addison-Wesley Publishing Company, 1992. Philip, C. How Bar Associations Evaluate Sitting Judges. 
Philip, C. How Bar Associations Evaluate Sitting Judges. New York: Institute for Judicial Administration, 1976.

Press-Enterprise Co. v. Superior Court of California, 464 U.S. 501 (1984).

Press-Enterprise Co. v. Superior Court of California for Riverside County, 478 U.S. 1 (1986).

Schultz, W.L., C. Bezold, and B.P. Monahan. Reinventing Courts for the 21st Century: Designing a Vision Process. Williamsburg, VA: National Center for State Courts, 1993.

Sponzo, M.J. "Independence vs. Accountability: Connecticut's Judicial Evaluation Program." Judge's Journal 26 (2) (Spring 1987): 13-17.

Task Force on Principles for Assessing the Adequacy of Judicial Resources. Assessing the Need for Judicial Resources: Guidelines for a New Process. Williamsburg, VA: National Center for State Courts, 1983.

Taylor v. Louisiana, 419 U.S. 522 (1975).

Tyler, T. "What Is Procedural Justice? Criteria Used by Citizens to Assess the Fairness of Legal Procedures." Law and Society Review 22 (1988): 103.

U.S. Congress. Senate Committee on the Judiciary. Juvenile Courts: Access to Justice. Hearing before the Subcommittee on Juvenile Justice, March 4, 1992.

U.S. Department of Health and Human Services. Final Report on the Validation and Effectiveness Study of Legal Representation Through Guardian Ad Litem. Washington, DC, 1994.

Wagenknecht-Ivey, B.J. An Approach to Long-Range Strategic Planning for the Courts: Training Guide. Denver, CO: Center for Public Policy Studies, 1992.

Williams, R.J. "Envisioning the Courts: Old Myths or New Realities?" The Court Manager 9 (4) (1994).

Yankelovich, Skelly, and White, Inc. The Public Image of Courts: Highlights of a National Survey of the General Public, Judges, Lawyers, and Community Leaders. Williamsburg, VA: National Center for State Courts, 1978.

Yin, R. Case Study Research Design and Methods. Beverly Hills, CA: Sage Publications, 1984.

------------------------------

Appendix B: Sources for Further Information

For further information about the Trial Court Performance Standards and Measurement System, contact:

Bureau of Justice Assistance
Adjudication Branch
810 Seventh Street NW.
Washington, DC 20531
Tel: 202-514-5943
World Wide Web: http://www.ojp.usdoj.gov/BJA

Bureau of Justice Assistance Clearinghouse
P.O. Box 6000
Rockville, MD 20849-6000
Tel: 1-800-688-4252
Fax: 301-519-5212
World Wide Web: http://www.ncjrs.org
E-mail: askncjrs@ncjrs.org

Department of Justice Response Center
Tel: 1-800-421-6770

National Center for State Courts
300 Newport Avenue
Williamsburg, VA 23185
Tel: 757-253-2000
Fax: 757-220-0449
World Wide Web: http://www.ncsc.dni.us