Reflections in Response to the Nevada Judicial Evaluation Pilot Project
Rebecca D. Gill
University of Nevada, Las Vegas
Sylvia R. Lazos
University of Nevada, Las Vegas, William S. Boyd School of Law
December 18, 2009
UNLV William S. Boyd School of Law Legal Studies Research Paper No. 10-36
Our colleagues at the University of Nevada, Reno, the authors of the Nevada Judicial Evaluation Pilot Project: Final Report (NJEPP Report) and the Nevada Judicial Evaluation Project: Supplement to the Final Report (NJEP Supplement), have done an admirable job of setting up a framework for a workable program. They succeed in identifying key components for a successful judicial performance evaluation (JPE) program for Nevada. In the spirit of collaboration, and because of our good faith interest in making sure that Nevada's JPE system is as fair as it can possibly be, we wish to supplement the NJEPP Report's findings with some additional reflections on creating a system that meets the Commission's goals.
Our key observations are as follows:

1) There is a potential for bias in any judicial evaluation program; steps must be taken to mitigate this possibility.

2) The survey respondent groups must be carefully identified, and instruments must be sensitive to the strengths and weaknesses of the assessments from each group.

3) Because response bias is likely to lead to unreliable results, steps must be taken to boost the response rates of the various respondent groups.

4) Care must be taken when aggregating the survey data, as implicit value judgments are made whenever averages are computed.

5) The process of creating, implementing, interpreting, and utilizing judicial evaluation data will require many different groups of actors, and the roles of these groups must be clearly delineated.

6) The use of state-sponsored JPEs in contested judicial elections is unprecedented; using such a program in this context raises critical issues of fairness and threatens the integrity of the electoral process.

7) Because of unconscious biases, female and minority judges are the most likely to be evaluated harshly, even when the JPE system is well designed. In the interest of fairness, the questions most likely to bring unconscious biases into the evaluation process should be either eliminated or weighted less.

8) The Commission should consider creating a system whereby each piece of information is weighted systematically and a priori. Putting more weight on objective measures (e.g., ethical complaints filed against a judge, a judge's record of reversals by a superior court, and the number of cases handled) may help to alleviate some of the potential for bias and unfairness.
Number of Pages in PDF File: 28
Keywords: judicial performance evaluation, working papers series
Date posted: July 30, 2010; Last revised: August 31, 2010