Reflections in Response to the Nevada Judicial Evaluation Pilot Project

Rebecca Gill

University of Nevada, Las Vegas

Sylvia R. Lazos

University of Nevada, Las Vegas, William S. Boyd School of Law

December 18, 2009

Our colleagues at the University of Nevada, Reno, the authors of the Nevada Judicial Evaluation Pilot Project: Final Report (NJEPP Report) and the Nevada Judicial Evaluation Project: Supplement to the Final Report (NJEP Supplement), have done an admirable job of setting out a framework for a workable program. They succeed in identifying the key components of a successful judicial performance evaluation (JPE) program for Nevada. In the spirit of collaboration, and because of our good-faith interest in making sure that Nevada's JPE system is as fair as it can possibly be, we wish to supplement the NJEPP Report's findings with some additional reflections on creating a system that meets the Commission's goals.

Our key observations are as follows:

1) There is a potential for bias in any judicial evaluation program; steps must be taken to mitigate this possibility.

2) The survey respondent groups must be carefully identified, and instruments must be sensitive to the strengths and weaknesses of the assessments from each group.

3) Because response bias is likely to lead to unreliable results, steps must be taken to boost the response rates of the various respondent groups.

4) Care must be taken when aggregating the survey data, as implicit value judgments are made whenever averages are computed.

5) The process of creating, implementing, interpreting, and utilizing judicial evaluation data will require many different groups of actors; the roles of these groups must be clearly delineated.

6) The use of state-sponsored JPEs in contested judicial elections is unprecedented; using such a program in this context raises critical issues of fairness and threatens the integrity of the electoral process.

7) Because of unconscious biases, female and minority judges are the most likely to be evaluated harshly, even when the JPE system is well designed. In the interest of fairness, the questions most likely to bring unconscious biases into the evaluation process should be either eliminated or weighted less.

8) The Commission should consider creating a system whereby each piece of information is weighted systematically and a priori. Putting more weight on objective measures (e.g., ethical complaints filed against a judge, a judge's record of reversal by a superior court, and the number of cases handled) may help to alleviate some of the potential for bias and unfairness.

Keywords: judicial performance evaluation



Date posted: January 24, 2010  

Suggested Citation

Gill, Rebecca and Lazos, Sylvia R., Reflections in Response to the Nevada Judicial Evaluation Pilot Project (December 18, 2009). Available at SSRN: http://ssrn.com/abstract=1539825 or http://dx.doi.org/10.2139/ssrn.1539825

Contact Information

Rebecca Gill (Contact Author)
University of Nevada, Las Vegas
4505 S. Maryland Parkway
Las Vegas, NV 89154
United States
HOME PAGE: http://faculty.unlv.edu/rwood
Sylvia R. Lazos
University of Nevada, Las Vegas, William S. Boyd School of Law
4505 South Maryland Parkway
Box 451003
Las Vegas, NV 89154
United States

