Reflections in Response to the Nevada Judicial Evaluation Pilot Project

28 Pages. Posted: 30 Jul 2010. Last revised: 31 Aug 2010.

Rebecca D. Gill

University of Nevada, Las Vegas

Sylvia R. Lazos

University of Nevada, Las Vegas, William S. Boyd School of Law

There are 2 versions of this paper

Date Written: December 18, 2009

Abstract

Our colleagues at the University of Nevada, Reno, the authors of the Nevada Judicial Evaluation Pilot Project: Final Report (NJEPP Report) and the Nevada Judicial Evaluation Project: Supplement to the Final Report (NJEP Supplement), have done an admirable job of setting up a framework for a workable program. They succeed in identifying the key components of a successful judicial performance evaluation (JPE) program for Nevada. In the spirit of collaboration, and because of our good-faith interest in ensuring that Nevada’s JPE system is as fair as it can possibly be, we wish to supplement the NJEPP Report’s findings with some additional reflections on creating a system that meets the Commission’s goals.

Our key observations are as follows. (1) There is a potential for bias in any judicial evaluation program; steps must be taken to mitigate this possibility. (2) The survey respondent groups must be carefully identified, and the instruments must be sensitive to the strengths and weaknesses of the assessments from each group. (3) Because response bias is likely to lead to unreliable results, steps must be taken to boost the response rates of the various respondent groups. (4) Care must be taken when aggregating the survey data, as implicit value judgments are made whenever averages are computed. (5) The process of creating, implementing, interpreting, and utilizing judicial evaluation data will require many different groups of actors, and the roles of these groups must be clearly delineated. (6) The use of state-sponsored JPEs in contested judicial elections is unprecedented; using such a program in this context raises critical issues of fairness and threatens the integrity of the electoral process. (7) Female and minority judges are the most likely to be evaluated harshly, even when the JPE system is well designed, because of unconscious biases; in the interest of fairness, the questions most likely to bring unconscious biases into the evaluation process should be either eliminated or weighted less heavily. (8) The Commission should consider creating a system in which each piece of information is weighted systematically and a priori, as sketched below; putting more weight on objective measures (e.g., ethical complaints filed against a judge, a judge’s record of reversal by a superior court, and the number of cases handled) may help to alleviate some of the potential for bias and unfairness.
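To illustrate observation (8), the following is a minimal sketch of how an a priori weighting scheme might combine objective and survey-based components into a single composite score. The component names, weights, and example scores are illustrative assumptions only; they do not reflect the Commission's actual instrument or any weights proposed in the NJEPP Report.

# Hypothetical sketch only: component names, weights, and scores are
# illustrative assumptions, not the Commission's actual instrument.

OBJECTIVE_WEIGHT = 0.6   # assumed a priori emphasis on objective measures
SUBJECTIVE_WEIGHT = 0.4  # assumed a priori emphasis on survey-based measures

def composite_score(objective, subjective):
    """Combine component scores (each already rescaled to 0-1) into one composite."""
    obj_avg = sum(objective.values()) / len(objective)
    subj_avg = sum(subjective.values()) / len(subjective)
    return OBJECTIVE_WEIGHT * obj_avg + SUBJECTIVE_WEIGHT * subj_avg

# Example: a judge with strong objective indicators but middling survey marks.
objective_scores = {
    "ethics_complaints": 0.95,   # few complaints filed -> high score
    "reversal_record": 0.90,     # rarely overturned by a superior court
    "caseload_handled": 0.85,    # number of cases handled, rescaled
}
survey_scores = {
    "attorney_survey": 0.70,
    "court_staff_survey": 0.75,
}
print(round(composite_score(objective_scores, survey_scores), 3))  # -> 0.83

The design point is that fixing the weights before any judge is scored makes the value judgment explicit and auditable, rather than leaving it implicit in however the averages happen to be computed after the fact.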

Keywords: judicial performance evaluation

Suggested Citation

Gill, Rebecca D. and Lazos, Sylvia R., Reflections in Response to the Nevada Judicial Evaluation Pilot Project (December 18, 2009). UNLV William S. Boyd School of Law Legal Studies Research Paper No. 10-36, Available at SSRN: https://ssrn.com/abstract=1650764 or http://dx.doi.org/10.2139/ssrn.1650764

Rebecca D. Gill

University of Nevada, Las Vegas

4505 S. Maryland Pkwy. Box 455029
Las Vegas, NV 89154
United States
702-895-2525 (Phone)
702-895-1065 (Fax)

HOME PAGE: http://www.rebeccagill.net

Sylvia R. Lazos (Contact Author)

University of Nevada, Las Vegas, William S. Boyd School of Law

4505 South Maryland Parkway
Box 451003
Las Vegas, NV 89154
United States

Paper statistics

Downloads: 61
Abstract Views: 896
Rank: 641,973