Gendered Information in Resumes and Hiring Bias: A Predictive Modeling Approach

46 Pages Posted: 12 Apr 2022

Prasanna Parasurama

New York University (NYU) - Leonard N. Stern School of Business

João Sedoc

New York University (NYU) - Leonard N. Stern School of Business

Anindya Ghose

New York University (NYU) - Leonard N. Stern School of Business

Date Written: April 4, 2022

Abstract

We study whether men and women with similar job-relevant characteristics write their resumes differently and, if so, how these differences affect hiring outcomes, using a predictive modeling approach. Using a matched sample of 348,000 resumes, we train a state-of-the-art deep learning model to quantify the extent of gendered information in resumes and to inductively learn the differences between male and female resumes with similar job-relevant characteristics. We then use this model to develop a measure of gender-incongruence, that is, a measure of how much the self-presented gender characteristics in a resume deviate from the self-reported gender of the candidate. Using this measure along with historical hiring data from technology firms, we test whether applicants whose resume gender characteristics deviate from their actual gender (i.e., male resumes with feminine characteristics, female resumes with masculine characteristics) are less likely to receive a callback. We find three main results: (1) there is a significant amount of gendered information in resumes; even among anonymized applicants with similar job-relevant characteristics, our model learns to distinguish between genders with a high degree of accuracy (AUC = 0.81). (2) This gendered information plays a role in hiring bias: women who exhibit masculine characteristics in their resumes are less likely to receive a callback after controlling for job-relevant characteristics. (3) Extant dictionary-based methods underestimate this effect. We discuss these findings in light of algorithmic and human bias in hiring.
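The abstract does not spell out the exact functional form of the gender-incongruence measure. As a minimal sketch of the idea, assume a trained classifier that outputs `p_female`, the predicted probability that a resume was written by a woman; one natural incongruence score is the distance between that prediction and the candidate's self-reported gender. The function name and formula below are illustrative assumptions, not the paper's actual implementation.

```python
def gender_incongruence(p_female: float, self_reported: str) -> float:
    """Illustrative gender-incongruence score in [0, 1].

    p_female: a classifier's predicted probability that the resume
        was written by a woman (e.g., from a deep text model).
    self_reported: the candidate's self-reported gender.

    0 means the resume's presented gender fully matches the
    self-reported gender; 1 means it fully deviates from it.
    """
    if self_reported not in ("male", "female"):
        raise ValueError("self_reported must be 'male' or 'female'")
    # A woman whose resume scores low on p_female presents
    # masculine characteristics, so her incongruence is high;
    # the male case is symmetric.
    return 1.0 - p_female if self_reported == "female" else p_female

# A female candidate whose resume the model scores at p_female = 0.2
# presents markedly masculine characteristics.
print(gender_incongruence(0.2, "female"))  # 0.8
```

Under this sketch, the paper's second finding corresponds to high-incongruence female applicants (masculine-reading resumes) receiving fewer callbacks than low-incongruence ones with the same job-relevant characteristics.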

Keywords: hiring bias, gendered language, NLP

Suggested Citation

Parasurama, Prasanna and Sedoc, João and Ghose, Anindya, Gendered Information in Resumes and Hiring Bias: A Predictive Modeling Approach (April 4, 2022). Available at SSRN: https://ssrn.com/abstract=4074976 or http://dx.doi.org/10.2139/ssrn.4074976

Prasanna Parasurama (Contact Author)

New York University (NYU) - Leonard N. Stern School of Business (email)

44 West 4th Street
Suite 9-160
New York, NY 10012
United States

João Sedoc

New York University (NYU) - Leonard N. Stern School of Business

Anindya Ghose

New York University (NYU) - Leonard N. Stern School of Business (email)

44 West 4th Street
Suite 9-160
New York, NY 10012
United States
