A study from the University of Washington revealed significant racial and gender bias in artificial intelligence tools used to screen job applications. The researchers found that three open-source language models favored resumes with white-associated names 85% of the time, while favoring those with female-associated names only 11% of the time. Resumes with Black male names fared worst: the models passed them over in favor of other candidates in nearly every comparison. This bias reflects existing societal privileges that are captured in the training data used to develop these AI models.

The experiment manipulated 554 resumes and 571 job descriptions to test for gender bias, race bias, and intersectional bias. Surprisingly, the technology even preferred white men for roles traditionally held by women, such as human resources positions. The study joins a growing body of research documenting bias in AI models, and addressing the issue remains a significant challenge. Commercial models are often proprietary, offering little transparency into the patterns or biases they encode. Nor is simply removing names from resumes a fix: a model can still infer a candidate's identity from other information in the document.
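To illustrate the audit paradigm, here is a minimal sketch of a name-swap test: the same resume body is paired with names carrying different demographic associations, and the model's scores are compared. The names, resume text, and score() function below are illustrative placeholders, not the study's actual materials or models.

```python
# Hypothetical name-swap audit: identical qualifications, varied names.
NAMES = {
    "white_male": ["Todd Becker"],
    "white_female": ["Allison Becker"],
    "black_male": ["Darnell Washington"],
    "black_female": ["Latoya Washington"],
}

RESUME_BODY = "Five years of HR experience. Led onboarding and benefits administration."
JOB_DESCRIPTION = "Hiring a human resources specialist to manage onboarding and benefits."

def score(resume_text: str, job_text: str) -> float:
    """Stand-in for a model's resume-job relevance score.

    A real audit would embed both texts and return their cosine
    similarity; this deterministic placeholder keeps the sketch
    self-contained and runnable.
    """
    return (hash((resume_text, job_text)) % 1000) / 1000.0

results = {}
for group, names in NAMES.items():
    for name in names:
        resume = f"{name}\n{RESUME_BODY}"
        results[(group, name)] = score(resume, JOB_DESCRIPTION)

# If scores diverge across groups despite identical qualifications,
# the ranking itself encodes demographic bias.
for (group, name), s in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{group:13s} {name:20s} score={s:.3f}")
```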

The researchers focused on three top-performing, open-source large language models (LLMs) from Salesforce, Contextual AI, and Mistral. These models are trained to produce numerical representations (embeddings) of documents so that texts can be compared directly. While Salesforce and Contextual AI clarified that the models used in the study were not intended for real-world hiring, both acknowledged the importance of addressing bias and the ethical use of AI. Policy efforts, such as California's law recognizing intersectionality as a protected characteristic and New York City's requirement that companies disclose their use of AI hiring systems, are initial steps toward addressing discrimination in hiring.
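To make the comparison mechanism concrete, here is a minimal sketch of embedding-based screening, assuming a generic open-source embedder (all-MiniLM-L6-v2 via the sentence-transformers library) as a stand-in for the study's three models: the job description and each resume are mapped to vectors, and resumes are ranked by cosine similarity.

```python
from sentence_transformers import SentenceTransformer

# Generic open-source embedding model, used here only as an illustrative
# stand-in for the Salesforce, Contextual AI, and Mistral models.
model = SentenceTransformer("all-MiniLM-L6-v2")

job = "Human resources specialist to manage onboarding and benefits."
resumes = [
    "HR generalist, five years: onboarding, benefits administration, employee relations.",
    "Software engineer, three years: Python, distributed systems, CI/CD.",
]

# With normalized embeddings, cosine similarity reduces to a dot product.
vectors = model.encode([job] + resumes, normalize_embeddings=True)
job_vec, resume_vecs = vectors[0], vectors[1:]
scores = resume_vecs @ job_vec

# Rank resumes by similarity to the job description, highest first.
for text, s in sorted(zip(resumes, scores), key=lambda pair: -pair[1]):
    print(f"{s:.3f}  {text}")
```

Because the ranking is driven entirely by what the embedding model learned from its training data, any demographic signal encoded in those vectors flows straight into the ordering of candidates.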

Removing bias from AI models is a complex challenge, particularly because the models learn from existing societal privileges captured in their training data. The black-box nature of many commercial models makes it difficult for researchers to identify, let alone rectify, those biases. An important part of the solution will be building training datasets that are free of bias in the first place. Researchers next plan to study how human decision makers interact with AI systems, since overreliance on the technology can lead to even more biased decisions. Awareness and action on bias in AI tools and hiring workflows are essential steps toward a more equitable job application process.
