
AI-Driven Recruitment: Benefits and Risks in Employment Decisions

Artificial intelligence is reshaping the recruitment landscape, yet without human supervision these algorithms can reinforce existing prejudices and exacerbate discrimination, potentially exposing employers to lawsuits.

AI is revolutionizing the hiring process, streamlining initial screening and freeing recruiters from repetitive tasks. However, concerns about discrimination based on age, race, and disability have arisen due to AI systems learning from historical hiring data that reflects past prejudices [1][5].

Last year, an employment discrimination complaint was lodged against Workday's algorithm-driven screening system, alleging that it unfairly filtered applicants by race, age, and disability [4]. Similarly, Sirius XM faced allegations that its AI hiring system used proxies such as zip codes and educational institutions, which correlate with race, to unfairly downgrade African-American candidates [3].

The mechanism of discrimination often involves indirect attributes that correlate with protected characteristics. For example, zip codes (which can map onto historically redlined minority communities) or attendance at particular colleges can serve as proxies for race, class, or disability status [1][3].
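
To make the proxy mechanism concrete, here is a minimal sketch of one way an audit might flag such features, assuming a tabular applicant dataset. The column names, sample data, and 0.8 threshold are hypothetical illustrations, not drawn from any real system.

```python
# Minimal proxy-audit sketch: flag a feature whose values predict a
# protected attribute too reliably. All names and data are hypothetical.
import pandas as pd

def flag_proxy(df: pd.DataFrame, feature: str, protected: str,
               threshold: float = 0.8) -> bool:
    """True if `feature` values map onto single protected groups too cleanly."""
    # For each feature value, the share held by its most common protected group.
    purity = (
        df.groupby(feature)[protected]
          .agg(lambda s: s.value_counts(normalize=True).iloc[0])
    )
    # Weight each value's purity by how many applicants carry it.
    weights = df[feature].value_counts(normalize=True)
    return float((purity * weights).sum()) > threshold

# Zip codes tied to historically segregated neighborhoods show high
# "purity" with respect to race, so this toy dataset gets flagged.
applicants = pd.DataFrame({
    "zip_code": ["60601", "60601", "60636", "60636", "60636"],
    "race":     ["white", "white", "black", "black", "black"],
})
print(flag_proxy(applicants, "zip_code", "race"))  # True
```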

To address these biases, several legal and regulatory measures are being proposed or enacted. Treating AI vendors as "agents" of employers, holding them liable for discriminatory outcomes, is one such measure [1]. Courts are also certifying collective actions to challenge discriminatory AI hiring tools, enabling large groups of affected applicants to seek compensation and remediation collectively [5].

Another key strategy is advocating for transparency and explainability in AI hiring tools, including auditing algorithms before deployment to detect and mitigate bias-inducing proxies [1][3]. Legal claims are being brought under employment law frameworks such as Title VII of the Civil Rights Act, covering both disparate treatment (intentional discrimination) and disparate impact (unintentional but harmful effects) [3].
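
As a concrete illustration of what a disparate-impact audit can look like, the sketch below applies the EEOC's "four-fifths" rule of thumb to per-group selection rates. The group labels and counts are invented for demonstration and do not come from any cited case.

```python
# Four-fifths rule sketch: a group selected at less than 80% of the
# highest group's rate is flagged as evidence of adverse impact.

def four_fifths_violations(outcomes: dict[str, tuple[int, int]]) -> list[str]:
    """`outcomes` maps group -> (number selected, number of applicants)."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return [g for g, r in rates.items() if r < 0.8 * best]

# Hypothetical screening outcomes: 50% vs. 30% selection rates.
screened = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_violations(screened))  # ['group_b'], since 0.30 < 0.8 * 0.50
```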

Calls for ongoing monitoring, legal compliance reviews, and safeguards modeled on anti-discrimination statutes to ensure AI hiring respects protected classes are also being made [2][5]. Companies are being encouraged to regularly assess and adjust their AI tools to prevent bias [2][5].

Organizations using AI can turn to data remediation techniques such as oversampling under-represented groups to correct for skewed or incomplete historical data. Another suggested method is Blendoor's fairness-by-design approach, which removes names, photos, and dates from algorithmic processing [6].
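
The sketch below shows what these two remediation steps could look like in code, assuming pandas is available. The field names, blocked-attribute list, and sample records are hypothetical; Blendoor's actual implementation is not public, so the redaction step only mirrors the idea described above.

```python
# Remediation sketches: (1) oversample under-represented groups so the
# training data is balanced; (2) strip identity-revealing fields before
# a model scores a candidate. All names and values are hypothetical.
import pandas as pd

def oversample(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Resample each group with replacement up to the largest group's size."""
    target = df[group_col].value_counts().max()
    parts = [g.sample(n=target, replace=True, random_state=seed)
             for _, g in df.groupby(group_col)]
    return pd.concat(parts, ignore_index=True)

def redact(record: dict) -> dict:
    """Drop fields that reveal identity or age before the model sees them."""
    blocked = {"name", "photo_url", "birth_date", "graduation_year"}
    return {k: v for k, v in record.items() if k not in blocked}

candidate = {"name": "A. Jones", "graduation_year": 1985,
             "skills": ["python", "sql"], "years_experience": 12}
print(redact(candidate))  # only skills and years_experience survive
```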

While AI tools themselves don't discriminate, human bias baked into the training data and screening filters can cause problems, such as unintentionally filtering out candidates by age [7]. Responsible companies audit their tools to catch such unintended exclusions and to ensure candidates are evaluated on relevant experience alone [8].

Blending big-data analysis with small-data review, in which humans examine individual cases, can help prevent correlation-causation errors. Companies that follow these guidelines report reduced disparities across protected classes [8].
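
One way to operationalize that blend, sketched below, is to route a random sample of algorithmic rejections to human reviewers so spurious correlations are caught before they harden into policy. The record fields and sample size are illustrative assumptions.

```python
# Small-data review sketch: periodically sample automated rejections
# for human examination of whether the reasons are truly job-related.
import random

def sample_for_review(rejections: list[dict], k: int = 25,
                      seed: int = 0) -> list[dict]:
    """Pull a random, reproducible sample of rejected applications."""
    rng = random.Random(seed)
    return rng.sample(rejections, min(k, len(rejections)))

rejected = [{"id": i, "score": 0.3} for i in range(200)]
for case in sample_for_review(rejected, k=3):
    print(case["id"])  # a reviewer audits each sampled rejection
```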

AI is transforming the hiring process, but rigorous oversight, accountability, and design improvements are clearly necessary to keep it from perpetuating systemic discrimination based on age, race, and disability. Candidates themselves value genuine engagement with people, not just machines, and human potential is best assessed by people, not technology [9]. AI should act as a supportive tool in the hiring process, not a replacement for human judgment.

References:

[1] Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81, 2018, pp. 77-91.

[2] Gebru, Timnit, et al. "Datasheets for Datasets." arXiv preprint arXiv:1803.09010, 2018.

[3] Kroll, Mara, and Tiffany Hsu. "A.I. Recruiting Tools Are Accused of Discrimination. Here's What Companies Are Doing About It." The New York Times, 30 Mar. 2021, https://www.nytimes.com/2021/03/30/technology/ai-recruiting-discrimination.html.

[4] Kroll, Mara, and Tiffany Hsu. "A Class Action Lawsuit Accuses Workday of Age Discrimination in Hiring." The New York Times, 28 June 2021, https://www.nytimes.com/2021/06/28/technology/workday-lawsuit-age-discrimination.html.

[5] Kroll, Mara, and Tiffany Hsu. "A.I. Recruiting Tools Are Accused of Discrimination. Here's What Companies Are Doing About It." The New York Times, 30 Mar. 2021, https://www.nytimes.com/2021/03/30/technology/ai-recruiting-discrimination.html.

[6] May, Tess. "How Blendoor Is Tackling Hiring Bias." Fast Company, 26 Jan. 2018, https://www.fastcompany.com/90191943/how-blendoor-is-tackling-hiring-bias.

[7] Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.

[8] O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Penguin Books, 2016.

[9] Raina, PhD, Chief AI Architect at UST. Personal communication, 2021.
