Fairness challenges in using machine learning
We start by looking at concerns raised by researchers from a sociotechnical standpoint, at the intersection of AI tools and society, given that fairness is first and foremost a social construct.
We searched for publications covering fairness, ethics, and bias in the use of algorithms for hiring and, more broadly, for decisions about people. The intention was not to conduct a comprehensive literature review but to uncover common topics across a selected sample of publications. For a detailed review of the literature on ethics in AI-enabled candidate selection we refer the reader to Hunkenschroer & Luetge (2022), and to Giermindl et al. (2021) for an overview of broader issues in the domain of people analytics.
We selected nine articles (Ajunwa, 2019; Bogen & Rieke, 2018; Dattner et al., 2019; Hunkenschroer & Luetge, 2022; Ifeoma Ajunwa, 2019; Raghavan et al., 2019; Sanchez-Monedero et al., 2020; Tambe et al., 2019; Tippins et al., 2021) that specifically discussed ethical and fairness challenges in using AI for hiring, and identified four key themes in the issues raised by the authors:
- Bias: Does the AI system systematise and amplify biases encoded in the data and/or introduced by poor algorithm design choices?
- Validity: Are inputs and outcomes accurate with regard to what they are expected to measure?
- Accountability: Who is accountable for the outcomes of AI-based decisions?
- Transparency: Can the outcomes of AI be explained, both in terms of the interpretability of individual outcomes and of the system's design principles?
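To make the bias theme concrete, one common operationalisation (not specific to the articles above) is to compare selection rates across candidate groups and flag disparities, as in the "four-fifths rule" used in US employment practice. The sketch below is a minimal illustration under that assumption; the group labels and data are hypothetical.

```python
from collections import Counter

def selection_rates(decisions):
    """Per-group hiring rate. `decisions` is a list of (group, hired) pairs."""
    totals, hires = Counter(), Counter()
    for group, hired in decisions:
        totals[group] += 1
        if hired:
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Under the four-fifths rule, a value below 0.8 flags potential
    adverse impact and warrants further investigation."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: group A is selected at 3/4,
# group B at 1/4, giving an impact ratio of 1/3 (< 0.8).
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates, impact_ratio(rates))
```

A check like this only surfaces outcome disparities; it says nothing about the validity, accountability, or transparency themes, which require scrutiny of what the model measures and how it was designed.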