Intro: Using AI technologies in talent acquisition
The use of AI-based technologies in talent acquisition is becoming increasingly commonplace as a way to improve process efficiency, diversity, candidate experience and the quality of hires. AI-based screening tools that interact with applicants via video, voice, chatbots, games and psychometric tests are prevalent in the market and have become a magnet for innovation, given the poor candidate and employer experience of traditional assessments.
Along with this trend has come a growing awareness of the risks of using some AI technologies, as highlighted by researchers and amplified by news media reporting on algorithmic and automation biases (Ifeoma Ajunwa, 2019; Jeffrey Dastin, 2018; Raghavan et al., 2019; Rebecca Heilweil, 2019). There is room for valid skepticism given the absence of any form of accreditation for vendors, who often rely on new scientific approaches and claims that are unpublished and have not faced scientific scrutiny.
What is often neglected in evaluating these tools is that it is not only the AI behind the assessment that requires scrutiny, but also the impact of the whole system on both the candidate and the decision maker. For example, a candidate assessment tool with a poor user experience on mobile devices can lead to lower completion rates within the mobile user group, while the AI behind the tool can still be upheld as unbiased based on bias testing conducted only on candidates who completed the assessment.
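To make the completion-rate concern concrete, the sketch below (illustrative only; the applicant log and its device and completed columns are hypothetical) measures completion rates per device group across everyone who started the assessment. A gap here is invisible to score-level bias testing run only on completers.

```python
import pandas as pd

# Hypothetical applicant log: one row per candidate who *started* the
# assessment, whether or not they finished it.
applicants = pd.DataFrame({
    "device":    ["mobile", "mobile", "mobile", "desktop", "desktop", "desktop"],
    "completed": [False,    True,     False,    True,      True,      True],
})

# Completion rate per device group. A large gap here signals a systemic
# fairness problem that bias testing of assessment scores (which sees only
# the candidates who completed) cannot detect.
completion_rates = applicants.groupby("device")["completed"].mean()
print(completion_rates)
```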
Similarly, on the decision-maker front, a system that simply ranks candidates by a fitness score without providing further insights can lead to over-reliance on the ranking. This highlights the need for a holistic, system-level view of fairness that goes beyond the limited bias testing of outcomes currently accepted as sufficient evidence of fairness.
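A typical example of such outcome-level testing is the four-fifths (80%) rule used in US adverse impact analysis. The sketch below applies it to hypothetical selection counts to show what this kind of check does cover: final selection rates only, not the upstream experience that shaped them.

```python
# Four-fifths (80%) rule: each group's selection rate should be at least
# 80% of the selection rate of the most-favoured group.
# The counts below are hypothetical, for illustration only.
selected = {"group_a": 45, "group_b": 28}
applied  = {"group_a": 100, "group_b": 100}

rates = {g: selected[g] / applied[g] for g in selected}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    flag = "OK" if impact_ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```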
Paradoxically, the criticisms of AI are coming at a time when the spotlight is on the progress of diversity in organisations and the movement to interrupt human bias to create greater racial and gender equality.
The impact of the unconscious and conscious biases of human selectors on hiring decisions is well examined (Bertrand & Mullainathan, 2004; Horace McCormick, 2016; Kline et al., 2021; Moss-Racusin et al., 2012). Recent research has shown that unconscious bias training, long held up as a way to mitigate bias, is not effective in interrupting human biases, leading institutions such as the UK and US governments to defund such training (Sean Coughlan, 2020; Tiffany L. Green & Nao Hagiwara, 2020; Williamson & Foley, 2018). This highlights the difficulty of establishing long-term behaviour change in humans around unconscious biases, some of which have evolutionary underpinnings (Haselton et al., 2005). Data- and algorithm-based approaches, on the other hand, can in fact interrupt various unconscious biases in human hiring decisions: the right AI tools can highlight and help remove bias from the recruitment process and deliver a more diverse workforce.
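As a minimal illustration of one such data-driven intervention, the sketch below strips directly identifying fields before an application is reviewed or scored, echoing the name-blind designs motivated by findings such as Bertrand & Mullainathan (2004). The field names are hypothetical, and removing explicit identifiers alone does not guarantee unbiased outcomes, since proxies for protected attributes can remain in the data.

```python
# Name-blind screening sketch: remove fields that directly reveal protected
# attributes before a reviewer (human or model) sees the application.
# Field names are hypothetical, for illustration only.
BLIND_FIELDS = {"name", "gender", "date_of_birth", "photo_url"}

def anonymise(application: dict) -> dict:
    """Return a copy of the application with directly identifying fields removed."""
    return {k: v for k, v in application.items() if k not in BLIND_FIELDS}

application = {
    "name": "Jane Doe",
    "gender": "female",
    "years_experience": 6,
    "skills": ["python", "sql"],
}
print(anonymise(application))  # {'years_experience': 6, 'skills': ['python', 'sql']}
```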
Against this background, we find it important to understand the case for and against AI in the hiring process. We start by identifying key themes in the literature on the challenges of using AI in recruitment, followed by a discussion of how bias can arise in AI and methods to mitigate it. Finally, we propose a set of guidelines, named FAIR (Fair AI in Recruitment), to help developers, operators and users of AI systems examine and ensure fairness. Starting from a working definition of fairness, FAIR takes a holistic system view of fairness, defining quantifiable properties that can be tested. We acknowledge that fairness is a complex topic with many contextual nuances related to social and individual circumstances.
Our goal here is to offer a starting point for establishing a standard of fairness in AI-based hiring tools by building awareness among practitioners of the fairness-related concerns surrounding the use of AI and the steps that can be taken to mitigate them. It is also important to note that we use the terms "artificial intelligence (AI)" and "machine learning (ML)" interchangeably, as both are commonly used to refer to contemporary technologies built on data and algorithms. More accurately, machine learning is considered a sub-field of AI, which covers a broader scope than machine learning alone (Russell & Norvig, 2003).