Intro: The FAIR Framework
AI (Artificial Intelligence) technology is poised to transform every industry, just as electricity did 100 years ago. Between now and 2030, it is estimated to create $13 trillion of GDP growth. In recent years, HR and recruitment technology has become dense with AI products, and most CHROs' inboxes are overwhelmed with emails about new solutions. There are big, hairy, audacious claims of ROI and liberal use of the latest buzzwords for what are often simple and unsophisticated matching tools.
At the same time, there is growing awareness of the risk in using some AI technology, amid news coverage of algorithmic and automation bias. There is room for valid scepticism: there is no form of vendor accreditation, and vendors often rely on new scientific approaches and claims that are unpublished and lack scientific scrutiny. Regulation is light years behind tech innovation. As the market gets denser with new products, so does the rhetoric around the dangers of AI. In the HR industry, many of these fears centre on the amplification and automation of human biases via AI. This concern is valid, but it ignores the power of AI, and of data more broadly, to identify and mitigate bias if used wisely. Fear is limiting our capacity for real change.
If you are committed to a culture of decision-making with data rather than "gut instinct", then AI literacy and empowerment need to be prioritised in your organisation. This resistance to AI has arisen at the same time that the spotlight is on bias interruption in our organisations and institutions, as the campaign for racial justice and equality has been amplified by the Black Lives Matter movement.
The right AI tool can remove bias from your recruitment process and deliver a more diverse workforce. The right data disperses the burden of ignorance inside a company and can transform your culture. It can do this more effectively than rounds of unconscious bias training, which research has shown does not change attitudes; this finding led the UK government to defund all such training. There is no shortcut that makes the process of AI literacy easy for CHROs. The bar must be held high when you are making life-changing decisions on the basis of data. Three things are needed:
- Self-education: something this paper is designed to help you with.
- Self-regulation: thorough impact assessments that examine the holistic candidate experience, not just the algorithmic components, overseen by a joint team comprising HR, legal and cyber security.
- Support: a guiding framework for making the right decisions.
This paper offers a guiding framework, Fair AI for Recruitment (FAIR), centred on a close examination of what constitutes 'fair' and on the additional steps needed to ensure trust in the technology system, taking into account the technology vendor's organisation and its own systems for transparency and bias mitigation. AI can deliver powerful, better outcomes for recruiters and candidates, but we must ensure that all recruiting AI is fair.
To read the rest of this paper, download it from the title page.