
1. Bias in the recruiting process

Bias in the recruiting process has existed as long as modern-day hiring practices have. Recently, though, there has been a welcome focus on removing bias from hiring, as evidence has continued to roll in that homogeneous teams are less productive and innovative than those that embrace and support diversity.

Hiring companies originally addressed bias with ‘blind applications’ – a trend that gained traction a few years ago. This was largely limited to removing names from applications, the thinking being that doing so would eliminate gender and racial profiling.

It made a difference, but bias persisted: recruiters still made decisions based on the schools candidates attended and the past experience they may have had – and access to both prestigious schools and impressive work experience is itself plagued by bias and favours those with privilege.

It’s worth noting that both past experience and the schools people attended have now been shown to have no bearing on a person’s ability to do a job.

At Sapia, building the most inclusive platform for hiring is a mission that drives us every day.

We know that humans need help removing bias, which is why we firmly believe that technology has to be part of the solution. Used correctly, technology can amplify our ‘humanity’. That said, Artificial Intelligence has a chequered history when it comes to removing bias.

Early attempts still ran through CVs and amplified biases based on gender, ethnicity and age. It’s worth being familiar with the ‘Amazon experiment’ – it highlighted just how flawed relying on CVs is as a quality and fairness filter for hiring.

This highlighted a second fundamental belief for us at Sapia: when it comes to using data to build predictive models that inform and guide decision-making, you need the right input data.
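To make the “right input data” idea concrete, here is a minimal sketch (illustrative only, not Sapia’s actual pipeline): before any modelling, drop application fields known to act as proxies for protected attributes, keeping only job-relevant signals such as structured interview answers. The field names are assumptions for the example.

```python
# Illustrative proxy fields: these can leak gender, ethnicity, age or class.
PROXY_FIELDS = {"name", "school", "graduation_year", "postcode"}

def select_model_inputs(application: dict) -> dict:
    """Keep only fields that are not known bias proxies."""
    return {k: v for k, v in application.items() if k not in PROXY_FIELDS}

application = {
    "name": "J. Smith",
    "school": "Prestige University",
    "postcode": "2000",
    "interview_answers": ["I resolved the customer's issue by..."],
}

cleaned = select_model_inputs(application)
print(cleaned)  # only 'interview_answers' survives
```

Real systems go further than a blocklist, since bias can hide in correlated features, but the principle is the same: the model never sees the proxy fields in the first place.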

A common question asked of Ai is: if it’s built by humans, how can it not be biased? The idea that a human inevitably encodes their own biases in the Ai simply isn’t true if the right science is followed.

Ai systems, especially predictive machine learning models, are the outcome of a scientific process. It’s no different from any other scientific theory in which a hypothesis is tested against data (think, for instance, of how the science around climate change has evolved).

The beauty of the scientific method is that every scientific theory is falsifiable, a condition first articulated by the philosopher of science Karl Popper. A predictive machine learning model is no different.
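Falsifiability has a practical upside here: a model’s claim to fairness can be tested and, if the data contradicts it, rejected. A minimal sketch (not Sapia’s method; group labels and predictions are invented) using the “four-fifths rule”, a common adverse-impact heuristic:

```python
def selection_rates(predictions, groups):
    """Selection rate (share of positive outcomes) per group."""
    rates = {}
    for g in set(groups):
        picks = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def passes_four_fifths(predictions, groups):
    """The fairness hypothesis is falsified if any group's selection
    rate falls below 80% of the highest group's rate."""
    rates = selection_rates(predictions, groups)
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

# Invented predictions (1 = recommended to interview) for two groups.
preds  = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(selection_rates(preds, groups))
print(passes_four_fifths(preds, groups))  # False: group B is under 80% of A
```

Because the test is explicit and repeatable, a biased model cannot hide: the hypothesis “this model selects fairly” is checked against data, exactly as the scientific method demands.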


