Towards establishing fairness in AI-based candidate screening

As AI tools become more common, people are rightly concerned that algorithmic bias can adversely affect hiring decisions. That concern is understandable: we need look no further than 2018, when Amazon scrapped an experimental AI recruiting tool after it was found to disadvantage women candidates, to see how these technologies can produce unfair outcomes.

Fortunately, there is a solution. Our Smart Interviewer undergoes rigorous testing and analysis to ensure that its functions and outcomes are fair, unbiased, accurate, and explainable. In a world without universal regulation of AI, we build trust by holding ourselves to a stringent standard.
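
To make "rigorous testing and analysis" more concrete, one widely used check in hiring-fairness audits is the adverse-impact ratio, commonly assessed against the "four-fifths" rule. The sketch below is illustrative only and is not our actual test suite: the group names, pass counts, and helper function are hypothetical.

```python
from collections import defaultdict

def adverse_impact_ratio(outcomes):
    """outcomes: iterable of (group, passed) pairs.
    Returns (ratio of lowest to highest selection rate, per-group rates)."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, passed in outcomes:
        totals[group] += 1
        passes[group] += int(passed)
    rates = {g: passes[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical screening outcomes for two candidate groups
sample = ([("group_a", True)] * 48 + [("group_a", False)] * 52
          + [("group_b", True)] * 42 + [("group_b", False)] * 58)
ratio, rates = adverse_impact_ratio(sample)
print(rates)  # selection rate per group
print(f"Adverse-impact ratio: {ratio:.2f} (flag for review if below 0.80)")
```

A ratio below 0.80 is a conventional trigger for closer investigation, not proof of bias on its own; a full audit would also examine accuracy and explainability across groups.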

This whitepaper unpacks our approach to ensuring fairness and trustworthiness in our AI-based recruitment tool.