Written by: Team PredictiveHire
Want to make recruitment more human? Make it ‘trustless’
Last week, our smart interviewer technology was featured in a glowing piece by the Australian Financial Review. The story was picked up by LinkedIn News Australia, which ran a poll asking users whether they were “comfortable being interviewed by a bot”.
The poll garnered more than 6,500 responses. Perhaps unsurprisingly, 50% of respondents selected the response “No – it’s a job for humans.” Just under a third of LinkedIn users said that they believe chatbot interviewing is “the future”, while 21% said that it’s appropriate only for certain roles.
When you have over 6,500 responses, you can do some meaningful analysis. In this case, “it’s a job for humans” was the prevailing opinion. But in the comments section attached to the poll, we discovered more about how people feel toward Ai, both as a technological construct and as a tool for recruitment. We bucketed the comments into five recurring themes:
- We can’t trust the people that make Ai
- Ai can never remove bias
- Ai aims to replace humans
- Ai is dangerous
- People don’t like chatbots because they aren’t human
Ai hasn’t made a good name for itself lately – take Amazon’s recent facial recognition debacle as a good example – so it’s easy to see why people are resistant to the prospect of Ai moving into a space historically handled by humans. Take a bird’s eye view, and the notion certainly looks preposterous: How could a machine, asking just five questions, ever hope to recreate the capabilities of a seasoned recruiter or talent acquisition specialist?
That is the problem, though: The more ‘human’ aspects of the recruitment process are ruining the game. Ghosting is rampant, by both candidates and recruiters. Ineradicable biases are creating unfairness that permeates organisations from top to bottom. The Great Resignation is putting immense pressure on hirers to move quickly, excluding thousands of applicants based on arbitrary criteria that shift from month to month. Consider, too, these sobering statistics:
- According to a recent global survey by CoderPad, 65% of tech recruiters believe their hiring process is biased
- Mentions of ‘ghosting’ in Glassdoor interview reviews are up 450% since the start of the pandemic (Business Insider, 2021)
- A toxic corporate culture is 10.4 times more powerful than compensation in predicting employee churn (the point here being that hiring poorly decimates an organisation in no time flat)
- 78% of job seekers have admitted to lying on their CVs
Ai is held to an impossible standard
For Ai to qualify as a useable, reliable tool, we expect it to be perfect. We compare it, unfairly, against some ultimate human ideal: The chirpy, well-rested recruiter on their best day. The kind of recruiter who has never ghosted anyone, who has no biases whatsoever, and who finds the right person for the right job, no matter what. Here’s the issue with this comparison: That kind of human doesn’t exist.
For Ai to be a valid and useful tool, and an everyday part of the human recruiter’s toolset, it doesn’t need to be perfect; it only needs to be better than the alternative. Can’t be done? Take one example: our Smart Interviewer eliminates the problem of ghosting completely. Each of your candidates gets an interview, and every single person receives feedback. Even better? 98% of the candidates who use our platform find that feedback useful.
(That is to say nothing of the way it removes bias, as if that weren’t enough on its own.)
We need to make recruitment ‘trustless’
Ai has a way to go before it will earn the trust of the majority. Again, this is totally understandable. We believe that there is a better, and quicker, way to get there.
To borrow a concept commonly associated with cryptocurrency and blockchain technology, we want to create a trustless environment for our Ai and its activities. Not an environment without trust, but one in which trust is a foregone conclusion. In a trustless environment, dishonesty, whether by commission or omission, is impossible. Just as you cannot forge blockchain entries, you cannot hide the workings and algorithms that make our Ai what it is.
That is the essence of our FAIR Framework. For hiring managers and organisations, this document provides an assurance as well as a template for querying the fairness-related metrics of Ai recruitment tools. For candidates, FAIR ensures that they are using a system built with fairness as a key performance metric. For us, transparency on fairness is standard operating procedure.
Finally, think about this: When we say we want a ‘human’ recruitment process, what are we really saying? That we want something fallible, prone to biases, subject to the decisions of people who have bad days? What if a trustless Ai companion could help remove all that, without replacing the person? Is that not more human?