Written by: barbara-hyman
The Machine picked the wrong guy
AI used to inform decisions about people
I work with a team building a product driven by AI that is used to inform decisions about people. This means I am often approached on social media or in person by people who have a point of view about that, often with fear or frustration about being picked (or rejected) by a machine.
This week I received an email from a commerce/law graduate who had recently applied for a role at one of the big professional services (accounting) firms. This student, let's call him Dan, had to complete an online game in order to qualify for the next step, which was a video interview.
To give himself the maximum chance of doing well in the game, Dan first created a dummy profile, 'Jason', to see what the experience was like and get an inside read on the questions, so that when he did it for real he would nail it. On this trial run he fudged the test, leaving most answers blank. When Dan did it for real, he was conscientious of course: he wrote thoughtful answers and tried to pick the right behaviour in the balloon-popping game!
Jason, who scored 44%, received a video interview. Jason does not exist.
Dan, who scored 75%, did not progress to the next round.
The machine picked the wrong guy
Let me be upfront and say that we too use machines to help identify the best-fit applicants for roles.
Every business like ours that works in this space recognises that this is new technology, and so still very much in the early stages of development. Like humans, machines will make mistakes. In our business, we call them false positives (people recommended who just aren’t right) or false negatives (people who are missed by the machine who could be right for the role).
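The two error types above can be sketched in a few lines of Python. The candidate data and field names here are invented purely for illustration: "recommended" stands for the machine's decision, and "good_fit" for the (hypothetical) ground truth about whether the candidate was actually right for the role.

```python
# Hypothetical screening outcomes: the machine's recommendation vs.
# whether the candidate was actually a good fit for the role.
candidates = [
    {"name": "A", "recommended": True,  "good_fit": True},   # correct pick
    {"name": "B", "recommended": True,  "good_fit": False},  # false positive
    {"name": "C", "recommended": False, "good_fit": True},   # false negative
    {"name": "D", "recommended": False, "good_fit": False},  # correct rejection
]

# False positive: recommended, but not actually right for the role.
false_positives = sum(c["recommended"] and not c["good_fit"] for c in candidates)
# False negative: missed by the machine, but could have been right.
false_negatives = sum(not c["recommended"] and c["good_fit"] for c in candidates)

print(false_positives, false_negatives)  # 1 1
```

In this toy sample, candidate B is a false positive and candidate C is a false negative; Dan's story is essentially a false negative (and Jason a false positive) at the screening stage.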
Dan’s questions are legitimate…
- How can Jason, who scored lower, get through the process?
- How can Jason, who fudged it and left most of the questions blank, get through?
- How does the number of balloons you popped, or let pop, dictate whether you are going to be a good hire?
When you are rejected by humans, either you hear nothing or you may get an explanation like 'you aren't a good culture fit'. Machines may give you a score.
For me, what this reveals is that for any business using AI and ML for candidate selection, it's critical to have empathy for the person on the receiving end: in this case, empathy for the candidate experience.
- What does it feel like to be judged based on how well you play an online game? Is that a human experience? How gameable is it? Is it fair to all, given that some people have grown up playing video games and others haven’t?
- How do you give feedback in a way that's human, helpful and ideally developmental? The feedback given here was simply that you didn't make the percentage threshold.
An individual’s traits, strengths and weaknesses can be predicted.
Machines can make better selection decisions about people because they have access to a larger, more comprehensive set of data, can process that data faster, and, if built with the right objective data, can be far less biased than humans.
When used in recruitment, they need to work for both parties: the organisation and the candidate. Building trust in these technologies is critical in our space. It can't all be about the organisation getting its efficiency gains.
This means:
- Making these experiences reflective of human interaction: it should feel like an interview, not a game.
- Giving the candidate something back by way of qualitative feedback, rather than making them feel like a number or a percentage on a distribution chart.
Recruitment should rise above being a mere process, and AI in recruitment should enable that if it's to be trusted by candidates.