Sapia Labs

The science behind our ethical AI

How our AI Smart Interviewer achieves unprecedented results

Interrupting Bias

Humans have biases, both conscious and unconscious. We just can’t shake them. So what do you do about them when it comes to recruitment?

How do you remove bias from the hiring process?

You need a human-centred AI Smart Interviewer – it’s as simple as that. By using our Smart Interviewer to interview for you, you eliminate bias at the top of your recruitment funnel and give every person a fair chance.

How does an AI assessment tool reduce bias?

  1. First, through the structured interview. Text-based chat is the optimal format for accurate candidate assessment: because our Smart Interviewer analyses word usage and sentence structure, it can assess candidates impartially. The structured interview is the best selection method for predicting job performance.
  2. Our AI needs only five to seven structured interview questions, focused on situational judgement and past behaviour, to make its assessment. Our platform doesn’t use any demographic details: no names, locations, socioeconomic indicators, or anything else one might find on a resumé.
  3. Our Smart Interviewer does not score candidates directly from language alone. Language is known to encode latent signals related to gender and other demographic markers. Instead, our AI first infers measures such as personality traits, behavioural competencies, and communication skills that matter to the given job function, and bases its score on those measures. This approach significantly reduces language-level bias and also makes that bias testable (see the sketch after this list).
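As a rough illustration of this two-stage approach, here is a minimal sketch in Python. The function names, traits, and weights are purely hypothetical and are not Sapia’s actual models or API; the point is only that scoring operates on inferred, job-relevant measures rather than on the raw answer text.

```python
from typing import Dict

def infer_measures(answer_text: str) -> Dict[str, float]:
    """Hypothetical stand-in for trained NLP models that infer
    job-relevant measures from a candidate's interview answer."""
    # In practice these values would come from trained models,
    # not hard-coded numbers.
    return {"conscientiousness": 0.72, "teamwork": 0.64, "communication": 0.81}

def score_candidate(measures: Dict[str, float],
                    desired_profile: Dict[str, float]) -> float:
    """Score only on the inferred measures, weighted by the desired
    candidate profile for the role, never on the raw text itself."""
    return sum(weight * measures[name] for name, weight in desired_profile.items())

desired_profile = {"conscientiousness": 0.4, "teamwork": 0.3, "communication": 0.3}
answer = "I coordinated my team to cover a colleague's shift at short notice..."
print(round(score_candidate(infer_measures(answer), desired_profile), 3))
```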


Learn more about how our AI mitigates gender bias

When making decisions, think of options as if they were candidates. Break them up into dimensions and evaluate each dimension separately. Then, delay forming an intuition too quickly. Instead, focus on the separate points, and when you have the full profile, then you can develop an intuition.

Daniel Kahneman, Psychologist and Nobel Laureate

Machine learning, explained

How do our machine learning models work?

We use a novel approach to build scoring models, and it’s quite different from classical machine learning approaches, which depend purely on past hiring and job-performance data.

When you rely solely on past data to build models, you risk baking historical biases into them. Instead, we use an optimization approach that applies bias constraints, ensuring the resulting models are not biased against known demographic groups. A simplified illustration follows.
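To make the idea concrete, here is a small, self-contained sketch of constraint-aware training. This is not Sapia’s actual training code: the toy data, the soft penalty standing in for a hard constraint, and the penalty weight are all assumptions. It fits a simple logistic scoring model while penalising any gap between the average scores of two demographic groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: candidate features, performance labels, and a group indicator.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)
group = rng.integers(0, 2, size=200)  # two demographic groups

def predict(w, X):
    return 1.0 / (1.0 + np.exp(-X @ w))  # logistic scores in [0, 1]

def loss(w, lam=5.0):
    p = predict(w, X)
    ce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # Fairness penalty: keep the two groups' average scores close.
    gap = p[group == 0].mean() - p[group == 1].mean()
    return ce + lam * gap ** 2

# Numerical gradient descent is enough for a sketch.
w = np.zeros(4)
for _ in range(300):
    grad = np.array([
        (loss(w + 1e-4 * np.eye(4)[i]) - loss(w - 1e-4 * np.eye(4)[i])) / 2e-4
        for i in range(4)
    ])
    w -= 0.3 * grad

scores = predict(w, X)
print("group score gap:", round(abs(scores[group == 0].mean() - scores[group == 1].mean()), 4))
```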

The model-building process also keeps a human in the loop: a hiring manager or recruiter provides a desired candidate profile, rather than the model relying purely on past hiring and performance data to discover a profile that might suit the given job.

We have developed a best-practice approach for upholding fairness, called our FAIR framework. It describes how machine learning used in candidate selection can be tested for bias and fairness. By default, for example, we run tests such as the EEOC-recommended four-fifths rule (along with other statistical tests) before any model is released, and we record the results in a model card, an approach pioneered by Google. A sketch of the four-fifths check is shown below.
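For context, the four-fifths rule compares selection rates: each group’s rate should be at least 80% of the rate of the most-selected group. A minimal check could look like the following; the function name and applicant counts are illustrative, not part of Sapia’s product.

```python
def four_fifths_check(selected: dict, applied: dict, threshold: float = 0.8) -> dict:
    """Flag any group whose selection rate falls below `threshold`
    times the highest group's selection rate."""
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: {"impact_ratio": round(rate / best, 2), "passes": rate / best >= threshold}
            for g, rate in rates.items()}

# Hypothetical counts of applicants and selections per group.
print(four_fifths_check(selected={"group_a": 40, "group_b": 28},
                        applied={"group_a": 100, "group_b": 100}))
```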

Fairness Score: A first for the HR and recruitment industry

Interrupting bias and increasing diversity is one thing – but tracking key metrics and reporting on them is another. Other organizations will struggle to set, measure, and achieve their fairness and DEI objectives, but you won’t. You will have real-time analytics and helpful scores to prove you’re making progress, and making the world a better place.

That’s what our Fairness Score does. We track and evaluate hiring diversity across female, minority ethnic, and other groups, and give you simple ratings on two key elements:

  • The AI Fairness Score measures how fairly our AI makes its recommendations.
  • The Customer Fairness Score measures how fairly you hire, based on those recommendations.

It’s always easy to see where you stand. A score below 0.9 is a red flag, signalling unfairness toward these groups; a score between 0.9 and 1.1 is a green flag, signalling fair treatment; and a score above 1.1 is a yellow flag, indicating that members of these groups are being hired in disproportionately high numbers. The sketch below shows the mapping.
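In code terms, that mapping is simple. The function name here is ours for illustration, not part of Sapia’s API; the thresholds are taken directly from the description above.

```python
def fairness_flag(score: float) -> str:
    """Map a Fairness Score to its flag colour using the thresholds above."""
    if score < 0.9:
        return "red"      # a group is being treated unfairly
    if score <= 1.1:
        return "green"    # hiring is within the fair range
    return "yellow"       # a group is hired in disproportionately high numbers

print(fairness_flag(0.85), fairness_flag(1.0), fairness_flag(1.25))
```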



If you’re ready for change, we can make it happen

Book Now