Building trust and transparency

Sapia.ai releases its independent bias audit

As a leader in ethical AI for assessment, we must be transparent about how our technology works and the impact it has. As part of this ongoing commitment, we have completed an independent Disparate Impact Analysis (DIA) of our Smart Interviewer, with strong results for customers, practitioners, and candidates.


Giving you confidence in our AI

The world is rapidly adapting to the power, potential, and risk of AI, particularly in recruitment. New York City’s forthcoming law requiring independent bias audits of automated hiring tools is a prime example. Now, our customers in the US and across the world have certainty that they are using technology with no evidence of practically significant disparate impact. If you are considering adopting AI for recruitment, the results of this new audit should give you confidence that our technology can fairly power your talent strategy.

Executive summary


Sapia.ai (formerly Predictive Hire) is a leading provider of AI-powered employment application screening tools, including Chat Interview, a text-based interview tool in which applicants provide written responses to standard interview questions and are then scored on those responses.

Sapia.ai engaged BLDS, LLC, a nationally recognized statistics and economics firm, to perform a Disparate Impact Analysis (DIA) of Sapia.ai’s Chat Interview tool for purposes of conducting a bias audit.

This report covers BLDS’s DIA of Sapia’s Chat Interview models for North America. The models tested were developed and deployed in two jurisdictions: the United States and Canada.

A Chat Interview model provides a score on a scale from 0 (representing a poor fit) to 1 (representing a strong fit) for applicants for a specific employment opening. The Chat Interview model further provides a ‘Yes’ recommendation for the top 40% of applicants, a ‘Maybe’ recommendation for the middle 20% of applicants, and a ‘No’ recommendation for the bottom 40% of applicants. The score and recommendations are intended to give employers a tool to screen applicants and to prioritize which applicants to move forward in the interview process. Employers using Chat Interview are free to use the recommendations and/or the scores however they see fit. Because employers may use the Chat Interview scores and recommendations in various ways, BLDS employed a wide range of DIA tools and metrics to uncover any disparate impact that could arise across numerous different use cases.
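
To make the tiering concrete, here is a minimal Python sketch of how a pool of scores could be mapped to recommendations. The 40/20/40 boundaries come from the report above; everything else (the function name, ranking applicants within a single pool) is an illustrative assumption, not Sapia.ai’s actual implementation.

```python
# Illustrative sketch only: the 40/20/40 tier boundaries are from the audit
# report; the ranking logic and function name are assumptions.

def tier_recommendations(scores: list[float]) -> list[str]:
    """Map each applicant's 0-1 fit score to a 'Yes'/'Maybe'/'No' tier
    based on their rank within the applicant pool."""
    n = len(scores)
    # Rank applicant indices from highest to lowest score.
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    recs = [""] * n
    for rank, i in enumerate(order):
        fraction_from_top = rank / n  # 0.0 = best-scoring applicant
        if fraction_from_top < 0.40:        # top 40%
            recs[i] = "Yes"
        elif fraction_from_top < 0.60:      # middle 20%
            recs[i] = "Maybe"
        else:                               # bottom 40%
            recs[i] = "No"
    return recs

print(tier_recommendations([0.91, 0.15, 0.55, 0.72, 0.33]))
# ['Yes', 'No', 'Maybe', 'Yes', 'No']
```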

At an overall level, BLDS analyzed model scores using two commonly used metrics for determining whether there is evidence of practically significant disparate impact: the Standardized Mean Difference (SMD) and the Adverse Impact Ratio (AIR).

Using the SMD, BLDS performed a total of 23 tests for protected groups on the North American models. Under the SMD test, BLDS found no evidence of practically significant disparate impact for any protected group assessed in the United States or Canada.
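
The report does not reproduce BLDS’s exact calculation, but the SMD is conventionally computed as the difference in mean scores between two groups divided by a pooled standard deviation, with absolute values below roughly 0.2 often treated as practically insignificant. A minimal sketch under those conventional assumptions:

```python
# Conventional SMD (similar to Cohen's d); the formula and the ~0.2 rule of
# thumb are standard practice, not quoted from the BLDS report.
import statistics

def smd(group_a: list[float], group_b: list[float]) -> float:
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Pooled standard deviation, weighted by each group's degrees of freedom.
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical score samples for two demographic groups:
print(round(smd([0.62, 0.71, 0.55, 0.68], [0.60, 0.66, 0.58, 0.70]), 3))
# ~0.079, well below the ~0.2 practical-significance threshold
```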

Using the AIR, BLDS performed a total of 49 tests for protected groups on the North American models. First, the use case in which applicants advanced as the result of a ‘Yes’ recommendation was tested using the AIR. Second, the use case in which applicants advanced as the result of a ‘Yes’ or ‘Maybe’ recommendation was tested. None of these tests revealed evidence of practically significant disparate impact for any protected group in the United States or Canada.
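
The AIR is conventionally defined as the selection rate of a protected group divided by the selection rate of the reference group, with the EEOC’s four-fifths rule of thumb treating ratios of at least 0.80 as showing no practically significant adverse impact. A sketch of both use cases described above, using made-up recommendation data:

```python
# Conventional AIR under the four-fifths rule; the data below is invented
# for illustration and does not come from the BLDS audit.

def selection_rate(recs: list[str], advancing: set[str]) -> float:
    """Share of applicants whose recommendation advances them."""
    return sum(r in advancing for r in recs) / len(recs)

def air(protected: list[str], reference: list[str], advancing: set[str]) -> float:
    return selection_rate(protected, advancing) / selection_rate(reference, advancing)

protected = ["Yes", "No", "Maybe", "Yes", "No"]
reference = ["Yes", "Yes", "No", "Maybe", "No"]
print(air(protected, reference, {"Yes"}))           # 'Yes'-only use case -> 1.0
print(air(protected, reference, {"Yes", "Maybe"}))  # 'Yes' or 'Maybe' -> 1.0
```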

For more information, contact Barb Hyman.

Our ongoing commitment to ethical AI

How we build trust through transparency

When you’re a leader in ethical AI, you must make choices about the design, data, and science behind your products. At Sapia.ai, we choose to maximize transparency, explainability, and fairness.

This resource explains the strategic choices behind our AI technology, and why you can trust it as a tool for recruitment and people decision-making.

For practitioners

Our FAIR™ Framework

AI can deliver powerful, better outcomes for recruiters and candidates, but we must ensure that all recruiting AI is fair.

  • For candidates, FAIR™ ensures that they are using a system built with fairness as a key performance metric.
  • For hiring managers and organisations, this guide provides assurance, as well as a template for querying the fairness-related metrics of AI recruitment tools.

This set of guidelines helps HR leaders make smart decisions so they can trust the AI tools they use.

Blog

  • Why candidates prefer Chat Interviews over Video Interviews
  • Making our Smart Interviewer™ even smarter with Gen AI
  • Females score higher on chat interviews, yet are hired less