Fair AI

AI and technology can seem scary. But when explained and held to a strict code of ethics, AI can be fair, unbiased, and an ideal companion to human decision-makers.

Download our FAIR framework

FAIR

That’s why we’ve developed the FAIR AI for Recruitment (FAIR™) framework. It helps educate the world on how we assess AI technology for use in organizations. It also sparks conversations among AI developers in the space, propelling the potential and reliability of AI further forward.

We believe AI can deliver better outcomes for recruiters and people, but we must ensure that all recruiting AI is fair and explainable.


Understanding bias in AI

In an AI system, bias can originate at three key points: data, algorithms, and user interaction. A typical system uses data and algorithms to model real-world environments and produce predictive outcomes that help solve a problem. Understanding each of these areas is therefore vital to building fairer AI systems.

The FAIR™ Framework

To demonstrate that an AI system adheres to the framework’s definition of fairness, FAIR™ expects it to exhibit four properties: it must prove that it is unbiased, valid, explainable, and inclusive. Organizations also need to build trust with users, and can extend FAIR™ to achieve this by demonstrating three further areas: data privacy and security, team diversity, and transparency.

Unbiased
The outcomes of AI should not be biased towards any group defined by a protected attribute. This can be demonstrated with an applicable set of bias tests, such as the 4/5ths rule, error rate parity, and other statistical tests (sketched in the example following this list).

Valid
As predictive applications, the outcomes of AI need to demonstrate validity, specifically criterion validity. In other words, evidence must be provided on how well the AI predicts what it is designed to predict (also illustrated in the example below).

Explainable
The AI solution must provide documentation and tooling to interpret its outcomes. This includes interpreting individual outcomes by giving both candidates and hiring managers insight into what the AI has learnt about a candidate in assigning their score.

Inclusive
The measure of inclusivity attempts to establish that all candidates are treated equally throughout the use of an AI system. It is an end-to-end system consideration that goes beyond the AI components.
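To make the bias and validity checks named above concrete, here is a minimal sketch assuming candidate records in a pandas DataFrame. The column names (gender, ai_selected, performance, and so on), thresholds, and toy data are illustrative assumptions for this sketch, not Sapia.ai’s actual implementation or the full FAIR™ test suite.

```python
import pandas as pd

def four_fifths_rule(df, group_col, selected_col, threshold=0.8):
    """Adverse impact ratio: each group's selection rate divided by the
    highest group's selection rate should be at least `threshold` (0.8)."""
    rates = df.groupby(group_col)[selected_col].mean()
    ratios = rates / rates.max()
    return ratios, bool((ratios >= threshold).all())

def error_rate_parity(df, group_col, label_col, pred_col, tolerance=0.05):
    """Compare false-negative rates across groups; flag gaps above `tolerance`."""
    fnr = {}
    for group, sub in df.groupby(group_col):
        positives = sub[sub[label_col] == 1]
        fnr[group] = float((positives[pred_col] == 0).mean()) if len(positives) else float("nan")
    fnr = pd.Series(fnr)
    return fnr, bool((fnr.max() - fnr.min()) <= tolerance)

def criterion_validity(df, score_col, outcome_col):
    """Criterion validity as the correlation between AI scores and a
    later job-performance measure (higher is better)."""
    return df[score_col].corr(df[outcome_col])

# Illustrative usage with made-up candidate data
data = pd.DataFrame({
    "gender":      ["f", "f", "m", "m", "f", "m", "f", "m"],
    "hired_label": [1, 0, 1, 1, 1, 0, 0, 1],      # ground-truth suitability
    "ai_selected": [1, 0, 1, 1, 1, 0, 1, 1],      # AI recommendation
    "ai_score":    [0.9, 0.3, 0.8, 0.7, 0.85, 0.2, 0.6, 0.75],
    "performance": [0.8, 0.4, 0.7, 0.9, 0.8, 0.3, 0.5, 0.7],
})

ratios, passes_4_5 = four_fifths_rule(data, "gender", "ai_selected")
fnr, passes_parity = error_rate_parity(data, "gender", "hired_label", "ai_selected")
validity = criterion_validity(data, "ai_score", "performance")
print(ratios, passes_4_5, fnr, passes_parity, round(validity, 2))
```

In practice, checks like these would run over much larger applicant pools and across every protected attribute, alongside the documentation and end-to-end considerations described above.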

Read our FAIR framework here

Building trust in ethical AI through transparency

When you’re a leader in ethical AI, you must make choices about the design, data, and science behind your products. At Sapia.ai, we choose to maximize transparency, explainability, and fairness.

This resource explains the strategic choices that make up our AI technology, and why you can trust it as a tool for recruitment and people decision-making.

Read our ethical AI paper here