We hope that the debate over the value of diverse teams is now over. There is plenty of evidence that diverse teams lead to better decisions and, therefore, better business outcomes for any organisation.
This means that CHROs today are being charged with interrupting bias in their people decisions and are expected to manage bias as closely as the CFO manages the financials.
But the use of AI tools in hiring and promotion requires careful consideration to ensure the technology does not inadvertently introduce new bias or amplify existing biases.
To help HR decision-makers navigate these decisions confidently, we invite you to consider these eight critical questions when selecting your AI technology.
You will find not only the key questions to ask when testing the tools, but also why they are critical and how to differentiate between the answers you are given.
Another way to ask this is: what data do you use to assess someone’s fit for a role?
First up: why is this an important question to ask?
Machine-learning algorithms use statistics to find and apply patterns in data. Data can be anything that can be measured or recorded: numbers, words, images, clicks and so on. If it can be digitally stored, it can be fed into a machine-learning algorithm.
The process is quite basic: find the pattern, apply the pattern.
This is why the data you use to build a predictive model, called training data, is so critical to understand.
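To make this concrete, here is a minimal sketch of that find-and-apply loop using scikit-learn. The interview answers, labels and model choice are all invented for illustration; they are not any vendor's actual data or method. Notice how completely the model's behaviour depends on the training examples it is given.

```python
# A minimal sketch of "find the pattern, apply the pattern".
# The training data below is invented; the mechanics are the same at any scale.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data: past interview answers paired with hiring outcomes.
answers = [
    "I resolved the customer's complaint by listening first",
    "I escalated the issue without talking to the customer",
    "I stayed late to help a teammate finish the rollout",
    "I prefer to work alone and avoid team projects",
]
hired = [1, 0, 1, 0]  # the pattern the model will look for

# Find the pattern...
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(answers, hired)

# ...apply the pattern to a new candidate's answer.
print(model.predict_proba(["I listened to the customer and fixed it"])[0, 1])
```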
In HR, the kinds of data that could be used to build predictive models for hiring and promotion include interview responses, CVs and application data, psychometric test results, behavioural data (such as how quickly a candidate completes an assessment) and public social media data.
If you consider that range of possible training data, not all sources are equal; even on the surface, you can see how some carry the risk of amplifying existing biases and alienating your candidates.
Using data that is invisible to the candidate may damage your employer brand. Relying on behavioural data, such as how quickly a candidate completes the assessment, on social data, or on any other data the candidate cannot see exposes you to legal risk as well. Will your candidates trust an assessment that uses data that is invisible to them, scraped about them, or that cannot be readily explained?
Increasingly, companies are measuring the business cost of poor hiring processes that contribute to customer churn: 65% of candidates with a positive experience would be a customer again even if they were not hired, and 81% will share their positive experience with family, friends and peers (Source: Talent Board).
Visibility of the data used to generate recommendations is also linked to explainability, an attribute now demanded by both governments and organisations for the responsible use of AI.
Video AI tools have been legally challenged on the basis that they fail to comply with baseline standards for AI decision-making, such as the OECD AI Principles and the Universal Guidelines for AI.
Or that they perpetuate societal biases and could end up penalising non-native speakers, visibly nervous interviewees, or anyone else who does not fit the model's template for look and speech.
If you are keen to attract and retain applicants through your recruitment pipeline, you may also care about how explainable and trustworthy your assessment is. When the candidate can see the data that is used about them and knows that only the data they consent to give is being used, they may be more likely to apply and complete the process. Think about how your own trust in a recruitment process could be affected by different assessment types.
First-party data is data such as the interview responses a candidate writes to answer an interview question. It is given openly, consensually and knowingly: the candidate is fully aware of what the data will be used for, and it is typically gathered for that purpose only.
Third-party data is data drawn from or acquired through public sources about a candidate, such as their Twitter or other social media profile. It is not created for the specific use case of interviewing for a job, but is scraped, extracted and applied for a different purpose. It is self-evident that an AI tool that relies on visible, first-party data is likely to be more accurate for recruitment and to produce outcomes that both the candidate and the recruiter are more likely to trust.
At PredictiveHire, we are committed to building ethical and engaging assessments. This is why we have taken the path of a text chat with no time pressure. We allow candidates to take their own time, reflect and submit answers in text format.
We strictly do not use any information other than the candidate's responses to the interview questions (i.e. fairness through unawareness: the algorithm knows nothing about sensitive attributes).
For example, we make no explicit use of race, age, name or location; no use of behavioural data such as how long candidates take to complete the interview, how fast they type or how many corrections they make; and no use of information scraped from the internet. While these signals may carry information, we do not use any such data.
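As an illustration of fairness through unawareness, here is a minimal sketch. The record and field names are hypothetical, not our schema; the point is structural: sensitive and behavioural fields are excluded before anything reaches the model.

```python
# Hypothetical applicant record; field names are invented for illustration.
applicant = {
    "name": "Susie",            # sensitive: never reaches the model
    "age": 42,                  # sensitive: never reaches the model
    "location": "Melbourne",    # sensitive: never reaches the model
    "typing_speed_wpm": 38,     # behavioural: never reaches the model
    "completion_time_s": 1240,  # behavioural: never reaches the model
    "responses": ["I resolved the complaint by listening first."],
}

# Fairness through unawareness: the model is only ever shown the
# candidate's consented interview responses, nothing else.
ALLOWED_FEATURES = {"responses"}

def model_inputs(record: dict) -> dict:
    return {key: value for key, value in record.items() if key in ALLOWED_FEATURES}

print(model_inputs(applicant))  # only the interview responses survive
```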
Another way to ask this is: can you explain how your algorithm works, and does your solution use deep learning models?
This is an interesting question, especially given that we humans typically obfuscate our reasons for rejecting a candidate behind the catch-all explanation of "Susie was not a cultural fit".
For some reason, we humans have a higher-order need and expectation to unpack how an algorithm arrived at a recommendation. Perhaps it is because there is not much you can say to a phone call telling you that you were rejected for cultural fit.
This is probably the most important aspect to consider, especially if you are the change leader in this area. It is fair to expect that if an algorithm affects someone's life, you should be able to see how that algorithm works.
Transparency and explainability are fundamental ingredients of trust, and there is plenty of research to show that high trust relationships create the most productive relationships and cultures.
This is also one substantial benefit of using AI at the top of the funnel to screen candidates. Depending on the kind of AI you use, it enables you to explain why a candidate was screened in or out.
This means recruitment decisions become more consistent and fairer with AI screening tools.
But if a vendor cannot say clearly which inputs (called "features" in machine-learning jargon) are used and how they contribute to the outcome, explainability becomes impossible.
For example, when deep learning models are used, you sacrifice explainability for accuracy, because no one can explain how a particular feature contributed to the recommendation. This can further erode candidate trust and damage your brand.
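To see the difference, consider this sketch of a transparent scoring model; the feature names and weights are hypothetical. With a linear model, each feature's contribution to the final score can be read off directly, which is exactly the readout a deep network cannot provide.

```python
import numpy as np

# Hypothetical features derived from a candidate's written answers
# (names, values and weights invented for illustration).
feature_names = ["teamwork_evidence", "customer_focus", "adaptability"]
feature_values = np.array([0.8, 0.4, 0.6])
weights = np.array([1.2, 0.9, 0.5])

# In a linear model, each feature's contribution to the score is just
# weight * value, so the recommendation decomposes feature by feature.
contributions = weights * feature_values
for name, contribution in zip(feature_names, contributions):
    print(f"{name}: {contribution:+.2f}")
print(f"total score: {contributions.sum():.2f}")

# A deep network offers no equivalent readout: its input-to-output mapping
# is spread across many layers of weights with no per-feature meaning.
```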
The most important thing is that you know what data is being used; then, ultimately, it is your choice whether you feel comfortable explaining the algorithm's recommendations to both your people and your candidates.
Assessments should be underpinned by validated scientific methods and, as with all science, the proof is in the research behind the methodology.
This raises another question for anyone looking to rely on AI tools for human decision-making: where is the published, peer-reviewed research that gives you confidence that (a) it works and (b) it is fair?
This is an important question given the novelty of AI methods and the pace at which they advance.
At PredictiveHire, we have published our research to ensure that anyone can investigate for themselves the science that underpins our AI solution.
It's probably self-evident why this is an important question to ask. You cannot have much confidence in an algorithm being fair to your candidates if no one is testing it regularly.
Many assessment vendors report on studies they have conducted to test for bias. While this is useful, it does not guarantee that the assessment will remain unbiased on the new candidate cohorts it is applied to.
The machine-learning notion of "data drift" highlights how changing patterns in data can cause models to behave differently than expected, especially when the new data differs significantly from the training data.
Therefore, ongoing monitoring of models is critical to identifying and mitigating the risk of bias.
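One common way to monitor for drift, sketched below with synthetic numbers, is to compare each new candidate cohort's score distribution against the training-time distribution using a two-sample Kolmogorov-Smirnov test; a sustained, significant shift is a trigger to re-run the bias tests.

```python
# A minimal drift check, assuming you retain the model-score distribution
# from the training cohort and compare each new cohort against it.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_scores = rng.normal(0.6, 0.1, 5000)   # stand-in for historical scores
new_cohort_scores = rng.normal(0.5, 0.1, 500)  # stand-in for this month's scores

stat, p_value = ks_2samp(training_scores, new_cohort_scores)
if p_value < 0.01:
    print(f"Possible data drift (KS={stat:.3f}): review the model and re-run bias tests")
```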
Potential biases in data can be tested for and measured.
These include assumed biases, such as those between gender and race groups, which can be built into a standing suite of tests. The tests can be extended to other groups of interest wherever those group attributes are available, such as English-as-a-second-language (EASL) users.
On bias testing, look out for tests at three levels at least: the input features, the model outcomes and the final recommendations. Ask to see the tech manual and an example bias-testing report.
At PredictiveHire, we conduct all of the above. We run statistical tests to check for significant differences between groups in feature values, model outcomes and recommendations, using t-tests, effect sizes, ANOVA, the 4/5ths rule, Chi-Squared and similar methods. We consider this standard practice.
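For readers who want to see what these checks look like in practice, here is a sketch over synthetic scores for two groups; the numbers and the 0.6 recommendation cut-off are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(0.62, 0.10, 400)  # stand-in model scores for group A
group_b = rng.normal(0.60, 0.10, 400)  # stand-in model scores for group B

# t-test: is the difference in mean scores statistically significant?
t_stat, p_val = stats.ttest_ind(group_a, group_b)

# Effect size (Cohen's d): is the difference practically meaningful?
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd
# ANOVA (stats.f_oneway) generalises the t-test to three or more groups.

# 4/5ths rule: each group's selection rate should be at least 80% of
# the most-selected group's rate.
sel_a = (group_a > 0.6).mean()
sel_b = (group_b > 0.6).mean()
impact_ratio = min(sel_a, sel_b) / max(sel_a, sel_b)

# Chi-squared: are recommendation counts independent of group membership?
table = [[(group_a > 0.6).sum(), (group_a <= 0.6).sum()],
         [(group_b > 0.6).sum(), (group_b <= 0.6).sum()]]
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t-test p={p_val:.3f}, Cohen's d={cohens_d:.2f}, "
      f"4/5ths ratio={impact_ratio:.2f}, chi-squared p={p_chi:.3f}")
```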
We go beyond these standard proportional and distribution tests and adhere to stricter fairness considerations, especially on error rates at the model training stage. These include following the guidelines set by IBM's AI Fairness 360 open-source toolkit (https://aif360.mybluemix.net/) and the Aequitas project at the Center for Data Science and Public Policy at the University of Chicago.
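As a minimal illustration of a toolkit-based check, the sketch below computes two AIF360 fairness metrics on a toy set of recommendation outcomes. The data frame is invented; error-rate checks (e.g. equal opportunity) would use aif360's ClassificationMetric against known outcomes in the same way.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy recommendation outcomes by group (data invented for illustration).
df = pd.DataFrame({
    "gender":      [1, 1, 1, 1, 0, 0, 0, 0],
    "recommended": [1, 1, 1, 0, 1, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["recommended"],
    protected_attribute_names=["gender"],
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact is the ratio of selection rates (1.0 is parity);
# statistical parity difference is their gap (0.0 is parity).
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```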
We also continuously analyse the data used to train our models for latent patterns that reveal insights for our customers and inform how we improve outcomes.
We all know that, despite best intentions, we cannot be trained out of our biases, especially the unconscious ones.
This is another reason why using data-driven methods to screen candidates is fairer than relying on humans.
Biases can occur in many different forms. Algorithms and AI learn according to the profile of the data we feed them. If the data they learn from is taken from CVs, they will only amplify our existing biases. Only clean data, like answers to specific job-related questions, can give us a truly bias-free outcome.
If any biases are discovered, the vendor should be able to investigate, identify the cause of the bias (e.g. a feature or the definition of fitness) and take corrective measures to mitigate it.
If you care about inclusivity, then you want every candidate to have an equal and fair opportunity at participating in the recruitment process.
This means taking account of minority groups, such as people with autism or dyslexia and English-as-a-second-language (EASL) speakers, as well as the obvious need to ensure the approach is inclusive across ethnic groups, ages and genders.
At PredictiveHire, we test our algorithms for bias on gender and race, and tests can be conducted for almost any group a customer is interested in. For example, we run tests comparing EASL speakers with native speakers.
If one of your motivations for introducing AI tools to your recruitment process is to deliver more diverse hiring outcomes, it is natural to expect the provider to have demonstrated this kind of impact with its customers.
If you don't measure it, you probably won't improve it. At PredictiveHire, we provide you with tools to measure equality. Multiple dimensions are measured through the pipeline, from those who applied, to those who were recommended, to those who were ultimately hired.
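A funnel-equality measure of this kind can be as simple as comparing each group's share of candidates at every stage, as in this sketch over invented records:

```python
# A sketch of funnel-equality measurement: compare each group's share at
# each pipeline stage (the records below are invented for illustration).
import pandas as pd

candidates = pd.DataFrame({
    "group":       ["A", "A", "A", "B", "B", "B", "B", "A"],
    "applied":     [1,   1,   1,   1,   1,   1,   1,   1],
    "recommended": [1,   0,   1,   1,   0,   1,   1,   1],
    "hired":       [1,   0,   0,   1,   0,   0,   1,   0],
})

for stage in ["applied", "recommended", "hired"]:
    share = candidates[candidates[stage] == 1]["group"].value_counts(normalize=True)
    print(stage, share.to_dict())
# A group whose share shrinks stage by stage shows you where to look for bias.
```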
8. What is the composition of the team building this technology?
Thankfully, HR decision-makers are much more aware of how human bias can creep into technology design. Think of how the dominance of one trait among the humans who design and build a product has created inadvertently unfair outcomes.
In 2012, YouTube noticed something odd.
About 10% of the videos being uploaded were upside down.
When designers investigated the problem, they found something unexpected: left-handed people picked up their phones differently, rotating them 180 degrees, which led to upside-down videos being uploaded.
The issue here was a lack of diversity in the design process. The engineers and designers who created the YouTube app were all right-handed, and none had considered that some people might pick up their phones differently.
In our team at PredictiveHire, from the top down, we look for diversity in its broadest definition.
Gender, race, age, education, immigrant vs native-born, personality traits, work experience. It all adds up to ensure that we minimise our collective blind spots and create a candidate and user experience that works for the greatest number of people and minimises bias.
What other questions have you used to validate the fairness and integrity of the AI tools you have selected to augment your hiring and promotion processes?
We’d love to know!
Walk into any store this festive season and you’ll see it instantly. The lights, the displays, the products are all crafted to draw people in. Retailers spend millions on campaigns to bring customers through the door.
But the real moment of truth isn’t the emotional TV ad, or the shimmering window display. It’s the human standing behind the counter. That person is the brand.
Most retailers know this, yet their hiring processes tell a different story. Candidates are often screened by rigid CV reviews or psychometric tests that force them into boxes. Neurodiverse candidates, career changers, and people from different cultural or educational backgrounds are often the ones who fall through the cracks.
And yet, these are the very people who may best understand your customers. If your store colleagues don’t reflect the diversity of the communities you serve, you create distance where there should be connection. You lose loyalty. You lose growth.
We call this gap the diversity mirror.
When retailers achieve mirrored diversity, their teams look like their customers. And customers buy where they feel seen, which makes this a commercial imperative.
The challenge for HR leaders is that most hiring systems are biased by design. CVs privilege pedigree over potential. Multiple-choice tests reduce people to stereotypes. And rushed festive hiring campaigns only compound the problem.
That’s where Sapia.ai changes the equation: Every candidate is interviewed automatically, fairly, and in their own words.
With the right HR hiring tools, mirrored diversity becomes a data point you can track, prove, and deliver on. It’s no longer just a slogan.
David Jones, Australia's premium department store, put this into practice.
The result? Store teams that belong with the brand and reflect the customers they serve.
Read the David Jones Case Study here 👇
As you prepare for festive hiring in the UK and Europe, ask yourself: do your store teams mirror the customers they serve?
Because when your colleagues mirror your customers, you achieve growth, and, by design, you'll achieve inclusion.
See how Sapia.ai can help you achieve mirrored diversity this festive season. Book a demo with our team here.
Mirrored diversity means that store teams reflect the diversity of their customer base, helping create stronger connections and loyalty.
Seasonal employees often provide the first impression of a brand. Inclusive teams make customers feel seen, improving both experience and sales.
Adopting tools like AI structured interviews, bias monitoring, and data dashboards helps retailers hire fairly, reduce screening time, and build more diverse teams.
Organisations invest heavily in their employer brand, career sites, and EVP campaigns, especially to attract underrepresented talent. But without the right data, it’s impossible to know if that investment is paying off.
Representation often varies across functions, locations, and stages of the hiring process. Blind spots allow bias to creep in, meaning underrepresented groups may drop out long before offer.
Collecting demographic data is only step one. Turning it into insight you can act on is where real change and better hiring outcomes happen.
The Diversity Dashboard in Discover Insights, Sapia.ai’s analytics tool, gives you real-time visibility into representation, inclusion, and fairness at every stage of your talent funnel. It helps you connect the dots between your attraction strategies and actual hiring outcomes.
Key features include:
With the Diversity Dashboard, you can pinpoint where inclusion is thriving and where it’s falling short.
It’s also a powerful tool to tell your success story. Celebrate wins by showing which underrepresented groups are making the biggest gains, and share that progress with boards, executives, and regulators.
Powered by explainable AI and the world’s largest structured interview dataset, your insights are fair, auditable, and evidence-based.
Measuring diversity is the first step. Using that data to take action is where you close the Diversity Gap. With the Diversity Dashboard, you can prove your strategy is working and make the changes where it isn’t.
Book a demo to see the Diversity Dashboard in action.
Why neuroinclusion can’t be a retrofit and how Sapia.ai is building a better experience for every candidate.
In the past, if you were neurodivergent and applying for a job, you were often asked to disclose your diagnosis to get a basic accommodation – extra time on a test, maybe the option to skip a task. That disclosure often came with risk: of judgment, of stigma, or just being seen as different.
This wasn’t inclusion. It was bureaucracy. And it made neurodiverse candidates carry the burden of fitting in.
We’ve come a long way, but we’re not there yet.
Over the last two decades, hiring practices have slowly moved away from reactive accommodations toward proactive, human-centric design. Leading employers began experimenting with:
But even these advances have often been limited in scope, applied to special hiring programs or specific roles. Neurodiverse talent still encounters systems built for neurotypical profiles, with limited flexibility and a heavy dose of social performance pressure.
Hiring needs to look different.
Truly inclusive hiring doesn’t rely on diagnosis or disclosure. It doesn’t just give a select few special treatment. It’s about removing friction for everyone, especially those who’ve historically been excluded.
That’s why Sapia.ai was built with universal design principles from day one.
Here’s what that looks like in practice:
It’s not a workaround. It’s a rework.
We tend to assume that social or “casual” interview formats make people comfortable. But for many neurodiverse individuals, icebreakers, group exercises, and informal chats are the problem, not the solution.
When we asked 6,000 neurodiverse candidates about their experience using Sapia.ai’s chat-based interview, they told us:
“It felt very 1:1 and trustworthy… I had time to fully think about my answers.”
“It was less anxiety-inducing than video interviews.”
“I like that all applicants get initial interviews which ensures an unbiased and fair way to weigh-up candidates.”
Some AI systems claim to infer skills or fit from resumes or behavioural data. But if the training data is biased or the experience itself is exclusionary, you’re just replicating the same inequity with more speed and scale.
Inclusion means seeing people for who they are, not who they resemble in your data set.
At Sapia.ai, every interaction is transparent, explainable, and scientifically validated. We use structured, fair assessments that work for all brains, not just neurotypical ones.
Neurodiversity is rising in both awareness and representation. However, inclusion won’t scale unless the systems behind hiring change as well.
That’s why we built a platform that:
Sapia.ai is already powering inclusive, structured, and scalable hiring for global employers like BT Group, Costa Coffee and Concentrix. Want to see how your hiring process can be more inclusive for neurodivergent individuals? Let’s chat.