We have no survivable and sustainable future without science.
Since the start of the coronavirus pandemic, many companies have turned to advanced HR algorithms to identify the best candidates for open positions. Most often, these algorithms use face-scanning software, games, quizzes, and tools that examine visual or linguistic patterns to decide who gets an interview.
An Australian company called Sapia (formerly PredictiveHire), founded in October 2013, appears to have gone much further: it has developed a machine-learning algorithm to assess how likely a given candidate is to change jobs frequently, as reported by MIT Technology Review.
According to Sapia CEO Barbara Hyman, the company's clients are employers that rely heavily on HR algorithms to sift through vast numbers of applications. These employers operate mainly in customer service, retail, sales, and healthcare, and they consider a candidate's likely frequency of job changes a crucial hiring metric.
When someone applies for a job through a company using Sapia's platform, they must first "convince" a chatbot, the embodiment of these advanced HR algorithms, of their suitability. The chatbot poses open-ended questions and then analyzes the answers for personality traits such as initiative, intrinsic motivation, and resilience.
Importantly, the algorithm also estimates how often the candidate might change jobs in the future, a metric the Sapia website highlights as "flight risk." The primary aim of Sapia's recent study was to refine the algorithm to predict this trait accurately. The researchers assessed 45,899 candidates who had previously answered 5-7 open-ended questions about their experience and situational judgment through the Sapia chatbot.
The chatbot probed for personality traits that Sapia's research suggests correlate with frequent job changes, such as a heightened openness to new experiences or a perceived lack of practicality.
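Sapia has not published its model's internals, but the general idea of scoring free-text answers for trait signals can be sketched with a deliberately simplified stand-in. Everything below (the lexicons, the weights, and the `job_hop_score` function) is hypothetical and for illustration only; it is not Sapia's actual method.

```python
import math
import re

# Hypothetical trait lexicons -- invented for this sketch, not Sapia's model.
TRAIT_WORDS = {
    "openness": {"new", "explore", "learn", "change", "curious"},
    "practicality": {"plan", "budget", "routine", "process", "steady"},
}

# Assumed weights: openness raises the job-hop score, practicality lowers it.
WEIGHTS = {"openness": 1.2, "practicality": -0.9}
BIAS = -0.5

def job_hop_score(answer: str) -> float:
    """Return a 0-1 pseudo-probability of frequent job changes."""
    tokens = re.findall(r"[a-z']+", answer.lower())
    z = BIAS
    for trait, lexicon in TRAIT_WORDS.items():
        hits = sum(1 for t in tokens if t in lexicon)
        z += WEIGHTS[trait] * hits
    return 1 / (1 + math.exp(-z))  # logistic squashing to (0, 1)

# Usage: answers rich in "openness" vocabulary score higher.
score = job_hop_score("I love to explore new roles and learn new things")
```

A production system would replace the keyword counts with learned text features, but the shape (text in, trait-weighted score out) is the same.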
Nathan Newman, an associate professor at John Jay College of Criminal Justice in New York who wrote a 2017 study on the potential pitfalls of data analysis, discussed Sapia's recent work with MIT Technology Review.
Newman's concerns extend to the increasingly popular machine-learning personality tests, HR algorithms that aim to filter out prospective workers who are more likely to support unionization or ask for raises. MIT Technology Review noted that employers armed with such algorithms closely monitor employee communications, such as emails and online chats, and use that data to deduce whether a colleague is on the verge of quitting. That intelligence helps them determine the smallest raise that would retain the employee.
Uber's algorithmic management systems reportedly strive to keep workers disconnected from physical offices and digital forums, ensuring they cannot easily organize and collectively demand better pay or conditions.
If a simple automated chat interview can infer a candidate’s likelihood of job-hopping, it presents significant opportunities, especially when assessing candidates with no prior work history.
This work shows that the language one uses when responding to interview questions about situational judgment and past behaviour is predictive of their likelihood to job hop.
Find out how you can identify job-hopping attitudes before you hire. To get your copy of the Research Paper, click here.
Finally, you can try out Sapia's Chat Interview right now, or leave us your details to get a personalised demo.
Also, have you seen the 2020 Candidate Experience Playbook? Download it here.
If you’ve experimented with tools like ChatGPT, Claude, or Gemini, you’ve probably experienced this: ask the same question twice and you’ll often get two different answers.
This is by design. Gen AI models are probabilistic. They generate answers by predicting the “next most likely word,” with an intentional dose of randomness (temperature, sampling) to make them feel more “human.”
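The temperature mechanic is easy to see in a toy sketch. The snippet below is illustrative only (the logits are made up and stand in for a real model's next-token scores): it samples from a temperature-scaled softmax, so the same input can produce different outputs on different runs.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample a token index from logits softened by temperature.

    Higher temperature flattens the distribution (more randomness);
    temperature near 0 approaches deterministic argmax decoding.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy next-token scores; with temperature > 0 repeated runs can differ,
# which is exactly the run-to-run variation described above.
logits = [2.0, 1.5, 0.2]
samples = [sample_next_token(logits, temperature=1.0) for _ in range(10)]
```

Dialing the temperature toward zero makes the output repeatable, but most consumer chat interfaces do not run at zero, which is why the same question yields different answers.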
When you use that design principle in recruitment, you’re playing with fire.
Imagine using generic AI to screen or score candidates. If the same input produces different outputs depending on the day, you have a trust problem.
Hiring decisions are high-stakes. Candidates deserve certainty and fairness. Employers need defensibility. Probabilistic creativity is great for drafting emails or brainstorming headlines. It does not belong where the output affects someone’s career.
What we’re now seeing in the market is a proliferation of “thin wrappers”: hiring tools built quickly on top of open-source AI models. The logic is simple: take a model like Qwen, Mistral, or LLaMA, put a UI around it, and call it a recruitment solution.
The problem? These wrappers inherit all the instability of their foundation models. Worse, they multiply the risk.
This is the hidden risk of generic AI in the hiring process. On the surface, it looks sleek, fast, and innovative. Underneath, it’s a house built on sand.
At Sapia.ai, we’ve taken a very different path. We’ve built a for-purpose AI system designed specifically for hiring, utilising methods published in peer-reviewed journals.
Over the last eight years, we’ve conducted more than 8 million structured, conversational interviews across 50 countries and 20 languages. Every response is scored against validated competencies.
This isn’t a thin wrapper. It’s an AI system designed from the ground up for hiring, with fairness, science, and trust at its core.
The convergence of Talent Acquisition, Talent Management, and Reskilling means the pressure on HR leaders has never been higher. Everyone wants internal mobility, but the default playbook (job boards, CV self-mapping) rarely delivers.
If the tools you adopt today are built on randomness and inference, you’re not just risking a poor candidate experience. You’re risking lawsuits, compliance failures, and reputational damage.
If instead, you invest in measurement, structure, and science, you create a workforce data asset that compounds in value, unlocking hiring intelligence, mobility pathways, and skills development at scale.
Generative AI has transformed how we create at pace and at scale. But let’s not confuse creativity with science. Recruitment isn’t about “good enough, most of the time.” It’s about fairness, rigour, and trust.
For those who want to understand more, check out our ebook Understanding Responsible AI in Recruitment.
This is the state of hiring in 2025. Too often, candidates are ghosted, ignored, and reduced to a CV. Recruiters are forced to make decisions in data poverty, with scraps of information like grades, job titles, or where someone has worked before. Privilege gets rewarded; potential gets overlooked.
For the first time, we now have evidence that AI, when designed responsibly, brings humanity back to hiring.
Sapia.ai has released the Humanising Hiring report, the largest analysis ever conducted into candidate experience with AI interviews. The study draws on more than 1 million interviews and 11 million words of candidate feedback across 30+ countries.
Unlike surveys or anecdotal reviews, this research is grounded in what candidates themselves chose to share at one of the most stressful moments of their lives: applying for a job.
30% more women apply when told AI will assess them, closing the gender gap by 36%
98% hiring equity for people with disabilities through a blind, untimed, mobile-first interview design
Here’s what candidates themselves revealed:
“None of the other companies I’ve applied to do this sort of thing. It’s so unique and wonderful to give this sort of insight to people… whether we get the job or not, we can take away something very valuable out of the process.”
“That felt so personal, as if the person genuinely took the time to read my answers and send me a summary of myself… that was pretty amazing.”
“This study stands out as one of the most comprehensive examinations of candidate experience to date. Analysing over a million interviews and 11 million words of candidate feedback, the findings make clear that responsibly designed AI has the potential to fundamentally improve hiring — not just by increasing speed, but by advancing fairness, enhancing the human aspect, and leading to stronger job matches.”
— Kathi Enderes, SVP Research & Global Industry Analyst, The Josh Bersin Company
The research challenges the idea that AI dehumanises the hiring process. In fact, it proves the opposite: when thoughtfully designed, AI can restore dignity to candidates by giving them a real interview from the very first interaction, giving them space to share their story, and giving them timely feedback.
With Sapia.ai’s Chat Interview:
Every candidate gets the same structured, role-relevant questions.
Interviews are untimed, so candidates can answer at their own pace.
Bias is monitored continuously under our FAIR™ framework.
Every candidate receives personalised feedback.
This isn’t automation for the sake of speed. It’s intelligence that puts people first, and it works. Leading global brands, including Qantas, Joe & the Juice, BT Group, Holland & Barrett, and Woolworths, have all transformed their hiring outcomes while enhancing the candidate experience.
Applicant volumes are exploding. Boards are demanding ROI on people decisions. And candidates expect fairness and agency. Sticking with the status quo — ghosting, inconsistent interviews, CV screening — comes at a real cost in brand equity, lost talent, and wasted time.
It’s time to move from data poverty to data richness, from broken processes to brilliant hiring.
This is the first time candidate feedback on AI interviews has been analysed at such scale. The insights are clear: hiring can be brilliant.
👉 Download the Humanising Hiring report now to see the full findings.
Barb Hyman, CEO & Founder, Sapia.ai
Every CHRO I speak to wants clarity on skills:
What skills do we have today?
What skills do we need tomorrow?
How do we close the gap?
The skills-based organisation has become HR’s holy grail. But not all skills data is created equal. The way you capture it has ethical consequences.
Some vendors mine employees’ “digital exhaust” by scanning emails, CRM activity, project tickets and Slack messages to guess what skills someone has.
It is broad and fast, but fairness is a real concern.
The alternative is to measure skills directly. Structured, science-backed conversations reveal behaviours, competencies and potential. This data is transparent, explainable and given with consent.
It takes longer to build, but it is grounded in reality.
Surveillance and trust: Do your people know their digital trails are being mined? What happens when they find out?
Bias: Who writes more Slack updates, introverts or extroverts? Who logs more Jira tickets, engineers or managers? Behaviour is not the same as skills.
Explainability: If an algorithm says, “You are good at negotiation” because you sent lots of emails, how can you validate that?
Agency: If a system builds a skills profile without consent, do employees have control over their own career data?
Skills define careers. They shape mobility, pay and opportunity. That makes how you measure them an ethical choice as well as a technical one.
At Sapia.ai, we have shown that structured, untimed, conversational AI interviews restore dignity in hiring and skills measurement. Over 8 million interviews across 50+ languages prove that candidates prefer transparent and fair processes that let them share who they are, in their own words.
Skills measurement is about trust, fairness and people’s futures.
When evaluating skills solutions, ask:
Is this system measuring real skills, or only inferring them from proxies?
Would I be comfortable if employees knew exactly how their skills profile was created?
Does this process give people agency over their data, or take it away?
The choice is between skills data that is guessed from digital traces and skills data that is earned through evidence, reflection and dialogue.
If you want trust in your people decisions, choose measurement over inference.
To see how candidates really feel about ethical skills measurement, check out our latest research report, Humanising Hiring: the largest-scale analysis ever of candidate experience with AI interviews.
What is the most ethical way to measure skills?
The most ethical method is to use structured, science-backed conversations that assess behaviours, competencies and potential with consent and transparency.
Why is skills inference problematic?
Skills inference relies on digital traces such as emails or Slack activity, which can introduce bias, raise privacy concerns and reduce employee trust.
How does ethical AI help with skills measurement?
Ethical AI, such as structured conversational interviews, ensures fairness by using consistent data, removing demographic bias and giving every candidate or employee a voice.
What should HR leaders look for in a skills platform?
Look for transparency, explainability, inclusivity and evidence that the platform measures skills directly rather than guessing from digital behaviour.
How does Sapia.ai support ethical skills measurement?
Sapia.ai uses structured, untimed chat interviews in over 50 languages. Every candidate receives personalised feedback.