Last week, our smart interviewer technology was featured in a glowing piece by the Australian Financial Review. The story was picked up by LinkedIn News Australia, which ran a poll asking users whether they were “comfortable being interviewed by a bot”.
The poll garnered more than 6,500 responses. Perhaps unsurprisingly, 50% of respondents selected “No – it’s a job for humans.” Just under a third said they believe chatbot interviewing is “the future”, while 21% said it’s appropriate only for certain roles.
With over 6,500 responses, you can do some meaningful analysis. In this case, “No – it’s a job for humans” was the prevailing opinion. But in the comments section attached to the poll, we discovered more about how people feel toward AI, both as a technological construct and as a tool for recruitment. We bucketed the comments into five recurring themes:
AI hasn’t made a good name for itself lately – take Amazon’s recent facial recognition debacle as one example – so it’s easy to see why people are resistant to the prospect of AI moving into a space historically handled by humans. Take a bird’s-eye view, and the notion certainly looks preposterous: How could a machine, asking just five questions, ever hope to recreate the capabilities of a seasoned recruiter or talent acquisition specialist?
That is the problem, though: The more ‘human’ aspects of the recruitment process are ruining the game. Ghosting is rampant among candidates and recruiters alike. Ineradicable biases are creating unfairness that permeates organisations from top to bottom. The Great Resignation is putting immense pressure on hirers to move quickly, excluding thousands of applicants based on arbitrary criteria that shift from month to month. Consider, too, these sobering statistics:
For AI to qualify as a usable, reliable tool, we expect it to be perfect. We compare it, unfairly, against some ultimate human ideal: The chirpy, well-rested recruiter on their best day. The kind of recruiter who has never ghosted anyone, who has no biases whatsoever, and who finds the right person for the right job, no matter what. Here’s the issue with this comparison: That kind of human doesn’t exist.
For AI to be a valid and useful tool, and an everyday part of the human recruiter’s toolset, it doesn’t need to be flawless; it only needs to be better than the alternative. Can’t be done? Our Smart Interviewer, for one, eliminates the problem of ghosting completely: Each of your candidates gets an interview, and every single person receives feedback. Even better? 98% of the candidates who use our platform find that feedback useful.
(That is to say nothing of the way it removes bias, as if that weren’t enough on its own.)
AI has a way to go before it earns the trust of the majority. Again, this is totally understandable. We believe there is a better, and quicker, way to get there.
To borrow a concept commonly associated with cryptocurrency and blockchain technology, we want to create a trustless environment for our AI and its activities. Not an environment without trust, but one in which trust is a foregone conclusion. In a trustless environment, dishonesty, whether by commission or omission, is impossible. Just as you cannot forge blockchain entries, you cannot hide the workings and algorithms that make our AI what it is.
That is the essence of our FAIR Framework. For hiring managers and organisations, it provides both an assurance and a template for querying the fairness-related metrics of AI recruitment tools. For candidates, FAIR ensures that they are using a system built with fairness as a key performance metric. For us, transparency on fairness is standard operating procedure.
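To give a feel for the kind of fairness query the framework invites, here is a minimal sketch in Python. It applies the widely used four-fifths (adverse impact) rule to compare selection rates across groups; the data and the threshold are illustrative assumptions, not Sapia.ai’s actual implementation.

```python
# Illustrative only: a simple four-fifths (adverse impact) check of the kind
# a fairness framework encourages hiring teams to ask vendors about.
# The sample data and the 0.8 threshold are assumptions for this sketch.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, was_selected) tuples."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the highest rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Example: group A is selected at 40%, group B at 25%.
sample = [("A", True)] * 40 + [("A", False)] * 60 + \
         [("B", True)] * 25 + [("B", False)] * 75
print(four_fifths_check(sample))  # {'A': True, 'B': False} -> group B flagged
```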
Finally, think about this: When we say we want a ‘human’ recruitment process, what are we really saying? That we want something fallible, prone to biases, subject to the decisions of people who have bad days? What if a trustless AI companion could help remove all that, without replacing the person? Is that not more human?
Barb Hyman, CEO & Founder, Sapia.ai
Every CHRO I speak to wants clarity on skills:
What skills do we have today?
What skills do we need tomorrow?
How do we close the gap?
The skills-based organisation has become HR’s holy grail. But not all skills data is created equal. The way you capture it has ethical consequences.
Some vendors mine employees’ “digital exhaust” by scanning emails, CRM activity, project tickets and Slack messages to guess what skills someone has.
It is broad and fast, but fairness is a real concern.
The alternative is to measure skills directly. Structured, science-backed conversations reveal behaviours, competencies and potential. This data is transparent, explainable and given with consent.
It takes longer to build, but it is grounded in reality.
Surveillance and trust: Do your people know their digital trails are being mined? What happens when they find out?
Bias: Who writes more Slack updates, introverts or extroverts? Who logs more Jira tickets, engineers or managers? Behaviour is not the same as skills.
Explainability: If an algorithm says, “You are good at negotiation” because you sent lots of emails, how can you validate that?
Agency: If a system builds a skills profile without consent, do employees have control over their own career data?
Skills define careers. They shape mobility, pay and opportunity. That makes how you measure them an ethical choice as well as a technical one.
At Sapia.ai, we have shown that structured, untimed, conversational AI interviews restore dignity in hiring and skills measurement. Over 8 million interviews across 50+ languages prove that candidates prefer transparent and fair processes that let them share who they are, in their own words.
Skills measurement is about trust, fairness and people’s futures.
When evaluating skills solutions, ask:
Is this system measuring real skills, or only inferring them from proxies?
Would I be comfortable if employees knew exactly how their skills profile was created?
Does this process give people agency over their data, or take it away?
The choice is between skills data that is guessed from digital traces and skills data that is earned through evidence, reflection and dialogue.
If you want trust in your people decisions, choose measurement over inference.
To see how candidates really feel about ethical skills measurement, check out our latest research report: Humanising Hiring, the largest-scale analysis of candidate experience with AI interviews ever conducted.
What is the most ethical way to measure skills?
The most ethical method is to use structured, science-backed conversations that assess behaviours, competencies and potential with consent and transparency.
Why is skills inference problematic?
Skills inference relies on digital traces such as emails or Slack activity, which can introduce bias, raise privacy concerns and reduce employee trust.
How does ethical AI help with skills measurement?
Ethical AI, such as structured conversational interviews, ensures fairness by using consistent data, removing demographic bias and giving every candidate or employee a voice.
What should HR leaders look for in a skills platform?
Look for transparency, explainability, inclusivity and evidence that the platform measures skills directly rather than guessing from digital behaviour.
How does Sapia.ai support ethical skills measurement?
Sapia.ai uses structured, untimed chat interviews in over 50 languages. Every candidate receives feedback, and skills are measured directly, with consent and transparency.
Walk into any store this festive season and you’ll see it instantly. The lights, the displays, the products are all crafted to draw people in. Retailers spend millions on campaigns to bring customers through the door.
But the real moment of truth isn’t the emotional TV ad, or the shimmering window display. It’s the human standing behind the counter. That person is the brand.
Most retailers know this, yet their hiring processes tell a different story. Candidates are often screened by rigid CV reviews or psychometric tests that force them into boxes. Neurodiverse candidates, career changers, and people from different cultural or educational backgrounds are often the ones who fall through the cracks.
And yet, these are the very people who may best understand your customers. If your store colleagues don’t reflect the diversity of the communities you serve, you create distance where there should be connection. You lose loyalty. You lose growth.
We call this gap the diversity mirror.
When retailers achieve mirrored diversity, their teams look like their customers:
Customers buy where they feel seen – making this a commercial imperative.
The challenge for HR leaders is that most hiring systems are biased by design. CVs privilege pedigree over potential. Multiple-choice tests reduce people to stereotypes. And rushed festive hiring campaigns only compound the problem.
That’s where Sapia.ai changes the equation: Every candidate is interviewed automatically, fairly, and in their own words.
With the right HR hiring tools, mirrored diversity becomes a data point you can track, prove, and deliver on. It’s no longer just a slogan.
David Jones, Australia’s premium department store, put this into practice:
The result? Store teams that belong with the brand and reflect the customers they serve.
Read the David Jones Case Study here 👇
As you prepare for festive hiring in the UK and Europe, ask yourself:
Because when your colleagues mirror your customers, you achieve growth and, by design, inclusion.
See how Sapia.ai can help you achieve mirrored diversity this festive season. Book a demo with our team here.
Mirrored diversity means that store teams reflect the diversity of their customer base, helping create stronger connections and loyalty.
Seasonal employees often provide the first impression of a brand. Inclusive teams make customers feel seen, improving both experience and sales.
Adopting tools like AI structured interviews, bias monitoring, and data dashboards helps retailers hire fairly, reduce screening time, and build more diverse teams.
Organisations invest heavily in their employer brand, career sites, and EVP campaigns, especially to attract underrepresented talent. But without the right data, it’s impossible to know if that investment is paying off.
Representation often varies across functions, locations, and stages of the hiring process. Blind spots allow bias to creep in, meaning underrepresented groups may drop out long before offer.
Collecting demographic data is only step one. Turning it into insight you can act on is where real change and better hiring outcomes happen.
The Diversity Dashboard in Discover Insights, Sapia.ai’s analytics tool, gives you real-time visibility into representation, inclusion, and fairness at every stage of your talent funnel. It helps you connect the dots between your attraction strategies and actual hiring outcomes.
Key features include:
With the Diversity Dashboard, you can pinpoint where inclusion is thriving and where it’s falling short.
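To illustrate the kind of analysis a view like this automates, here is a minimal sketch of per-stage representation in a hiring funnel. The stage names, groups, and counts are assumed for the example; the dashboard itself performs this kind of analysis at scale, with auditability built in.

```python
# Illustrative sketch of per-stage representation in a hiring funnel.
# Stage names, groups, and counts are assumed for this example.
funnel = {
    "Applied":     {"Group A": 600, "Group B": 400},
    "Interviewed": {"Group A": 330, "Group B": 170},
    "Offered":     {"Group A": 70,  "Group B": 30},
}

for stage, counts in funnel.items():
    total = sum(counts.values())
    shares = ", ".join(f"{g}: {n / total:.0%}" for g, n in counts.items())
    print(f"{stage:<12} {shares}")

# A stage where a group's share drops sharply relative to "Applied"
# is where to look for bias creeping in.
```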
It’s also a powerful tool to tell your success story. Celebrate wins by showing which underrepresented groups are making the biggest gains, and share that progress with boards, executives, and regulators.
Powered by explainable AI and the world’s largest structured interview dataset, your insights are fair, auditable, and evidence-based.
Measuring diversity is the first step. Using that data to take action is where you close the Diversity Gap. With the Diversity Dashboard, you can prove where your strategy is working and make changes where it isn’t.
Book a demo to see the Diversity Dashboard in action.