
2023: A Bubbly Year In Hiring đŸ„‚


In 2023, the world embraced the transformative power of artificial intelligence (AI) more than ever before. With ChatGPT landing in people’s pockets at the end of last year, we’ve seen organizations’ interest and trust in AI solutions skyrocket.

The year 2023 stands as a testament to the potential of AI in creating an enriched, unbiased, and empowering hiring experience.

We’ve compiled the highlights that have made this year our most exciting so far – click the button below to immerse yourself in the year that’s been.


The Sapia.ai Difference: Our Candidate Experience 

Numbers speak louder than words. And our 2023 candidate numbers resound with success. Almost 1.4 million candidates shared their stories this year in a Chat Interview, giving every one of them a fair and equal chance at success.

450,000 candidates completed a Video Interview, giving them the chance to share more about who they are, and enabling hiring teams to ‘meet’ shortlisted candidates at a time that suited them.

Of the candidates who started a Chat Interview, 91.8% completed it, showing that candidates appreciate the opportunity to open up and bring their real and best selves when applying for a job.

The impact of allowing candidates to interview, and to receive personalised feedback they can use in their lives and careers, is huge.

Not only did 96% of candidates find the feedback they received useful, but the follow-on effect for our customers’ brands is a testament to the visionary talent leaders who see the benefits of investing in their candidate experience.

79% employer brand advocacy and 84% consumer brand advocacy show how a candidate’s experience with AI, when curated in a respectful, low-pressure way, can positively impact both your employer and consumer brands.

Reducing Bias: A Key Achievement in 2023

One of the pillars of our mission that we’re proudest of is our unwavering commitment to reducing bias in the hiring process. In 2023, we’ve made substantial strides in ensuring that every candidate gets a fair chance, regardless of their background, making the hiring process more equitable and just.

In March of 2023, independent research was published that underscored the transformative impact of Sapia.ai in workplaces. The study highlighted how our platform significantly reduces the gender gap in technology hiring, a traditionally male-dominated sector. This research serves as a testament to our commitment to fostering diversity and inclusion, and our unwavering pursuit to level the playing field in the recruitment process.

The strides we’ve made with Sapia.ai are not just about challenging the status quo; they are about shaping a future where opportunities are boundless and talent is recognized irrespective of gender, ethnicity, age, or anything else about a person other than their potential to perform in the role.


Customer Impact: The True Measure of Our Success

Our customers’ satisfaction is the heartbeat of Sapia.ai. And in 2023, the pulse is strong. Our customer testimonials echo our commitment to excellence in the hiring process. They tell a story of transformed hiring experiences, saved time and resources, and how Sapia.ai has been a game-changer for businesses.

You can read our testimonials in our Year in Review highlight here.

By automating the screening and assessment of candidates, we’ve made a remarkable difference in streamlining the recruitment process. This year, we’re proud to share that we’ve saved each recruiter and hiring manager who uses Sapia.ai almost 30 hours per month. This has not only led to improved efficiency but has allowed hiring teams to focus more on their people. The time saved translates to increased productivity, enhanced candidate experience, and a more personable approach to hiring.

New Customer Partnerships: Strengthening Our Reach in 2023

This year, we had the pleasure of welcoming some esteemed brands to our growing network. These organizations are not just our valued clients; they are visionary leaders who embrace Responsible AI to make the hiring process better and more equitable.

We’re proud to collaborate with Kmart and Target Australia, global entities renowned for their commitment to diversity and fairness. Starbucks Australia, another of our prestigious customers, is making strides in efficient, responsible hiring practices with our platform. We’re also excited about our global partnerships with Joe & the Juice, a dynamic and innovative brand with a presence across the UK, Europe, and the US; Edge, an American organization known for its forward-thinking leadership; and Ecentric Payments, a pioneer in the South African digital payment solutions sector.

These partnerships demonstrate our shared vision to transform the hiring landscape, making it fairer and more efficient for all.


Expanding Our Integrations in 2023

This year, we also made meaningful strides in strengthening our partnerships with other major organizations in the HR space.

We’re thrilled to announce the successful completion of our integrations this year with SAP SuccessFactors, LiveHire, iCIMS, and Avature.

These partnerships have been instrumental in expanding our reach and capabilities. Through these integrations, we can now offer a seamless, comprehensive experience encompassing every stage of the hiring process for users of these platforms.

This simplifies the recruitment workflow and ensures a smooth, hassle-free journey for both candidates and hiring teams.

Product Innovations of 2023

In April we introduced our multilingual capability, starting with French and Spanish and rapidly expanding to Danish, Dutch, Finnish, German, Italian, and Swedish. This significant expansion in our language offering facilitates easier communication for candidates who speak languages other than English, fostering inclusivity and paving the way for a more diverse talent pool. The ability to accommodate various languages is instrumental in meeting our goal of making hiring equitable and accessible on a global scale.

In July we uplifted the review section of our Video Interviews, enabling a panel-style review process with multi-assessor capability. With new permission sets that ensure hiring managers can only see their own ratings, this feature helps reduce the possibility of conformity bias, encouraging hiring managers to rate candidates solely on their own perspective rather than being influenced by others’ opinions.
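As a purely illustrative sketch (the names and structures below are hypothetical, not Sapia.ai’s API), the idea can be pictured as a permission filter that returns only the requesting assessor’s own rating while a panel review is in progress:

  # Hypothetical example of a permission filter for panel reviews.
  # Each assessor can retrieve only their own rating for a candidate,
  # so no one is anchored by a colleague's score.
  from typing import Dict

  panel_ratings: Dict[str, Dict[str, int]] = {
      "candidate-123": {"assessor_a": 4, "assessor_b": 2},
  }

  def visible_ratings(candidate_id: str, assessor_id: str) -> Dict[str, int]:
      """Return only the requesting assessor's rating for the candidate."""
      candidate_ratings = panel_ratings.get(candidate_id, {})
      if assessor_id in candidate_ratings:
          return {assessor_id: candidate_ratings[assessor_id]}
      return {}

  print(visible_ratings("candidate-123", "assessor_a"))  # {'assessor_a': 4}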

Also in July, we achieved a significant milestone with the development of our first proprietary large language model, which played a crucial role in the creation of our innovative Artificially Generated Text Detector. This language model, built with advanced machine-learning techniques, represents a significant leap forward in our commitment to innovation in AI. It can differentiate human-like text from artificially generated text, thanks to our large collection of human answers.

The detector can efficiently and effectively scan for artificial content created with tools like ChatGPT. This innovative feature has been instrumental in ensuring the authenticity of our Chat Interview, reducing plagiarism by an astounding 78% almost overnight. By curbing plagiarism, we ensure each candidate has a fair opportunity to showcase their unique skills and competencies, and we preserve the integrity and fairness of our platform throughout the recruitment process.
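For readers curious about the general shape of the problem, here is a minimal, purely illustrative sketch of a human-versus-AI text classifier. It is a toy example using scikit-learn and invented training answers; Sapia.ai’s production detector is a proprietary large language model and works differently.

  # Toy sketch only: classify interview answers as human-written (1) or AI-generated (0).
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  # Invented training examples for illustration.
  answers = [
      "I stayed calm, split the queue with a teammate and apologised to each customer.",
      "In my role, I consistently leveraged synergies to deliver customer-centric outcomes.",
  ]
  labels = [1, 0]  # 1 = human candidate, 0 = generated by a tool like ChatGPT

  detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
  detector.fit(answers, labels)

  # Probability that a new answer was written by a human.
  new_answer = "Honestly, the first week was chaotic, but I asked a lot of questions."
  print(detector.predict_proba([new_answer])[0][1])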

In October we launched Talent Hub, a game-changing feature that has accelerated the shortlisting process for recruiters and hiring managers. This tool provides a central hub for all Sapia.ai insights and data, reducing the clicks to review and progress a candidate by an impressive 75%. Now, the entire recruitment process is faster, smoother, and more efficient than ever before.

This month we released SMS reminders, which will help to boost our already industry-leading completion rates, enabling our customers to assess a broader talent pool.

Lastly, one of the most exciting innovations to have kicked off in 2023 was the proof of concept of Phai, our career site chatbot powered by generative AI.

Our customers have been crying out for us to bring the power of chat earlier in the candidate journey, connecting with candidates on a personalized level from the moment they land on the career site. Phai is the first step in this direction, with general availability slated for Q1 2024 and a roadmap of functionality planned that will elevate the candidate experience even further.

To interact with Phai, simply click the chat button on the bottom left of your browser window. 

Global Recognition: Our Scientific Research

In 2023, our efforts in scientific research and practical application of AI garnered global recognition, amplifying our commitment to innovation and responsible AI. We were profoundly honored to have had the opportunity to present four research papers at the 2023 Society for Industrial and Organizational Psychology (SIOP) annual conference, the leading global event for I/O psychologists.

This recognition underscores our dedication to rigorous scientific research and our quest for knowledge, driving us to continue exploring new frontiers. We are also thrilled to announce that six research papers from Sapia.ai have been accepted for presentation at the 2024 SIOP conference, along with a poster at the NVIDIA GTC conference, one of the leading venues for showcasing generative AI innovations.

Moreover, our work has been acknowledged by the National AI Centre in Australia, emphasized in a recent paper on the country’s expanding AI ecosystem. This recognition serves as a testament to our impactful contributions to the Responsible AI ecosystem in Australia and beyond. It also highlights our pivotal role in shaping the future of AI and its practical applications, further reinforcing our commitment to enhancing the recruitment process using ethical AI.

Looking Forward to 2024

As we reflect on an extraordinary 2023, we’re filled with immense gratitude for our partnerships, innovations, and recognitions that have shaped the year.

Leaping into 2024, we have some incredibly exciting plans shaping up that will elevate the experience for candidates and hiring teams and move us closer to achieving our mission of using ethical AI to make hiring fairer, more efficient, and more effective.



Neuroinclusion by design. Not by exception.

Why neuroinclusion can’t be a retrofit and how Sapia.ai is building a better experience for every candidate.

In the past, if you were neurodivergent and applying for a job, you were often asked to disclose your diagnosis to get a basic accommodation – extra time on a test, maybe the option to skip a task. That disclosure often came with risk: of judgment, of stigma, or just being seen as different.

This wasn’t inclusion. It was bureaucracy. And it made neurodiverse candidates carry the burden of fitting in.

We’ve come a long way, but we’re not there yet.

Shifting from retrofits to inclusive-by-design

Over the last two decades, hiring practices have slowly moved away from reactive accommodations toward proactive, human-centric design. Leading employers began experimenting with:

  • Sharing interview questions in advance

  • Replacing group exercises with structured simulations

  • Offering a variety of assessment formats

  • Co-designing assessments with neurodiverse candidates

But even these advances have often been limited in scope, applied to special hiring programs or specific roles. Neurodiverse talent still encounters systems built for neurotypical profiles, with limited flexibility and a heavy dose of social performance pressure.

Hiring needs to look different.

Insight 1: The next frontier of hiring equity is universal design

Truly inclusive hiring doesn’t rely on diagnosis or disclosure. It doesn’t just give a select few special treatment. It’s about removing friction for everyone, especially those who’ve historically been excluded.

That’s why Sapia.ai was built with universal design principles from day one.

Here’s what that looks like in practice:

  • No time limits — Candidates answer at their own pace
  • No pressure to perform — It’s a conversation, not a spotlight
  • No video, no group tasks — Just structured, 1:1 chat-based interviews
  • Built-in coaching — Everyone gets personalised feedback

It’s not a workaround. It’s a rework.

Insight 2: Not all “friendly” methods are inclusive

We tend to assume that social or “casual” interview formats make people comfortable. But for many neurodiverse individuals, icebreakers, group exercises, and informal chats are the problem, not the solution.

When we asked 6,000 neurodiverse candidates about their experience using Sapia.ai’s chat-based interview, they told us:

“It felt very 1:1 and trustworthy
 I had time to fully think about my answers.”

“It was less anxiety-inducing than video interviews.”

“I like that all applicants get initial interviews which ensures an unbiased and fair way to weigh-up candidates.”

Insight 3: Prediction ≠ Inclusion

Some AI systems claim to infer skills or fit from resumes or behavioural data. But if the training data is biased or the experience itself is exclusionary, you’re just replicating the same inequity with more speed and scale.

Inclusion means seeing people for who they are, not who they resemble in your data set.

At Sapia.ai, every interaction is transparent, explainable, and scientifically validated. We use structured, fair assessments that work for all brains, not just neurotypical ones.

Where to from here?

Neurodiversity is rising in both awareness and representation. However, inclusion won’t scale unless the systems behind hiring change as well.

That’s why we built a platform that:

  • Doesn’t rely on disclosure

  • Removes ambiguity and pressure

  • Creates space for everyone to shine

  • Measures what matters, fairly

Sapia.ai is already powering inclusive, structured, and scalable hiring for global employers like BT Group, Costa Coffee and Concentrix. Want to see how your hiring process can be more inclusive for neurodivergent individuals? Let’s chat. 


Skills Measurement vs Skills Inference – What’s the Difference and Why Does It Matter?

There’s growing interest in AI-driven tools that infer skills from CVs, LinkedIn profiles, and other passive data sources. These systems claim to map someone’s capability based on the words they use, the jobs they’ve held, and patterns derived from millions of similar profiles. In theory, it’s efficient. But when inference becomes the primary basis for hiring or promotion, we need to scrutinise what’s actually being measured and what’s not.

Let’s be clear: the technology isn’t the problem. Modern inference engines use advanced natural language processing, embeddings, and knowledge graphs. The science behind them is genuinely impressive. And when they’re used alongside richer sources of data, such as internal project contributions, validated assessments, or behavioural evidence, they can offer valuable insight for workforce planning and development.

But we need to separate the two ideas:

  • Skills Measurement: Directly observing or quantifying a skill based on evidence of actual performance. 
  • Skills Inference: Estimating the likelihood that someone has a skill, based on indirect signals or patterns in their data. 

The risk lies in conflating the two.

The Problem Isn’t Inference of Skills. It’s the Data Feeding It

CVs and LinkedIn profiles are riddled with bias, inconsistency, and omission. They’re self-authored, unverified, and often written strategically – for example, to enhance certain experiences or downplay others in response to a job ad. 

And different groups represent themselves in different ways. Ahuja (2024) showed, for example, that male MBA graduates in India tend to self-promote more than their female peers. Something as simple as a longer LinkedIn ‘About’ section becomes a proxy for perceived competence.

Job titles are vague. Skill descriptions vary. Proficiency is rarely signposted. Even where systems draw on internal performance data, the quality is often questionable. Ratings tend to cluster (remember the year everyone got a ‘3’ at your org?) and can often reflect manager bias or company culture more than actual output.

Sophisticated ≠ Objective

The most advanced skill inference platforms use layered data: open web sources like job ads and bios, public databases like O*NET and ESCO, internal frameworks, even anonymised behavioural signals from platform users. This breadth gives a more complete picture, and the models powering it are undeniably sophisticated.

But sophistication doesn’t equal accuracy.

These systems rely heavily on proxies and correlations, rather than observed behaviour. They estimate presence, not proficiency. And when used in high-stakes decisions, that distinction matters.

Transparency (or Lack Thereof)

In many inference systems, it’s hard to trace where a skill came from. Was it picked up from a keyword? Assumed from a job title? Correlated with others in similar roles? The logic is rarely visible, and that’s a problem, especially when decisions based on these inferences affect access to jobs, development, or promotion.

Presence ≠ Proficiency

Inferred skills suggest someone might have a capability. But hiring isn’t about possibility. It’s about evidence of capability. Saying you’ve led a team isn’t the same as doing it well. Collecting or observing actual examples of behaviour allows you to evaluate someone’s true competence at a claimed skill. 

Some platforms try to infer proficiency, too, but this is still inference, not measurement. No matter how smart the model, it’s still drawing conclusions from indirect data.

By contrast, validated assessments like structured interviews, simulations, and psychometric tools are designed to measure. They observe behaviour against defined criteria, use consistent scoring frameworks (like Behaviourally Anchored Rating Scales, or BARS), and provide a transparent, defensible basis for decision-making. In doing this, the level or proficiency of a skill can be placed on a properly calibrated scale. 

But here’s the thing: we don’t have to choose one over the other.

A Smarter Way Forward: The Hybrid Model

The real opportunity lies in combining the rigour of measurement with the scalability of inference.

Start with measurement
Define the skills that matter. Use structured tools to capture behavioural evidence. Set a clear standard for what good looks like. For example, define Behaviourally Anchored Rating Scales (BARS) when assessing interviews for skills. Using a framework like Sapia.ai’s Competency Framework is critical for defining what you want to measure. 

Layer in inference
Apply AI to scale scoring, add contextual nuance, and detect deeper patterns that human assessors might miss, especially when reviewing large volumes of data.

Anchor the whole system in transparency and validation
Ensure people understand how inferences are made by providing clear explanations. Continuously test for fairness. Keep human oversight in the loop, especially where the stakes are high. More information on ensuring AI systems are transparent can be found in this paper.

This hybrid model respects the strengths and limits of both approaches. It recognises that AI can’t replace human judgement, but it can enhance it. That inference can extend reach, but only measurement can give you higher confidence in the results.
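To make the division of labour concrete, here is a minimal sketch in Python. It is purely illustrative and uses invented names and thresholds rather than Sapia.ai’s implementation: BARS-measured scores drive the decision, while inferred likelihoods only flag where further structured assessment is needed.

  # Illustrative only: measurement stays primary, inference supports it.
  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class SkillEvidence:
      skill: str
      bars_score: Optional[float]   # measured: 1-5 on a Behaviourally Anchored Rating Scale
      inferred_likelihood: float    # inferred: 0-1 estimate from CV or profile signals

  def decision_signal(evidence: SkillEvidence) -> str:
      """Use measured evidence for decisions; treat inference as a prompt to verify."""
      if evidence.bars_score is not None:
          return f"{evidence.skill}: measured at {evidence.bars_score}/5 (decision-grade evidence)"
      if evidence.inferred_likelihood >= 0.7:   # invented threshold
          return f"{evidence.skill}: inferred only ({evidence.inferred_likelihood:.0%}), verify with a structured assessment"
      return f"{evidence.skill}: insufficient evidence, do not assume proficiency"

  print(decision_signal(SkillEvidence("Teamwork", bars_score=4.0, inferred_likelihood=0.82)))
  print(decision_signal(SkillEvidence("Stakeholder management", bars_score=None, inferred_likelihood=0.75)))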

The Bottom Line

Inference can support and guide, but only measurement can prove. And when people’s futures are on the line, proof should always win.

References

Ahuja, A. (2024). LinkedIn profile analysis reveals gender-based differences in self-presentation among Indian MBA graduates. Journal of Business and Psychology.

Making Healthcare Hiring Human with Ethical AI

Hiring for care is unlike any other sector. Recruiters are looking for people who can bring empathy, resilience, and energy to the most demanding human roles. Whether it’s dental care, mental health, or aged care, new hires are charged with looking after others when they’re most vulnerable. The stakes are high. 

Hiring for care is exactly where leveraging ethical AI can make the biggest impact.

Hiring for the traits that matter

The best carers don’t always have the best CVs.

That’s why our chat-based AI interview doesn’t screen for qualifications. It screens for the skills that matter when caring for others – the traits that define a brilliant care worker, things like:

Empathy, Self-awareness, Accountability, Teamwork, and Energy. 

The best way to uncover these traits is through structured behavioural science, delivered through an experience that allows candidates to open up: giving them space to provide real-life, open-text answers, with no time pressure or video stress. Then, our AI picks up the signals that matter, free from any demographic data or bias-inducing signals.

Candidates’ answers to our structured interview questions aren’t simply ticking boxes. They’re a window into how someone shows up under pressure. And they’re helping leading care organisations hire people who belong in care – and who stay.

Inclusion, built in

Inclusivity should be a core foundation of any talent assessment, and it’s a fundamental requirement for hirers in the care industry. 

When healthcare hirers use chat-based AI interviews, designed to be inclusive for all groups, candidates complete their interviews when and where they choose, without the bias traps of face-to-face or phone screening. There are no accents to judge, no assumptions, just their words and their story.

And it works:

  • 91.8% of carer candidates complete their interviews
  • Carer candidates report 9/10 Candidate Satisfaction with their interview experience 
  • 80% of candidates would recommend others to apply 
  • Every candidate receives personalised feedback, regardless of the outcome

Drop-offs are reduced, and engagement & employer brand advocacy go up. Building a brand that candidates want to work for includes providing a hiring experience that candidates want to complete. 

Real outcomes in care hiring

Our smart chat already works for some of the most respected names in healthcare and community services. Here’s a sample of the outcomes that are possible by leveraging ethical AI and a validated scientific assessment, wrapped in an experience that candidates love:

Anglicare – a leading provider of aged care services
  • Time-to-offer dropped from 40+ days to just 14
  • Candidate pool grew by 30%
  • Turnover dropped by 63%
Abano Healthcare – Australasia’s largest dental support organisation
  • 1,213 recruiter hours saved in the first month (67 hours per individual hiring team member)
  • $25,000 saved in screening and interviewing time
Berry Street – a not-for-profit family and child services organisation
  • Time-to-hire down from 22 to 7 days
  • 95.4% of candidates completed their chat interviews

A smarter way to hire

The case study tells the full story of how Sapia.ai helped Anglicare, Abano Healthcare, and Berry Street transform their hiring processes by scaling up, reducing burnout, and hiring with heart. 

Download it here:
