
Introducing InterviewBERT: A world-first algorithm for better interviews

Sapia Labs, our R&D department, has developed a world-first innovation that will deepen our understanding of the contextual meaning of words in written job interviews. Called InterviewBERT, the algorithm combines Google’s BERT model for Natural Language Processing (NLP) with our proprietary dataset of more than 330 million words. BERT, meet Smart Interviewer. Together, they’ll usher in a new generation of pre-employment assessment tools and recruitment software solutions.
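
For readers curious about the mechanics: combining a pretrained language model with a proprietary text corpus generally means continuing to train (fine-tuning) the encoder on that domain data. The sketch below is purely illustrative – it is not Sapia’s actual pipeline – and assumes the open-source Hugging Face transformers library plus a hypothetical CSV of interview answers labelled with a single trait score.

```python
# Illustrative sketch only: fine-tune a pretrained BERT encoder on
# domain-specific interview text. The file name, column names and the
# single "trait_score" target are hypothetical, not Sapia's pipeline.
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader
from transformers import BertTokenizerFast, BertForSequenceClassification

class InterviewDataset(Dataset):
    def __init__(self, texts, scores, tokenizer, max_len=256):
        # Tokenize all answers up front into fixed-length tensors
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.scores = torch.tensor(scores, dtype=torch.float)

    def __len__(self):
        return len(self.scores)

    def __getitem__(self, i):
        return {k: v[i] for k, v in self.enc.items()}, self.scores[i]

df = pd.read_csv("interview_answers.csv")              # hypothetical file
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)                 # regression head

loader = DataLoader(
    InterviewDataset(df["answer"].tolist(), df["trait_score"].tolist(),
                     tokenizer),
    batch_size=16, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = torch.nn.MSELoss()

model.train()
for epoch in range(3):
    for batch, target in loader:
        optimizer.zero_grad()
        out = model(**batch)
        loss = loss_fn(out.logits.squeeze(-1), target)
        loss.backward()
        optimizer.step()
```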

Put simply, InterviewBERT makes Smart Interviewer the most sophisticated conversational AI in the world. Ours is no simple chatbot – already, Smart Interviewer can discover personality traits and communication skills, accurately and reliably, from a candidate’s written responses. With InterviewBERT, Smart Interviewer will learn more about candidates than ever before, faster than ever before. With that speed and accuracy come reductions in the unfairness and bias that plague the hiring process.

Why, and how, are we the first to transform pre-employment assessment technology with BERT?

Through sound AI infrastructure, we have been able to accumulate a vast and accurate dataset. This dataset grows by the minute – we interview a new candidate every 30 seconds – and, coupled with the expertise of our Sapia Labs team, we can assess candidate suitability for a role in milliseconds.

“The smartest companies know that the fairest and most accurate way to assess someone’s suitability for a role is through a structured interview,” our CEO, Barb Hyman, said. “Text increases the accuracy and speed of assessing candidates, while removing the biases that come through voice or video interviews.”


Dr Buddhi Jayatilleke, Chief Data Scientist and head of Sapia Labs, said the team was excited to find that InterviewBERT had such a profound impact on trait accuracy.

“Written language encodes personality signals predictive of ‘fit’,” Dr Jayatilleke said. “The ability to understand people through language has limitless applications, and we are excited to keep inventing more ways to use language data for our customers.”

Dr Jayatilleke said decades of research have established language as a source of truth for personality.

“What our R&D team has proven is just how powerful language data is when you combine it with enormous data volumes and scientific rigour,” he said. “This capability can be used for assessment and for offering personalised career coaching – a game changer for job seekers, universities, and employers.”

Sapia Labs will present its findings from a new research paper, Identifying and Mitigating Gender Bias in Structured Interview Responses, at a SIOP symposium in April.



What is a good ATS actually supposed to do?

According to Aptitude Research, 58% of US companies are currently dissatisfied with their ATS provider.

One in four are actively looking to replace their tech.

This dissatisfaction comes from a phenomenon known as overstacking.

Put simply, companies over-invest in a raft of HR technologies, throwing big dollars into solutions that don’t provide clear benefit or ROI.

The thinking is this: “Everyone has an ATS. If I implement one at my company, I won’t get fired.”

But obviously, as Aptitude’s research shows, most ATSs are not doing what they’re expected to do – they don’t provide enough efficiency, and they don’t solve for things like quality of hire and time to hire, the metrics that CFOs watch most closely.

So what’s the real problem?

Augment your ATS, don’t replace it

The dissatisfaction is due not to an inherent fault in ATS technology, but to a fundamental misunderstanding of what an ATS is supposed to do.

Look at it this way: an ATS is like your laptop computer. It has all of the parts that make up a good (or bad) computer: a CPU, memory, a keyboard, a screen, and so on. In this sense, an ATS is more hardware than software. You need it, but you need more, too.

What you add to your ATS – your computer – is what transforms it into a tool that can produce and extract value. If you bought a computer and tried to create documents on it without installing Microsoft Word, for example, you could hardly blame the computer for the missing functionality. The computer isn’t designed for word processing – it’s the platform that facilitates it.

Therefore, if your ATS is not helping to improve key performance indicators like quality of hire or time-to-fill, the ATS isn’t the problem. The problem is you don’t have the ATS plug-in designed specifically to satisfy those KPIs.

So, without first considering what a good ATS is, and what a good ATS should do, spending big money to replace it will not solve the problem – only delay it.

How to know if your ATS is salvageable

First, you need to ask yourself (and your business) the following questions:

  • Is your HR tech stack actually providing value, and in a way you can prove?
  • What do you actually need your ATS to do, that it isn’t?
  • What are some better, more cost-effective ways to make your ATS work better for you?

By partnering with your existing ATS, Sapia.ai’s smart hiring automation solution can help you achieve 90% completion rates and 90% candidate satisfaction rates – and you can even achieve a time-to-fill of as little as 24 hours.

We’ve even helped one customer, Spark NZ, achieve a near-complete removal of hiring bias.

This isn’t a case of simply throwing good money after bad. It’s about making your ATS into a solution that actually works for you – and in ways that you can prove.

You could replace your tech and call it job done. Maybe you’ll be gone, off to a new business, in the three or four years it will take for the cycle of dissatisfaction to repeat. But that is not a good solution.


What does ‘ethical’ AI actually mean, and how do you pick one?


The discussion on ethical AI is gaining significant momentum. With the increasing use of artificial intelligence (AI) in various industries, there is a growing need to ensure that AI is employed ethically and built with ethical considerations in mind.

We’re going to explore the importance of ethical AI and discuss four key components to consider when integrating AI technology into organizations: fairness, accuracy, explainability, and privacy.

The need for ethical AI

AI offers several benefits, one of which is speed. Automating tasks that were previously performed by humans can save time and resources. However, it is crucial to carefully consider the problems AI is meant to solve.

For example, when addressing the scheduling of interviews, the underlying issue may not be the automation of the process but rather the need to hire and retain the right people. Quality should always be prioritized over mere automation.

Sapia.ai’s AI Smart Interviewer goes beyond speed and automation to find candidates that are properly matched to the needs and values of our customers. For one of our retail customers, this approach has achieved a 50% reduction in churn.

That’s what you stand to gain.

Objectivity and removing bias

One of the primary reasons organizations turn to AI is to introduce objectivity and mitigate human bias. While human bias is a natural aspect of decision-making, it can hinder the identification of talent and result in unfair judgments.

AI can provide a more objective assessment by focusing on relevant data that is not influenced by subjective factors like appearance or body language. It is important to understand that AI should not be the sole decision-maker but rather an input that aids the decision-making process.

Four components of ethical AI

  1. Fairness: It is essential to evaluate whether AI systems exhibit bias. Good AI vendors should provide data that demonstrates fairness, allowing organizations to assess the impact of the tool on equity in terms of race, gender, and broader demographics (a minimal check of this kind is sketched after this list). Using training data that is as close to first-party and proprietary data as possible helps minimize biases inherent in third-party datasets.
  2. Accuracy: AI should provide meaningful and reliable inputs and outputs. Organizations must verify whether the AI system’s output is relevant and can effectively inform decision-making processes. Meaningless or irrelevant outputs can lead to misguided decisions.
  3. Explainability: Transparency and explainability are critical aspects of ethical AI. The ability to understand and explain the decision-making process of AI systems is vital. Candidates, as well as organizations, should be able to comprehend the technology being employed and the factors influencing its decisions.
  4. Privacy: As the importance of data privacy continues to grow, organizations must handle candidate data responsibly. Respecting the sanctity of personal data builds trust. It is crucial to only collect necessary data, comply with data protection regulations like GDPR, and ensure that data is not shared with third parties without consent.
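
One common way to sanity-check the fairness data described in point 1 is the “four-fifths rule”: compare selection rates across demographic groups and flag any group whose rate falls below 80% of the highest group’s. The sketch below is a minimal illustration with hypothetical data, not a description of any vendor’s methodology.

```python
# Minimal illustration of a four-fifths (adverse impact) check.
# The candidate outcomes below are hypothetical.
from collections import Counter

# (demographic_group, was_selected) pairs
outcomes = [("A", True), ("A", False), ("A", True), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", True)]

applied = Counter(group for group, _ in outcomes)
selected = Counter(group for group, chosen in outcomes if chosen)

rates = {g: selected[g] / applied[g] for g in applied}
highest = max(rates.values())

for group, rate in sorted(rates.items()):
    impact_ratio = rate / highest
    verdict = "OK" if impact_ratio >= 0.8 else "potential adverse impact"
    print(f"Group {group}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratio:.2f} -> {verdict}")
```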

Building trust through ethical AI

Trust is the foundation of successful HR and talent acquisition processes. Prioritizing ethical AI contributes to building trust with candidates and creating a positive hiring experience.

Treating data with respect, maintaining data sovereignty, and being transparent about the technology used instills confidence in candidates that their data is handled responsibly.

Ethical AI is not just a buzzword; it is a necessary consideration in today’s AI-driven world. By prioritizing fairness, accuracy, explainability, and privacy, organizations can ensure that AI systems operate ethically and responsibly. Integrating ethical AI practices into HR and talent acquisition processes builds trust, fosters positive cultures, and ultimately leads to better decision-making and outcomes.


A CV tells you nothing

CVs are still the most frequently used data source

It seems obvious, yet even today the CV is the key data source used in screening and hiring. For graduate recruitment, your degree, your university, and your university results are the key filters used in screening.

It’s already been four years since Ernst & Young removed university degree classification as an entry criterion, because there is ‘no evidence’ it equals success. Students are savvy, and they know how competitive it is to secure a top graduate job. In the UK, the Higher Education Degree Datacheck (Hedd) surveys students and graduates about degree fraud. The annual results are fairly consistent: about a third of people embellish or exaggerate their academic qualifications when applying for jobs.

We analysed ~13,000 CVs, received over a five-year period, all for similar roles at a large sales-led organisation. From this dataset, 2,660 candidates were hired and around 9,600 were rejected. We wanted to test how meaningful the CV is as a data source for hiring decisions.

Can you pick which group was hired?

Look at these two word clouds. One represents the words extracted from the CVs of those who were hired, and the other from those who were rejected. Which would you pick?

A word cloud depicts the relative frequency of words appearing in a set of resumes by the size of the words: words in a larger font appear more often than those in a smaller font. The fact that the two word clouds show no significant differences in which words appear large or small means the two groups are indistinguishable based on the words used in their CVs.
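
For readers who want to reproduce this kind of comparison on their own data, the sketch below tallies word frequencies in two sets of CV text and reports how much the top terms overlap. It is illustrative only: the file names are hypothetical, and this is not our actual analysis pipeline.

```python
# Illustrative comparison of word frequencies in two groups of CVs.
# "cvs_hired.txt" and "cvs_rejected.txt" are hypothetical files, each
# containing the concatenated text of one group's CVs.
import re
from collections import Counter

STOPWORDS = {"the", "and", "to", "of", "in", "a", "for", "with", "on", "at", "as", "i"}

def top_terms(path, n=25):
    """Return the n most common non-stopword terms in a text file."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    return dict(Counter(w for w in words if w not in STOPWORDS).most_common(n))

hired = top_terms("cvs_hired.txt")
rejected = top_terms("cvs_rejected.txt")

shared = sorted(set(hired) & set(rejected))
print(f"{len(shared)} of the top 25 terms appear in both groups")
for term in shared:
    print(f"{term:<15} hired={hired[term]:>6}  rejected={rejected[term]:>6}")
```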

The Bottom Line? The CV is not a reliable data source to guide hiring decisions.

P.S. If you had picked Group 2, you would have been right.

Not only does the CV not matter; it turns out prior experience doesn’t count for much either.

Josh Bersin, the premier topic expert in our space, articulates how hard it is to predict performance through traditional testing in this way:

“Managers and HR professionals use billions of dollars of assessment, tests, simulations, and games to hire people – yet many tell me they still get 30-40% of their candidates wrong.”

And now the Harvard Business Review, the definitive publication for all things HR and leadership, has shared research showing that prior experience is also a poor predictor of performance.

So what signals DO tell you something about whether a person is a good fit for your role, or for your organisation?

Whether their background is similar to yours, or to the person in your team who is a star? Whether they have played a competitive sport at a senior level (because that’s a good indicator of drive and resilience)? Or maybe whether they are a different ethnicity, gender, or educational background from the rest of your team, because, you know … diversity is meant to be good for business!

The list of performance ‘signals’ is as long as the number of people (interviewers) you have interviewing new hires. It’s a deeply personal decision, like choosing a partner, and we all feel like we know what to look for. But we don’t.

And no amount of interview or bias training or even interview experience is ever going to make us better at these decisions.

Experience does matter, but it’s a different type of experience. It’s the experience that comes from doing something ten times, 10,000 times, a million times, with feedback on what worked and what didn’t, and in what context – and, of course, only if one could remember all of that.

Think of a different context: the grading of an exam. If you ask your teenager or university-aged son or daughter what would make them trust an exam result, they would likely say:
1. Consistency
2. Anonymity
3. Data-driven assessment, i.e. some kind of formula that assures consistency and fairness
4. The experience of the assessor

The fact is … just as no human driver will ever match the learning capability and velocity of a self-driving Tesla, no assessor will ever be as good as a machine that’s done it a million times. The same applies to AI in recruitment.

No human recruiter will ever match the power, smarts and anonymity presented by a machine learning assessment algorithm.

We would love to see you join the conversation on LinkedIn!


Suggested Reading:

https://sapia.ai/blog/everybody-lies/
