It scares me sometimes to think about the big decisions I’ve made on gut feel, and will probably continue to make on instinct alone.
Personally, I would love to be armed with meaningful data and insights whenever I make important life decisions: what’s the maximum price I should pay for that house on the weekend, who to partner with, who to work for, and who to hire into my team. Data that helped me see a bigger picture, or another perspective, would be very valuable. Most of those decisions involve so much information asymmetry that they feel even riskier. Sure, I could check Glassdoor when choosing my next job, but it comes with huge sample bias and not much science behind it.
So why is there still an (almost) universal blind acceptance that these decisions are best entrusted to gut feel? Especially given the facts show we are pretty crap at making good ‘gut’ based decisions.
I’m one of those people that believe in the power of AI — to remove that asymmetry, to dial down the bias, to empower me with data to make smarter decisions!
At a recent HR conference, a quick pulse around the room confirmed there is high curiosity and appetite to understand AI. What we’re missing is clarity about the opportunities and what success looks like from using it. And the change management exercise that comes with introducing data and technology into a previously entirely human-driven process is daunting.
The best human resources AI is not about taking the human out of hiring and culture decisions. Far from it. It’s about providing meaningful data to help us make better decisions faster.
Having worked in the ‘People and Culture’ space for a while, I know building trust in how the organisation makes decisions — especially people decisions — is hard in the absence of data. Yet we all know that transparency builds trust. So how can you build that trust through transparency when the decision-maker is a human, and humans make their decisions in closed rooms and private discussions?
Remember that feeling when the recruiter calls up and says you weren’t a good fit? Who feels great about that call? A total black-box cop-out response!
It doesn’t have to be this way, and the faster we can get to better decision making the better. Seven months ago, I joined a team of data scientists who had spent the prior three years building technology that relies on AI to work its magic and equip recruiters with meaningful and actionable insights when hiring.
I’m no data scientist, so I have had to learn the ins and outs of our AI pretty fast. And because our technology is at work in the people space, I’m learning how to ensure the AI is safe and fair, and that our customers trust it, and us, to do the right thing with it.
If we reduce it to its core process, a machine learning algorithm tries to improve its predictions of an outcome based on the input data it receives. In some instances, such as deep learning, the algorithm is loosely modelled on the human brain’s neural networks, figuring out the patterns between the data inputs and the data outputs.
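That core process can be sketched in a few lines. This is a toy illustration only, not Sapia’s model: a single-weight “algorithm” improves its predictions by repeatedly nudging its weight to reduce the error on the (input, output) pairs it sees.

```python
# Toy sketch of the machine-learning core loop (illustrative, not Sapia's model).
# The algorithm adjusts a weight so its predictions of the output improve
# as it repeatedly sees (input, output) pairs.

def train(data, steps=200, lr=0.05):
    """Fit y ~ w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            pred = w * x               # current prediction
            grad = 2 * (pred - y) * x  # gradient of (pred - y)^2 w.r.t. w
            w -= lr * grad             # nudge w to reduce the error
    return w

# The hidden pattern is y = 2x; the machine recovers it from examples alone.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)
print(round(w, 2))  # close to 2.0
```

Real models have millions of weights rather than one, but the principle is the same: the pattern is learned from the data, which is exactly why the quality of that data matters so much.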
Because it has no feelings, it can be engineered to be free of the emotional biases humans bring to these critical decisions. Machines are also more malleable learners, and far faster at it, which matters more than ever as roles change dynamically and industries are disrupted.
Our team plays in the predictive analytics for recruitment space. What this means is our AI seeks out the lead indicators of job success: the correlating factors between values, personality and job performance. We all intuitively know that behaviours drive performance, but we struggle to assess for them consistently well.
Our job is to augment your intelligence and ability to make the right decision. By knowing how people treat others, what drives them, and their values, you become better informed about the real DNA of a person and how they might function in your team.
A powerful motivator to use AI is to build confidence and trust in the process from both candidates and people leaders by dialling down the human element (getting rid of the bias) and revealing the patterns for success. Less room for bias = more fairness for candidates = more diverse hiring. Key to this is we don’t look at any personal information — the machine doesn’t know or care about your age, gender, colour or educational background.
For our customers, having this data is empowering and helps them make smart decisions. And the people affected by those decisions can feel relieved that they were considered on their merits, not on someone’s gut feel.
But if I have to choose between trusting biased humans and the (sometimes) biased machines they create, I know which one I would trust more. At least with a machine, you can actually test for the bias, remove it, and retrain.
Candidate experience: Everybody’s talking about it, few companies are actively investing in it.
According to a Sapia-sponsored Aptitude Research report from earlier this year, 68% of companies admit they have no plans to address the interview portion of their candidate experience throughout 2022 and 2023. Despite this, 50% of these companies know they’re losing talent due to their application and interview processes. What’s more, according to Forbes, companies that prioritize candidate experience can see their average quality-of-hire improve by 70%.
Why the unwillingness to address such an important facet of recruitment? In most cases, the teams responsible for enacting change to candidate experience are steeped in the everyday throes of talent acquisition, and don’t have time right now to examine their processes. Statistically speaking, this is probably where you’re at. Totally understandable; the 2023 labor market is tough. If your house is on fire, you’re probably not focused on how well you treat the visitors at your doorstep.
Recently, on our Pink Squirrels! podcast, we sat down with Lars van Wieren, CEO at Starred, a candidate experience measurement tool. Lars offered some practical tips on getting started with candidate experience: Benchmarking it, measuring it at different stages of the process, and setting your business up to review and act on the findings.
As the saying goes, what gets measured, gets managed. Lars recommends starting with a basic benchmark for your candidate experience. This need not be difficult, and you don’t necessarily need a fancy tool to start gathering these data.
Simply ask your candidates: How likely are you to recommend our company to a friend or colleague? This is, in essence, a Net Promoter Score (NPS) question, and the scale (0 to 10) should reflect that.
Ideally, you should be gathering feedback on your candidate experience at each stage of the application process, but to begin with, ask the question at the very end. And to get the best, least-biased data, you need to ask all applicants, whether or not they’ve been shortlisted or hired – if you only ask those who have been shortlisted, or the few people who have been successful, you’re likely to get flattering results that don’t reflect your true candidate experience.
The NPS tracking question is easily configurable and embeddable into automated emails, meaning it can be set up through your ATS with little additional work.
When you begin to analyze the data, keep things simple: Dump the responses into a spreadsheet and calculate your NPS – the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6). If your score is below 0, you’ve got work to do; if it’s 0 to +30, you’re doing well. Above +30, well done!
(If you’re reading this, it’s probably not likely that you’ll get a 30+ score on the first go-round. That’s okay – the goal is to find out how much work you’ve got to do.)
The benefit of benchmarking NPS is that it gives your business a single, easy-to-understand proxy for the health of your candidate experience. Once you’ve got the number, you can start to make small changes to your application experience and see how that affects the overall number.
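If you’d rather script the spreadsheet step, the standard NPS calculation is a few lines. This is a minimal sketch that assumes your survey export is simply a list of 0–10 responses:

```python
# Hedged sketch: computing a Net Promoter Score from candidate survey responses.
# Standard NPS: % promoters (9-10) minus % detractors (0-6); passives (7-8)
# count toward the total but neither bucket. Scores range from -100 to +100.

def nps(scores):
    if not scores:
        raise ValueError("no responses yet")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# e.g. a month of exported survey responses (hypothetical data)
responses = [10, 9, 8, 7, 6, 3, 9, 10, 2, 8]
print(nps(responses))  # 4 promoters, 3 detractors, 10 responses -> prints 10
```

Run monthly against your latest export and you have the trendline described below, without any extra tooling.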
From there, you can make small, targeted changes to your application flow and watch how each one moves the score.
At the same time, you might consider looking at your candidate abandonment rate – we’ve got a post on measuring and improving it here. Candidate experience scores and abandonment rates are almost always linked. Improve one, you improve the other.
Our joint report with Aptitude Research uncovered some interesting data on the importance of two-way feedback between candidates and employers.
Gathering and acting on mutual feedback:
Feedback is critical. And, to make it as accurate and indicative as possible, your feedback should ideally be gathered at each stage of the application process: Application, screening, interviewing, assessment, offer, and rejection.
By doing this, you’ll know exactly where your candidate experience is lacking – and you can make fast, effective changes.
Multi-step candidate experience feedback may not be easy to do with your current setup, but it is relatively simple to configure if your ATS/chosen software solution has the capability.
Generally speaking, the task of improving candidate experience is that of your entire talent acquisition or recruitment team. But it’s a good idea to appoint an internal candidate experience champion – someone who is responsible for collating the benchmark data and regularly reporting on it.
What’s the reporting cadence? It depends on the number of applications you receive and the length of your application process, but a monthly score update works best for most teams. Monthly measurement will likely give you an insightful trendline.
While the task of improving candidate experience is never done, it needn’t require an overhaul to your entire recruitment business. Start small, make iterative improvements over time, and focus on making at least one more candidate smile.
Tristan Harris, the ex-Googler turned evangelist for humane AI and featured in The Social Dilemma, shares the stunning fact that 70% of the videos we watch on YouTube, on average 60 minutes a day, come from algorithmic recommendations.
His point? That choice is illusory.
When a machine can understand you better than you understand yourself, you lose your power. You also lose choice.
The now-famous historian Yuval Noah Harari has written and spoken about the threat to humanity from the type of artificial intelligence that knows us better than we know ourselves.
This is the worst kind of AI: you think you are in control of those choices, but you can’t even see what’s happening. Google, Facebook and many other Silicon Valley behemoths have mastered this. They own this space.
Not all AI is weaponized against you, though.
The predictive technology that underpins tools like Spotify and Netflix enriches people’s enjoyment of music and movies. It exists not to weaponize your choices, not to be used against you.
With this kind of AI, we humans move from being hacked to hacking ourselves.
Finally, AI that gives you back your human agency.
300,000 candidates for jobs ranging from retail, sales, call centre and care roles to graduates and HR managers … this is how they feel about the use of AI that is designed for their benefit.
Imagine what the world would look like if everyone had better self-awareness.
Today not knowing yourself carries an even greater cost.
To keep up to date on all things “Hiring with Ai” subscribe to our blog!
Finally, you can try out Sapia’s Chat Interview right now, or leave us your details to get a personalised demo
As humans, we often don’t trust what we can’t see and we can’t trust what we don’t understand.
Transparency and explainability are fundamental ingredients of trust, and there is plenty of research to show that high trust relationships create the most productive relationships and cultures.
We are committed to building ethical and engaging assessments. This is why we have taken the path of a text chat with no time pressure. We allow candidates to take their own time, reflect, and submit answers in text format. Apart from the well-documented errors associated with analysing facial expressions, we believe that technologies such as voice-to-text can add an extra layer of errors. We also refrain from scraping publicly available data such as LinkedIn profiles, nor do we use behavioural data like how fast a candidate completes or how many corrections they make. Lastly, we strictly use candidates’ final submitted answers and nothing else.
Our approach has led to candidates loving the text experience, as measured by the feedback they leave and NPS.
No demographic details are collected from candidates or used to influence their ranking. Only the candidates’ answers to relevant interview questions are analysed by our scientifically validated algorithm to assess their fit for the role.
Biases can occur in many different forms. Algorithms and AI learn according to the profile of the data we feed them. If the data they learn from is taken from CVs, they will only amplify our existing biases. Only clean data, like the answers to specific job-related questions, can give us a truly bias-free outcome. Potential biases in training data can be tested for and measured, so we continuously test the data that trains the machine for known biases, such as those between gender and race groups, and correct even the slightest bias we find. These tests can be extended to include other groups of interest where those groupings are available, such as English As a Second Language (EASL) users.
Sapia uses a suite of such tests, and more.
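As an illustration only (this is not Sapia’s actual test suite), one widely used fairness check of this kind is the “four-fifths rule” from US hiring guidelines: the selection rate for any group should be at least 80% of the rate for the most-selected group.

```python
# Illustrative sketch of a four-fifths (adverse impact) test.
# Not Sapia's implementation; group names and numbers are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def passes_four_fifths(outcomes, threshold=0.8):
    """True if every group's selection rate is >= threshold * the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(r / best >= threshold for r in rates.values())

# Hypothetical screening outcomes by group
outcomes = {"women": (45, 100), "men": (50, 100)}
print(passes_four_fifths(outcomes))  # 0.45 / 0.50 = 0.9 >= 0.8 -> prints True
```

The same check can be run for any grouping for which labels are available, which is what makes a machine’s bias, unlike a human’s, directly measurable.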
Finally, you can try out Sapia’s SmartInterview right now, or leave us your details here to get a personalised demo.