PredictiveHire and Iceland Foods are finalists in the 2021 Recruiter Awards for the category In-House Innovation in Recruitment.
Established in 2002, the Recruiter Awards are the UK’s most prestigious honours in recruitment.
The awards recognise best practice and celebrate achievement by agencies and in-house recruiters during the prior 12 months, also throwing a spotlight on marketing and technology. The In-House Innovation in Recruitment category recognises outstanding innovation by an in-house recruitment team that has led to the achievement of strategic business goals.
The success story behind PredictiveHire's nomination begins in 2020, which created a crisis for Iceland, as it did for many businesses. Increased trade and COVID-19 absences meant store leaders were massively drained of time, yet there was a surge in both the need to hire and the number of applicants. As it stood, the recruitment process was 100% manual, all done at store level by store managers.
Iceland had to innovate. Hiring needed to be centralised and automated.
Iceland developed a set of ‘non-negotiable’ criteria for an automated platform and began looking. They ultimately chose PredictiveHire as their interview automation partner – they loved the notion of ‘hiring with heart’.
Once PredictiveHire was selected, we integrated with Iceland's Kallidus platform via API, and applications started rolling in within four weeks. Iceland estimated that in the first four months it saw a 5x payback, freed up 8,000 hours for its time-poor store managers across the organisation, and processed over 50,000 applications every month. It also found that 97% of candidates who followed the link completed the application process, and 99% of candidates reported a positive experience.
You can read the full Iceland Foods + Kallidus + PredictiveHire story here.
The award will be presented at the annual Gala in London on September 23, 2021.
Judging is carried out by a rigorous panel of 33 experienced industry professionals, including Rob McCargow, Director of Artificial Intelligence at PwC UK; James Fieldhouse, M&A Managing Director at BDO; and Karolina Minczuk, Relationship Director at NatWest Bank.
Other nominees are BDO in partnership with Amberjack, BUPA, GQR, and Virgin Media in partnership with Amberjack.
PredictiveHire is a frontier interview automation solution that solves three pain points in recruiting – bias, candidate experience, and efficiency. Customers are typically those that receive an enormous number of applications and are dissatisfied with how much collective time is spent hiring. Unlike other forms of assessment, which can feel confrontational, PredictiveHire's FirstInterview™ is built on a text-based conversation – totally familiar, because text is central to our everyday lives.
Every candidate gets a chance at an interview by answering five relatable questions. Every candidate also receives personalised feedback (99% CSAT). Ai then reads candidates' answers for best fit, translating assessments into personality readings, work-based traits and communication skills. Candidates are scored and ranked in real time, making screening 90% faster. PredictiveHire fits seamlessly into your HR tech stack; with it, you get off-the-Richter efficiency, reduced bias and a more human application process. We call it 'hiring with heart.'
If you’d like to stay up to date with PredictiveHire, you can subscribe to our newsletter here.
I think back to my days as a recruiter, when you filled jobs by posting adverts. That was 15 years ago. The saying was "post and pray", because you never knew what would come back.
The average time to fill a role, as we advised the business, was 30 days.
Even then, there was flexibility on that because of the 'war for talent'. It was hard to find people. Skilled people. The 'right' talent. When we needed to find talent fast, we would from time to time engage a third-party recruiting agency to help us. However, that was costly.
So, even with the proper sourcing tools in hand – the business just needed to wait. Here were the reasons that recruiters gave for not delivering quickly:
Reasons, and perhaps excuses. And the business just had to wait.
According to Jobvite, time to fill remains anywhere between 25 days (retail) and 48 days (hospitality) – when I read this, I nearly fell off my chair! This is surprising, given how far technology has come since then.
Why are hiring managers waiting this long for these high-volume skills? And the wait will undoubtedly increase with the surge in application volumes driven by COVID-19. What is the cost of waiting? A straightforward formula, published by Hudson (for non-revenue-generating employees), is:
(Total Company Annual Revenue) ÷ (Number of Employees) ÷ 365 = Daily Lost Revenue
Here's a working example. Take a retailer generating $2.9 billion in revenue with 11,000 employees. Their daily lost revenue PER vacant position is $722.
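As a minimal sketch, the Hudson formula and the worked example above can be expressed as:

```python
def daily_lost_revenue(annual_revenue: float, employees: int) -> float:
    """Hudson's cost-of-vacancy estimate for non-revenue-generating roles:
    revenue lost per day, per vacant position."""
    return annual_revenue / employees / 365

# Worked example from above: $2.9bn revenue, 11,000 employees
cost = daily_lost_revenue(2_900_000_000, 11_000)
print(round(cost))  # 722
```

Multiply that daily figure by the number of open roles and the average days-to-fill, and the cost of a slow process becomes very concrete.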
I've observed talent teams recruiting in high-volume scenarios, spending hours screening thousands of CVs – with inherent biases creeping in by the 13th CV. Then fatigue sets in. And by the 135th CV, unconscious biases have turned into bold, conscious judgements.
Keeping your process consistent and fair is a challenge and the quality of the screening process diminishes.
Then there is the phone screen. If you take only 30 candidates into this stage and speak to them for 10 minutes each, that is five hours of the recruiter's time.
And that time is neither concentrated nor bound to one session – it elapses. You aren't sitting down for five hours straight, nor can you schedule back-to-back phone screens, so the realistic time frame for this stage is about a week.
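The back-of-envelope arithmetic behind those five hours, and the week of elapsed time, looks like this (the one-hour-per-day calling capacity is an assumption for illustration):

```python
candidates = 30      # shortlisted for phone screens
minutes_each = 10    # talk time per candidate

talk_hours = candidates * minutes_each / 60
print(talk_hours)    # 5.0 hours of pure talk time

# Assumed capacity: the recruiter fits in ~1 hour of calls per working day
hours_per_day = 1
print(talk_hours / hours_per_day)  # 5.0 working days – about a week elapsed
```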
From there, it’s coordinating Hiring Manager interviews, conducting their interviews, getting feedback, making decisions, giving offers, taking reference checks and finalising compliance steps to make the hire. This is where it ends up being a long and drawn-out process.
These tools can also drive a far better process. How? By building a trustworthy understanding of each candidate and their personality, modelled against the organisation's 'Success DNA' – the profile of what success looks like in your organisation.
When candidates apply, their first step is an automated interview.
It takes 15-20 minutes to complete, and all candidates receive a personality assessment based on what they wrote (which they love).
Personality can be deduced from the text that candidates write – a finding backed by published research – and feedback from thousands of candidates speaks to the accuracy of these personality assessments.
Here’s a tiny sample of all the feedback >>
For Talent Acquisition to build its credibility in the business, it needs to demonstrate its impact on the bottom line and provide tangible solutions to address this need for speed. Tools like Sapia can help solve these speed and cost challenges, and the benefits of a consistent, bias-free candidate experience are just the icing on the cake.
To keep up to date on all things “Hiring with Ai” subscribe to our blog!
More money is flowing into Environmental, Social and Governance (ESG) than ever. In 2021, investors poured $649 billion into ESG-focused funds worldwide, up from the $542 billion invested in 2020. In the UK, over 21% of investors plan to back funds and companies with comprehensive ESG strategies by 2025. And in Australia, more than 55% of super funds are using responsible investment approaches to inform strategic asset allocation.
All this investment has prompted a sharper focus on social issues across major companies – the S in Environmental, Social, and Governance. The great news is that investment in the big S, in turn, means more money and attention toward progress in Diversity, Equity and Inclusion (DEI).
Executives who care about diversity know that an effective strategy must start at the top – take Australian superannuation fund HESTA and its 40:40 Vision as an example. But to be truly successful, we need DEI goals at all levels, and we need to track, accurately, the degree to which we meet them.
Both boards and shareholders want measurable change in DEI, and fast. According to a Harvard Business Review study of S&P 500 earnings calls, the frequency with which CEOs talk about issues of equity, fairness and inclusion has increased by 658% since 2018. You can bet that this will only increase further in the coming years.
According to another HBR article, 40% of US companies discussed DEI in their Q2 2020 earnings calls, which is a huge step up from the 4% of companies that did the year before. And with 1,600 CEOs pledging to take action on DEI, setting goals and tracking progress remain top priorities.
DEI and ESG are big challenges, and we might take myriad possible approaches in trying to solve them. Some companies may start at the executive level (HESTA, as an example), while others may invest in partnerships and outreach programs. The spectrum of options can easily become overwhelming.
“Interestingly, I’m just looking at our workforce profile and have been discussing the changes in diversity since we updated our recruitment approach last March. Not only have we hired three times more ethnic minorities and 1.5 times more women, but we now have twice as many LGBTQI+ colleagues in our business than we did three years ago! Other initiatives have played a part, but I’d imagine the game changer has been Sapia as we’ve had some direct feedback from a transgender colleague that they felt more confident with our recruitment process than they did in other applications!”
David Nally, HR Manager, Woodie’s UK
So why not start with the people you bring into your company, at all levels? Why not begin with the way you attract, assess, and select talent?
With a smart conversational Ai, you can set realistic DEI targets and measure them at scale with little extra effort – ensuring you access the best talent from the widest possible pool. A Smart Interviewer is different from the simple chatbots used to automate routine tasks according to a fixed set of rules: our conversational Ai, for example, can analyse interview responses to gain deeper insights about each candidate's personality and competencies, in a fair and objective way.
Our Smart Interviewer helps you track and meet these three key diversity goals.
Our proprietary interview response database is made up of more than 500,000,000 words, enabling us to conduct the most sophisticated response analysis in the recruitment industry. We can do this on a macro scale (e.g. across countries, cultures, industries, and role types); or for individual companies.
Take these findings, combining data from a range of our customers, globally:
Figure 1: Gender stats across applicants, Ai recommendations and hired
Thanks to our machine-learning capabilities, and the size of our database, we can provide the hiring team with real-time analytics on the following parameters:
By employing a smart interviewing Ai at the first stage of recruitment, we can prove progress with regards to inclusivity and bias reduction. These aggregate company data show that while the expected number of female applicants exceeded the number of those that actually applied, the number of recommendations made by our Smart Interviewer also beat expectations (effectively compensating for the top-of-funnel bias). We can also see that the rate of observed female hires far exceeded the expected number.
What does this show? With just three metrics, you can see the progress being made in your recruitment process – and if performance is below expectation, you can see the stage at which targets are not being hit.
It is important to note that the recommendations of our Ai are based solely on its analysis of candidate responses in the chat-based interview. Its suitability criteria are based, among other factors, on HEXACO personality modelling and assessments of various job-related competencies, such as teamwork, critical thinking and communication skills.
Our data also keep biases in check at each stage of the recruitment process, depending on the role type. As you can see, for all three roles, this company’s hiring outcomes were within regulatory limits (as stipulated by the US Equal Employment Opportunity Commission (EEOC)) across the three stages of their funnel: Applications received, recommendations made, and the hiring decisions ultimately made by the hiring team. The final step, it is important to note, happens independently of our Ai: It is a human decision. Despite this, the outcome data is recorded, so that the company can compare its outcomes against inputs and recommendations to see if late-funnel biases are occurring.
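The EEOC tolerance mentioned above is commonly operationalised as the "four-fifths rule": if one group's selection rate falls below 80% of the highest group's rate, that is generally treated as evidence of adverse impact. A minimal sketch, with illustrative selection rates (not Company A's actual figures):

```python
def adverse_impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one.
    A ratio below 0.8 breaches the EEOC four-fifths rule of thumb."""
    lo, hi = sorted((rate_a, rate_b))
    return lo / hi

# Illustrative funnel-stage selection rates for two groups
female_rate, male_rate = 0.42, 0.40
ratio = adverse_impact_ratio(female_rate, male_rate)
print(f"{ratio:.2f}", "within tolerance" if ratio >= 0.8 else "flag for review")
```

Running this check at each funnel stage (applications, Ai recommendations, final hires) is what lets a company see exactly where, if anywhere, bias is entering the process.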
Figure 2: Role-type-based gender bias. Mid line indicates 0 bias. Shaded areas indicate the tolerance level. Right of line favours females and left favours males.
The feedback from candidates is extremely positive: Company A's striving for fairness and equality in its processes has resulted in a candidate satisfaction score of 98.7% for females and 98.1% for males. Better still, the interview dropout rate across the board is less than 10%.
As with gender, our ethnicity analytics help hiring managers to easily set and accurately track goals for ethnicity representation in recruitment. Company A (whose data were shown in Figure 2) is, again, leading the way in this regard: its BAME (Black, Asian and minority ethnic) recommendation rate is at 46.5%, exceeding expectations, while its non-BAME recommendation rate sits at 37.1%.
Our data has also helped Company A to increase its hiring commitments for First Nations people: The rate currently sits at 4.5%, from 4,000 candidates, above the national average of 1.8% (2018-19). This number is expected to increase over the coming year.
The data we collect helps us, as well as our customers, understand the extent to which personality determines role suitability and general workplace success. It also helps us to eliminate long-standing biases that negatively impact certain candidates, despite the fact that said candidates may be highly suitable to the roles for which they are applying.
For example, people high in trait agreeableness (compassionate, polite, unlikely to dissent or proffer controversial viewpoints) tend to underperform in traditional face-to-face interviews. Hiring managers may assume, on this basis, that they are unable to lead, or are not a 'culture fit'. However, a face-value assessment of agreeableness is not a reliable predictor of candidate potential; a scientific analysis of HEXACO traits makes this call with far greater accuracy.
Take these two visualizations, showing how different personality traits affect the recommendations made by our Ai. Females (red dot) and males (blue dot) are slightly different in agreeableness, but there is virtually no difference in their conscientiousness, a strong predictor of job performance. As a result of being able to measure conscientiousness accurately, our system can effectively allow for higher levels of agreeableness – or cancel out the negative face-value judgements typically made in face-to-face interviews. Despite these personality differences, as shown in Figure 1, Sapia Ai recommendations for both male and female groups remain similar (~40%). This results in a fairer chance for all, and a wider pool of candidates. In this case, this is to the benefit of females.
Figure 3: Male (blue) and Female (red) personality trait differences
The world is changing, and we can no longer continue to take a "we'll see what happens" approach to the 'S' in ESG. Many investors are pushing companies for better diversity and inclusion outcomes. At Sapia, our data show that fair, scientifically valid, and explainable Ai can produce better outcomes for people of all genders and ethnicities. The companies that have adopted our Ai approach are seeing strong improvement in their own DEI practices and results.
Over and above assisting our clients, our commitment to DEI is embodied in a guiding vision of our own: Our FAIR Framework. This embeds an approach that ensures our systems and processes are ethical and transparent. Many similar Ai systems operate in a ‘black box’, providing little knowledge about how their algorithms help make important decisions or create issues like amplifying biases. We are committed to a fairer world, free of bias – and, with every candidate interviewed, our data is bringing us closer.
Last week, our smart interviewer technology was featured in a glowing piece by the Australian Financial Review. The story was picked up by LinkedIn News Australia, who conducted a poll asking users if they were “comfortable being interviewed by a bot”.
The poll garnered more than 6,500 responses. Perhaps unsurprisingly, 50% of respondents selected the response “No – it’s a job for humans.” Just under a third of LinkedIn users said that they believe chatbot interviewing is “the future”, while 21% said that it’s appropriate only for certain roles.
When you have over 6,500 responses, you can do some meaningful analysis. In this case, "No – it's a job for humans" was the prevailing opinion. But in the comments attached to the poll, we discovered more about how people feel toward Ai, both as a technological construct and as a tool for recruitment. We bucketed the comments into five recurring themes:
Ai hasn’t made a good name for itself lately – take Amazon’s recent facial recognition debacle as a good example – so it’s easy to see why people are resistant to the prospect of Ai moving into a space historically handled by humans. Take a bird’s eye view, and the notion certainly looks preposterous: How could a machine, asking just five questions, ever hope to recreate the capabilities of a seasoned recruiter or talent acquisition specialist?
That is the problem, though: The more ‘human’ aspects of the recruitment process are ruining the game. Ghosting is rampant, both for candidates and recruiters. Ineradicable biases are creating unfairnesses that permeate organisations from top to bottom. The Great Resignation is putting immense pressure on hirers to move quickly, excluding thousands of applicants based on arbitrary criteria that shift from month to month. Consider, too, these sobering statistics:
For Ai to qualify as a usable, reliable tool, we expect it to be perfect. We compare it, unfairly, against some ultimate human ideal: the chirpy, well-rested recruiter on their best day. The kind of recruiter who has never ghosted anyone, who has no biases whatsoever, and who finds the right person for the right job, no matter what. Here's the issue with that comparison: that kind of human doesn't exist.
For Ai to be a valid and useful tool, and an everyday part of the human recruiter's toolset, it doesn't need to be flawless; it only needs to be better than the alternative. Can't be done? For one example, our Smart Interviewer eliminates the problem of ghosting completely: each of your candidates gets an interview, and every single person receives feedback. Even better? 98% of the candidates who use our platform find that feedback useful.
(That is to say nothing of the way it removes bias, as if that weren’t enough on its own.)
Ai has a way to go before it will earn the trust of the majority. Again, this is totally understandable. We believe that there is a better, and quicker, way to get there.
To borrow a concept commonly associated with cryptocurrency and blockchain technology, we want to create a trustless environment for our Ai and its activities. Not an environment without trust, but one in which trust is a foregone conclusion. In a trustless environment, dishonesty, either by admission or omission, is impossible. Just as you cannot forge blockchain entries, you cannot hide the workings and algorithms that make our Ai what it is.
That is the essence of our FAIR Framework. For hiring managers and organisations, the framework provides both an assurance and a template for querying the fairness-related metrics of Ai recruitment tools. For candidates, FAIR ensures that they are using a system built with fairness as a key performance metric. For us, transparency on fairness is standard operating procedure.
Finally, think about this: When we say we want a ‘human’ recruitment process, what are we really saying? That we want something fallible, prone to biases, subject to the decisions of people who have bad days? What if a trustless Ai companion could help remove all that, without replacing the person? Is that not more human?