Voluntary employee turnover has a direct financial impact on organisations. At a time when the pandemic has most organisations looking to cut employee costs, voluntary turnover is a serious concern. The ability to predict employee turnover not only supports informed hiring decisions but can also avert substantial financial losses in an uncertain time.
Acknowledging that, researchers and data scientists from Sapia, an AI recruiting startup, built a language model that analyses candidates’ open-ended interview answers to infer the likelihood of job-hopping. The study, led by Madhura Jayaratne and Buddhi Jayatilleke, was conducted on the responses of 45,000 job applicants who were interviewed by a chatbot and also self-rated their likelihood of hopping jobs.
The researchers evaluated five different methods of text representation: term frequency-inverse document frequency (TF-IDF), Latent Dirichlet Allocation (LDA) topics, GloVe word embeddings, Doc2Vec document embeddings, and Linguistic Inquiry and Word Count (LIWC). The GloVe embeddings provided the best results, highlighting the positive correlation between the language candidates use and the likelihood of their leaving a job.
The researchers further noted a positive correlation between employees’ job-hopping and their “openness to experience.” Being able to predict this for fresh graduates and career changers can deliver significant financial benefits for a company.
Beyond the financial impact of onboarding new employees or outsourcing the work, an increased turnover rate can also reduce productivity and dampen employee morale. In fact, the trend of leaving one job in search of a better one has gained massive traction in this competitive landscape, so it has become critical for companies to assess a candidate’s likelihood of hopping jobs before selection.
Traditionally, this assessment was done by scanning candidates’ résumés; however, the manual effort makes the process tiring as well as inaccurate. It also only works for professionals with work experience and offers nothing for fresh graduates. The researchers therefore decided to leverage interview answers to analyse candidates’ personality traits as well as their chances of voluntary turnover.
To test the correlation between interview answers and the likelihood of job-hopping, the researchers built a regression model that uses a candidate’s textual answers to infer the result. The candidates used Sapia’s Chat Interview chatbot to respond to five to seven open-ended interview questions on past experience, situational judgement and values, and rated themselves on a five-point scale on their motives for changing jobs. The length of the textual responses, along with the distribution of job-hopping likelihood scores among all participants, formed the ground truth for building the predictive model.
To initiate the process, the researchers used LDA-based topic modelling to understand the correlation between the words and phrases candidates used and their chances of leaving the company. After that, they evaluated four open-vocabulary approaches that analyse all words to understand the textual information.
Open-vocabulary approaches were preferred over closed-vocabulary ones like LIWC because they don’t rely on predefined category judgements of words. These representations were then used to build a regression model with the Random Forest algorithm, trained on the participants’ self-ratings. The researchers used 80% of the data to train the model, and the remaining 20% was used to validate its accuracy.
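As a rough illustration of the pipeline described here (text features, a Random Forest regressor, an 80/20 split), below is a minimal sketch using TF-IDF features in scikit-learn. The study itself used several representations, including GloVe embeddings; the answer texts and self-ratings below are toy stand-ins invented for the example.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

# Toy stand-ins for interview answers and self-rated job-hopping scores (1-5 scale).
answers = [
    "I left my last role to find new challenges and growth",
    "I value stability and have stayed with one team for years",
    "I get bored quickly and like trying different companies",
    "I prefer building deep expertise in a single organisation",
] * 10  # repeated so the train/validation split has enough rows
scores = [4.0, 1.5, 4.5, 2.0] * 10

# Represent each free-text answer as a TF-IDF vector.
X = TfidfVectorizer().fit_transform(answers)

# 80/20 train/validation split, as in the study.
X_train, X_val, y_train, y_val = train_test_split(
    X, scores, test_size=0.2, random_state=0
)

# Fit a Random Forest regressor on the training portion and score the held-out 20%.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_val)
```

Swapping `TfidfVectorizer` for averaged GloVe vectors or Doc2Vec embeddings changes only the feature-extraction step; the regression and validation stages stay the same.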
The researchers also experimented with various response lengths, especially shorter ones, which are challenging because there is little textual context to predict from. However, they found a balance between short responses and the data available, and trained the model to predict even for those.
To test accuracy, the models were evaluated by comparing the scores they produced against the actual likelihood of turnover. The GloVe word-embedding approach with a minimum text length of 150 words achieved the highest correlation. This result demonstrated that the language used in responding to typical open-ended interview questions can predict a candidate’s turnover risk.
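The evaluation metric here is the correlation between the model’s scores and candidates’ self-ratings. A tiny sketch of that comparison with NumPy (both score arrays below are invented for illustration):

```python
import numpy as np

# Hypothetical self-rated job-hopping likelihoods and model-predicted scores.
self_rated = np.array([1.0, 2.0, 2.5, 3.0, 4.0, 4.5, 5.0])
predicted = np.array([1.2, 1.8, 2.7, 3.1, 3.8, 4.6, 4.9])

# Pearson correlation between predictions and ground truth:
# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal entry is r.
r = np.corrcoef(self_rated, predicted)[0, 1]
```

A higher `r` means the model’s scores track the self-ratings more closely, which is how the different text representations were ranked against each other.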
Leveraging data from over 45,000 individuals, the researchers built a regression model to infer the likelihood of candidates leaving a job. It not only removes companies’ dependency on candidate résumés and job histories but also turns hiring into a multi-measure assessment process that can be conducted digitally.
By Sejuti Das, Analytics India Magazine, 02/08/2020
To keep up to date on all things “Hiring with Ai” subscribe to our blog!
You can try out Sapia’s Chat Interview right now, or leave us your details here to get a personalised demo.
I think back to my days as a recruiter, when you filled jobs by posting adverts. That was 15 years ago. The saying was “post and pray”, because you never knew what would come back.
The average time to fill a role, as we advised the business, was 30 days.
Even then, there was flexibility on that because of the ‘war on talent’. It was hard to find people. Skilled people. The ‘right’ talent. When we needed to find talent fast, from time to time we would engage a third-party recruiting agency to help us. However, that was costly.
So, even with the proper sourcing tools in hand – the business just needed to wait. Here were the reasons that recruiters gave for not delivering quickly:
Reasons, and perhaps excuses. And the business just had to wait.
According to Jobvite, time to fill remains anywhere between 25 days (retail) and 48 days (hospitality). When I read this, I nearly fell off my chair! It’s surprising, given how far technology has come since then.
Why are hiring managers waiting this long for these high-volume skills? And the wait will undoubtedly grow as application volumes swell thanks to COVID-19. What is the cost of waiting? A straightforward formula published by Hudson (for non-revenue-generating employees) is:
(Total Company Annual Revenue) ÷ (Number of Employees) ÷ 365 = Daily Lost Revenue
Here’s a working example. Let’s take a retailer. They generate 2.9 billion in revenues and have 11,000 employees. This means that their daily lost revenue PER vacant position is $722.
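The worked example above can be expressed as a small function. The figures ($2.9B revenue, 11,000 employees) come from the article; the function name is my own.

```python
def daily_lost_revenue(annual_revenue, num_employees):
    """Hudson's rule of thumb: revenue attributable to one employee per day."""
    return annual_revenue / num_employees / 365

# Worked example from the article: a retailer with $2.9B revenue and 11,000 staff.
per_day = daily_lost_revenue(2_900_000_000, 11_000)
print(f"${per_day:,.0f} per vacant position per day")  # prints "$722 per vacant position per day"
```

Multiply that figure by the average time to fill and the number of open roles, and the cost of a slow hiring process becomes very concrete.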
I’ve observed talent teams recruiting in high-volume scenarios, spending hours screening thousands of CVs, with inherent biases creeping in by the 13th CV. Then fatigue sets in. And by the 135th CV, unconscious biases have turned into bold, conscious judgements.
Keeping your process consistent and fair is a challenge and the quality of the screening process diminishes.
Then there is the phone screen. If you only took 30 candidates into this stage and spoke to them for 10 minutes each, that alone takes the recruiter five hours.
And that time is neither concentrated nor bound to one session; it elapses. You aren’t sitting down for 1.6 hours at a time, nor can you schedule back-to-back phone screens, so the realistic time frame for this is about a week.
From there, it’s coordinating Hiring Manager interviews, conducting their interviews, getting feedback, making decisions, giving offers, taking reference checks and finalising compliance steps to make the hire. This is where it ends up being a long and drawn-out process.
Plus, tools like Sapia can drive a far better process. How? By getting a trustworthy understanding of the candidate and their personality, modelled against the organisation’s “Success DNA” (the profile of what success looks like in your organisation).
When candidates apply, their first step is an automated interview.
It takes 15-20 minutes to complete, and all candidates receive a personality assessment based on what they wrote (which they love).
Personality can be deduced from the text candidates write (this has been scientifically demonstrated), and there is also feedback from thousands of candidates attesting to the accuracy of these personality assessments.
For Talent Acquisition to build its credibility in the business, it needs to demonstrate its impact on the bottom line and provide tangible solutions to address this need for speed. Tools like Sapia can help with solving for these speed and cost challenges, and the benefits of providing a consistent, bias-free candidate experience are just the icing on the cake.
To keep up to date on all things “Hiring with Ai” subscribe to our blog!
You can try out Sapia’s FirstInterview right now, or leave us your details here to get a personalised demo.
The Royal Commission has brought about a lot of scrutiny on the banks, and for good reason. But we have to give them credit where it’s due.
Which is funny, as I’d argue that hiring a staff member is a much riskier proposition for a business than a bank having one of its customers default on a loan.
Imagine if your bank lent you money with the same process that your average recruiter used to hire for a role.
They would ask you to load all of your personal financial information into an exhaustive application form. Your salary, your weekly spend, your financial commitments. All of it.
The same form would include a lot of probing questions, such as:
Then, assuming your form piqued their interest, they would bring you in for a one-on-one meeting with the bank manager. That manager would grill you with a stern look, asking the same questions. This time, though, they would be closely watching your eye movements to see if you were lying.
In each part of the process, you get a score, and then if that number is above a certain threshold, you get the loan.
It’s almost laughable, right?
Only people who desperately need money would put themselves through that process. And they’re likely not the best loan candidates.
Banks work hard to attain incredibly high accuracy levels in assessing loan risk.
Meanwhile, in HR, if you use turnover as a measure of hiring accuracy, it’s as low as 30–50 per cent in some sectors. If you combine turnover and performance data (how many people who get hired really raise a company’s performance), it might be even lower than that.
Banks wouldn’t exist if their risk accuracy was anywhere close to those numbers.
Well, that’s how most recruitment currently works — just usually involving more people.
There are more parallels here than you think.
Just like a bank manager, every recruiter wants to get it right and make the best decisions for the sake of their employer. As everyone in HR knows, hiring is one of the greatest risks a business can take on.
But they are making the highest risk decision for an organisation based on a set of hypotheses, assumptions and lots of imperfect data.
So, let’s flip the thought experiment.
Well, the process wouldn’t involve scanning CVs, a 10-minute phone call, a face to face interview and then a decision.
That would be way too expensive given exponentially more people apply for jobs than apply for loans each year. Not to mention the process itself is too subjective.
I suspect they would want objective proof points on what traits make a candidate successful in a role, data that matches the candidate against those proof points and finally, further cross-validation with other external sources.
They wouldn’t really care if you were white, Asian, gay or female. How could you possibly generalise about someone’s gender, sexuality or ethnicity and use it as a lead indicator of hiring risk? (Yet in HR, this is still how we do it.)
Finally, they’d apply a layer of technology to the process, making it a positive, mobile-first experience for candidates. Much like with a loan, you’ll lose your best customers if the funnel is long and exhausting.
I’m not saying that banks are a beacon of business. The Royal Commission definitely showed otherwise. But for the most part, they have gotten with the times and upgraded their processes to better manage their risk. It’s time HR did the same.
You can try out Sapia’s Chat Interview right now, or leave us your details to book a demo
This came up in my feed last week prompting me to share my own 2 cents on why machines are better at hiring decisions than humans.
Did you know that the Wikipedia list of cognitive biases contains 185 entries? This somewhat exhausting article lays out, in excruciating detail, biases I didn’t know existed, and concludes that they are mostly fixed and unalterable, regardless of how much unconscious-bias training you attend in your lifetime.
I get asked A LOT about how I can work for a company that sells technology that relies on ‘machines’ to make people decisions.
I’ll keep it simple: two reasons.
First, as per the above, our biases are so embedded and invisible that we mostly can’t check ourselves in the moment to manage them. (Left to my own devices, I’d hire women, ideally mums, who like the same podcast series as me, and send them straight through to offer stage if they like Larry David’s humour.)
Second, machines can be ‘trained’; humans can’t, at least not as easily or efficiently.
But the myriad, ever-present news articles about ‘algorithmic bias’ have lumped all machine learning into one massive alphabet soup of ‘don’t trust the machine!’
Really? Are we also biased against machines now? I saw Terminator 2 too, and I worry about machines taking over the world. But that’s a massive leap from bringing objective data into the most critical decision you make as a people leader: who to hire. For me, the divorce rate is proof enough that humans suck at making critical people decisions.
I’ve been in the People space for a while. I was lucky enough to work with two organisations, BCG and the REA Group, that value their people above all else. They also value making money, and having your engineers and consultants sucked up in recruiting days and campaigns is a massive investment of scarce and valuable capacity. I have found most companies don’t even know how much it costs to hire one person, because no one is tracking the time investment.
We are all time-poor, so we often default to hiring based on ‘pedigree’. Someone has GE on their CV; they must be great, because GE only hires great people. That’s a pretty loose, random data point for making a hiring decision.
So here is a non-data-scientist’s view of why you should trust machine learning to find the right people, and when you shouldn’t.
First, credit to this post, which helped me put this into non-tech speak.
Why use machine learning at all for decision-making? Because it underwrites repeatable, objectively valid (i.e. data-based) decisions at scale.
Value to the organisation:
• Use fewer resources to hire
• Every applicant gets a fair go at the role
• Every applicant is interviewed
• Hire the person who will succeed vs someone your gut tells you will succeed
How do you ensure there is no, or limited, bias in the machine learning?
Take a look at:
– what data is being used to build the model
– what you are doing to that data to build the model
If you build models off the profile of your own talent, and that talent is homogenous and monochromatic, then so will be the model, and you are back to self-reinforcing hiring.
If you use data that looks at age, gender, ethnicity and all those visible markers of bias, then sure enough, you will amplify that bias in your machine learning.
Relying on internal performance data to make people decisions is layering bias upon bias. It’s the same as building a sentencing algorithm with sentencing data from the US court system, which is already biased against black men.
The reality is that machine learning, by its very definition, aims to bias decisions toward the patterns in its training data; removing unwanted bias comes down to which training data you feed the machine. This means you can make sure the data you train with carries no bias.
Machine learning outcomes are testable and corrective measures remain consistent, unlike in humans. The ability to test both training data and outcome data, continuously, allows you to detect and correct the slightest bias if it ever occurs.
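One common way to test outcome data for bias is the “four-fifths rule” used in employment selection: the selection rate for any group should be at least 80% of the highest group’s rate. A minimal sketch of that check (the group labels and counts below are invented):

```python
def selection_rates(outcomes):
    """outcomes maps group -> (selected, applied); returns group -> selection rate."""
    return {group: selected / applied for group, (selected, applied) in outcomes.items()}

def passes_four_fifths(outcomes, threshold=0.8):
    """Flag each group whose selection rate falls below 80% of the top group's rate."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: rate / top_rate >= threshold for group, rate in rates.items()}

# Invented example: screening outcomes split by a demographic group.
result = passes_four_fifths({"group_a": (45, 100), "group_b": (30, 100)})
```

Here group_b’s rate (30%) is only two-thirds of group_a’s (45%), so it fails the check; running this continuously against a model’s outcome data is exactly the kind of detect-and-correct loop described above.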
Tick to objective data with no bio-data (that means a big NO to CV and social-media scraping).
Tick to using multiple machine-learning models to continuously triangulate the result, versus relying on one version of the truth.
So instead of lumping all AI and ML into one big bucket of ‘bias’, look beneath the surface to understand what’s really going into the machine, because that’s where the amplification risk looms large.
Oh and the reason why I hate Simon Sinek …
I don’t, actually. But if a candidate said that to me in an interview, I’d probably hire them for it, because I would make some superficial extrapolations about their personality based on it:
• First, it would tell me they watch TED talks, and that reeks of cleverness and learning appetite.
• Second, it would tell me they are confident enough to be contrarian, and that would make me believe they are a better leader.
• Third, I would infer they are not sucked into the vortex of thinking that culture is the panacea for every people problem.
See how easy it is to make an unbiased hiring decision?
Soon (maybe already) you will be putting your own and your loved ones’ lives in the hands of algorithms when you ride in a self-driving car. Algorithms are extensions of our cognitive ability, helping us make better decisions, faster and consistently, based on data. Even in hiring.