Most candidates aren’t being rejected – they’re being ghosted. Mentions of ghosting on Glassdoor are up 450% since the start of COVID. We know that it’s bad for employer brand and long-term prospecting, so why does silent candidate rejection happen so often?
The causes most commonly cited for ghosting candidates come down to time and workload. We’re not out to bash recruiters, talent acquisition professionals, or hiring managers. Finding talent during the Great Resignation is difficult. Time is precious. Offering everyone a high-touch candidate experience, therefore, seems far beyond scope.
Problem is, candidates expect feedback. At the very least, they need closure. Rejection by silence has a unique sting. Consider the following responses, offered by people who applied for jobs and were either ghosted, or received a templated rejection:
“Discarded. Treated like number.”
“Crushed. Doubted my competence and value.”
“Depressed, unsure of reasons, uncertainty with quality of CV and skills or experience.”
These responses come from a new study, What Type of Explanation Do Rejected Job Applicants Want? Implications for Explainable AI, by researchers at UNSW, Australia. It aims to prescribe an ideal framework for positive candidate rejection.
Here is a snapshot of some of the findings.
This point may sound obvious, but here it is: 53% of study respondents wanted to know why they did not make the cut. Just under a third wanted to know how they might improve, and 12% of respondents wanted to find out more about the competition – including whether or not the successful candidate was an inside hire.
Based on what candidates want, a rejection letter should focus on at least one of these factors: why the candidate did not make the cut, how they might improve, or how they compared to the competition.
If possible, err on the side of extra transparency. If it was an inside hire, say so. If the losing candidate was neck-and-neck with the winner, tell them. People want the truth, it seems, and without sugar-coating.
This is interesting: as part of the study, respondents were asked how much they would pay for a tailored explanation of their rejection. 44% said they wouldn’t pay anything for feedback; 25% said they might pay more than $20.
We might first surmise from this result that applicants don’t place value on feedback, but this isn’t the case; for the most part, they believe they have already paid for it. Said one respondent, “The idea about paying for feedback is idiotic and I beg you not to put it into the universe. If I take the time to apply for a job they should have the courtesy to provide feedback. Job hunting is hard enough and expensive don’t add more cost to excuse inexcusable conduct.” Fair enough.
Phai, our smart interviewing AI, gives every single one of your candidates an interview. It also provides each of them with tailored personality insights and coaching tips, whether or not they are successful.
We do this because we can quickly and accurately analyse how people align to the HEXACO personality inventory. It’s high-tech stuff, but the result is what matters: more than 98% of candidates love the feedback they receive and rate it as useful. We help people understand themselves better, and equip them to approach job applications with the techniques best suited to their personalities.
If you’re using Phai, that means you’re really helping people. If that wasn’t enough of a reward on its own, know that good candidate feedback is also helping your employer brand immeasurably. It’s a dream solution for volume hiring.
Faking is a common issue with traditional self-report assessments in personnel selection (Levashina et al., 2014).
The major concern with faking is that it may affect construct and criterion-related validity (Tett & Simonet, 2021). Concerningly, some research reports the prevalence of self-report faking to be as high as 30-50%, depending on the assumed faking severity (Griffith et al., 2007).
In this paper, we examine a parallel adversarial input type in modern text/chat-based interviews: plagiarism. Plagiarism poses a threat similar to faking in self-reports, impacting construct and criterion-related validity. Furthermore, both plagiarism and faking have fairness implications: either practice can alter the rank order of applicants, thereby changing hiring decisions (Levashina et al., 2014).
While it has been studied little in the selection space, plagiarism has been a major concern for the education sector and has been examined extensively in that literature (Park, 2003). One aspect that has received considerable attention is gender differences in plagiarism. Results remain inconclusive, with some evidence that men are more likely to plagiarize than women (Jereb et al., 2018; Negre et al., 2015).
We also explore differences in plagiarism rates across different job families and device types (i.e., mobile vs. desktop).
Data came from over 200,000 candidates (56% female) who applied to various organizations around the world. Candidates participated in an online chat-based structured interview, answering 5-7 open-ended questions on the Sapia Chat Interview™ platform. Over 1 million individual textual answers were checked for plagiarism against more than 6.4 million answers from past candidates. Plagiarism detection computes the Jaccard similarity coefficient (Wang et al., 2013) between each new submission and all existing answers; answers with a Jaccard coefficient over 0.75 were marked as plagiarized and flagged for hiring manager review.
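To make the detection step concrete, here is a minimal sketch of Jaccard-based flagging, assuming simple word-level tokenisation and the 0.75 threshold described above; the production system’s tokenisation, indexing, and matching will differ.

```python
# Minimal sketch of Jaccard-based plagiarism flagging (illustrative only).
# Assumes naive word-level tokenisation; a production system would normalise
# text and index past answers rather than scanning them linearly.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two answers."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

def flag_for_review(new_answer: str, past_answers: list[str], threshold: float = 0.75) -> bool:
    """Flag the answer for hiring-manager review if any past answer is too similar."""
    return any(jaccard(new_answer, past) > threshold for past in past_answers)

# Example: a near-verbatim copy of a past answer is flagged.
past = ["I handled the complaint by listening carefully and offering a refund."]
print(flag_for_review("I handled the complaint by listening carefully and offering refund.", past))  # True
```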
Results show that 3.28% of candidates plagiarized at least one answer, which is significantly lower than the 30-50% of candidates estimated to be faking self-report measures (Griffith et al., 2007).
Consistent with previous findings on self-report faking, males plagiarized significantly more than females. Plagiarism rates also differed significantly across role families, with the highest level of plagiarism observed among candidates who applied to ‘Call center sales’ roles and the lowest observed for ‘Graduate’ roles. Additionally, candidates answering on a mobile phone plagiarized at a significantly higher rate than those using a desktop computer.
This work represents an important first step in investigating plagiarism detection in online, open-text chat interviews. While the prevalence is much lower than faking in self-reports, there are still fairness implications, especially given that men are more likely to plagiarize than women. This is why it is important to flag candidates who plagiarize, so the hiring manager is made aware and can manually review their responses.
References:
Griffith, R. L., Chmielowski, T., & Yoshita, Y. (2007). Do applicants fake? An examination of the frequency of applicant faking behavior. Personnel Review, 36(3), 341–355.
Jereb, E., Urh, M., Jerebic, J., & Šprajc, P. (2018). Gender differences and the awareness of plagiarism in higher education. Social Psychology of Education: An International Journal, 21(2), 409–426.
Levashina, J., Weekley, J. A., Roulin, N., & Hauck, E. (2014). Using Blatant Extreme Responding for Detecting Faking in High-stakes Selection: Construct validity, relationship with general mental ability, and subgroup differences. International Journal of Selection and Assessment, 22(4), 371–383.
Negre, J. S., Forgas, R. C., & Trobat, M. F. O. (2015). Academic Plagiarism among Secondary and High School Students: Differences in Gender and Procrastination. Comunicar. Media Education Research Journal, 23(1).
Park, C. (2003). In Other (People’s) Words: Plagiarism by university students–literature and lessons. Assessment & Evaluation in Higher Education, 28(5), 471–488.
Tett, R., & Simonet, D. (2021). Applicant Faking on Personality Tests: Good or Bad and Why Should We Care? Personnel Assessment and Decisions, 7(1).
Wang, S., Qi, H., Kong, L., & Nu, C. (2013). Combination of VSM and Jaccard coefficient for external plagiarism detection. 2013 International Conference on Machine Learning and Cybernetics, 04, 1880–1885.
The Royal Commission has put the banks under a lot of scrutiny, and for good reason. But we have to give credit where it’s due.
Compared to HR teams across the country, banks know a thing or two when it comes to managing risk. Which is funny, as I’d argue that hiring a staff member is a much riskier proposition for a business than a bank having one of its customers default on a loan.
Imagine if your bank lent you money with the same process that your average recruiter used to hire for a role.
They would ask you to load all of your personal financial information into an exhaustive application form. Your salary, your weekly spend, your financial commitments. All of it.
The same form would include a lot of probing questions, such as: Will you pay this money back on time? When have you borrowed in the past and paid back on time? Describe a time you struggled to repay a loan and what you did about it.
Then, assuming your form piqued their interest, they would bring you in for a one-on-one meeting with the bank manager. That manager would grill you with a stern look, asking the same questions. This time, though, they would be closely watching your eye movements to see if you were lying when you answered.
In each part of the process you get a score, and then if that number is above a certain threshold, you get the loan.
It’s almost laughable, right?
Banks wouldn’t have any customers if they used that approach. Only people who desperately need money would put themselves through that process. And they’re likely not the best loan candidates.
Banks work hard to attain incredibly high accuracy levels in assessing loan risk.
Meanwhile in HR, if you use turnover as a measure of hiring accuracy, it’s as low as 30-50 per cent in some sectors. If you combine turnover and performance data (how many people who get hired really raise a company’s performance), it might be even lower than that.
Banks wouldn’t exist if their risk accuracy was anywhere close to those numbers.
Well, that’s how most recruitment currently works — just usually involving more people.
There are more parallels here than you might think.
Just like a bank manager, every recruiter wants to get it right and make the best decisions for the sake of their employer. As everyone in HR knows, hiring is one of the greatest risks a business can take on.
But they are making the highest-risk decision for an organisation based on a set of hypotheses, assumptions, and lots of imperfect data.
So, let’s flip the thought experiment.
What if a bank’s risk management department was running recruitment? What would the risk assessment look like?
Well, the process wouldn’t involve scanning CVs, a 10-minute phone call, a face-to-face interview and then a decision.
That would be way too expensive given exponentially more people apply for jobs than apply for loans each year. Not to mention the process itself is too subjective.
I suspect they would want objective proof points on what traits make a candidate successful in a role, data that matches the candidate against those proof points, and finally, cross-validation with other external sources.
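As an illustration only, a trait-matching step of that kind might look something like the sketch below; the trait names, weights, and candidate scores are entirely hypothetical, invented for the example.

```python
# Hypothetical illustration of scoring a candidate against evidence-based trait
# benchmarks, the way a risk team might score a loan application.
# Trait names, weights, and the candidate's scores are invented for the example.

ROLE_BENCHMARK = {            # traits assumed to predict success in the role
    "conscientiousness": 0.5,
    "resilience": 0.3,
    "customer_focus": 0.2,
}

def match_score(candidate_traits: dict[str, float]) -> float:
    """Weighted match of measured traits (0-1) against the role benchmark."""
    return sum(weight * candidate_traits.get(trait, 0.0)
               for trait, weight in ROLE_BENCHMARK.items())

candidate = {"conscientiousness": 0.8, "resilience": 0.6, "customer_focus": 0.9}
print(round(match_score(candidate), 2))  # 0.76
```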
They wouldn’t really care if you were white, Asian, gay, or female. How could you possibly generalise about someone’s gender, sexuality or ethnicity and use it as a lead indicator of hiring risk? (Yet, in HR, this is still how we do it.)
Finally, they’d apply a layer of technology to the process. They would make it a positive customer experience for candidates, with mobile-first design. Much like with a loan, you’ll lose your best customers if the funnel is long and exhaustive.
I’m not saying that banks are a beacon of business. The Royal Commission definitely showed otherwise. But for the most part, they have gotten with the times and upgraded their processes to better manage their risk. It’s time HR did the same.