This research paper is part of our accepted submission to SIOP, and will be presented at the 2023 SIOP Conference in Boston.
Faking is a common issue with traditional self-report assessments in personnel selection (Levashina et al., 2014). The major concern with faking is that it may affect construct and criterion-related validity (Tett & Simonet, 2021). Concerningly, some research reports the prevalence of self-report faking to be as high as 30-50% depending on the assumed faking severity (Griffith et al., 2007).
In this paper, we examine a parallel adversarial input type in modern text/chat-based interviews: plagiarism. Plagiarism poses a threat similar to faking in self-reports, potentially impacting construct and criterion-related validity. Furthermore, both plagiarism and faking impact fairness: both practices may alter the rank order of applicants, thereby changing hiring decisions (Levashina et al., 2014).
While not studied extensively in the selection space, plagiarism has been a major concern for the education sector and extensively studied in that literature (Park, 2003). One aspect that has received considerable attention is gender differences in plagiarism. Results remain inconclusive, with some evidence that men are more likely to plagiarize than women (Jereb et al., 2018; Negre et al., 2015).
We also explore differences in plagiarism rates across different job families and device types (i.e., mobile vs. desktop).
Data were collected from over 200,000 candidates (56% female) who applied to various organizations across the world. Candidates participated in an online chat-based structured interview, answering 5-7 open-ended questions on the Sapia Chat Interview™ platform. Over 1 million individual textual answers were checked for plagiarism against answers from past candidates (over 6.4 million answers). Plagiarism detection calculates the Jaccard similarity coefficient (Wang et al., 2013) between a new submission and all existing answers; answers with a Jaccard coefficient over 0.75 were marked as plagiarized and flagged for hiring manager review.
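The detection step described above can be sketched in a few lines. This is a minimal illustration, not the production implementation: it assumes simple word-level tokenization, whereas a real system at this scale would likely use word shingles and an index to avoid comparing every pair of answers.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Jaccard coefficient between the word sets of two answers:
    |intersection| / |union| of their tokens."""
    tokens_a = set(a.lower().split())
    tokens_b = set(b.lower().split())
    if not tokens_a and not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)


def is_plagiarized(new_answer: str, past_answers: list[str],
                   threshold: float = 0.75) -> bool:
    """Flag a new submission if its similarity to ANY past answer
    exceeds the threshold (0.75, as in the study)."""
    return any(jaccard_similarity(new_answer, past) > threshold
               for past in past_answers)
```

Because Jaccard similarity compares sets of tokens, it is robust to small reorderings or word swaps while still scoring near-verbatim copies very highly, which suits the near-copy behavior this method is meant to catch.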
Results show that 3.28% of candidates plagiarized at least one answer, significantly lower than the 30-50% of candidates estimated to be faking self-report measures (Griffith et al., 2007).
Consistent with previous findings on self-report faking, males plagiarized significantly more than females. Plagiarism rates also differed significantly across role families, with the highest level of plagiarism observed among candidates who applied to ‘Call center sales’ roles and the lowest observed for ‘Graduate’ roles. Additionally, candidates answering on a mobile phone plagiarized at a significantly higher rate than those using a desktop computer.
This work represents an important first step in investigating plagiarism detection in online, open-text chat interviews. While the prevalence is much lower than faking in self-reports, there are still fairness implications, especially given that men are more likely to plagiarize than women. This is why it is important to flag candidates who plagiarize, so that the hiring manager is made aware and can manually review their responses.
References:
Griffith, R. L., Chmielowski, T., & Yoshita, Y. (2007). Do applicants fake? An examination of the frequency of applicant faking behavior. Personnel Review, 36(3), 341–355.
Jereb, E., Urh, M., Jerebic, J., & Šprajc, P. (2018). Gender differences and the awareness of plagiarism in higher education. Social Psychology of Education: An International Journal, 21(2), 409–426.
Levashina, J., Weekley, J. A., Roulin, N., & Hauck, E. (2014). Using Blatant Extreme Responding for Detecting Faking in High-stakes Selection: Construct validity, relationship with general mental ability, and subgroup differences. International Journal of Selection and Assessment, 22(4), 371–383.
Negre, J. S., Forgas, R. C., & Trobat, M. F. O. (2015). Academic Plagiarism among Secondary and High School Students: Differences in Gender and Procrastination. Comunicar. Media Education Research Journal, 23(1).
Park, C. (2003). In Other (People’s) Words: Plagiarism by university students–literature and lessons. Assessment & Evaluation in Higher Education, 28(5), 471–488.
Tett, R., & Simonet, D. (2021). Applicant Faking on Personality Tests: Good or Bad and Why Should We Care? Personnel Assessment and Decisions, 7(1).
Wang, S., Qi, H., Kong, L., & Nu, C. (2013). Combination of VSM and Jaccard coefficient for external plagiarism detection. 2013 International Conference on Machine Learning and Cybernetics, 04, 1880–1885.
To find out how to improve candidate experience using Recruitment Automation, see our eBook on candidate experience.
Hiring with heart is good for business: candidate experience in Covid-19 times. Sapia launches its Candidate Experience eBook, which provides insight into the changing face of the candidate experience.
If there was ever a time for our profession to show humanity for the job searchers, that time is now. Unemployment in Australia has passed a two-decade high. The trend is similar for other countries. That means there are a lot more candidates in the market looking for work.
With so many more candidates, the experience of a recruiting process matters more. What are candidates experiencing? Are they respected, regardless of whether they got the job? Is their application appreciated? Are they acknowledged for it?
This may be the time to rethink your candidate experience strategy.
This story won’t be unfamiliar to you: an Australian-based consulting firm advertised for a Management Consultant and withdrew the advert after 298 candidates had applied, all within the first week of advertising.
When candidate supply outstrips demand, that is bound to happen. Inundation becomes an everyday thing for your Talent Acquisition team, and employers are feeling swamped with job applications.
Being effective is much harder when there are more candidates to get through every day.
>> When the role for which you are hiring requires a relatively low skill level.
In the example above, the Management Consultant role had several essential requirements that should have limited applications. Yet the applicant list included hoteliers, baristas, waiting staff and cabin crew (it’s heartbreaking). So when it comes to roles with a much lower barrier to entry, application numbers can quadruple.
The traditional ‘high-volume low-skill role’ has now become excruciatingly high-volume. This trend is being seen across recruitment for roles like customer service staff, retail assistants and contact centre staff.
>> When your organisation is a (well-loved) consumer brand.
Frequently, candidates apply to work for brands that they love. Fans of Apple products want to work for Apple. They also apply, and get rejected, in their millions. So how do you keep people as fans of your brand when around 98% of them will be rejected in the recruiting process? That’s not only a recruiting issue; it’s a marketing issue too.
Thousands of organisations and their Talent Acquisition teams are grappling with both dynamics right now.
The combination of unemployment and Covid-19 lockdowns is affecting consumer buying: confidence is down, and spending is down. With people applying for more jobs and spending less, the hats have switched. Many who were consumers have now become candidates, and that may be how they are currently experiencing your brand: as candidates first, customers second.
Candidate experience is defined as the perception of a job seeker about an organisation and their brand based on their interactions during the recruiting process. Customer experience is the impression your customers have of your brand as a whole throughout all aspects of the buyer’s journey.
Is there a difference? It’s all about how the human feels when interacting with your brand. A person is a person, regardless of the hat they are wearing at the time!
Millions, even billions, of dollars are spent each year by organisations crafting a positive brand presence and customer experience. Organisations have flipped 180 degrees to become passionately customer-centric. It makes sense to do so. Put your customers first, and that goes straight to the bottom line.
What is perhaps less recognised is the loss of revenue and customer loyalty which is directly attributed to negative candidate experiences.
How about those loyal customers who want to work for your brand? They eagerly apply for a job only to get rejected.
For those who have tried in the past, you may well know that it can take an extraordinarily long time to ‘define’ a Candidate Experience strategy, create its metrics, find a budget and then execute on it.
Have a look inside the ‘too hard’ basket and you may well find many thousands of well-meaning ‘candidate experience’ initiatives still lying dormant. Many want to focus on candidate experience but shy away from doing so, because it is perceived as time-consuming and expensive.
Plus, right now there is so much on which CHROs need to focus. From ensuring workers’ wellbeing to enabling remote working. Who has the time to also worry about the experiences of candidates?
However, that has changed. Boosting candidate experience is no longer too hard, too expensive, nor too time-consuming. Technology becomes more manageable, quicker and cheaper over time. Also (borrowing from Moore’s law), its value to users grows exponentially.
The good news is that for those organisations who genuinely want to improve candidate experience, it has become much easier to do so. Finally, it is possible to give great experiences at scale while also driving down costs and improving efficiencies.
Win-win is easily attainable. In the Sapia Candidate Experience Playbook, read how organisations are hiring with heart by creating positive experiences for candidates while also decreasing the workload for the hiring team.
Tech Den is the HR Tech Summit’s flagship program – celebrating excellence in HR start-ups and entrepreneurship across Australia.
Vying for the ultimate prize, a $20k marketing campaign in HRD Australia publications, hundreds of HR Tech start-ups were whittled down to just a few finalists.
Five lucky finalists pitched their solutions to a panel of judges and investors with Sapia coming out on top.
The competition was fierce as we went head-to-head with Crewmojo, Gradsift, Voop.Global and Referoo.
We were delighted to come out on top!
Thank you HR Tech for the opportunity and the wonderful prize. We look forward to using it!
It used to be all about Mobile First; now it’s all about AI First. Google now calls itself an ‘AI first’ company.
“How do you decipher the truth from puffery? Are there any shortcuts to really understand where AI is best applied in your business?”
I’m not a data scientist, but I spend my days talking to users and buyers of AI technology who are befuddled and increasingly cynical about the hype. Here is my 3-step guide to cutting through the noise.
Most products use standard statistical techniques like regression. Without any machine learning baked into the technology, they are just matching tools. There are efficiency gains, but no ‘smarts’ and no learning in the technology.
For example, in recruitment a stock-standard AI product would merely find you people with the same profile as those you have already hired, matching applicant profiles to hired profiles. CV parsers do this kind of thing. That can be helpful if all you want is a shortcut to the same profile, fast, and it can definitely save your recruitment team time.
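To make the idea of ‘matching without learning’ concrete, here is a hedged sketch: it scores an applicant purely by overlap with the skills most common among past hires. The function names and the majority-skill heuristic are hypothetical illustrations, not taken from any particular product.

```python
from collections import Counter


def match_score(applicant_skills: set[str],
                hired_profiles: list[set[str]]) -> float:
    """Score an applicant by the fraction of 'typical hire' skills they list.

    A skill counts as typical if it appears in at least half of the
    hired profiles. No performance data is involved: this is pure
    profile matching, so it can only reproduce past hiring patterns.
    """
    counts = Counter(skill for profile in hired_profiles for skill in profile)
    typical = {s for s, c in counts.items() if c >= len(hired_profiles) / 2}
    if not typical:
        return 0.0
    return len(applicant_skills & typical) / len(typical)
```

Note that nothing in this score relates to on-the-job performance; it simply rewards resemblance to previous hires, which is exactly the limitation discussed next.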
But unless you know that these characteristics also match performance, you will not make a difference to your organisational outcomes.
If your Sales Director tells you that every new hire over the last year is hitting or exceeding budget, then absolutely keep using that tool. If she tells you that a third or more are underperforming or leaving the business, your AI tool is merely amplifying that bias and doing quantifiable damage to your company’s bottom line.
If you want both efficiency and business bottom-line impact – your AI needs to have machine learning baked in.
Takeaway: If the organisation selling you an AI tool has no Data Scientists, there is no machine learning in the product.
If I imagine a Maslow’s hierarchy of AI, it would look like this:
First up you need to have the data.
About 50% of companies don’t pass this threshold. Assuming you have the data, the next step is understanding its context: what is the business problem you are trying to solve?
Next is the housekeeping. This means consolidating, cleaning, categorising and cataloguing the data. And then finally the optimisation is at the top – this is where the magic happens. Optimisation is the last mile and is what gets you to the big savings, but you need everything underneath it in order first.
In recruitment, a genuinely smart AI tool with machine learning baked in works best under a specific set of conditions.
Takeaway: It’s critical to have a solid understanding of what you’re trying to fix, and the means to measure the changes you’re making. Ask yourself why you are considering AI if you can’t quantify the problem to begin with.
AI is about optimisation. For credit card companies, it’s detecting fraud quickly. For online retailers, it’s better product recommendations. In recruitment, it’s finding the best new hires in a massive group of rookie players.
In each case, you are optimising for efficiency and accuracy, as the cost of getting it wrong is huge.
It means trusting the technology to find the patterns. You have to suspend theory and your assumptions, a lot. You feed in a large amount of relevant and unbiased data, and the machine learns on its own, looking for the ‘signal in the noise’. Humans are unpredictable and, more often than not, unreliable.
The current human-driven hiring process is extremely resource-intensive, and the results are not always satisfying. Using AI will free up your time if you allow it to, improving efficiency or outcomes, and often both. But AI built off CV data alone only adds bias, and we’ve all seen how badly that ends.
Takeaway: Predicting human decision making is not easy and not quick. The only way to get to the ‘answer’ is to start now and expect this to be a journey.
If you liked this article, suggested reading:
A CV tells you nothing
To AI or not to AI