This seems obvious, and yet even today the CV remains the key data source used in screening and hiring. For graduate recruitment, your degree, your university and your results are the key filters used in screening.
It has already been four years since Ernst & Young removed university degree classification as an entry criterion, citing ‘no evidence’ that it predicts success. Students are savvy, and they know how competitive it is to secure a top graduate job. In the UK, the Higher Education Degree Datacheck (Hedd) surveys students and graduates about degree fraud. The annual results are remarkably consistent: about a third of people embellish or exaggerate their academic qualifications when applying for jobs.
We analysed roughly 13,000 CVs, received over a five-year period, all for similar roles at a large sales-led organisation. Of these, 2,660 candidates were hired and around 9,600 were rejected. We wanted to test how meaningful the CV is as a data source for hiring decisions.
Look at these two word clouds. One is built from the words extracted from the CVs of those who were hired; the other from those who were rejected. Which would you pick?
A word cloud depicts the relative frequency of words in a set of resumes by size: words in a larger font appear more often than those in a smaller font. The fact that the two word clouds show no significant differences in which words appear large or small means the two groups are indistinguishable based on the words used in their CVs.
P.S. If you had picked Group 2, you would have been right.
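The mechanics behind such a word cloud are simple frequency counting. Here is a minimal sketch of just the counting step (the stop-word list and sample CV snippets are invented for illustration; a rendering library would then map these counts to font sizes):

```python
from collections import Counter
import re

def word_frequencies(cvs, stopwords=None):
    """Count word occurrences across a set of CV texts.
    In a word cloud, font size is proportional to these counts."""
    stopwords = stopwords or {"the", "and", "a", "to", "of", "in"}
    counts = Counter()
    for text in cvs:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in stopwords:
                counts[word] += 1
    return counts

# Invented example snippets, not real candidate data.
hired = ["Led a sales team and exceeded targets", "Managed key sales accounts"]
rejected = ["Worked in sales and managed accounts", "Led customer sales projects"]

print(word_frequencies(hired).most_common(3))
print(word_frequencies(rejected).most_common(3))
```

Comparing the two frequency tables is exactly the comparison the word clouds visualise: if the top-ranked words and their relative counts look alike, the groups are indistinguishable on vocabulary alone.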
Josh Bersin, the premier topic expert in our space, articulates how hard it is to predict performance through traditional testing:
“Managers and HR professionals use billions of dollars of assessment, tests, simulations, and games to hire people – yet many tell me they still get 30-40% of their candidates wrong.”
And now the Harvard Business Review, the definitive publication for all things HR and leadership, has shared research showing that prior experience is also a poor predictor of performance.
So what do you look for instead? Whether their background is similar to yours, or to the star performer on your team? Whether they have played a competitive sport at a senior level (because that’s a good indicator of drive and resilience)? Or maybe whether they are of a different ethnicity, gender or educational background to the rest of your team because, you know … diversity is meant to be good for business!
The list of performance ‘signals’ is as long as the number of interviewers you have assessing new hires. It’s a deeply personal decision, like choosing a partner, and we all feel like we know what to look for. But we don’t.
And no amount of interview or bias training or even interview experience is ever going to make us better at these decisions.
Experience does matter, but it’s a different type of experience: the kind that comes from doing something ten times, 10,000 times, a million times, with feedback on what worked, what didn’t, and in what context. And that’s assuming one could remember all of it.
Think of a different context: the grading of an exam. If you ask your teenage or university-aged son or daughter what would make them trust an exam result, they would likely say:
1. Data-driven assessment, i.e. some kind of formula that assures consistency and fairness.
2. The experience of the assessor.
The fact is, just as no human driver will ever match the learning capability and velocity of a self-driving Tesla, no assessor will ever be as good as a machine that has done it a million times. The same applies to AI in recruitment.
No human recruiter will ever match the power, smarts and anonymity presented by a machine learning assessment algorithm.
We would love to see you join the conversation on LinkedIn!
There has been some negative media attention lately surrounding the use of Artificial Intelligence (AI) in the recruitment space, with warnings ranging from claims that AI produces a shallow candidate pool to more serious concerns like the amplification of bias.
There are many instances of AI being used in a way that has harmful outcomes, but it is important to clarify that this is about how AI is being implemented and not an issue with the use of AI itself.
When AI is used appropriately, responsibly, and following regulatory guidelines it is an incredibly powerful tool that can create fair outcomes for candidates who are selected without bias – in a way that no other tool at our disposal can.
This is why we think it’s worthwhile that more people better understand AI and some of the differences in the way it is used and implemented.
Most media articles refer to AI as if it were a singular master algorithm, and fail to identify how varied its implementations are. Almost all AI we have today falls into the category of “narrow AI”: algorithms, mostly machine learning, built to solve a specific problem, e.g. classifying sentiment, detecting spam, labelling images, or parsing resumes. These purpose-built systems are highly dependent on the nature of the underlying training data and on the expertise of the developers in making the right assumptions and testing the validity of their models. When built in the right way and used responsibly, AI has the ability to empower humans. This is why at Sapia.ai we have made conscious design choices and adhere to a framework called FAIR™ that tests our AI-based tools for bias, validity, explainability and inclusivity.
The biggest cause for alarm is when AI is applied to analysing video, which can lead to irrelevant inputs like clothing, background, and lighting being used as predictors of personality and job-fit. Video and speech patterns also make it nearly impossible to remove demographic information like race and gender as inputs.
Additionally, analyzing facial expressions is problematic, especially when evaluating certain candidates like those with Autism Spectrum Disorder or other forms of neurodiversity.
This is why Sapia.ai does not, and will not, use AI scoring for video interviews, or even for voice transcriptions from video or audio, given the word error rate introduced in transcribing speech. Instead, we opt for text, implemented in a friendly, no-pressure environment that feels like texting a friend.
It’s worth noting that no data other than the answers given by the candidate is used in the ‘fit score’ calculation – that is, we never use demographic data, social media, CV or resume data (which also contains demographic signals, even when de-identified), or behavioural metrics such as time to complete.
Even a candidate’s raw text itself contains gender and ethnicity signals that can introduce bias, if not mitigated. This is why we only use feature scores (e.g., personality, behavioral competencies, and communication skills) derived according to a clearly defined rubric in our scoring algorithms, which our extensive research shows contain significantly less gender and ethnicity information than raw text.
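The pipeline described above can be sketched in a few lines. This is illustrative only – Sapia’s actual feature definitions and rubrics are not public, so the features below are trivial stand-ins – but it shows the key design point: the scoring model sees only derived feature scores, never the raw text:

```python
def extract_features(answer: str) -> dict:
    """Map a free-text answer to numeric feature scores.
    Real systems derive traits like behavioural competencies from
    validated rubrics; these two features are toy stand-ins."""
    words = answer.split()
    return {
        "length": min(len(words) / 100.0, 1.0),  # crude verbosity proxy
        "avg_word_len": sum(map(len, words)) / max(len(words), 1) / 10.0,
    }

def fit_score(features: dict, weights: dict) -> float:
    """Linear scoring over feature scores only -- the raw words,
    and any demographic signal they carry, never enter this step."""
    return sum(weights.get(k, 0.0) * v for k, v in features.items())

answer = "I enjoy working with customers and solving their problems quickly."
feats = extract_features(answer)
print(round(fit_score(feats, {"length": 0.5, "avg_word_len": 0.5}), 3))
```

Because only the intermediate feature scores reach the scoring function, any bias mitigation can be applied and audited at the feature level rather than on opaque raw text.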
Another common concern is that AI will result in more uniformity rather than diversity in the workforce as algorithms narrow the pool in order to search out an employer’s ideal candidate. There are several things worth noting here.
First, identifying what the ideal candidate is – that is, what knowledge, skills, abilities, and other characteristics are important for success in the role – is what a job analysis is for and should, legally, be what your selection tool is designed to measure.
This is also not specific to AI, as all selection systems are designed to identify which candidates have a profile of traits and characteristics that indicate they will likely be successful in the role. This doesn’t automatically mean that every hire is going to be exactly the same, though. When you focus on the traits and characteristics that will set someone up to be successful, considering potential more than background or pedigree, you’re more likely to uncover hidden talent and hire more successful people from a broader, more diverse range.
Relying solely on past data to build your model also runs the risk of introducing historical data biases. This is actually why it is so important to consider the ideal candidate profile and use that to inform your scoring model. We strongly believe in keeping the human in the loop, which is why our scoring models are centred around the human-determined (via job analysis) ideal candidate profile and then optimized to ensure all bias constraints (e.g., 4/5ths rule and effect sizes) are met.
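The 4/5ths rule referenced above has a simple arithmetic form: every group’s selection rate must be at least 80% of the most-selected group’s rate. A minimal check (group names and counts here are hypothetical):

```python
def selection_rates(outcomes):
    """outcomes: {group: (hired, applied)} -> {group: hired/applied}"""
    return {g: hired / applied for g, (hired, applied) in outcomes.items()}

def passes_four_fifths(outcomes, threshold=0.8):
    """Adverse-impact check: each group's selection rate must be at
    least `threshold` (4/5) of the highest group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(rate / best >= threshold for rate in rates.values())

# Hypothetical counts: (hired, applied) per group.
print(passes_four_fifths({"group_a": (30, 100), "group_b": (27, 100)}))  # True:  0.27/0.30 = 0.9
print(passes_four_fifths({"group_a": (30, 100), "group_b": (20, 100)}))  # False: 0.20/0.30 ≈ 0.67
```

A scoring model can be optimised subject to this constraint, which is what “all bias constraints are met” means in practice.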
Using this approach, Sapia has helped clients achieve their DEI goals and increase their diversity hires, including impressive statistics like hiring 3x more ethnic minorities, 1.5x more women, and 2x more LGBTQ+ candidates in just 3 months.
Lastly, it’s worth acknowledging that there is often a “black box” mystery of how AI recruitment tools work. People don’t trust what they don’t understand. While we don’t expect everyone to be an expert in AI or Natural Language Processing, we do strongly believe in building trust through transparency and work hard to make sure that our models are easily understood and open to scrutiny. From third-party audits to detailed model cards to in-depth dashboarding and reporting, we aim to maximize transparency, explainability, and fairness.
We believe a fairer future can only be achieved when AI is used responsibly. AI is not the enemy; rather, it’s the experience and motivation of those promoting it that makes the difference between good AI and harmful AI.
We know that the global pandemic has caused a disruption in global workforces. Much has already been said about the Great Resignation, and how it has morphed into the Great Reshuffle, a period in which many are looking to reinvent themselves in the light of new jobs and careers. No industries or role types have been spared, either, it seems – even recruiters are leaving positions in the tens of thousands.
With a reshuffle, however, comes uncertainty, doubt, and anxiety. The war for talent may have benefited some, but the path to career reinvention is by no means guaranteed, and job-hunters face new pressures every day.
It’s little wonder that some Great Reshufflers, especially emerging adults (ages 18-24), are experiencing anxiety about working in the post-COVID world. Instability is the only constant. Consider, too, that some people are better at dealing with uncertainty – or, in technical terms, they are higher than average in the HEXACO personality trait Flexibility (or Adaptability, as it’s sometimes known).
This hypothesis is supported by at least one study, published last year in the International Journal of Social Psychiatry. It suggested that, “…due to the outbreak of ‘Fear of COVID-19’, people are becoming depressed and anxious about their future career, which is creating a long-term negative effect on human psychology.”
The traditional face-to-face interview is typified by stilted small talk and a general air of nervousness. If a candidate is low in Extraversion, high in Agreeableness, or high in the Anxiety and Fearfulness scales of the Emotionality personality domain, their experience of walking into a blind interview is likely to be worsened by the additional stressors left by COVID-19.
Consider, as is likely to be the case, that the candidate might possess a combination of all three traits, in the proportions laid out above. These people, especially if they are young, may not even bother to apply for a job in today’s climate.
The ramifications of this are obvious: at best, you risk filling your workforce with extraverted, disagreeable, type-A employees. At worst, you risk baking unfairness and bias into your recruitment process, at the cost of good candidates who don’t shine in awkward face-to-face situations.
Take this small data visualisation from our TalentInsights dashboard as a key example. Please note here that the following results apply to the outcomes of the hiring process, and not Smart Interviewer’s recommendations.
It presents an assessment of candidate hiring outcomes according to key HEXACO personality traits. The red dots represent female candidates, the blue dots male. Immediately, we can see that when it comes to Conscientiousness – one of the best predictors of workplace success – females and males are more or less identical.
The main differences between the two genders occur, however, in the domains of Agreeableness and Emotionality. Combined, these two traits are good predictors of anxiety and/or aversion to fear. As you can see, females tend to be higher in Agreeableness and Emotionality than males.
Though the difference is not large, it is still present – and it may require a slight change to the way you bring female candidates into your hiring process. The data shows, of course, that your best candidates are just as likely to be female as male – but your recruitment tactics may be producing outcomes that favour males.
We’ve said it before, and it’s the whole reason we exist: a blind, text-based Chat Interview with a clever, machine-learning AI. Smart Interviewer has now analysed more than 500 million candidate words to arrive at the kinds of data points you see above. It helps you combat bias at the top of your funnel, and gives you the Talent Analytics you need at the bottom.
And it works. Take it from the candidates high in Agreeableness:
“I have never had an interview like this online in my life… able to speak without fear or judgement. The feedback is also great to reflect on. I feel this is a great way to interview people as it helps an individual to be themselves and at the same time the responses back to me are written with a good sense of understanding and compassion also. I don’t know if it is a human or a robot answering me, but if it is a robot then technology is quite amazing.”
– Graduate Candidate A
“[It was] approachable, rather than daunting. I found the process to be comprehensive and easy to complete. I also enjoyed that the range of questions were different than those commonly asked. The visual aspects of the survey makes the task seem approachable rather than daunting and thus easier to complete.”
– Graduate Candidate B
The future of work is uncertain. But with a fair and unbiased assessment tool, you can prevent the best talent from being lost under the dust of the Great Reshuffle – and save a lot of time and money doing it.
You would think in this day and age organisational diversity would be a moot point. With global social reforms across gender, sexuality, disability and race equality, one could believe the challenge of diversity has been overcome.
Sadly, this is not the case – the fu(cked) facts speak for themselves.
So why do we continue to see inequality in employment?
Despite the best of intentions, hiring managers and recruiters can discourage whole groups of potential applicants by using restrictive terms that are gendered or ageist. This can extend to unnecessary education requirements that are not needed to do the role.
More often than not, recruiters and hiring managers are overwhelmed by application volumes. To save time, CVs are screened for job titles, big-brand company names, and favoured universities or education providers.
In some instances, intentionally or not, applicants are filtered out of the screening process based on their name. Researchers from Harvard and Princeton found that blind auditions increased the likelihood that female musicians would be hired by an orchestra by 25 to 46%. And one seminal study found that applicants with African American-sounding names had a 50% lower call-back rate for interviews than those with typically White-sounding names.
Would you believe there are over 100 different forms of cognitive biases? Confirmation bias, affinity bias, similarity bias, halo effect, horn effect, status quo bias, conformity bias… the list goes on. These biases make diverse hiring an even more difficult process as you don’t even know that you are missing out on the best candidates!
Time and time again research has shown that diverse organisations are more effective, perform better financially and have higher levels of employee engagement.
A recent McKinsey report, “Delivering through Diversity”, showed that organisations with gender-diverse management were 21% more likely to experience above-average profits, while companies with a more culturally and ethnically diverse executive team were 33% more likely to see better-than-average profits. This figure grows to 43% when boards of directors are also diverse in gender, ethnicity and sexual orientation.
More compelling still: for every 1% rise in workforce gender and cultural diversity, there is a corresponding increase of between 3 and 9 per cent in sales revenue!
Diversity is not only a social and ethical issue for organisations; it is also a commercial one.
Blind screening: removing information that reveals the candidate’s race, gender, age, names of schools, etc., to reduce the unconscious bias that creeps into hiring decisions.
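In software, blind screening amounts to stripping identity-revealing fields before a reviewer (or a model) sees the application. A toy sketch – the field names and candidate are invented, and real CVs, being unstructured text, would need named-entity recognition rather than a simple field drop:

```python
def blind_screen(cv: dict, hidden=("name", "gender", "age", "school")) -> dict:
    """Return a copy of the CV with identity-revealing fields removed,
    so downstream screening cannot condition on them."""
    return {k: v for k, v in cv.items() if k not in hidden}

# Hypothetical structured CV, for illustration only.
cv = {
    "name": "Jamal Carter",
    "age": 29,
    "school": "State University",
    "experience": "5 years in customer service",
    "skills": "CRM, conflict resolution",
}
print(blind_screen(cv))  # only 'experience' and 'skills' remain
```

The point of the sketch is the invariant, not the mechanism: whatever is scored downstream can only depend on the job-relevant fields that survive the redaction.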
For our customer, a global airline, cabin crew are at the heart of delivering great customer experience. With 9000+ cabin crew creating iconic experiences for passengers every day, they want to maintain their strong brand. They intend to do this through hiring the best in customer service to give their applicants an iconic experience.
An iconic brand also attracts an enormous number of applications some of which don’t fit the criteria. Sifting through so many CVs to uncover the right candidates is extremely time-consuming for the recruiters.
The team faced several challenges in their existing processes, not least the time it took to sift through enormous volumes of CVs.
The results were amazing.
A post-campaign survey showed a perfect score from the recruitment team rating the technology as faster, fairer and delivering better candidates.
No matter how good the intentions, humans will always lean on their biases when making decisions. Interrupting bias in recruitment needs a systemic solution – something that can operate independently, rather than a human trusted to do the right thing.
While Sapia does not claim to completely solve bias within an organisation, using a chat-based assessment at the top of the recruitment funnel will help you interrupt, manage and therefore change the biases that reduce diversity in hiring.
Chat is inclusive for all candidates
Candidates chat through text every day. It’s natural, normal and intuitive. Chat interviews provide an opportunity for them to express themselves, in their way, with no pressure.
Playing games to get a job is not relevant. Talking to a camera is not fair. What if you are unattractive, introverted, not the right colour or gender, or don’t have the right clothes? When you use chat over other assessment tools, you’re solving for adoption, candidate satisfaction, inclusivity and fairness. Our platform has a 99% candidate satisfaction score, and a 90% completion rate. Here’s the 2020 Candidate Experience Playbook.
We use an intrinsically blind assessment design
Blind screening means an interview that is truly blind to the irrelevant markers of age, gender and ethnicity – one that simply cannot see you, and therefore cannot judge you. Sapia does not use any information other than the candidate’s responses to the interview questions to infer suitability for the job they are applying for. As a company, we call this ‘fairness through unawareness’: the algorithm knows nothing about sensitive attributes and therefore cannot use them to assess a candidate. Sapia only cares whether the candidate is suitable for the job, and nothing else.
To keep up to date on all things “Hiring with Ai” subscribe to our blog!