Here’s a hot take: The science of Emotional Intelligence (EQ) is dubious, confusing, and anything but settled. When it comes to talent identification, that can be a problem.
We tend to measure EQ the same way we do IQ: with a test comprising a series of questions. But emotion and cognitive ability are totally different, and as sciencealert.com points out, ‘It’s much more difficult to measure EI scores as often emotion-based questions do not have one correct answer.’ Add to this the fact that many EQ tests rely on self-reported data, and you can see that IQ and EQ are not simply two equal sides of the same coin.
That’s not to say that Emotional Intelligence doesn’t exist, just that it’s a roundabout way of measuring personality traits and behaviours that other mechanisms, such as the HEXACO personality inventory, do more reliably and effectively. EQ also carries the issue of ranking certain traits as more desirable or ‘better’ than others – for example, extraversion, agreeableness, and openness.
When we say someone has good or high EQ, what we tend to mean is that they’re friendly, kind, self-aware, and generally speaking, extraverted. They can adjust their tone and approach depending on who they’re talking to. They’re not known to be rude, or brash, or talk too much.
That’s an estimation of someone with good EQ, and this is the problem: It’s a subjective judgment. And while we think we’re describing someone who is emotionally intelligent, we’re really describing someone who is high in agreeableness, emotionality, openness, and other more valid measures of personality. Sounds like a great person, sure, but not necessarily a better type of person for every situation.
Consider this: Many studies have shown that disagreeable people tend to perform better over their careers than people who are polite, kind, and friendly. A large proportion of CEOs, be they women or men, are high in disagreeableness. It’s easy to see why: though there are many downsides to disagreeableness, it pays, in many situations, to possess the ability to be combative, straightforward, and brutally honest. To think of disagreeableness as inherently worse than agreeableness is misguided and, at worst, discriminatory.
And even if that is not true, and all of the varied and ever-changing definitions of Emotional Intelligence lead to better job performance, how do we even measure it accurately?
In the context of hiring, EQ is often used as a gut-feel heuristic we apply to people with whom we gel. Even in structured face-to-face interviews, it can be very difficult to assign a score to the different measures of EQ.
Imagine someone is sitting across from you in an interview. By sight, they appear to be an average person in every way. So, by your questions and their responses, how do you measure their:
Again, aside from face-value judgments of agreeableness and social tact, it’s nigh-on impossible to assess EQ in any fair or meaningful way. That’s not even accounting for the many biases we, as humans, bring to the hiring process. You might, with some accuracy, be able to appraise a person’s EQ once it’s been proven, but that’s of little use in recruitment. In hiring, you’re hedging against unknowns, hoping for the best.
That’s what makes accurate personality assessment so critical – and why we built our Ai Smart Interviewer. It finds you the people you need based on an accurate, HEXACO-based assessment of their personality. One interview, via chat, is all it takes.
We look at the critical power skills – communication, emotionality, empathy, openness, and so on – and profile all candidates fairly against one another. So you’re ranking suitability on objective and repeatable measures. No guesswork involved. No bias.
You bet it works. 94% of the 2+ million candidates we’ve interviewed found their personality insights accurate and valuable. On average, 80% of the candidates who experience our interview process recommend you as an employer of choice, even if they don’t get the job.
Someone with an ostensibly high EQ is, in most cases, someone you might want. But appearances can be deceiving, and humans, by nature, are not good at objectively assessing personality. We’re just not, period.
Get the help you need, and you’ll quickly hire the people you want.
More money is flowing into Environmental, Social and Governance (ESG) than ever. In 2021, investors poured $649 billion into ESG-focused funds worldwide, up from the $542 billion invested in 2020. In the UK, over 21% of investors plan to back funds and companies with comprehensive ESG strategies by 2025. And in Australia, more than 55% of super funds are using responsible investment approaches to inform strategic asset allocation.
All this investment has prompted a sharper focus on social issues across major companies – the S in Environmental, Social, and Governance. The great news is that investment in the big S, in turn, means more money and attention toward progress in Diversity, Equity and Inclusion (DEI).
Executives who care about diversity know that an effective strategy must start at the top – take Australian superannuation fund HESTA and its 40:40 Vision as an example. But, to be truly successful, we need DEI goals at all levels, and we need to track, accurately, the degree to which we meet them.
Both boards and shareholders want measurable change in DEI, and fast. According to a Harvard Business Review study of S&P 500 earnings calls, the frequency with which CEOs talk about issues of equity, fairness and inclusion has increased by 658% since 2018. You can bet that this will only increase further in the coming years.
According to another HBR article, 40% of US companies discussed DEI in their Q2 2020 earnings calls, which is a huge step up from the 4% of companies that did the year before. And with 1,600 CEOs pledging to take action on DEI, setting goals and tracking progress remain top priorities.
DEI and ESG are big challenges, and we might take myriad possible approaches in trying to solve them. Some companies may start at the executive level (HESTA, as an example), while others may invest in partnerships and outreach programs. The spectrum of options can easily become overwhelming.
“Interestingly, I’m just looking at our workforce profile and have been discussing the changes in diversity since we updated our recruitment approach last March. Not only have we hired three times more ethnic minorities and 1.5 times more women, but we now have twice as many LGBTQI+ colleagues in our business than we did three years ago! Other initiatives have played a part, but I’d imagine the game changer has been Sapia, as we’ve had some direct feedback from a transgender colleague that they felt more confident with our recruitment process than they did in other applications!”
David Nally, HR Manager, Woodie’s UK
So why not start with the people you bring into your company, at all levels? Why not begin with the way you attract, assess, and select talent?
With a Smart conversational Ai, you can set realistic DEI targets and measure them, at scale, with little extra effort – ensuring you access the best talent from the widest possible pool. A Smart Interviewer is different to the simple chatbots used to automate routine tasks according to a fixed set of rules. For example, our conversational Ai is able to analyse interview responses to gain deeper insights about each candidate’s personality and competencies, in a fair and objective way.
Our Smart Interviewer helps you track and meet these three key diversity goals.
Our proprietary interview response database is made up of more than 500,000,000 words, enabling us to conduct the most sophisticated response analysis in the recruitment industry. We can do this on a macro scale (e.g. across countries, cultures, industries, and role types); or for individual companies.
Take these findings, combining data from a range of our customers, globally:
Figure 1: Gender stats across applicants, Ai recommendations and hired
Thanks to our machine-learning capabilities, and the size of our database, we can provide the hiring team with real-time analytics on the following parameters:
By employing a smart interviewing Ai at the first stage of recruitment, we can demonstrate progress on inclusivity and bias reduction. These aggregate company data show that although fewer women applied than expected, the number of recommendations made by our Smart Interviewer exceeded expectations (effectively compensating for the top-of-funnel bias). We can also see that the rate of observed female hires far exceeded the expected rate.
What does this show? With just three metrics, you can see the progress being made in your recruitment process – and if performance is below expectation, you can see the stage at which targets are not being hit.
It is important to note that the recommendations of our Ai are based solely on its analysis of candidate responses in the chat-based interview. Its suitability criteria are based, among other factors, on HEXACO personality modeling and accurate assessments of various job-related competencies such as teamwork, critical thinking, and communication skills.
Our data also keep biases in check at each stage of the recruitment process, depending on the role type. As you can see, for all three roles, this company’s hiring outcomes were within regulatory limits (as stipulated by the US Equal Employment Opportunity Commission (EEOC)) across the three stages of their funnel: Applications received, recommendations made, and the hiring decisions ultimately made by the hiring team. The final step, it is important to note, happens independently of our Ai: It is a human decision. Despite this, the outcome data is recorded, so that the company can compare its outcomes against inputs and recommendations to see if late-funnel biases are occurring.
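The EEOC’s guidance is commonly operationalised as the “four-fifths rule”: the selection rate for one group should be at least 80% of the rate for the most-selected group, at every funnel stage. As a rough sketch of that kind of check (the function and all numbers here are illustrative, not Sapia’s actual data or methodology):

```python
# Illustrative four-fifths (80%) adverse-impact check at each funnel stage.
# All counts below are hypothetical examples, not real customer data.

def impact_ratio(selected_a, pool_a, selected_b, pool_b):
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a = selected_a / pool_a
    rate_b = selected_b / pool_b
    lower, higher = sorted([rate_a, rate_b])
    return lower / higher

# Hypothetical funnel stages:
# (group A selected, group A pool, group B selected, group B pool)
stages = {
    "recommended": (410, 1000, 390, 1000),
    "hired":       (58, 410, 52, 390),
}

for stage, (a_sel, a_pool, b_sel, b_pool) in stages.items():
    ratio = impact_ratio(a_sel, a_pool, b_sel, b_pool)
    flag = "within tolerance" if ratio >= 0.8 else "needs review"
    print(f"{stage}: impact ratio {ratio:.2f} -> {flag}")
```

Running a check like this per role type and per stage is what makes it possible to pinpoint exactly where in the funnel a bias creeps in, rather than only observing it in the final hiring numbers.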
Figure 2: Role-type-based gender bias. Mid line indicates 0 bias. Shaded areas indicate the tolerance level. Right of line favours females and left favours males.
The feedback from candidates is extremely positive: Company A’s pursuit of fairness and equality in its processes has resulted in a candidate satisfaction score of 98.7% for females, and 98.1% for males. Better still, the interview dropout rate across the board is less than 10%.
As with gender, our ethnicity analytics help hiring managers to easily set and accurately track goals for ethnicity representation in recruitment. Company A (whose data were shown in Figure 2) is, again, leading the way in this regard: Its BAME (Black, Asian and Minority Ethnic) recommendation rate is at 46.5%, exceeding expectations – meanwhile, its non-BAME recommendation rate sits at 37.1%.
Our data has also helped Company A to increase its hiring commitments for First Nations people: The rate currently sits at 4.5%, from 4,000 candidates, above the national average of 1.8% (2018-19). This number is expected to increase over the coming year.
The data we collect helps us, as well as our customers, understand the extent to which personality determines role suitability and general workplace success. It also helps us to eliminate long-standing biases that negatively impact certain candidates, despite the fact that said candidates may be highly suitable to the roles for which they are applying.
For example, people high in trait agreeableness (compassionate, polite, not likely to dissent or proffer controversial viewpoints) tend to underperform in traditional face-to-face interviews. Hiring managers may assume, based on this, that they are unable to lead, or are not a ‘culture fit’. However, a face-value assessment of agreeableness is not a reliable predictor of candidate potential. Only scientific analysis of HEXACO traits can make this call with accuracy.
Take these two visualizations, showing how different personality traits affect the recommendations made by our Ai. Females (red dot) and males (blue dot) are slightly different in agreeableness, but there is virtually no difference in their conscientiousness, a strong predictor of job performance. As a result of being able to measure conscientiousness accurately, our system can effectively allow for higher levels of agreeableness – or cancel out the negative face-value judgements typically made in face-to-face interviews. Despite these personality differences, as shown in Figure 1, Sapia Ai recommendations for both male and female groups remain similar (~40%). This results in a fairer chance for all, and a wider pool of candidates. In this case, this is to the benefit of females.
Figure 3: Male (blue) and Female (red) personality trait differences
The world is changing, and we can no longer continue to take a “We’ll see what happens” approach to the ‘S’ in ESG. Many investors are pushing companies for better diversity and inclusion outcomes. At Sapia, our data show that fair, scientifically valid, and explainable Ai can produce better outcomes for people of all genders and ethnicities. The companies that have adopted our Ai approach are seeing strong improvement in their own DEI practices and results.
Over and above assisting our clients, our commitment to DEI is embodied in a guiding vision of our own: Our FAIR Framework. This embeds an approach that ensures our systems and processes are ethical and transparent. Many similar Ai systems operate in a ‘black box’, providing little knowledge about how their algorithms help make important decisions or create issues like amplifying biases. We are committed to a fairer world, free of bias – and, with every candidate interviewed, our data is bringing us closer.
In June 2022, we announced that, thanks to our partnership with AWS, we have introduced regional data hosting. This means that customers and their candidates will experience increased speed when they use the Sapia platform, and that companies using the platform can be confident candidate data is treated in line with data sovereignty legislation, such as the EU’s General Data Protection Regulation (GDPR).
Here is the full list of improvements to data security and sovereignty for Sapia customers.
Sapia’s platform is built on AWS, and is protected by anti-virus, anti-malware, intrusion detection, intrusion prevention, and anti-DDoS protocols. We comply with most major cybersecurity requirements, including ISO 27001, SOC 2 Type 1 (Type 2 in progress), and GDPR.
We use AWS’ serverless solution, which can automatically support billions of requests per day. Our sophisticated tech stack includes React.js, GraphQL, MongoDB, Node.js and Terraform.
Regional data hosting
Sapia offers regional data hosting via AWS. All data is processed within highly secure, fault-tolerant data centres located in the same geography in which it is stored. All data is stored in and served from AWS data centres using industry-standard encryption, both at rest and in transit.
Availability and reliability
Sapia uses a purpose-built, distributed, fault-tolerant, self-healing storage system that replicates data six ways across three AWS Availability Zones (AZs), making it highly durable. Our storage system is automatic, features continuous data backup, and allows for point-in-time restore (PITR).
It seems that using AI could consign fantastical or over-optimised resumes to the dustbin of history, along with the Rolodex and fax machines.
But how do we go about selecting the perfect (or as close to perfect as possible) candidates from AI-created shortlists?
It should be easy to learn how to conduct an interview that adds the human element to AI-driven selection. The web is awash with opportunities to earn recruitment qualifications from a variety of bodies, both respected and dubious, and there is no shortage of manuals, guides and blog posts on the best ways of interviewing. After all, people have been interviewing people for hundreds of years.
We’ve all heard about bizarre interview questions (no explanation needed). We’ve felt the pain of people caught up in interview nightmares (from both sides of the desk). And we’ve scratched our heads and noses over the blogs on body language in face-to-face interviews (bias klaxon).
Even without the extremes, people have tales to tell. Have you ever come away from an interview for your ideal job where something just felt wrong?
It’s clear that adding human interaction to the recruitment process is by no means straightforward. Highlighting these recurring problems doesn’t solve the underlying question, which is:
“We’ve used an algorithm to better identify suitable candidates. How do we ensure that adding the crucial human part of hiring doesn’t re-introduce the very biases that the algorithm filtered out?”
Searching for “Perfect Interview Questions” returns 167,000,000 results, many of them complete with the Perfect Answers to match. So it’s not simply about asking questions that, once upon a time, were reckoned to extract truthful and useful responses.
Instead we want questions that will make the best of that human interaction, building on and exploring the reasons the algorithm put these candidates on the list. Our questions need to help us achieve the ultimate goal of the interview: finding a candidate who can do the job, fit with the company culture AND stay for a meaningful period of time.
It’s generally agreed that we get better interview answers by asking open questions. I’d expand on that: ideally, they should be questions that don’t relate specifically to the candidate’s resume (or only at the highest level), so that we get an in-depth understanding.
We should try to avoid using leading questions that will give an astute candidate any clues to the answers we’re looking for. And we should probably steer clear of most, if not all, of the questions that appear on those lists of ‘Perfect Interview Questions’, knowing that some candidates will reach for a well-practised ‘Perfect Answer’. We want them to display their understanding of the question and knowledge of the subject matter. Not their ability to recall a pre-rehearsed answer.
And so, we need to remember that we’re looking for the substance of the answers we get, not the candidate’s ability to weave the flimsiest material into an enchanting story.
So, here are some possible questions to get you thinking.
Of course, you’ll need to frame and adjust those questions to match the role and your company.
AI equips recruiters with impartial insights that resumes, questionnaires and even personality profiles can’t provide. Well-constructed, supervised algorithms sidestep the biases that every human has. And that can only be a good thing.
Statistically robust AI uses an algorithm, derived from business performance and behavioural science, to shortlist candidates. It can predict which ones will do well, fit well and stay. We can trust it to know what makes a successful employee, for our particular organisation and this specific role. It can tell us to invest effort with the applicants on that shortlist. However unlikely they seem at first glance.
So we can use all of our knowledge and skills to understand a candidate’s suitability and look beyond things that might have previously led us to a rejection.
AI is the recruiter’s friend, not a competitor. It can stop us wasting time chasing candidates who we think will make great hires but who fail to live up to expectations. And it can direct us to the hidden gems we might have otherwise overlooked.
Technology like AI for HR is only a threat if you ignore it.
Don’t be that company that still swears by dated processes because that’s the way it’s always been done. The opportunity here is putting technology to work, helping your organisation evolve for the better. The longer the delay, the harder it will be. So don’t be left at the back playing catch-up.
There are very few businesses these days that communicate by fax machines – and that’s for a reason. In a few years, you’ll look back and wonder “Why didn’t we all embrace Artificial Intelligence sooner?”