Blog

Written by Nathan Hewitt

Enabling data-driven hiring decisions

The marriage of behavioural science, data science and AI technology

The introduction of artificial intelligence (AI) technologies into the world of HR and recruitment is not just an idea anymore, it is a reality. Neural networks, machine learning and natural language processing are all being introduced into different areas of HR.

These developments give the HR function greater access to data-driven insights and analytics, enabling better-informed people decisions.

In recruitment and talent acquisition, AI technologies have the potential to make a significant impact by identifying candidates who can perform well in individual business environments.

However, pre-hire assessment is a complex area, and without incorporating validated behavioural science we only end up with a 2D view – instead of the 3D view we actually wanted. This is why the marriage of data, computer and behavioural sciences is essential.

By bringing together organisational psychologists, data scientists and computer scientists we truly leverage the power of artificial intelligence – and change the way candidates are recruited. It takes the recruitment process beyond the technical excellence necessary to collect and report on data and insights.

By merging these scientific areas we get:

  • Computer science expertise providing the critical ‘how’ for collecting quality data.
  • Data science brilliance then revealing the ‘what’ of unseen connections within that data.
  • Well-constructed behavioural science explaining the ‘why’ behind those connections.

Through the combination of all three disciplines, we can access a whole extra world of meaning, enabling us to get closer to the core of what’s happening in organisations.

Behavioural science is the key to success

A recent Industrial & Organisational Psychology article pointed to the disruption taking place in the talent identification industry through new digital technologies. The authors noted that although big data is attractive, the data is often thrown together and interrogated using data science until correlations are found. This has become known as ‘dustbowl empiricism’.

My favourite example at the moment has to be the strong correlation between the number of people who drowned by falling into a pool in a given year and the number of films Nicolas Cage appeared in that year. Who knew how dangerous Nicolas Cage could really be?

Despite the evident danger of watching Nicolas Cage films (particularly near water), I believe there is more value in explaining behaviour than in just predicting it.
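The dustbowl-empiricism trap is easy to reproduce. As a sketch (with invented numbers, not real statistics), two short, unrelated yearly series can show a near-perfect correlation:

```python
# Illustration of 'dustbowl empiricism': two made-up yearly series can
# correlate strongly without any causal link between them.

def pearson_r(xs, ys):
    """Plain-Python Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

cage_films_per_year = [1, 2, 2, 3, 4, 4, 5]                 # invented
pool_drownings_per_year = [90, 95, 96, 100, 105, 104, 110]  # invented

r = pearson_r(cage_films_per_year, pool_drownings_per_year)
print(f"correlation: {r:.2f}")  # very close to 1.0, yet clearly no causation
```

With enough variables thrown together, a data-mining pass will always surface correlations like this; only theory can say which ones mean anything.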

For example, is there a correlation between owning a certain type of car and being a high performer?

Perhaps, but I don’t think looking for the best candidates in car parks is very useful. After all, people change cars, and the correlations between particular car models and performance might change with them. To cite another famous example: as people change their eating preferences, so goes the link between curly fries and intelligence.

Understanding why data points are linked can suggest better ways to improve performance than just updating the company car fleet or changing the canteen menu.

Linking a vehicle preference to well-established behavioural science may suggest that a client considers how a candidate is innovative elsewhere in their lives, such as in their adoption of other new technologies. Or they may look for other ways the candidate demonstrates a penchant for reliability (perhaps through previous work choices).

The scientific approach

This is where organisational psychologists come in.

They have an intimate knowledge of the theories that can help interpret and explain the links between personal attributes and performance, or other variables that matter. They know how to use these theories to solve real problems and they know how to design studies and measurement tools to ensure that scientific knowledge is applied correctly in an organisational setting.

I learned many organisational psychology models and theories during my Masters and PhD studies. I focused on these, and the research behind them, when teaching MBA and Master of Organisational Psychology programs – sometimes noting gaps in current models and theories, and designing studies to help extend or debunk what we knew.

While completing my MBA and later in a corporate role, I became skilled in applying that knowledge to the problems managers and executives face.

As an organisational psychologist I often find that it isn’t just knowing behavioural science that matters, it is knowing the behavioural science detail to understand what is most relevant for a role or business problem.

For example, consider sales performance.

Thanks to the popularity of some psychometric instruments, ‘extroverted’ or ‘introverted’ are understood as reliable ways to describe elements of a person’s personality, and many people are convinced that being extroverted is important in a sales role.

However, the research on sales performance says otherwise. An International Journal of Selection and Assessment article shows that across a range of studies there isn’t a strong link between ‘extraversion’ (broadly) and sales performance, despite this being such a common view.

Knowing the detail matters here.

A broad description of extraversion may not do a candidate justice, particularly when we’re focused on understanding performance in a particular role.

Instead, we might be interested in a candidate’s level of dominance, their sociability, what they would be like in a group setting, or presenting to a group to make a sale.

Perhaps we’d be interested in whether they are independent, adventurous, or ambitious, all of which (as potential elements of extroversion) may have different implications for sales performance.

We might also focus on the particular nature of the sales role – many roles are becoming more formalised and structured, with down-to-the-minute journey plans and call times. No wonder, then, that the International Journal of Selection and Assessment article found another personality factor, conscientiousness, to be relevant for predicting sales performance.

The business focus of pre-employment assessments

It’s the acceptance of how important behavioural science is to the new world of AI that has led me to Sapia, where we believe all people decisions should be based on science, data and analytics – not just gut feeling.

Sapia focuses on the things that matter.

We use validated behavioural science to build predictive models, centred on the issues your business wishes to address and their corresponding KPIs. The predictive model is based on your workforce data so it’s unique to your organisation, maximising predictive accuracy while also prioritising the candidate experience.

We use various techniques, including training a neural network to identify what drives performance in the organisation, based on the data we collect. We build our algorithms to achieve accurate predictions from the start, and the model improves over time through machine learning.
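The kind of predictive model described above can be sketched in miniature. This is a hypothetical illustration only: a single-layer model (logistic regression, the simplest building block of a neural network) fitted by gradient descent to made-up candidate data. Sapia’s actual models, features and training data are not described in this post.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    """Fit weights w and bias b by gradient descent on log loss."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y  # gradient of log loss with respect to the logit
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= 0.5 else 0

# Hypothetical data: (interview score, work-sample score) -> high performer?
data = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.9), 1),
    ((0.2, 0.1), 0), ((0.1, 0.3), 0), ((0.3, 0.2), 0),
]

w, b = train(data)
accuracy = sum(predict(w, b, x) == y for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.0%}")
```

In practice a mature library and far richer features would be used; the point is only that the model learns its weights from outcome data rather than from human intuition.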

We’re now at a point where we can use behavioural science, data science and computer technology to understand the intricate links between candidate information and performance data. With that we can help reduce bias and level the candidate playing field and give managers a 3D view of their candidates, to enable them to make the best people decisions.

Dr. Elliot Wood is a registered organisational psychologist with a bachelor’s degree, various master’s degrees and a PhD in the field. He spent 12 years in academia, teaching master’s-level organisational psychology; supervising post-graduate research; and working on research grants and consulting projects. He then moved into organisational development–focused consulting in Australia and Asia, followed by an internal talent role in a multinational brewer. He is now Chief Organisational Psychologist at Sapia.

References

Tomas Chamorro-Premuzic, Dave Winsborough, Ryne Sherman and Robert Hogan, Industrial & Organisational Psychology, ‘New Talent Signals: Shiny New Objects or a Brave New World?’

Murray R. Barrick, Michael K. Mount and Timothy A. Judge, International Journal of Selection and Assessment, ‘Personality and Performance at the Beginning of the New Millennium: What Do We Know and Where Do We Go Next?’

 



Why video is killing your DEI star


Despite all the rhetoric, it seems that the world is becoming worse at removing bias from our workplaces, leveling the playing field for all employees, and improving diversity, equity, and inclusion. 

COVID was tough for everyone, but the one good moment that seemed to come out of it was how people galvanized around the Black Lives Matter movement. Companies dedicated large advertising budgets to sophisticated promotional campaigns to convince us that they supported the movement.

At work, people demanded better from the companies they worked for. They demanded real and measurable progress on matters like diversity and inclusion, not just better benefits.

Employees weren’t going to accept the hypocrisy of their employer: a consumer brand spending millions on advertising about how woke it is while nothing changed internally. Bias was simply not something people were prepared to accept. It seemed like progress was being made, at least in the workplace.

Fast forward to 2023, and things have gotten worse than they were before the movement. What happened to push us so far backward on all the progress we’d made? The answer is video interviewing, and specifically the way it amplifies bias in recruitment.

Video interviewing: Simpler, but not fairer

Video interviewing took off as a solution to the challenges of remote recruiting. However, video is a flawed way of assessing potential candidates as a first gate. It invites judgment, adds stress to the candidate, puts added pressure around hair and makeup, and turns a simple interview into a small theater production. Additionally, simply automating interviews with video doesn’t create any efficiencies for hiring teams, who are still watching hours and hours of interviews. 

Video also excludes people who are not comfortable on camera, such as introverts, people with autism, and people of color. These factors do not influence a person’s ability to do a job, but using video at the start of the interview process puts them at a disadvantage. We are excluding a significant percentage of people by using video as a first gate.

We analyzed feedback comments from more than 2.3 million candidates across 47 countries who applied for roles using Sapia.ai’s smart chat, and the overwhelming theme is that “it’s not stressful.”

As an industry, we must put a stop to this. Already, there is growing cynicism when companies talk about “improving candidate experience” because we like to say we care about something that will win us good PR, but we do little to hold ourselves accountable. We care more about optics than results.

However, you cannot say you care about candidates or diversity and inclusion and only use video platforms to recruit people. Frustratingly, there is technology that solves for remote work, improves the candidate experience, and truly reduces bias, and that is text chat. 

Some of the most sought-after companies, like Automattic (the makers of WordPress), have been using it for years.

The answer is in chat

Chat is how we truly communicate asynchronously. It needs no acting, and we all know how to chat. Empowered by the right AI, text chat can be human and real. It can listen to everyone, it is blind, reduces bias, evens the playing field by giving everyone a fair go, and gives them all personalized feedback at scale. 

It can harness the true power of language to understand the candidate’s personality, language skills, critical thinking, and much more.

Video should only ever be used as a secondary interaction, for candidates who are already engaged in the process and have been shortlisted. At that stage, it gives hiring teams a chance to meet candidates, and candidates are more likely to be comfortable on video: they know they’ve progressed, and they’ve already had a chance to present themselves in a lower-pressure format.

Why are we settling for video as a first interaction, when we can actually do more than make empty marketing promises to candidates? Why choose a solution that erodes all the hard gains we’ve made in diversity and inclusion?


How do we increase candidate and recruiter trust in AI?

Transcript:

Barb Hyman:

I am seeing organizations increasingly rely on AI that comes from social media or resume data. How do you see that? Does that bother you? Do you think we need to educate the market about the difference between first-party and third-party data and ask questions about how clean and unbiased the data is?

As a former HR leader, I couldn’t use technology that analyzes my candidate pool or my people based on what they do on social media. It horrifies me, and it kills trust, because I’m on social media in my own personal way. What do you think about that trend, and how can we tackle it?

Meahan Callaghan:

I think we need to educate people at the point of recruitment. If we provide warnings and information, people will start to look for trustworthy AI.

Imagine if we said, “Before you’re about to go through AI-based technology for this recruitment process, we’re going to let you know why you should feel safe in doing so. It doesn’t use third-party data, it doesn’t do anything unethical.” 

Again, take internet banking: How did the banks get everyone to feel OK about transferring money online? 

I mean, all of us used to go and check the money even got there, and some people still don’t use it today. I’ve got a friend with a fantastic organic beauty products business, and another who’s got a collagen business. Both are constantly having to say, “We look the same as other products – but let me tell you how we’re not.”

And I think there is an education piece on, let me tell you how we’re not.

Barb Hyman:

I love that you’ve taken the candidate’s view on that. We need to protect them and our brand, and trust is crucial. We shouldn’t blindly trust AI; we should be able to trust it because it’s safe to do so. That’s a great call to action for all of us in that space.


Listen to the full episode of our podcast with Meahan Callaghan, CHRO of Redbubble, here:

 


How Victoria defeated COVID with individual action and data

Action and data

COVID has taught us, on reflection, that a focus on individual action with a community benefit as the goal leads to the greater good. Our home state of Victoria, Australia has now recorded seven straight days with ZERO new cases. It has been an effort founded on facts and science over misinformation. In Victoria, many sacrificed a lot for the well-being of ALL. If anything, there is now proof, thanks to Victorians, that when we see facts, listen to science and let data show us how to lead change, we can make it happen.

We’re using this approach to build a new vision for inclusive hiring.

AI, and especially a predictive machine learning model, is the outcome of a scientific process. It is no different from any other scientific theory: a hypothesis is tested using data.

The beauty of the scientific method is that every scientific theory needs to be falsifiable, a condition first brought to light by the philosopher of science Karl Popper. In other words, a theory has to have the capacity to be contradicted with evidence.

It is how science is able to progress without bias.

Three decisions are made by a human in building that scientific experiment:

  1. Forming a meaningful hypothesis
  2. Data collection methodology (experiment set up)
  3. The data you rely on to test the hypothesis

One can argue that 2 and 3 are the same: if the methodology is not sound, the data collection won’t be either. That’s why there is so much challenge and curiosity, as there should be, about the data that goes into an algorithm.

Think of an analogy in a different field of science: the science of climate change.

A scientist comes up with a hypothesis that certain factors drive an increase in objective measures of climate warming, e.g. CO2 emissions or cars on the road. She then tests that hypothesis using statistical analysis to establish whether it holds beyond random chance.
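Testing “beyond random chance” can be sketched with a simple permutation test, here on invented numbers standing in for the climate example:

```python
import random

def pearson_r(xs, ys):
    """Plain-Python Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

co2 = list(range(20))                                      # invented driver series
temp = [0.5 * x + (0.3 if x % 2 else -0.3) for x in co2]   # invented response

r_obs = pearson_r(co2, temp)

# Shuffle the response many times; if shuffled data rarely reaches the
# observed correlation, the link is unlikely to be random chance.
rng = random.Random(0)
n_perm = 1000
hits = 0
for _ in range(n_perm):
    shuffled = temp[:]
    rng.shuffle(shuffled)
    if abs(pearson_r(co2, shuffled)) >= abs(r_obs):
        hits += 1

p_value = (hits + 1) / (n_perm + 1)
print(f"r = {r_obs:.2f}, p = {p_value:.4f}")
```

A small p-value only says the pattern is unlikely to be chance; whether the hypothesis survives scrutiny is still for peer review and new data to decide.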

The best way to make sure you are following a sound scientific approach is to share your findings with the broader scientific community. In other words, publish in peer-reviewed mediums such as journals or conferences so that you are open to scrutiny and arguments against your findings.

Or, to put it another way: be open for your hypothesis to be falsified.

In AI especially, it is also important to keep testing whether your hypothesis holds over time, as new data may show patterns that disprove it. This can be due to limitations in your initial dataset, or to assumptions that are no longer valid – for example, assuming that the only gender-related information in a resume is the name and any explicit mention of gender, or that a predictive pattern such as facial-expression detection is consistent across race and gender groups. Both of these assumptions have been proven wrong*.
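Re-testing over time can be as simple as scoring the model against each new batch of labelled outcomes and flagging when performance drifts. A minimal sketch, with invented monitoring numbers and a hypothetical threshold:

```python
# Monitor a deployed model: flag any batch whose accuracy falls too far
# below the validated baseline, signalling the original hypothesis may
# no longer hold and the model needs re-testing.

def drift_alert(batch_accuracies, baseline=0.85, tolerance=0.10):
    """Return indices of batches whose accuracy drops below baseline - tolerance."""
    return [i for i, acc in enumerate(batch_accuracies)
            if acc < baseline - tolerance]

monthly_accuracy = [0.86, 0.84, 0.85, 0.78, 0.70]  # invented monitoring data
flagged = drift_alert(monthly_accuracy)
print(f"batches needing re-validation: {flagged}")  # → [4]
```

Real monitoring would also track fairness metrics per group, not just accuracy, but the principle is the same: the hypothesis is re-tested on every new batch of evidence.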

The only way to improve our ability to predict, be it climate change or employee performance, is to start applying the scientific method and be open to adjusting your models to better explain new evidence.

Therefore, the idea that a human can encode their own biases in the AI is simply not true if the right science is followed.

* Amazon scraps secret AI recruiting tool that showed bias against women (https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G)

* Researchers find evidence of bias in facial expression data sets (https://venturebeat.com/2020/07/24/researchers-find-evidence-of-bias-in-facial-expression-data-sets/)


To keep up to date on all things “Hiring with AI”, subscribe to our blog!

You can try out Sapia’s Chat Interview right now, or leave us your details to get a personalised demo
