An AI hiring firm says it can predict job-hopping based on your interviews. The idea of “bias-free” hiring, already highly misleading, is being used by companies to deflect scrutiny of their tools’ labor implications beyond discrimination.
The most common systems involve using face-scanning algorithms, games or other evaluations to help determine which candidates to interview.
Activists and scholars warn that these screening tools can perpetuate discrimination. However, the makers themselves argue that algorithmic hiring helps correct for human biases.
In a December 2019 paper, researchers at Cornell reviewed the landscape of algorithmic screening companies to analyze their claims and practices. Of the 18 they identified with English-language websites, the majority marketed their tools as a fairer alternative to human-based hiring, suggesting that they were latching onto the heightened concern around these issues to tout their tools’ benefits and win more customers.
But discrimination isn’t the only concern with algorithmic hiring. Some scholars worry that marketing language that focuses on bias lets companies off the hook on other issues, such as workers’ rights. A new preprint from one of these firms serves as an important reminder. “We should not let the attention that people have begun to pay to bias/discrimination crowd out other issues,” says Solon Barocas, an assistant professor at Cornell University and principal researcher at Microsoft Research, who studies algorithmic fairness and accountability.
The firm in question is Australia-based Sapia (formerly PredictiveHire), founded in October 2013.
According to the firm’s CEO, Barbara Hyman, its clients are employers that must manage large numbers of applications, such as those in retail, sales, call centers, and health care.
As the Cornell study found, it also actively uses promises of fairer hiring in its marketing language. On its home page, it boldly advertises: “Meet Smart Interviewer – Your co-pilot in hiring. Making interviews super fast, inclusive and bias free.”
As we’ve written before, the idea of “bias-free” algorithms is highly misleading. But Sapia’s latest research is troubling for a different reason. It is focused on building a new machine-learning model that seeks to predict a candidate’s likelihood of job-hopping: the practice of changing jobs more frequently than an employer desires. The work follows the company’s recent peer-reviewed research that looked at how open-ended interview questions correlate with personality.
Applicants had originally been asked five to seven open-ended questions and self-rating questions about their past experience and situational judgment.
These included questions meant to tease out traits that studies have previously shown to correlate strongly with job-hopping tendencies, such as being more open to experience, less practical, and less down to earth. The company researchers claim the model was able to predict job hopping with statistical significance. Sapia’s website is already advertising this work as a “flight risk” assessment that is “coming soon.” Sapia’s new work is a prime example of what Nathan Newman argues is one of the biggest adverse impacts of big data on labor.
Machine-learning-based personality tests, for example, are increasingly being used in hiring to screen out potential employees who have a higher likelihood of agitating for increased wages or supporting unionization. Employers are increasingly monitoring employees’ emails, chats, and data to assess which might leave and calculate the minimum pay increase to make them stay.
None of these examples should be surprising, Newman argued. They are simply a modern manifestation of what employers have historically done to suppress wages by targeting and breaking up union activities. The use of personality assessments in hiring, which dates back to the 1930s in the US, in fact began as a mechanism to weed out people most likely to become labor organizers. The tests became particularly popular in the 1960s and ’70s once organizational psychologists had refined them to assess workers for their union sympathies.
In this context, Sapia’s flight-risk assessment is just another example of this trend. “Job hopping, or the threat of job hopping,” points out Barocas, “is one of the main ways that workers are able to increase their income.” The company even built its assessment on personality screenings designed by organizational psychologists.
Barocas doesn’t necessarily advocate tossing out the tools altogether. He believes the goal of making hiring work better for everyone is a noble one and could be achieved if regulators mandate greater transparency.
By Karen Hao, July 24, 2020, MIT Technology Review | https://www.technologyreview.com/
Good pattern recognition allows you to make better decisions, short-circuit lengthy processes, avoid mistakes, and better understand risks.
But it has a downside too. Just because you can see a pattern in what has gone before, it is no guarantee that those same things will be true in the future.
Pattern recognition produces particularly flawed results in the hiring process.
When you hear hoofbeats, it’s probably horses. But you never know when it might be a zebra.
We all want to hire people like us, but true innovation comes through diversity.
Recruiters know that they should strip out any markers that trigger unconscious bias when interviewing – but unconscious bias is hard to fight. The only way to remove those markers is via technology.
AI helps you discover the right patterns without bias.
Every role has a unique profile, and every person has their own unique personality and aptitude DNA. We use a combination of natural language processing (NLP), a branch of AI specific to text data, and machine learning to predict with 85%+ accuracy whether someone is right for a role.
NLP provides methods to program computers to process and analyse large amounts of human language data. It takes many forms, but at its core it’s about communication – and we all know words run much deeper than that. There is context to be derived from everything someone says.
Technologies from Google, Facebook, and IBM Watson also rely on NLP to comb through large amounts of text data. The end result is insights and analysis that would otherwise either be impossible or take far too long.
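To make this concrete, here is a toy sketch of the kind of surface features an NLP pipeline can pull out of a free-text answer before any modelling happens. The function name and feature set are illustrative assumptions for this post, not our actual model:

```python
import re

def text_features(answer: str) -> dict:
    """Extract simple surface features from a free-text interview answer.

    Word count, average word length, and vocabulary richness are
    illustrative stand-ins; a production NLP model would use far
    richer representations of the text.
    """
    words = re.findall(r"[A-Za-z']+", answer.lower())
    if not words:
        return {"word_count": 0, "avg_word_len": 0.0, "vocab_richness": 0.0}
    return {
        "word_count": len(words),
        "avg_word_len": sum(len(w) for w in words) / len(words),
        # Type-token ratio: unique words divided by total words
        "vocab_richness": len(set(words)) / len(words),
    }

feats = text_features("I enjoy solving problems and I enjoy helping customers.")
print(feats)
```

Features like these would then be fed, alongside many others, into a trained model that scores fit for a role.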
Women are more conscientious than men in their text interviews.
Men make 4.5% more language errors on average than women, while also taking 2% more time. Interestingly, men show higher levels of English fluency, using more difficult words than their female counterparts (more than 4.5% more on average).
These stats fluctuate depending on the role. For example, when applying for customer service roles, women take 6% more time than men while making 5% fewer language errors (language errors include grammar and spelling errors).
It’s often claimed that women use more words on average in their text interviews than men. We don’t find this to be the case.
Who writes more depends on the role family, but we find the difference to be within +/- 2% on average (the effect size, a more accurate way to measure differences in averages, is less than 0.2 across all role families, which is considered small). For example, in graduate roles men write more, while in sales and hospitality roles women write more, while answering the same interview questions.
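For readers curious about the effect-size figure above: a common measure is Cohen’s d, the difference in group means divided by the pooled standard deviation, where |d| below 0.2 is conventionally considered a small effect. A minimal sketch with made-up word counts (the numbers are illustrative, not our interview data):

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

# Illustrative word counts per answer for two groups (not real data)
group_a = [118, 132, 125, 140, 122, 135]
group_b = [120, 128, 131, 126, 124, 138]

d = cohens_d(group_a, group_b)
# |d| < 0.2 would be considered a small effect
print(round(d, 3))
```

The point of using an effect size rather than a raw percentage difference is that it accounts for how much the groups vary internally, not just how far apart their averages sit.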
Our data shows that more extraverted candidates are preferred at the hiring stage for sales roles.
On average a hired candidate is 7 percentile points higher in extraversion than the candidate population average. As we track new hire performance in their first 12 months and beyond, we are starting to see a different profile turning up in the better sales performers – more introverts.
If you want to learn more about how we get to these insights from our FirstInterview AI screening tool get in touch here.
In sales, your single-minded focus on targets is far more important than how you present yourself. Recruiters who think otherwise may be operating with bias.
What job are we being hired to do? We ask this question often to drive our product strategy. In a software company, it’s very easy to get caught up in a landslide of features and topics, and in a dynamic world of competition and feature parity, product roadmaps can easily get cluttered. Asking this question may seem overly simplistic, but it works.
In our case, we are being hired to save our customers time and money in recruitment.
Click here for some examples of hours and $’s saved by our customers.
By asking this simple question – what job is HR being hired to do? – you can start to get to the heart of what your strategy should be. And then measure that religiously!
Like product roadmaps in a tech company, HR’s roadmap too can get confused or cluttered by:
> New trends (CX – candidate experience, EX – employee experience, AI everything)
> Survey fatigue – culture diagnostics, engagement surveys, exit surveys, thousands of verbatims to read
> Process fatigue – performance management processes, the 9-box grid, the annual salary review process, post-engagement-survey processes, and so on
> New system implementations – which can crowd out low-friction, affordable solutions that drive down business costs
All of this activity can produce more noise than signal because it can easily miss the “why”. And once an HR function is more mature, it can be even more difficult to understand which of the many elements of HR are the ones truly driving the most value for the business.
1. Make sure it is the CEO’s definition of the job, not yours. Read our second article for more of the CEO perspective.
2. Define the job so it delivers on either lead or lag indicators that are proven to impact your organisation’s business performance. For example, for a sales business, time to hire matters a lot. Having to wait 45 days to fill a sales role vs 10 days means 35 days of lost sales. That flows straight through to the bottom line. Engagement scores, on the other hand, are neither a proven lead nor lag indicator of business performance. Engagement measured from a survey is more of a vanity metric.
3. Ensure that you are looking at the whole job, not just a piece of the job. It’s easy to get too narrow in your definition.
It seems that using AI could consign fantastical or over-optimised resumes to the dustbin of history, along with the Rolodex and fax machines.
But how do we go about selecting the perfect (or as close to perfect as possible) candidates from AI-created shortlists?
You would think it should be easy to learn how to conduct an interview that adds the human element to the AI selection. The web is awash with opportunities to earn recruitment qualifications from a variety of bodies, both respected and dubious. There are countless manuals, guides and blog posts on the best ways of interviewing. After all, people have been interviewing people for hundreds of years.
We’ve all heard about bizarre interview questions (no explanation needed). We’ve felt the pain of people caught up in interview nightmares (from both sides of the desk). And we’ve scratched our heads and noses over the blogs on body language in face-to-face interviews (bias klaxon).
Even without the extremes, people have tales to tell. Did you ever come away from an interview for your ideal job, where something just felt wrong?
It’s clear that adding human interaction to the recruitment process is by no means straightforward. Highlighting these recurring problems doesn’t solve the underlying question, which is:
“We’ve used an algorithm to better identify suitable candidates. How do we ensure that adding the crucial human part of hiring doesn’t re-introduce the very biases that the algorithm filtered out?”
Searching for “Perfect Interview Questions” gives 167,000,000 results. Many of them include the Perfect Answers to match. So it’s not simply about asking questions that, once upon a time, were reckoned to extract truthful and useful responses.
Instead we want questions that will make the best of that human interaction, building on and exploring the reasons the algorithm put these candidates on the list. Our questions need to help us achieve the ultimate goal of the interview: finding a candidate who can do the job, fit with the company culture AND stay for a meaningful period of time.
It’s generally agreed that we get better interview answers by asking open questions. I’d expand on that: ideally, they should be questions that don’t relate specifically to the candidate’s resume, or only at the highest level, so that you gain an in-depth understanding.
We should try to avoid using leading questions that will give an astute candidate any clues to the answers we’re looking for. And we should probably steer clear of most, if not all, of the questions that appear on those lists of ‘Perfect Interview Questions’, knowing that some candidates will reach for a well-practised ‘Perfect Answer’. We want them to display their understanding of the question and knowledge of the subject matter. Not their ability to recall a pre-rehearsed answer.
And so, we need to remember that we’re looking for the substance of the answers we get, not the candidate’s ability to weave the flimsiest material into an enchanting story.
So, here are some possible questions to get you thinking.
Of course, you’ll need to frame and adjust those questions to match the role and your company.
AI equips recruiters with impartial insights that resumes, questionnaires and even personality profiles can’t provide. Well-constructed, supervised algorithms set aside the biases that every human has. And that can only be a good thing.
Statistically robust AI uses an algorithm, derived from business performance data and behavioural science, to shortlist candidates. It can predict which ones will do well, fit well and stay. We can trust it to know what makes a successful employee for our particular organisation and this specific role. It can tell us to invest effort in the applicants on that shortlist, however unlikely they seem at first glance.
So we can use all of our knowledge and skills to understand a candidate’s suitability and look beyond things that might have previously led us to a rejection.
AI is the recruiter’s friend, not a competitor. It can stop us wasting time chasing candidates who we think will make great hires but who fail to live up to expectations. And it can direct us to the hidden gems we might otherwise have overlooked.
Technology like AI for HR is only a threat if you ignore it.
Don’t be that company that still swears by dated processes because that’s the way it’s always been done. The opportunity here is putting technology to work, helping your organisation evolve for the better. The longer the delay, the harder it will be. So don’t be left at the back playing catch-up.
There are very few businesses these days that communicate by fax machines – and that’s for a reason. In a few years, you’ll look back and wonder “Why didn’t we all embrace Artificial Intelligence sooner?”