Blog


Written by Nathan Hewitt

4 practical ways to solve your decentralized hiring challenges in 2023


Decentralized recruitment, while enabling larger companies to hire efficiently, suffers in a labor-short market.

Under ordinary circumstances – like, say, the world before COVID and the Great Resignation – it’s ideal to let local hiring managers build their own workforces. Generally speaking, the decentralized approach is better for productivity, candidate experience, and the overall satisfaction of hiring managers, who look favourably on the trust and autonomy they get from head office.

However, when good candidates are hard to come by, the dearth of talent puts stress on the joints of such a sprawling network. We hear this frequently from companies who come to us to help improve efficiency, diversity, and quality of hire.

Here are the common problems companies are having with decentralized hiring in 2022:

  • Hiring managers are frustrated, because they have a trickle of applicants and little control over employer branding and recruitment marketing.
  • Consistency suffers as rogue hiring managers abandon established workflows and ATS protocols in order to acquire warm bodies by any means necessary.
  • Job advertising budgets are distributed unevenly, resulting in consternation for already-strained teams.
  • Diversity is put on the back burner, both because hiring managers have the final say, and because they face little to no accountability for their decisions.
  • The company’s recruitment centre (i.e. head office) is unable to collect and analyze sufficient data to diagnose and fix recruitment problems across its decentralized network.
  • The company is using an ATS with which some (or all) of its hiring managers are unhappy. Head office may know this, but in any case, it decides that the process of researching, purchasing, and implementing a new ATS is not worth the pain.
  • The company staunchly sticks to the status quo, or ‘the way we’ve always done things’, because it assumes that this period of hiring difficulty will soon pass.

These challenges (and others) have caused a drop in confidence in the way companies interview and process candidates. An Aptitude Research and Sapia.ai report from earlier this year found that 33% of companies aren’t confident in the way they interview, and 50% have lost talent due to poor processes. Meanwhile, 22% of the average talent pool is drained at the application stage.

Statistically speaking, that means at least one in five people is bailing out of your application process at the very beginning.

How to improve efficiencies across a strained decentralized hiring network

As with many things in business, the answer to alleviating organizational pain lies in small, iterative improvements. Our recommendations do not include haphazard technological upgrades, nor do we advocate widespread process changes; such sweeping moves would more than likely cause your decentralized hiring network to fall apart.

Here are some good places to start.

Look at removing time-wasting entry barriers, like resumes and cover letters

This is particularly important for the retail and hospitality industries, but it certainly applies to any company that hires entry-level team members at volume. Given the average level of job experience at this level of employment, most resumes and cover letters aren’t useful in gauging candidate quality. On the contrary, they take up precious hiring manager hours, are cumbersome for candidates to write, and are the main cause of the 22% candidate drop-out rate we mentioned above. That’s not even accounting for the fact that anywhere from 60% to 80% of resumes contain falsifications.

Implement a simple, standardized process for capturing a candidate experience NPS baseline

Decentralization, almost by definition, makes capturing useful information difficult. But if you use an ATS as a tool for centralization, consider adding a candidate NPS measurement step to your application process. It can be as simple as a Net Promoter Score scale (0 to 10). If you hire at volume across multiple localities or regions, asking this one simple question can help you produce meaningful insights about how candidates find your process. What gets measured gets managed, and though there are many other data points you might want to collect, this is a good (and relatively easy) place to start. If you’re keen to learn more about this, check out our podcast episode on candidate experience with Lars van Wieren, CEO at Starred.
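
To make that concrete, here’s a minimal sketch of the standard NPS arithmetic, assuming you can export the raw 0-to-10 scores from your ATS: candidates scoring 9 or 10 count as promoters, 0 to 6 as detractors, and the score is the percentage of promoters minus the percentage of detractors.

```python
def nps(scores):
    """Net Promoter Score from raw 0-10 ratings: % promoters (9-10)
    minus % detractors (0-6). Passives (7-8) only count in the total."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical week of candidate ratings from one region
print(nps([10, 9, 8, 7, 6, 10, 3, 9]))  # -> 25.0
```

Tracked per store or region, even this one number makes it easy to spot which parts of your network are delivering a poor candidate experience.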

Speak to your hiring managers regularly

Quantitative data is gold, but qualitative data is platinum. Make a habit of interviewing (not surveying, interviewing) your hiring managers on the ground. You’ll uncover invaluable insights that may enable you to make fast changes at scale. We help our clients collect qualitative feedback from hiring managers as a matter of course, leading to increases in productivity and hiring manager satisfaction.

Here are some useful questions to ask your hiring managers:

  • Take me through how you run your local (e.g. in-store) hiring process, from start to finish.
  • Explain your process for interviewing candidates.
  • Where do you think you waste the most time?
  • What doesn’t work as well as it should?
  • What kinds of candidates are you seeing, and how would you rate the overall quality?
  • How might we support you in hiring more effectively?

This kind of bottom-up research aims to understand how hiring managers are actually behaving and interacting with systems. Some may be breaking from established protocols, but if you ask them why and how, you might uncover tactics and efficiencies that can be brought back to the rest of the organization, thereby improving the way all hiring managers operate. Two adages apply here: ‘Necessity is the mother of invention’, and ‘People will always find the path of least resistance’.

This fact-finding method is better than surveys because surveys impose a limited scope in which potential problem areas are preset. “We’re asking you about these things,” you’re saying, “and therefore, we’re suggesting they’re most important.” As a result, other problems and possible solutions are likely to be excluded from discovery. You’ll always learn more by having real conversations, because they can go in any conceivable direction.

Look for novel ways to encourage applications from otherwise passive candidates

Again, this is incredibly useful for retail, but applicable in a wide range of industries and contexts. Think about the universal touchpoints you have with customers (a.k.a. candidates) across your decentralized network. In retail, good examples might be your receipts and carry bags. These give you invaluable real estate to advertise your jobs and employer brand. Put a URL or QR code on these assets, and you might drastically increase the number of people who know about and apply for the jobs you advertise. This tactic has the added benefit of capitalizing on active and loyal customers; after all, if they’re buying from you, they’re a prime target for recruitment marketing.

Here’s a cool example of how we help our clients advertise their jobs in places their customers can easily see.

The best part about this manner of advertising? You already own the space, and the design can be centralized and rolled out at scale.


We’d be remiss if we didn’t point out that Sapia’s Ai Smart Interviewer is a dynamite solution for the inevitable pain points of decentralized recruitment. Our technology can be rolled out across your entire company, and takes care of the application, screening, interviewing, and assessment stages of your process.

Hiring managers save time – as much as 1,600 hours per month, for some of our customers – but they still get the option to approve and interact with short-listed candidates. Better still, our platform captures vital data on diversity and candidate experience, enabling you to see exactly how your network is performing, individually and collectively.

Best of all, Sapia tech integrates directly with the leading ATS platforms, and can be rolled out in as little as four weeks.

Woolworths Group, Australia’s largest private employer, uses Sapia to hire more than 50,000 candidates per year, nationwide. To see how they flourish in a labor-short market, check out our case study here.


Blog

8 critical questions to ask when selecting your ‘Ai for Hiring’ technology

Interrupting bias in people decisions

We hope that the debate over the value of diverse teams is now over. There is plenty of evidence that diverse teams lead to better decisions and, therefore, better business outcomes for any organisation.

This means that CHROs today are being charged with interrupting the bias in their people decisions, and are expected to manage bias as closely as the CFO manages the financials.

But the use of Ai tools in hiring and promotion requires careful consideration to ensure the technology does not inadvertently introduce bias or amplify any existing biases.

To assist HR decision-makers in navigating these decisions confidently, we invite you to consider these 8 critical questions when selecting your Ai technology.

You will find not only the key questions to ask when testing the tools, but also why these questions are critical and how to differentiate between the answers you are given.

1. What training data do you use?

Another way to ask this is: what data do you use to assess someone’s fit for a role?

First up: why is this an important question to ask?

Machine-learning algorithms use statistics to find and apply patterns in data. Data can be anything that can be measured or recorded, e.g. numbers, words, images, clicks etc. If it can be digitally stored, it can be fed into a machine-learning algorithm.

The process is quite basic: find the pattern, apply the pattern.

This is why the data you use to build a predictive model, called training data, is so critical to understand.

In HR, the kinds of data that could be used to build predictive models for hiring and promotion are:

  • CV data and cover letters
  • Games built to measure someone’s memory capacity and processing speed
  • Behavioural data, e.g. how you engage with an assessment
  • Video data capturing how you act in an interview: your gestures, pose, and lean, as well as your tone and cadence
  • Your text or voice responses to structured interview questions
  • Public data such as your social media profile, your tweets, and other social media activity

If you consider the range of data that can be used as training data, not all sources are equal, and on the surface you can certainly see how some carry the risk of amplifying existing biases, as well as the risk of alienating your candidates.

Consider the training data through these lenses:

> Is the data visible or opaque to the candidate?

Relying on data that is invisible to the candidate (behavioural signals such as how quickly they complete the assessment, scraped social data, and the like) may damage your employer brand and expose you to legal risk. Will your candidates trust an assessment that uses data that is invisible to them, scraped about them, or that can’t be readily explained?

Increasingly, companies are measuring the business cost of poor hiring processes that contribute to customer churn. 65% of candidates with a positive experience would be customers again even if they were not hired, and 81% will share their positive experience with family, friends and peers (source: Talent Board).

Visibility of the data used to generate recommendations is also linked to explainability, an attribute that both governments and organisations now demand as part of the responsible use of Ai.

Video Ai tools have been legally challenged on the basis that they fail to comply with baseline standards for AI decision-making, such as the OECD AI Principles and the Universal Guidelines for AI, or that they perpetuate societal biases and could end up penalising non-native speakers, visibly nervous interviewees, or anyone else who doesn’t fit the model for look and speech.

If you are keen to attract and retain applicants through your recruitment pipeline, you may also care about how explainable and trustworthy your assessment is. When the candidate can see the data that is used about them and knows that only the data they consent to give is being used, they may be more likely to apply and complete the process. Think about how your own trust in a recruitment process could be affected by different assessment types.

> Is the data 1st party data or 3rd party data?

1st party data is data such as the interview responses written by a candidate to answer an interview question. It is given openly, consensually and knowingly. There is full awareness about what this data is going to be used for and it’s typically data that is gathered for that reason only.

3rd party data is data that is drawn from or acquired through public sources about a candidate, such as their Twitter profile or other social media activity. It is data that was not created for the specific use case of interviewing for a job, but which is scraped, extracted, and applied for a different purpose. An Ai tool that relies on visible, 1st party data is likely to be more accurate for recruitment, and its outcomes are more likely to be trusted by both the candidate and the recruiter.


Trust matters to your candidates and to your culture …

At PredictiveHire, we are committed to building ethical and engaging assessments. This is why we have taken the path of a text chat with no time pressure. We allow candidates to take their own time, reflect and submit answers in text format.

We strictly do not use any information other than the candidate responses to the interview questions (i.e. fairness through unawareness – algorithm knows nothing about sensitive attributes).

For example, we make no explicit use of race, age, name, or location; no use of candidate behavioural data such as how long they take to complete, how fast they type, or how many corrections they make; and no use of information scraped from the internet. While these signals may carry information, we do not use any such data.
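
In code terms, ‘fairness through unawareness’ simply means the model’s inputs are built from the interview answers and nothing else. Here’s a minimal, hypothetical sketch (a toy illustration, not our production pipeline):

```python
from collections import Counter

def featurize(answers):
    """Toy featurizer: bag-of-words over the candidate's answers,
    standing in for real NLP feature extraction."""
    return Counter(word for a in answers for word in a.lower().split())

candidate = {
    # Captured for administration, but NEVER passed to the model:
    "name": "Jordan", "age": 42, "location": "Melbourne",
    "typing_speed_wpm": 38, "time_taken_sec": 1260,
    # The only data the scoring model ever sees:
    "answers": ["I stayed calm and listened to the customer first."],
}

model_input = featurize(candidate["answers"])  # sensitive fields never enter
```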


2. Can you explain why ‘person y’ was recommended by the Ai and not ‘person z’?

Another way to ask this is: can you explain how your algorithm works? And does your solution use deep learning models?

This is an interesting question especially given that we humans typically obfuscate our reasons for rejecting a candidate behind the catch-all explanation of “Susie was not a cultural fit”.

For some reason, we humans have a higher-order need and expectation to unpack how an algorithm arrived at a recommendation. Perhaps it’s because there is not much you can say to a phone call that tells you that you were rejected for cultural fit.

This is probably the most important aspect to consider, especially if you are the change leader in this area. It is fair to expect that if an algorithm affects someone’s life, you should be able to see how that algorithm works.

Transparency and explainability are fundamental ingredients of trust, and there is plenty of research to show that high trust relationships create the most productive relationships and cultures.

This is also one substantial benefit of using Ai at the top of the funnel to screen candidates. Depending on what kind of Ai you use, it can enable you to explain why a candidate was screened in or out.

This means recruitment decisions become more consistent and fairer with Ai screening tools.

But if an Ai solution cannot make clear why certain inputs (called “features” in machine-learning jargon) are used and how they contribute to the outcome, explainability becomes impossible.

For example, when deep learning models are used, you are sacrificing explainability for accuracy, because no one can explain how a particular data feature contributed to the recommendation. This can further erode candidate trust and impact your brand.

The most important thing is that you know what data is being used; then, ultimately, it’s your choice as to whether you feel comfortable explaining the algorithm’s recommendations to both your people and the candidate.
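
To see the difference in practice, here’s a hypothetical sketch of why simpler models stay explainable: with a linear model, each feature’s learned weight can be read off directly, which is exactly what a deep network does not give you.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy screening data: two answers, labelled pass (1) / fail (0)
answers = ["listened carefully and helped the customer",
           "ignored the question and blamed the customer"]
labels = [1, 0]

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(answers), labels)

# Each word-feature's contribution is directly inspectable, which is
# what lets you explain a screening decision to a candidate:
for word, weight in zip(vec.get_feature_names_out(), clf.coef_[0]):
    print(f"{word}: {weight:+.2f}")
```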

3. What assumptions and scientific methods are behind the product? Are they validated?

Assessments should be underpinned by validated scientific methods, and like all science, the proof is in the research that underpins the methodology.

This raises another question for anyone looking to rely on Ai tools for human decision-making: where is the published, peer-reviewed research that gives you confidence that a) it works, and b) it’s fair?

This is an important question given the novelty of AI methods and the pace at which they advance.

At PredictiveHire, we have published our research to ensure that anyone can investigate for themselves the science that underpins our AI solution.


INSERT RESEARCH


We continuously analyse the data used to train models for latent patterns that reveal insights for our customers as well as inform us of improving the outcomes.

4. What are the bias tests that you use and how often do you test for bias?

It’s probably self-evident why this is an important question to ask. You can’t have much confidence in the algorithm being fair for your candidates if no one is testing it regularly.

Many assessment vendors report on studies they have conducted to test for bias. While this is useful, it does not guarantee that the assessment won’t demonstrate bias on the new candidate cohorts it’s applied to.

The notion of “data drift” discussed in machine learning highlights how changing patterns in data can cause models to behave differently than expected, especially when the new data is significantly different from the training data.

Therefore, ongoing monitoring of models is critical to identifying and mitigating the risk of bias.

Potential biases in data can be tested for and measured. A suite of tests can cover assumed biases, such as those between gender and race groups, and can be extended to other groups of interest wherever the group attributes are available, such as English As a Second Language (EASL) users.

On bias testing, look out for at least these 3 tests (a sketch of the first follows the list), and ask to see the tech manual and an example bias testing report.

  • Proportional Parity Test. This is the standard EEOC measure of adverse impact in selections and recommendations.
  • Score Distribution Test. This measures whether the assessment score distributions are similar across groups of interest.
  • Fairness Test. This measures whether the assessment makes errors at the same rate across groups of interest.
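
To make the first test concrete, here is a minimal, hypothetical sketch of the 4/5th (adverse impact) check on selection rates. A vendor’s real test harness will be more involved, but the core arithmetic looks like this:

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(outcomes):
    """Flag any group whose selection rate falls below 80% of the
    highest group's rate (the EEOC adverse impact threshold)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: {"rate": round(r, 3), "passes": r >= 0.8 * best}
            for g, r in rates.items()}

# Hypothetical recommendation outcomes by gender
data = [("f", 1)] * 45 + [("f", 0)] * 55 + [("m", 1)] * 50 + [("m", 0)] * 50
print(four_fifths_check(data))  # f: 0.45 vs m: 0.50 -> ratio 0.9, passes
```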

INSERT IMAGE


At PredictiveHire, we conduct all of the above tests. We run statistical tests to check for significant differences between groups in feature values, model outcomes, and recommendations, using t-tests, effect sizes, ANOVA, the 4/5th rule, Chi-Squared, and so on. We consider this standard practice.

We go beyond the standard proportional and distribution tests on fairness and adhere to stricter fairness considerations, especially on error rates at the model training stage. These include following guidelines set by IBM’s AI Fairness 360 Open Source Toolkit (https://aif360.mybluemix.net/) and the Aequitas project at the Center for Data Science and Public Policy at the University of Chicago.


5. How can you remove bias from an algorithm?

We all know that, despite best intentions, we cannot be trained out of our biases, especially the unconscious ones.

This is another reason why using data-driven methods to screen candidates is fairer than using humans.

Biases can occur in many different forms. Algorithms and Ai learn according to the profile of the data we feed them. If the data they learn from is taken from CVs, they are only going to amplify our existing biases. Only clean data, like the answers to specific job-related questions, can give us a truly bias-free outcome.

If any biases are discovered, the vendor should be able to investigate and highlight the cause of the bias (e.g. a feature or the definition of fitness) and take corrective measures to mitigate it.

6. On which minority groups have you tested your products?

If you care about inclusivity, then you want every candidate to have an equal and fair opportunity at participating in the recruitment process.

This means taking account of minority groups such as those with autism, dyslexia and English as a second language (EASL), as well as the obvious need to ensure the approach is inclusive for different ethnic groups, ages and genders.

At PredictiveHire, we test the algorithms for bias on gender and race. Tests can be conducted for almost any group in which the customer is interested. For example, we run tests on “English As a Second Language” (EASL) vs. native speakers.

7. What kind of success have you had in terms of creating hiring equity?

If one motivation for introducing Ai tools to your recruitment process is to deliver more diverse hiring outcomes, it’s natural to expect the provider to have demonstrated this kind of impact for its customers.

If you don’t measure it, you probably won’t improve it. At PredictiveHire, we provide you with tools to measure equality. Multiple dimensions are measured through the pipeline: who applied, who was recommended, and who was ultimately hired.

8. What is the composition of the team building this technology?

Thankfully, HR decision-makers are much more aware of how human bias can creep into technology design. Think of how the dominance of one trait among a technology’s designers and builders has created inadvertently unfair outcomes.

In 2012, YouTube noticed something odd.

About 10% of the videos being uploaded were upside down.

When designers investigated the problem, they found something unexpected: left-handed people picked up their phones differently, rotating them 180 degrees, which led to upside-down videos being uploaded.

The issue here was a lack of diversity in the design process. The engineers and designers who created the YouTube app were all right-handed, and none had considered that some people might pick up their phones differently.

In our team at PredictiveHire, from the top down, we look for diversity in its broadest definition.

Gender, race, age, education, immigrant vs. native-born, personality traits, work experience. It all adds up to ensure that we minimise our collective blind spots and create a candidate and user experience that works for the greatest number of people and minimises bias.

What other questions have you used to validate the fairness and integrity of the Ai tools you have selected to augment your hiring and promotion processes?

We’d love to know!

To keep up to date on all things “Hiring with Ai”, subscribe to our blog!

You can try out PredictiveHire’s FirstInterview right now, or leave us your details to get a personalised demo.

Blog

The difference between psych tests and predictive analytics

One of the questions we get asked a lot is: “What’s the difference between psych testing and predictive analytics?” So today we’re going to unpack this a little and look at how the two differ, and where they are similar.

Psych Testing

Psych testing has been around for decades. It’s an old-school form of predictive analytics. You look at a big group of people in the same role and figure out what their profiles have in common, define a set of questions to test for those common attributes, and then apply that as a broad-based test to anyone applying for the same role.

What makes psych testing compelling?

It’s been around for a while, so people are familiar with the practice.

Read more: The Changing Role of the Organisational Psychologist

What makes psych tests limiting?

It’s generally expensive, cumbersome to interpret, and based on a very big assumption that if you fit the profile you will be successful in the role.

Psychological assessments have long been used to identify ‘hidden talent’ or ‘potential’ in people with limited work experience. Whilst these traditional assessments have reduced hiring and promotion error rates, they take time to analyse, are costly, and are built off competencies inherently imbued with bias. A psych test suggests that you are a fit, but we know that this rarely correlates with actually being successful in the role.

Psych tests are testing your ability to do a test. That’s it. Traditional psychological assessments do not link to actual performance in the role, nor do they have any self-learning functionality. There is no performance data that feeds into psychological assessments and therefore they have limited predictive power.

Predictive Analytics

In the context of Sapia, we use actual performance data to predict a candidate’s likelihood of success in the role they have applied for. The applicant completes an online questionnaire, and behind the questions asked and the applicant’s responses sits a data model. This statistical model draws on many different objective data points to predict a candidate’s success in the role.

This also enables an efficient and immediate feedback loop about the actual performance of the hired candidate, improving the accuracy of the predictive model over time. Very quickly the predictive model that you use to select high performers becomes completely customised for your business. You build your own bespoke Intellectual Property, which becomes even more valuable with use.
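
As a loose illustration of the mechanics (a sketch under simplifying assumptions, not Sapia’s actual model), imagine past candidates’ free-text interview answers paired with a label for whether they performed well once hired:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: interview answers + on-the-job outcome
answers = [
    "I asked questions to understand the customer's issue first",
    "I escalated straight away without checking the details",
    "I stayed calm, apologised, and offered two options",
    "I followed the script word for word and moved on",
]
performed_well = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(answers, performed_well)

# Score a new applicant. As performance data on new hires flows back,
# the model is periodically re-trained, closing the feedback loop.
print(model.predict_proba(["I listened first, then suggested options"])[0, 1])
```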

Where does the prediction come from?

We all try to find patterns to help us make decisions, whether it’s ‘this restaurant looks busy so it must be good’, to ‘this person went to the same university as me, so they must know what they’re talking about’. We are often blinded by our innate cognitive biases, such as our tendency to overweight the relevance of our own experience. We end up in a tourist trap eating overcooked steak because that’s what everyone else was doing.

Our predictions are based on analysing objective data – someone’s responses to a set of questions, compared to the objective performance metrics for that same person in the role. This is a much more reliable and fairer way to make the decision. The democracy of numbers can help organisations eliminate unconscious preferences and biases, which can surface even when those responsible have the best of intentions.

Alchemy of high performers

We work closely with the recruiter or hiring manager to drill down into the qualities of a high performer, and then structure a bespoke application process to search for this. This could be a high level of empathy for customer service or the drive and resilience needed in sales.

Like all AI, our system improves with data. It learns what kind of hires drive results for your business, and then automatically begins to look for this with future applications. Ultimately, the more applicants that apply, the better it gets in identifying which people best match your requirements. And the longer you leverage our system, the more effective it gets.

Hopefully, that’s given you a good overview of where we differ, and of some of the advantages of implementing this in your recruitment process. Still looking for more?


You can try out Sapia’s Chat Interview right now, or leave us your details to book a demo.

Blog

Biased people are much harder to fix than algorithms

We worry intensely about the amplification of lies and prejudices from the technology that fuels social media like Facebook, yet do we hold the mirror up to ourselves and check our tendency to hire in our image?

How many times have you told a candidate they didn’t get the job because they were not the right “culture fit”?

The truth is that we humans are inscrutable in a way that algorithms are not, which means we are often not accountable for our biases.

In algorithms, bias is visible, measurable, trackable and fixable.

A compelling feature of our technology is that our Ai can’t see you, hear you, or judge you on irrelevant personal characteristics (like gender, age, or skin colour) the way a human can. That’s one reason why trusted consumer brands like Qantas, Superdry, and Bunnings use it to make fair, unbiased hiring decisions.

To validate that algorithms are bias-free, we do extensive bias testing (impossible to do for humans). We know from this testing that there is no statistical difference in the way the algorithm works for men, women, and people of different ethnicities.

Our bias testing happens at 3 levels:

  1. The score calculated by the predictive model for each candidate.
  2. The recommendation grouping based on score percentile.
  3. The feature values used by the predictive model for training.

For Gender-bias testing:

To analyse whether our test scores have any gender bias, we use the t-test and effect size. For testing our recommendations into YES, NO, and MAYBE groups, we use chi-square, Fisher’s exact test, and the 4/5th rule; this last one is the standard test set by the EEOC for any assessment used in candidate selection.
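
For illustration, here’s a minimal sketch of two of those checks using scipy, with synthetic numbers rather than real candidate data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores_m = rng.normal(0.62, 0.10, 500)  # synthetic model scores, men
scores_f = rng.normal(0.61, 0.10, 480)  # synthetic model scores, women

# t-test: is the difference in mean scores statistically significant?
t_stat, p_value = stats.ttest_ind(scores_m, scores_f, equal_var=False)

# Chi-square on YES / MAYBE / NO recommendation counts per group
counts = np.array([[300, 120, 80],    # men
                   [280, 115, 85]])   # women
chi2, p_chi, dof, expected = stats.chi2_contingency(counts)

print(f"score t-test p={p_value:.3f}, recommendation chi-square p={p_chi:.3f}")
```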

For Ethnicity-bias testing:

We use the 4/5th rule and the ANOVA test.

For Feature-level bias testing:

To ensure that none of the feature values we use to assess candidate fit is itself biased, we use the t-test, effect size, and the ANOVA test.

Diving into just one of these: effect size is an easy-to-understand statistical measurement of the difference in average scores between males and females. If the effect size is positive in our test set, it means females have higher scores than males, and vice versa.

The magnitude of the effect size also matters: the larger the magnitude, the more significant the difference. We generally consider values smaller than +/-0.3 a negligible difference, values from +/-0.3 to +/-0.5 a moderate difference, and values larger than +/-0.8 a significantly large difference.

We periodically test for score and recommendation bias in our models and take action if the bias highlighted is non-negligible. For example, if the effect size falls outside the +/-0.3 range, we stop the model until we can find the source of the bias, then re-train and re-test a new model to make sure it is not biased.
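
That decision rule is easy to express in code. A sketch, where the effect size is Cohen’s d and the +/-0.3 threshold mirrors the policy described above:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: difference in means, in pooled-standard-deviation units."""
    pooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
    return (np.mean(a) - np.mean(b)) / pooled_sd

def model_needs_review(scores_f, scores_m, threshold=0.3):
    """True if the score gap between groups is non-negligible,
    triggering a stop, investigation, and re-train."""
    return abs(cohens_d(scores_f, scores_m)) >= threshold
```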

For more insight into how our technology removes bias, and how we track and measure it, read our piece on diversity hiring.
