Artificial Intelligence (in most cases, Machine Learning) is increasingly used for high-impact decision making, so it is important to ensure these models are used fairly.
At Sapia, we recognise the impact these technologies have on candidates when used in screening. We are committed to ensuring fairness by making the evaluations more inclusive, valid, unbiased and explainable – this is the essence of our FAIR™ framework.
The Fair Ai for Recruitment (FAIR™) framework presents a set of measures and guidelines for implementing and maintaining fairness in AI-based candidate selection tools. It does not dictate how AI algorithms must be built, as these are constantly evolving. Instead, it provides a set of measures that both AI developers and users can adopt to ensure the resulting system has factored in fairness.
A key concern with machine-learning-based applications is the lack of transparency around training data and the behavioural characteristics of predictive models. In most instances, for example, there is no documentation of intended or unintended use cases, training data, performance, model behaviour, or bias testing.
Recognising this limitation, researchers from Google’s Ethical Artificial Intelligence team and the University of Toronto proposed Model Cards in their paper, “Model Cards for Model Reporting”. A Model Card is intended to be a standard template for reporting important information about a model, helping users make informed decisions about its suitability. The paper outlines the typical aspects a Model Card should cover, such as “how it was built, what assumptions were made during its development, what type of model behaviours could be experienced by different cultural, demographic, or phenotypic population groups, and an evaluation of how well the model performs with respect to those groups.”
Sapia Labs has adopted and customised the concept of a Model Card to communicate a broad range of important information about a model to relevant internal and external stakeholders. It acts as a model specification, and the single source for all model details.
Model Card generation is automated and forms an integral part of the model build process, ensuring a Model Card is available for every model.
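To make this concrete, here is a minimal, illustrative sketch of what automated Model Card generation could look like as the final step of a model build. The field names, placeholder values, and file layout below are assumptions for illustration only, not Sapia's actual implementation.

```python
# Illustrative sketch only: field names, values, and build hooks are
# assumptions, not Sapia's actual implementation.
import json
from dataclasses import dataclass, field, asdict
from datetime import date


@dataclass
class ModelCard:
    """Minimal Model Card, loosely following the fields described in the paper."""
    model_name: str
    version: str
    build_date: str
    intended_use: str
    out_of_scope_use: str
    training_data_summary: str
    overall_metrics: dict = field(default_factory=dict)
    # Metrics broken down by demographic group, e.g. {"gender": {"female": {...}}}
    group_metrics: dict = field(default_factory=dict)
    bias_tests: dict = field(default_factory=dict)


def generate_model_card(model_name, version, eval_results, bias_results, out_path):
    """Emit a Model Card as JSON at the end of a model build."""
    card = ModelCard(
        model_name=model_name,
        version=version,
        build_date=date.today().isoformat(),
        # Placeholder descriptions; in practice these would come from the model owner.
        intended_use="Describe the role family and decision this model supports.",
        out_of_scope_use="Describe uses the model was not designed or tested for.",
        training_data_summary="Describe the source, size and consent basis of the data.",
        overall_metrics=eval_results.get("overall", {}),
        group_metrics=eval_results.get("by_group", {}),
        bias_tests=bias_results,
    )
    with open(out_path, "w") as f:
        json.dump(asdict(card), f, indent=2)
    return card


if __name__ == "__main__":
    # Hypothetical evaluation output produced earlier in the build pipeline.
    eval_results = {
        "overall": {"auc": 0.81},
        "by_group": {"gender": {"female": {"auc": 0.80}, "male": {"auc": 0.82}}},
    }
    bias_results = {"four_fifths_rule": {"gender": "pass"}}
    generate_model_card("example-interview-scorer", "1.0.0",
                        eval_results, bias_results, "model_card.json")
```

Because the card is emitted by the build pipeline itself, every trained model ships with an up-to-date specification rather than relying on documentation written after the fact.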
Having a standardised document for communicating a model specification has enabled faster and more effective decision making around models, especially on whether a model should go live. Integrating Model Cards is part of the continuous improvement process at Sapia Labs around the ethical use of ML/AI, and the contents continue to evolve based on the team’s ongoing research and requests from other stakeholders. As far as we know, this is a first in the employment assessment industry, and we are proud to be leading in this space.
Transcript:
Barb Hyman:
I am seeing organizations increasingly rely on AI that comes from social media or resume data. How do you see that? Does that bother you? Do you think we need to educate the market about the difference between first-party and third-party data and ask questions about how clean and unbiased the data is?
As a former HR leader, I couldn’t use technology that analyzes my candidate pool or my people based on what they do on social media. It horrifies me, and I feel like it kills trust, you know, because I’m on social media in my own personal way. What do you think about that trend, and how can we tackle it?
Meahan Callaghan:
I think we need to educate people at the point of recruitment. Imagine if we said, “Before you go through AI-based technology for this recruitment process, we’re going to let you know why you should feel safe in doing so. It doesn’t use third-party data, and it doesn’t do anything unethical.”
If we provide that kind of information, people will start to look for trustworthy AI. Take internet banking: how did the banks get everyone to feel OK about transferring money online?
I mean, all of us used to go and check that the money even got there, and you know, there are some people that still don’t use it today. I’ve got a friend with a fantastic organic beauty products business, and another who’s got a collagen business. Both are constantly having to say, “We look the same as other products – but let me tell you how we’re not.”
And I think there is an education piece there: let me tell you how we’re not.
Barb Hyman:
I love that you’ve taken the candidate’s view on that. We need to protect them and our brand, and trust is crucial. We shouldn’t blindly trust AI; we should be able to trust it because it’s safe to do so. That’s a great call to action for all of us in that space.
Listen to the full episode of our podcast with Meahan Callaghan, CHRO of Redbubble, here:
We’re taking a quick look back before we crack on with making hiring fairer for even more candidates in 2023.
This year we upgraded 17 of our customers to our new secure platform, Edge 3, and released a host of features that have transformed our candidate and hiring team experience.
In the early part of the year we made a bunch of nifty design improvements to make Chat Interview even more intuitive. Then we added features like reminder emails and a progress bar, improved the experience of entering phone numbers for global candidates, and introduced a planned delay in sending My Insights profiles.
In the world of video, we enabled customers to use standalone Video Interviews, and made smaller improvements such as better compression on our video platform and the ability for hiring teams to use pre-recorded videos to ask questions or play scenarios to candidates.
After completely rebuilding our platform to make it more secure, user-friendly and flexible, we continued to improve the experience for hiring teams with features like optimizing Talent Insights for mobile devices; improving ease of access while maintaining security with ‘remember this device’; and improving candidate search and management.
Alongside a number of additions to our people insights product, Discover Insights, our new Integrations dashboard taps into valuable data from ATSs to give integrated customers a holistic view of candidate experience, efficiency and inclusivity along the hiring journey.
Our Integration Warriors did us proud this year. We completed 7 new ATS integrations, enabling a seamless candidate and user experience for customers of SmartRecruiters, Workday, SuccessFactors, eArcu, Greenhouse, PageUp and Avature.
We were also accepted into the Crown Commercial Service Marketplace, making it easy for UK government departments to access our offering; and we continued to work with some of the largest RPO partners globally to help transform the hiring process of some incredible brands.
Security and compliance are always a priority at Sapia. With Edge 3 came multi-region data hosting, more secure login features, and user hierarchies to ensure secure storage and access to candidate data.
And as a gift to round out the year, our auditor AssuranceLab completed our SOC 2 Type 2 surveillance, and we’re on track for successful accreditation, with a report available in Q1 2023.
Next year brings new challenges as we’ll continue to improve our offering while expanding our customer base globally.
On the horizon are some exciting improvements to Talent Insights; a host of new integrations including Workable, iCIMS, Oracle, Cornerstone, Bullhorn, Lever, Jobvite, JazzHR… and one release that we’re not quite ready to announce just yet.
Let’s just say it’ll be muy, muy grande, et très excitant!
The Royal Commission has brought a lot of scrutiny on the banks, and for good reason. But we have to give credit where it’s due: banks are rigorous about assessing risk before they lend money.
Which is funny, as I’d argue that hiring a staff member is a much riskier proposition for a business than a bank having one of its customers default on a loan.
Imagine if your bank lent you money using the same process that your average recruiter uses to hire for a role.
They would ask you to load all of your personal financial information into an exhaustive application form. Your salary, your weekly spend, your financial commitments. All of it.
The same form would also include a lot of probing personal questions.
Then, assuming your form piqued their interest, they would bring you in for a one-on-one meeting with the bank manager. That manager would grill you with a stern look, asking the same questions. This time, though, they would be closely watching your eye movements to see if you were lying when you answered.
In each part of the process, you get a score, and then if that number is above a certain threshold, you get the loan.
It’s almost laughable, right?
Only people who desperately need money would put themselves through that process. And they’re likely not the best loan candidates.
Banks work hard to attain incredibly high accuracy levels in assessing loan risk.
Meanwhile, in HR, if you use turnover as a measure of hiring accuracy, it’s as low as 30–50 per cent in some sectors. If you combine turnover and performance data (how many people who get hired really raise a company’s performance), it might be even lower than that.
Banks wouldn’t exist if their risk accuracy was anywhere close to those numbers.
Well, that’s how most recruitment currently works — just usually involving more people.
There are more parallels here than you think.
Just like a bank manager, every recruiter wants to get it right and make the best decisions for the sake of their employer. As everyone in HR knows, hiring is one of the greatest risks a business can take on.
But they are making the highest-risk decision for an organisation based on a set of hypotheses, assumptions and lots of imperfect data.
So, let’s flip the thought experiment: what if a bank ran your hiring process the way it assesses loan risk?
Well, the process wouldn’t involve scanning CVs, a 10-minute phone call, a face-to-face interview and then a decision.
That would be way too expensive, given far more people apply for jobs each year than apply for loans. Not to mention the process itself is too subjective.
I suspect they would want objective proof points on what traits make a candidate successful in a role, data that matches the candidate against those proof points and, finally, further cross-validation with other external sources.
They wouldn’t really care if you were white, Asian, gay or female. How could you possibly generalise about someone’s gender, sexuality or ethnicity and use it as a lead indicator of hiring risk? (Yet, in HR, this is still how we do it.)
Finally, they’d apply a layer of technology to the process, making it a positive experience for candidates with a mobile-first design. Much like with a loan, you’ll lose your best customers if the funnel is long and exhaustive.
I’m not saying that banks are a beacon of business; the Royal Commission definitely showed otherwise. But for the most part, they have moved with the times and upgraded their processes to better manage their risk. It’s time HR did the same.
You can try out Sapia’s Chat Interview right now, or leave us your details to book a demo.