What corporate America doesn’t want to admit right now is that when COVID-19 forced lay-offs and tough decisions about priorities, Diversity and Inclusion initiatives were often the first to go.
As McKinsey notes in its report “Diversity Still Matters,” this is not the first time companies have dropped Diversity and Inclusion as a priority the moment a crisis hits.
The McKinsey report stops short of taking aim at the hypocrisy of these companies, stating it may be “quite unintentional: companies will focus on their most pressing basic needs—such as urgent measures to adapt to new ways of working; consolidate workforce capacity; and maintain productivity, a sense of connection, and the physical and mental health of their employees.”
And yes, as short-sighted as this may be on the part of these companies, you might accept that, given the havoc COVID-19 has wreaked on our economy, this loss of focus is somewhat understandable.
Then George Floyd died after a police officer held him down so that he was unable to breathe. The world erupted to stand in solidarity with Black Lives Matter. Suddenly, corporate America seemed to care about equality again. We’ve seen unprecedented statements from companies in support of the #blacklivesmatter movement, with ice-cream behemoth Ben & Jerry’s perhaps the most memorable, publishing a page under the words “White supremacy” that directly called on President Trump to stop attacking protestors. Other top brands, including Netflix, Google, Twitter, Nike and Reebok, have also taken bold stands supporting the Black Lives Matter human rights campaign.
This signifies a huge shift in how companies engage with these issues, and I’m all for it. But when we’re fighting institutionalised racism, and corporate America is very much part of the institution, it doesn’t matter how powerful a statement is unless you’re willing to take action and change internally. I hope this marks a real change, because until now many companies have made public statements without taking any steps to change.
I should know. For the past year, I’ve been trying to sell corporates an AI solution that removes bias from job applications. I’ve sat in meetings where white executives wring their hands over not knowing how to solve diverse representation in their companies, all while I’m literally demonstrating something that could do just that.
Let me explain. Bias in the recruiting process has been an issue for as long as modern-day hiring practices have existed. The idea of “blind applications” became a thing a few years ago, with companies removing names from applications in the belief that it would remove gender or racial profiling. It made a difference, but bias still crept in through the schools people attended and the past experience they had. Interestingly, these are two things that have now been shown to have no impact on a person’s ability to do a job.
Artificial Intelligence was touted as the end solution, but early attempts still ran through CVs and amplified biases based on gender, ethnicity and age. Even where those attributes weren’t recorded, the AI built profiles comparing “blind” candidates to the people currently in the roles (i.e. white men), and it continued to favour particular schools and experience.
Bias can only truly be removed from recruiting if the application is genuinely blind (no demographics are recorded) and is not based on a CV, instead matching a person’s responses to specific questions against their ability to perform the job. It has to be text-based so that true anonymity can be achieved, something video can’t do, because people are still racially profiled on camera.
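To make that concrete, here is a minimal sketch in Python of what a blind, text-only application record could look like. It is an illustration of the idea rather than Sapia’s actual implementation; the class and field names are hypothetical.

```python
# A minimal sketch (not Sapia's actual implementation) of a "truly blind"
# application record: free-text answers to structured, job-relevant questions
# only, with no name, CV, school, photo, or video attached.
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class BlindApplication:
    # A random ID stands in for any identifying detail such as a name or email.
    applicant_id: str = field(default_factory=lambda: uuid4().hex)
    # Scoring works only from what the candidate writes in response to
    # specific questions, never from demographics or a CV.
    answers: dict[str, str] = field(default_factory=dict)

app = BlindApplication(answers={
    "Describe a time you solved a problem under pressure.": "Last month our team...",
    "What motivates you in a customer-facing role?": "I enjoy helping people...",
})
print(app.applicant_id, len(app.answers))
```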
I’m not in any way proposing this solves everything in relation to Diversity and Inclusion within corporate cultures. However, it does remove bias, and I have the evidence. What I’m seeing is something a bit more sinister: companies opting for solutions that give the appearance of solving the problem and taking action, while actually not solving the problem and maintaining the status quo. I’m starting to wonder if this is deliberate.
Is it possible that so many companies are scared of removing bias from their recruitment process because, if they hire people of color, they might then be held accountable by their employees to turn their words about racial discrimination into action? We’ll see. And if Black Lives really matter, then the disproportionate number of Latinx and Black workers who lost their jobs should be given a fairer opportunity at future employment.
We cannot remove institutional racism with the mechanisms that have been used to enforce it. The lack of equal employment opportunity is one of those mechanisms. Denying that solutions exist to address it, or using solutions that merely give the appearance of correcting it, are just ways of maintaining the status quo.
To keep up to date on all things “Hiring with Ai”, subscribe to our blog!
Finally, you can try out Sapia’s Chat Interview right now, or leave us your details here to get a personalised demo.
Have you seen the 2020 Candidate Experience Playbook?
If there was ever a time for our profession to show humanity for the thousands that are looking for work, that time is now.
Despite all the rhetoric, it seems that the world is becoming worse at removing bias from our workplaces, leveling the playing field for all employees, and improving diversity, equity, and inclusion.
COVID was tough for everyone, but one good thing that seemed to come out of it was how people galvanized around the Black Lives Matter movement. Companies dedicated large advertising budgets to sophisticated promotional campaigns to convince us that they supported the movement.
At work, people demanded better from the companies they worked for. They demanded real and measurable progress on matters like diversity and inclusion, not just better benefits.
Employees weren’t going to accept the hypocrisy of an employer spending millions on advertising about how woke it was while nothing changed internally. Bias was simply not something people were prepared to accept. It seemed like progress was being made, at least in the workplace.
Fast forward to 2023, and things have gotten worse than they were before the movement. What happened to push us so far backward on all the progress we’d made? A big part of the answer is video interviewing, and the way it amplifies bias in recruitment.
Video interviewing took off as a solution to the challenges of remote recruiting. However, video is a flawed way of assessing potential candidates as a first gate. It invites judgment, adds stress to the candidate, puts added pressure around hair and makeup, and turns a simple interview into a small theater production. Additionally, simply automating interviews with video doesn’t create any efficiencies for hiring teams, who are still watching hours and hours of interviews.
Video also excludes people who are not comfortable on camera, such as introverts, people with autism, and people of color. These factors do not influence a person’s ability to do a job, but using video at the start of the interview process puts them at a disadvantage. We are excluding a significant percentage of people by using video as a first gate.
We analyzed feedback comments from more than 2.3 million candidates across 47 countries who used smart chat invented by Sapia.ai to apply for a role, and the overwhelming theme is that “it’s not stressful.”
As an industry, we must put a stop to this. Already, there is growing cynicism when companies talk about “improving candidate experience” because we like to say we care about something that will win us good PR, but we do little to hold ourselves accountable. We care more about optics than results.
However, you cannot say you care about candidates or diversity and inclusion while only using video platforms to recruit people. Frustratingly, there is technology that solves for remote work, improves the candidate experience, and truly reduces bias: text chat.
Some of the most sought-after companies, like Automattic (the makers of WordPress), have been using it for years.
Chat is how we truly communicate asynchronously. It needs no acting, and we all know how to chat. Empowered by the right AI, text chat can be human and real. It listens to everyone, it is blind, it reduces bias, it evens the playing field by giving everyone a fair go, and it gives them all personalized feedback at scale.
It can harness the true power of language to understand the candidate’s personality, language skills, critical thinking, and much more.
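To illustrate the kind of signals text can carry, here is a toy sketch in Python. The features below are simplified stand-ins chosen for this example only; a production system would rely on validated language models rather than these heuristics.

```python
# A toy sketch of the kind of language signals a text-chat interview could score.
# These heuristics are illustrative only, not how a real scoring model works.
import re

def language_features(answer: str) -> dict:
    words = re.findall(r"[a-zA-Z']+", answer.lower())
    return {
        "word_count": len(words),                                   # effort / elaboration
        "unique_word_ratio": len(set(words)) / max(len(words), 1),  # vocabulary range
        "i_statements": sum(w == "i" for w in words),               # personal ownership
        "because_count": sum(w == "because" for w in words),        # explicit reasoning
    }

print(language_features(
    "I reorganised the roster because two colleagues were off sick, "
    "and I kept the team informed so nobody was surprised."
))
```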
Video should only ever be used as a secondary interaction, for candidates who are already engaged in the process and have been shortlisted. In that case, it does give hiring teams a chance to meet candidates, and candidates are more likely to be comfortable with video as they know they’ve progressed, and they’ve had a chance to present themselves in a lower pressure format already.
Why are we settling for video as a first interaction, when we can actually do more than make empty marketing promises to candidates? Why choose a solution that erodes all the hard gains we’ve made in diversity and inclusion?
There are steps we can take to eliminate bias in recruitment, and they begin with not relying on CVs as a method of evaluating candidates.
CVs are full of information that is irrelevant to assessing a person’s suitability for a job. Instead, they highlight things we often use to confirm our biases, and draw our attention away from other key attributes or aptitudes that might make someone especially suitable for the role.
For example, if a CV mentions a certain university, it might pique our attention (a form of pedigree bias). This is problematic, as there may be socio-economic reasons why someone attended a certain university (or did not attend another), and CVs do little to reveal this. Situations like this confirm the biases that led to them in the first place, compounding long-term systemic issues.
Additionally, CV data narrows a candidate pool in a way that does not optimise for better fits for the role, because it relies on the wrong input data and criteria to find a candidate. Amazon discovered this when it abandoned its machine-learning-based recruiting engine, built on CV data, after finding that the engine did not like women.
Automation has been key to Amazon’s dominance, so the company created an experimental hiring tool that used artificial intelligence to give job candidates scores ranging from one to five stars.
The issue was not the use of Ai, but rather its application. Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. As a result of being fed predominantly male resumes, Amazon’s system taught itself that male candidates were preferable. It penalised resumes that included the word ‘women’ as in “women’s chess club captain.” It also downgraded graduates of all-women’s colleges.
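To see how this happens mechanically, here is a deliberately simplified sketch, not Amazon’s actual system, of training a bag-of-words logistic regression classifier on skewed historical hiring outcomes and inspecting what it learns. The resumes and labels are synthetic and exaggerated so the effect is visible.

```python
# Illustrative only: a tiny text classifier trained on historically skewed
# hiring outcomes learns the skew, not merit. All data below is synthetic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer chess club captain",          # hired
    "software engineer hackathon winner",            # hired
    "software engineer open source contributor",     # hired
    "software engineer women's chess club captain",  # rejected
    "software engineer women's coding society",      # rejected
    "software engineer robotics team lead",          # hired
]
hired = [1, 1, 1, 0, 0, 1]  # skewed historical outcomes

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The token "women" ends up with a negative weight purely because of the
# skewed labels it was trained on; the model has absorbed the bias.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])  # most penalised tokens
```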
Studies have shown that systemic, unintended bias occurs when reviewers assess resumes that are identical apart from names signifying a racial background, a gender, or LGBTQIA+ status. The response has been to remove names or other identifiable data from interviews or CV screening, but these approaches still suffer from the bias issues discussed earlier.
In order to be truly blind, any input data needs to be clean and objective. This means that it gives no insight into someone’s age, gender, ethnicity, socio-economic standing, education, or even past professional experience.
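As a sketch of what “clean and objective” can mean in practice, here is a hypothetical guardrail in Python that rejects any candidate record still carrying fields that reveal, or proxy for, those attributes. The field names are illustrative assumptions, not a definitive list.

```python
# A hypothetical guardrail: reject any candidate record whose fields could
# reveal (or proxy for) age, gender, ethnicity, education, or past employers
# before it reaches a scoring model. Field names here are illustrative.
BLOCKED_FIELDS = {
    "name", "date_of_birth", "gender", "nationality", "photo",
    "school", "university", "previous_employers", "postcode",
}

def assert_blind(record: dict) -> dict:
    leaked = BLOCKED_FIELDS & set(record)
    if leaked:
        raise ValueError(f"Input is not blind; remove fields: {sorted(leaked)}")
    return record

# Only question responses survive the check.
assert_blind({"answers": {"q1": "I led a team project that..."}})
```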
To truly disrupt bias, recruiters and hiring managers should utilise a new wave of HR tech tools such as Sapia, stepping away from using CV data as a way to determine job suitability.
____________________
To find out how to improve candidate experience using Recruitment Automation, we also have a great eBook on candidate experience.
New insights from Aptitude Research suggest recruitment automation can play a much greater role in talent acquisition than just improving efficiency for hiring managers: it can also make the interview process more human for candidates.
The research shows that when you shift the focus from an employer-driven view to a candidate-first view, then it is possible to reduce bias in hiring and improve the overall human element of talent acquisition.
For most companies, the value of automation is perceived through the recruiter and hiring manager experience, with the benefits to the candidate often ignored. However, recruitment automation has to be about more than simply moving candidates through the process quickly to have any significant benefit to a company.
When you focus on the impact and experience of the candidate, recruitment automation can significantly improve the experience for recruiters and candidates alike. This approach has given rise to a movement called humanistic automation technology.
But humanistic automation sounds like an oxymoron, right? Is it even possible?
The Aptitude Research shows that not only is this possible, but that when Ai is used this way it creates personal connection at scale and works to reduce bias, something no other technology, or even human-centred solution, can deliver.
So, how exactly does it do this?
There have been some slight improvements in building connection through the hiring process recently, but only 50% of companies offer a single point of contact for communication, which leaves many candidates feeling neither engaged nor valued through the process.
Recruitment automation with a candidate focus means that communication is personalised for high engagement, with the conversation able to adapt almost immediately to what it learns about a candidate.
As a candidate, finding out that you were not successful is tough; worse, most companies simply ghost those they don’t wish to move ahead with. Automation can ensure that every candidate is engaged and cared for even when they are not moving forward in the process, and that doesn’t mean a standard rejection email. Ai can deliver highly personalised communication that builds connection even for those unsuccessful in their application.
Although some companies have made efforts to remove bias from resumes, companies still have a lot of work to do on inclusion. For starters, many rely on training programs, which have been shown to be largely ineffective in delivering long-term change.
It’s true that recruitment automation can amplify bias, but automation designed to reduce bias continually tests for biases in the system and has been shown to be effective in reducing the impact of bias on hiring decisions. This is something humans cannot do (we’re inherently biased, whether we like it or not).
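One common form this continual testing takes is an adverse-impact check such as the “four-fifths rule,” which compares selection rates across demographic groups on the outcomes of a screening step. The sketch below is a minimal illustration with hypothetical numbers; group labels are used only for auditing, never as model inputs.

```python
# A minimal sketch of an ongoing bias test: the "four-fifths rule"
# adverse-impact check on screening outcomes. Data is hypothetical.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, was_shortlisted) pairs."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in outcomes:
        totals[group] += 1
        selected[group] += int(shortlisted)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Lowest group selection rate over the highest; below 0.8 is a red flag."""
    return min(rates.values()) / max(rates.values())

outcomes = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
         + [("group_b", True)] * 25 + [("group_b", False)] * 75

rates = selection_rates(outcomes)
print(rates)                        # {'group_a': 0.4, 'group_b': 0.25}
print(adverse_impact_ratio(rates))  # 0.625 -> below 0.8, investigate the model
```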
When you have the right input data, gathered through blind screening and blind interviews that don’t rely on CV data, you can help companies deliver an equal and fair experience to all candidates.
Inclusive hiring is not limited to gender and race. Companies need a broader view of diversity, equity, and inclusion that includes individuals with disabilities and neurodiversity. This requires the right digital tools and technology to ensure that these candidates have a positive experience. In many cases, chat and text are more inclusive than video or even phone screening and interviews for these candidates.
Most companies see feedback as a risky area and something they are unable to deliver in a fair and timely manner. Essentially, this is a lost opportunity for learning and development.
When you see feedback as a value proposition of an employer brand, its power to transform your TA strategy becomes clear. Recruitment automation allows companies to deliver personalized feedback, building trust and strengthening the employer brand.
Personalized feedback with tangible action items means that candidates feel empowered even if they are rejected. Technology can deliver these action items in a human way, and at a scale and consistency that humans themselves cannot match.
These insights are only made possible by natural language processing and machine learning working in the background to reveal important information about the candidate. When a candidate feels ‘seen’, that can be a transformational moment in their career.
Only recruitment automation can deliver individual feedback to everyone who takes the time to do a job interview.
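As a simplified illustration of feedback at scale, the sketch below turns per-candidate trait scores into a short, personalised message with one action item. The trait names, copy, and scores are hypothetical; a real system would derive them from validated models of the interview text.

```python
# A simplified sketch of turning per-candidate trait scores into personalised
# feedback with one action item. Traits, thresholds, and copy are hypothetical.
FEEDBACK = {
    "drive":        ("You showed strong initiative in your answers.",
                     "Try adding a concrete example of a goal you pursued independently."),
    "empathy":      ("Your answers showed genuine care for the people involved.",
                     "Consider describing how you adapt your approach to different people."),
    "adaptability": ("You handled change well in the situations you described.",
                     "Next time, spell out what you changed and what the result was."),
}

def personalised_feedback(scores: dict[str, float]) -> str:
    strongest = max(scores, key=scores.get)   # lead with the candidate's strength
    weakest = min(scores, key=scores.get)     # offer one action item on the gap
    strength, _ = FEEDBACK[strongest]
    _, action = FEEDBACK[weakest]
    return f"{strength} One thing to work on: {action}"

print(personalised_feedback({"drive": 0.82, "empathy": 0.64, "adaptability": 0.41}))
```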
In an era of growing awareness around data privacy, only 1 in 4 candidates trust the data being used to drive hiring decisions. As companies look at recruitment automation through a candidate-centric lens, they must consider both the quality of the data they use and how to build trust between employers and candidates.
The biggest mistake that most companies make is using the wrong data. Resume data is not necessarily an indicator of performance or quality of hire.
Ethical Ai is something that hiring managers need to understand and use to evaluate providers. Providers using ethical Ai operate transparently, are backed by explanations, describe their methodology, and frequently publish their data.
Aptitude Research found that when data is transparent, it increases the trust in talent acquisition leaders, hiring managers, and senior leaders. With data transparency, 84% of talent acquisition leaders stated that they trust the data, and 78% of senior leaders trust the data.
55% of companies are increasing their investment in recruitment automation this year. These companies recognise that automation can improve efficiency, lift the administrative burden, reduce costs, and enable data-driven decisions.
This report takes a new look at automation through the eyes of the candidate.
After all, automation is more than moving candidates through a process quickly. It should also enable companies to communicate in a meaningful and inclusive way and build trust between candidates and employers.