How Smart Chat Interviews Can Help Candidates With English As A Second Language (EASL)

We are often asked by talent leaders and hiring managers whether interviews conducted via text-based chat disadvantage candidates who speak English as a Second Language (EASL).

While that may seem intuitive, the data tells a different story.

Aggregate results across a variety of Sapia.ai clients that use our AI Smart Interviewer indicate that EASL candidates, in general, perform better than native English speakers.

While these results may seem surprising, the science that underpins our AI Smart Interviewer was built to mitigate bias, and we test this constantly.

Standard testing includes the “4/5ths rule”, the industry-standard test for adverse impact, which requires that the selection ratio of a minority group be at least four-fifths (80%) of the selection ratio of the majority group.

When we compare native English speakers with non-native English speakers (EASL), EASL candidates are scored higher on average by our AI Smart Interviewer and are therefore auto-progressed at a higher rate than native speakers, achieving a 4/5ths rule score of 100%.
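To make the arithmetic concrete, here is a minimal sketch of how a 4/5ths (adverse impact) check can be computed. The group sizes and selection counts below are illustrative only, not Sapia.ai data.

```python
def selection_ratio(selected: int, applied: int) -> float:
    """Proportion of a group's applicants who were progressed."""
    return selected / applied

def adverse_impact_ratio(focal_rate: float, comparison_rate: float) -> float:
    """4/5ths rule check: the focal group's selection rate divided by the
    comparison group's rate. A value of 0.80 (80%) or higher passes."""
    return focal_rate / comparison_rate

# Illustrative counts only, not Sapia.ai data.
easl_rate   = selection_ratio(selected=240, applied=400)   # 0.60
native_rate = selection_ratio(selected=330, applied=600)   # 0.55

ratio = adverse_impact_ratio(easl_rate, native_rate)
print(f"Adverse impact ratio: {ratio:.2f}")   # 1.09, reported as 100% once capped
print("Passes 4/5ths rule:", ratio >= 0.80)   # True
```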

Assessing language using Sapia.ai

When it comes to assessing language skills using Sapia.ai’s proprietary written language assessments, we have developed two aggregate measures called “basic communication skills” and “advanced communication skills”.

– Basic skills look for language fundamentals such as spelling, grammar, and readability.
– Advanced skills look at the sophistication of language (e.g. vocabulary).


It is important to note that the dimensions used within each measure, such as spelling and grammar, are weighted so that not every misspelled word or grammatically incorrect sentence results in a penalty. These aggregate measures are benchmarked and validated using our large interview dataset across multiple role families.
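As a loose illustration of how a forgiving, weighted aggregation can work, the sketch below only penalises error rates above a tolerance. The dimension names, weights, and tolerances are hypothetical assumptions, not Sapia.ai’s actual model.

```python
# Hypothetical sketch of a "forgiving" weighted aggregation: each dimension has a
# weight, and error rates below a tolerance incur no penalty at all.
# Dimension names, weights, and tolerances are illustrative assumptions,
# not Sapia.ai's actual model.

DIMENSIONS = {
    # name: (weight, tolerated error rate)
    "spelling":    (0.4, 0.05),   # up to 5% misspelled words carries no penalty
    "grammar":     (0.4, 0.05),
    "readability": (0.2, 0.00),
}

def basic_skills_score(error_rates: dict) -> float:
    """Weighted 0-100 score; only errors beyond each dimension's tolerance count."""
    total = 0.0
    for name, (weight, tolerance) in DIMENSIONS.items():
        excess = max(0.0, error_rates[name] - tolerance)   # errors within tolerance are ignored
        dimension_score = max(0.0, 1.0 - 5 * excess)        # simple linear penalty, floored at zero
        total += weight * dimension_score
    return round(100 * total, 1)

# A few misspellings and grammar slips do not drag the score to zero.
print(basic_skills_score({"spelling": 0.03, "grammar": 0.08, "readability": 0.10}))  # 84.0
```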


Further, in Sapia.ai assessments, these measures are not always weighted equally; the weights are set according to how important language skills are for a particular job.

For example, for a customer-facing retail role, “basic skills” might be set to “medium” and “advanced skills” to “low”, or ignored altogether. A retail team member may be required to jot down notes or write the occasional report or email, so basic writing skills are helpful but not essential, hence the “medium” weighting and minimal impact on the overall score. Other personality traits and behavioral competencies may play a stronger role in determining role fit.
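One way to picture this, purely as an assumed configuration and not Sapia.ai’s actual settings, is a per-role weight map that decides how much each communication measure contributes to the overall fit score:

```python
# Hypothetical role-level configuration showing how much each communication
# measure could contribute to the overall fit score. The weight values and
# role names are assumptions for illustration, not Sapia.ai's settings.

WEIGHT_LEVELS = {"ignored": 0.0, "low": 0.05, "medium": 0.15, "high": 0.30}

ROLE_CONFIG = {
    "retail_team_member": {"basic_skills": "medium", "advanced_skills": "ignored"},
    "graduate_program":   {"basic_skills": "high",   "advanced_skills": "medium"},
}

def language_contribution(role: str, basic: float, advanced: float) -> float:
    """Weighted contribution of the language measures (scores on a 0-1 scale).
    The remaining weight would come from personality traits and competencies."""
    cfg = ROLE_CONFIG[role]
    return (WEIGHT_LEVELS[cfg["basic_skills"]] * basic
            + WEIGHT_LEVELS[cfg["advanced_skills"]] * advanced)

# For a retail role, even a modest basic-skills score has only a small effect
# on the overall score; other competencies carry more weight.
print(language_contribution("retail_team_member", basic=0.54, advanced=0.40))  # ~0.081
```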

In addition, the scores are benchmarked within a relevant population: a retail worker’s “basic skills” score is not compared against that of graduates or call center staff.

Here’s how the scoring might work:

Maria applies for a retail role and receives a basic skills score that places her in the top 20% of the relevant population, that is, a population of retail candidates. This percentile is what feeds into the final score calculation, so no one is disadvantaged and candidates are only compared within a comparable group. The raw basic skills score that placed Maria in the top 20% of retail applicants is 54/100.

In comparison, Michael, a graduate applicant, receives a basic skills score of 72 and sits in the top 30% of graduate applicants. Michael has scored higher than Maria on basic skills, but within their respective populations, Maria has done better than Michael.
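Here is a minimal sketch of that within-population benchmarking. The benchmark score distributions are made up for illustration; they simply reproduce the Maria and Michael percentiles above.

```python
# Made-up benchmark distributions for two role families, chosen so they
# reproduce the percentiles in the example above. Illustrative only.

def percentile_within(population_scores: list, score: float) -> float:
    """Percentage of the benchmark population scoring below the candidate."""
    below = sum(1 for s in population_scores if s < score)
    return 100.0 * below / len(population_scores)

retail_benchmark   = [30, 35, 40, 42, 45, 48, 50, 52, 56, 60]
graduate_benchmark = [50, 55, 60, 62, 65, 68, 70, 74, 78, 85]

maria_pct   = percentile_within(retail_benchmark, 54)    # 80.0 -> top 20% of retail applicants
michael_pct = percentile_within(graduate_benchmark, 72)  # 70.0 -> top 30% of graduate applicants

print(f"Maria: 54/100, ahead of {maria_pct:.0f}% of retail applicants")
print(f"Michael: 72/100, ahead of {michael_pct:.0f}% of graduate applicants")
# Maria's percentile (80) beats Michael's (70), even though his raw score is higher.
```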

There are also other factors to consider when thinking about smart chat interviews and their impact on EASL candidates. 

In a spoken or video test, candidates have fewer chances to re-record their answers. In our Chat Interview™, we give candidates unlimited chances to refine their answers, allowing them to edit the text until they are ready to submit. An EASL candidate has as much opportunity as they need to refine their answers, with no pressure.

Candidates can also complete the interview at their own pace, so the time taken is not a factor that impacts their scores. An EASL candidate has as much time as they need to work on the language and get it right.

You may still be wondering how we ensure EASL candidates’ personality traits and behavioral competencies are also accurately assessed.

Our Chat Interview™ uses Natural Language Processing, machine learning, and optimization methods to score structured interview responses fairly and consistently.

Our scoring leverages data from over 1 billion words written by over 3.5 million diverse candidates across many different role families and regions.

Based on a candidate’s use of language, we derive the signals that matter, such as personality traits and behavioral competencies, which are then fed into a predictive algorithm, built around the ideal candidate profile, to generate a score and recommendation.

We don’t use simple keyword matching, and we consider more than just the words used. Phrasing, syntax, structure, and context all matter. Perfect grammar and spelling don’t matter for the majority of constructs.
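To make the shape of that pipeline concrete, here is a purely schematic sketch of the general idea: derive trait-level signals from free text, then compare them against an ideal-candidate profile. The featurizer below is a crude placeholder (real models consider phrasing, syntax, structure, and context, as noted above), and none of the trait names, features, or similarity measures reflect Sapia.ai’s proprietary models.

```python
import math

# Schematic only: a placeholder featurizer plus an ideal-profile comparison.
# The trait names, features, and similarity measure are illustrative assumptions,
# not Sapia.ai's proprietary NLP or scoring models.

def derive_signals(answer: str) -> dict:
    """Toy trait-like signals derived from an interview answer (placeholder features)."""
    words = answer.split()
    sentences = [s for s in answer.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_sentence_len = len(words) / max(1, len(sentences))
    return {
        "conscientiousness": min(1.0, len(words) / 150),                      # answer fullness as a crude proxy
        "openness": len(set(w.lower() for w in words)) / max(1, len(words)),  # vocabulary diversity
        "clarity": min(1.0, 20.0 / max(1.0, avg_sentence_len)),               # shorter sentences read more clearly
    }

def score_against_profile(signals: dict, ideal: dict) -> float:
    """Cosine similarity between candidate signals and the ideal profile (0 to 1)."""
    dot = sum(signals[t] * ideal[t] for t in ideal)
    norm = (math.sqrt(sum(v * v for v in signals.values()))
            * math.sqrt(sum(v * v for v in ideal.values())))
    return dot / norm if norm else 0.0

ideal_profile = {"conscientiousness": 0.8, "openness": 0.6, "clarity": 0.7}
answer = ("When our team fell behind on the rollout, I re-planned the week, "
          "checked in with each store daily, and tracked every task to completion.")
print(round(score_against_profile(derive_signals(answer), ideal_profile), 2))  # ~0.82
```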

Taken together, our highly tuned assessment models and the validity of structured interviews deliver a far more enjoyable and reliable assessment experience for EASL candidates, especially when compared with traditional assessments.

Being data-driven means we can constantly and vigilantly check that EASL candidates are not disadvantaged in how they are assessed.

About Author

Laura Belfield
Head of Marketing
