No organisation wants to discriminate based on race, national origin, marital status, sexual orientation, or gender identity. Still, job seekers face wildly different experiences depending on which store they apply to, which coordinator screens their CV, or whether they know someone on the inside.
In 2026, unfairness in hiring is rarely intentional—it’s operational. When you’re processing thousands of applications, tiny inconsistencies compound into serious problems that damage your organisation’s reputation, limit your access to qualified candidates, and expose you to legal risk.
This guide walks through 12 unfair hiring practices examples, explains why they matter, and shows you practical fixes you can implement to eliminate unconscious bias.
Generally speaking, hiring discrimination doesn’t stem from malice.
If different stores use different application forms, or one hiring manager conducts structured interviews while another relies on “chemistry”, you’ll have a system that produces inconsistent outcomes.
Proxy-driven decisions make things worse. When organisations screen for brand-name employers or employment gaps, they crowd out evidence of job-related skills and disproportionately exclude career-switchers, carers, and under-represented candidates. Plus, communication black holes develop. When this happens, job applicants move on because they feel they’ve been kept in the dark.
The fix is process design. Offer interview-first access, blind rubric-based scoring, predictable reminders, and feedback for all—with audit trails to spot patterns before they become complaints.
(Legal note: definitions of discriminatory practices vary by jurisdiction and evolve as anti-discrimination laws develop. This article focuses on operational fairness, not legal advice.)
You can’t claim to pursue fair hiring practices if you commit any of the hiring mistakes below.
Once you uncover unfair hiring practices and hidden biases in your recruitment process, you can fix the issues. Here are 12 common patterns we see, and ways to address them.
How it shows up: Insider jargon, gender-coded language, “must-haves” that are really “nice-to-haves,” and no accommodation notes.
Why it’s unfair: These things shrink your candidate pool in disproportionate ways, while discouraging carers, career-switchers, and candidates with disabilities from applying.
Fix: Rewrite job postings in plain language that focuses on outcomes, and split essential requirements from optional ones. Then, add accessibility statements and translate postings into priority languages.
How it shows up: Coordinators skim CVs to cut volume before assessing job-related skills. Because of this, hiring managers don’t have access to the best talent available.
Why it’s unfair: Pedigree proxies, like brand names and prestigious universities, trump actual capabilities. Reviewer variance explodes without consistent criteria.
Fix: Trigger a short, structured AI interview-first step for all job applicants. Then, hold CVs until blind first-pass scoring against rubrics is complete – or remove them entirely if hiring for entry-level roles. This is easy to do with an interview-first platform like Sapia.ai, which runs as an overlay to your ATS so you don’t have to replace your entire tech stack.
How it shows up: Comments like “Didn’t feel right for us”, with no definition of what “right” actually means.
Why it’s unfair: A reliance on “culture fit” vibes masks personal biases. It’s also impossible to audit.
Fix: Focus on values alignment and competency measurement. Define four to six competencies, write scenario prompts for each, score candidate answers against anchored rubrics, and add a second reader for borderline cases.
How it shows up: Interviewers ask different questions of every candidate and skip scorecards, making decisions based on “chemistry” and “gut feel”.
Why it’s unfair: Unstructured interviews and shifting goalposts produce incomparable evidence, which leads to subjective outcomes—a common source of hiring discrimination.
Fix: Use standardised question sets per role family, and apply behavioural rubrics consistently. Run occasional calibration sessions for those rubrics, and capture scores and rationales for audit trails. Together, these steps give every candidate a fair shot.
How it shows up: Multi-hour tasks with vague success criteria and no accessible alternative.
Why it’s unfair: Multi-hour tasks penalise certain candidates, such as shift workers, carers, and candidates with disabilities. Because of this, drop-off rates explode for certain groups.
Fix: Keep work samples to 30 minutes or less. Provide clear instructions and marking criteria, and offer accessible, mobile-friendly options. Finally, use rubric-based scoring so every candidate is assessed against the same standard, which gives underrepresented groups an equal opportunity.
How it shows up: Weekend flexibility or shift rules are applied inconsistently across locations.
Why it’s unfair: Local rule-making leads to unequal treatment. Worse, these practices are extremely hard to audit, which quietly reduces employment opportunities for candidates held to the stricter rules.
Fix: First, build a central policy library. Then, capture eligibility questions the same way for everyone during the interview process. Finally, log exceptions with clear rationales.
How it shows up: Employee referral programs that create side doors to manager screens or job offers.
Why it’s unfair: This practice prioritises candidates based on who they know, not what they can do. Nobody wants to lose out on a role for that reason.
Fix: Route every candidate—referred or not—through the same structured, blind first assessment. Then, compare referral vs. non-referral outcomes on a monthly basis for valuable insights.
How it shows up: Making offers based on prior pay.
Why it’s unfair: Pay anchoring bakes in historical wage disparities. It may also breach local regulations regarding salary history inquiries, which could lead to punitive damages.
Fix: Only use published salary bands and job-related factors. In addition, remove salary history fields from your applications, and train hiring managers to offer fair, consistent compensation.
How it shows up: Desktop-only forms, mandatory high-bandwidth video, and no screen-reader support.
Why it’s unfair: These practices can lead to disability discrimination. They can also eliminate qualified applicants based on income and other irrelevant details, like whether or not they own a desktop computer.
Fix: Build a mobile-first, text-friendly interview process. While you’re at it, ensure screen-reader compliance and include a low-bandwidth mode. Lastly, publish clear accommodation routes.
How it shows up: No acknowledgement after a candidate applies, and no timeline to tell candidates what to expect. As a result, candidates chase hiring managers for updates.
Why it’s unfair: A lack of communication between organisations and candidates drives disproportionate attrition among job seekers without insider confidence—especially in competitive job markets.
Fix: Promise timelines in your first message, send expiry-aware reminders via SMS and email, and provide feedback to all candidates at decision time. These steps show candidates that you respect their time.
How it shows up: Certain scores are auto-rejected with no rubric or explanation.
Why it’s unfair: Job applicants and auditors can’t see the “why” behind decisions. As such, candidates may assume they were rejected because of disability status, age, race, or another protected characteristic.
Fix: Base automation on structured prompts and anchored rubrics. Also, store rationales for every hiring cycle. Finally, keep humans in the loop to ensure threshold decisions are made fairly.
How it shows up: Personally identifiable information (PII) in free-text notes, uncontrolled downloads, and unclear retention schedules for all candidate data points.
Why it’s unfair: Privacy governance mistakes hit vulnerable groups the hardest, while eroding trust in your hiring process. As such, they tend to damage employer reputations.
Fix: Redact identifiers on first pass. Then, implement role-based access and regional data residency, while enforcing retention schedules. Last but not least, maintain exportable audit logs. These things will not only help you build trust with candidates, but also help you adhere to legal requirements.
Three stores used different screens, timelines, and interview styles for seasonal hiring. Predictably, application to interview completion lagged, and complaints about “culture fit” rejections rose.
Enter interview-first assessment: a 10-to-12-minute structured, mobile chat interview for all candidates before CV review. In addition, each of the three stores implemented blind, first-pass scoring against shared rubrics. Plus, the system sent two reminder cadences to improve completion, and hiring managers received explainable shortlists with scored evidence before live interviews.
The results were impressive: Completion rates climbed, time to first interview dropped from days to hours, stores received fewer “culture fit” declines, and representation stayed consistent across stages.
Find more case studies of real brands using an AI interview-first process to improve fairness in hiring here.
You don’t need months to spot unfairness in your hiring process. Run these diagnostics to identify your highest-impact fixes in 30 minutes or less.
Pull your last 90 days of recruitment data and look for sharp drops between application and interview completion, as well as between interview and offer. Break this down by site or brand. If one location shows a steep decline when others don’t, you should investigate the inconsistency. Different drop-off rates can signal different standards or different levels of communication across your organisation.
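If your ATS can export that data as a CSV, a few lines of Python make the check quick. Below is a minimal sketch, assuming a hypothetical export with columns candidate_id, location, and stage, and one row per candidate per stage reached; adapt the names to your system.

```python
# Minimal funnel check, assuming a hypothetical ATS export with columns:
# candidate_id, location, stage ("applied", "interview_completed", "offer"),
# and one row per candidate per stage reached.
import pandas as pd

df = pd.read_csv("applications_last_90_days.csv")

# Count unique candidates reaching each stage, per location.
funnel = df.pivot_table(
    index="location", columns="stage", values="candidate_id", aggfunc="nunique"
).fillna(0)

# Stage-to-stage conversion rates; a location that lags its peers is the signal.
funnel["apply_to_interview"] = funnel["interview_completed"] / funnel["applied"]
funnel["interview_to_offer"] = funnel["offer"] / funnel["interview_completed"]
print(funnel[["apply_to_interview", "interview_to_offer"]].round(2))
```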
Compare average rubric scores across different reviewers and hiring managers. A high spread between reviewers often signals a calibration problem—AKA different people interpret your hiring criteria in different ways. When one interviewer’s average score is significantly higher or lower than the rest of the team, you need a calibration session to educate employees on what “good” looks like.
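Here’s a minimal sketch of that variance check, assuming a hypothetical CSV of rubric scores with reviewer and score columns:

```python
# Minimal reviewer-variance check, assuming hypothetical columns:
# reviewer, score (one rubric score per completed interview).
import pandas as pd

scores = pd.read_csv("interview_scores.csv")
by_reviewer = scores.groupby("reviewer")["score"].agg(["mean", "std", "count"])

# Flag reviewers whose average sits far from the pack; they are the first
# candidates for a calibration session.
spread = by_reviewer["mean"].std()
overall = scores["score"].mean()
by_reviewer["needs_calibration"] = (by_reviewer["mean"] - overall).abs() > spread
print(by_reviewer.round(2))
```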
Identify candidates who’ve been waiting the longest at each stage of your recruitment process. These long silences cause disengagement and perceived unfairness to spike. After all, job seekers who hear nothing for weeks are more likely to assume they’ve been rejected and accept other offers.
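A sketch like the one below surfaces those candidates, assuming a hypothetical pipeline snapshot with candidate_id, stage, and entered_stage_at columns:

```python
# Minimal time-in-stage check, assuming hypothetical columns:
# candidate_id, stage, entered_stage_at (timestamp of the last stage change).
import pandas as pd

df = pd.read_csv("pipeline_snapshot.csv", parse_dates=["entered_stage_at"])
df["days_waiting"] = (pd.Timestamp.now() - df["entered_stage_at"]).dt.days

# The five longest-waiting candidates per stage: your most overdue updates.
longest = df.sort_values("days_waiting", ascending=False).groupby("stage").head(5)
print(longest[["candidate_id", "stage", "days_waiting"]])
```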
Track these metrics to assess progress toward a fair and inclusive hiring process:
Here’s how to pilot changes, prove they work, and prepare to scale in 30 days:
If you’re looking to accelerate your pilot without re-platforming, consider an overlay solution like Sapia.ai. Our platform integrates with your existing ATS to provide structured, mobile-friendly interviews to 100% of candidates. It also includes built-in rubrics and blind scoring features.
Use this checklist to track progress as you implement changes across your recruitment process:
Most unfair hiring is a “thousand paper cuts,” not “one villain.”
Give every applicant a structured, blind, mobile-friendly interview at the point of application. Then, score their responses against clear rubrics, and communicate predictably throughout the process. Finally, close the loop with feedback regardless of outcome.
When you fix the operational chaos, fairness improves—as do speed and conversions. Fortunately, you know exactly what to do to cut racial, age, and gender discrimination from your hiring processes.
Want to see a live example on your highest-volume role? Book a demo of Sapia.ai today!
Unfair hiring practices create unequal barriers, use inconsistent criteria, and/or produce unexplainable outcomes. Examples include CV-only screens that favour pedigree over skills, unstructured interviews that shift standards per candidate, and communication gaps that drive disproportionate drop-offs among underrepresented groups.
The 80% rule (also called the four-fifths rule) is a statistical test for adverse impact in hiring. If a protected group’s selection rate is less than 80% of the highest group’s selection rate, it may signal discriminatory hiring practices.
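As a worked example with hypothetical numbers: if Group A’s selection rate is 45% and Group B’s is 30%, the ratio is 0.30 / 0.45 ≈ 0.67, which falls below the 0.8 threshold and warrants a review.

```python
# Worked example of the 80% (four-fifths) rule using hypothetical numbers.
selection_rates = {
    "group_a": 45 / 100,  # 45 of 100 applicants selected -> 0.45
    "group_b": 30 / 100,  # 30 of 100 applicants selected -> 0.30
}

highest = max(selection_rates.values())
for group, rate in selection_rates.items():
    ratio = rate / highest
    verdict = "review for adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {verdict}")
# group_b: ratio = 0.30 / 0.45 ≈ 0.67 -> review for adverse impact
```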
Compare stage conversion rates, interviewer variance, and time-in-stage by demographic group and location. Look for patterns—different drop-off rates, inconsistent scoring, long silences—that indicate systemic problems rather than isolated incidents. Fortunately, audit trails and rubric-based decisions make these patterns visible. Once the issues are visible, fix them promptly.
Watch for CV-first screening, “culture fit” decisions with no definition of what “culture fit” is, different processes by site or referral source, multi-week silences, and black-box automation with no rationale. These signals produce chaos, which can lead to unfair outcomes for certain candidates.
Use an overlay solution that integrates with your existing ATS. Tools like Sapia.ai sit between application and CV review, delivering structured interviews and blind scoring while feeding results back into your current recruitment system. As such, no “rip-and-replace” project is required.
Use templated feedback tied to your rubrics. Focus on what candidates did or did not demonstrate against specific criteria. Avoid vague statements or comparisons to other applicants. Frame feedback as developmental: “We were looking for X; your response showed Y.” And document everything.