The recent Mobley v. Workday lawsuit has put a spotlight on how AI-driven hiring tools can discriminate against minorities and people with disabilities. But why does this happen?
The answer lies in how these tools are trained and how human bias becomes embedded in technology. Any AI system—especially those built on large language models (LLMs)—risks inheriting bias unless deliberate steps are taken to mitigate it. And because the internet, which forms much of their training data, is rife with negativity and skewed narratives, those biases inevitably surface in AI decisions.
Online content doesn’t reflect reality in a balanced way. Research consistently shows that negative material spreads faster, further, and lingers longer than positive content.
This creates a visibility problem: even if most original posts are neutral or positive, negative narratives take up more space in our feeds and shape perceptions disproportionately.
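To make that imbalance concrete, here is a minimal sketch with entirely made-up numbers: if negative posts are only a fifth of what gets published but are reshared five times as often, they end up as the majority of what a feed actually serves.

```python
# Toy arithmetic with made-up numbers: even when negative posts are a minority,
# engagement-weighted amplification lets them dominate what people actually see.

posts = {"negative": 200, "neutral_or_positive": 800}         # 20% of posts are negative
shares_per_post = {"negative": 10, "neutral_or_positive": 2}  # negativity spreads faster

impressions = {tone: count * shares_per_post[tone] for tone, count in posts.items()}
total = sum(impressions.values())

for tone, views in impressions.items():
    print(f"{tone}: {views / total:.0%} of feed impressions")

# negative: 56% of feed impressions
# neutral_or_positive: 44% of feed impressions
```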
This imbalance directly affects how disability and employment are discussed online.
A viral LinkedIn post criticizing ADA “overreach” is far more likely to dominate attention than a thoughtful article on inclusive hiring best practices.
Over time, these skewed narratives normalize exclusionary thinking and reinforce damaging stereotypes.
AI models learn what they see. If the training data overrepresents negative language about disability and work, the model absorbs those associations.
This creates a feedback loop: biased data trains biased models, biased models produce biased outcomes, and those outcomes generate yet more skewed content and records for the next round of training, further entrenching discriminatory practices.
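A rough sketch of how that absorption happens, using an invented six-snippet corpus: a simple bag-of-words learner effectively memorizes how often each term co-occurs with a negative label, so vocabulary that appears mostly in negative contexts inherits a negative weight.

```python
# Toy sketch (invented corpus): a model trained on skewed text inherits the skew.
# Counting label co-occurrence is roughly what a simple bag-of-words classifier
# learns as its feature weights.

from collections import Counter

# Hypothetical training snippets scraped from the web, labelled by tone.
corpus = [
    ("accommodation requests are a burden on the team", "negative"),
    ("ADA lawsuits are out of control", "negative"),
    ("disabled employee wins discrimination case", "negative"),
    ("our accessible onboarding improved retention", "positive"),
    ("great quarter for the engineering team", "positive"),
    ("new office opened downtown", "positive"),
]

def association(term: str) -> float:
    """Share of snippets containing `term` that carry a negative label."""
    hits = [label for text, label in corpus if term in text]
    counts = Counter(hits)
    return counts["negative"] / len(hits) if hits else 0.0

# Disability-related vocabulary shows up almost only in negative contexts,
# so any statistical learner will attach negative weight to it.
for term in ("accommodation", "ADA", "disabled", "team"):
    print(f"{term!r}: {association(term):.0%} negative contexts")
```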
When Applicant Tracking Systems (ATS) first appeared, their goal was simple: ensure every applicant received equal treatment. Over time, features were added to boost recruiter efficiency—ranking, filtering, and now AI-driven scoring. But no one stopped to question whether these tools were truly unbiased.
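Here is a hypothetical ranking rule, not any vendor's actual logic, that shows how an "efficiency" feature can smuggle bias into a score: penalizing employment gaps looks neutral, but gaps frequently trace back to illness, treatment, or caregiving.

```python
# Hypothetical ATS-style scoring rule (not any real vendor's logic):
# a seemingly neutral feature acts as a proxy that downranks disabled applicants.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    skills_matched: int        # keyword matches against the job description
    employment_gap_months: int

def score(c: Candidate) -> float:
    # The gap penalty looks like a productivity signal, but employment gaps
    # are often caused by illness, treatment, or caregiving.
    return c.skills_matched * 10 - c.employment_gap_months * 2

candidates = [
    Candidate("A", skills_matched=8, employment_gap_months=0),
    Candidate("B", skills_matched=9, employment_gap_months=18),  # stronger skills, medical gap
]

for c in sorted(candidates, key=score, reverse=True):
    print(c.name, score(c))

# Candidate A ranks above the better-matched Candidate B, and no validation
# step ever asks whether the gap penalty is actually job-related.
```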
In over two decades of building recruitment technology, I was never once asked to validate whether an algorithm produced unbiased results.
That points to an uncomfortable truth about hiring technology. What do these tools rely on? Data from the internet—full of bias, negativity, and skewed narratives.
When HR leaders and hiring managers repeatedly encounter negative content—such as “AI interviews flagged disabled candidates as unfit” or “accommodations are too costly”—it normalizes exclusionary thinking.
This leads to availability bias: decision-makers more easily recall negative disability-related anecdotes than positive ones, even if positive examples are far more common. Echo chambers—forums and social groups that complain about supposed “unfair advantages” for disabled workers—further entrench these views, making inclusive policies harder to champion.
AI in hiring doesn’t have to be biased—but without intentional design and oversight, it will inevitably reflect the worst of our social biases.