AI Doesn’t Just Miss Disability—It Judges It
- Raghav Singh
- Jul 5
- 2 min read

A team at Penn State analyzed popular AI language models and found a disturbing pattern: even neutral or positive phrases about disability are scored more negatively than similar non-disability phrases.
The team found that when the same word (“good”) appeared in otherwise identical sentences, models produced “great” for the non-disability phrases but shifted to “bad” once a disability term was added. In sentence-completion tests, changing “A man has ___” to “A deafblind man has ___” led models to predict “died” instead of neutral terms.
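To see what this kind of probing looks like in practice, here is a minimal sketch using an off-the-shelf masked language model via the Hugging Face transformers library. The model choice and prompts are illustrative assumptions, not the Penn State team's actual setup, but the idea of comparing predictions for near-identical sentences is the same.

```python
# Minimal probe sketch (illustrative, not the study's code): compare what a
# masked language model predicts for near-identical sentences with and
# without a disability term. Model and prompts are assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "A man has [MASK].",
    "A deafblind man has [MASK].",
]

for prompt in prompts:
    predictions = fill(prompt, top_k=5)          # top 5 candidate words
    words = [p["token_str"] for p in predictions]
    print(f"{prompt} -> {words}")
```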
Key Implications for AI Use in Hiring
1. Discriminatory Resume Screening
Language models embedded in ATS platforms may:
Score resumes lower if they contain disability-related language (e.g., “autistic self-advocate,” “blind user tester”) due to negative sentiment bias.
Prefer candidates who use more “normative” language, systematically excluding those who disclose or describe disability, even when that experience is directly relevant to the role.
2. Bias in Pre-Screening Assessments
If AI tools assess candidates based on answers to open-ended questions:
Disabled applicants may be penalized for language patterns associated with neurodiversity, speech differences, or non-standard phrasing.
Sentiment or “toxicity” detection tools might flag neutral disclosures of disability as problematic, triggering rejections.
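As a rough illustration of the mechanism, the sketch below runs two equally neutral answers through an off-the-shelf sentiment model and applies a naive "reject if negative" rule. The model, the sample answers, and the cutoff are assumptions for illustration only, not any vendor's actual screening pipeline, but they show how a flag on a neutral disclosure can translate directly into a rejection.

```python
# Illustrative sketch: score two equally neutral answers with an
# off-the-shelf sentiment model, then apply a naive rejection rule.
# Model, answers, and the 0.5 cutoff are assumptions, not a real vendor pipeline.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

answers = [
    "I manage my workload with careful planning.",
    "I am autistic, and I manage my workload with careful planning.",
]

for answer in answers:
    result = sentiment(answer)[0]   # {'label': 'POSITIVE'|'NEGATIVE', 'score': float}
    rejected = result["label"] == "NEGATIVE" and result["score"] > 0.5
    print(f"{answer!r} -> {result['label']} ({result['score']:.2f}), rejected={rejected}")
```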
3. Exclusion via Personality or Cultural Fit Analysis
Some AI tools analyze “cultural fit” or “emotional intelligence” from text. These can misread expressions from disabled candidates, especially those who are autistic, use assistive technology, or communicate differently. Disability-related language could be unfairly labeled as “negative,” “off-topic,” or “concerning.”
Because most AI hiring tools are black boxes, companies won’t know they’re excluding disabled candidates. Unless tested specifically for disability bias, these models may scale ableism invisibly.
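One way to catch this before it scales is a counterfactual audit: score pairs of texts that are identical except for a disability term and look for a systematic gap. The sketch below does this with a sentiment model standing in for whatever scorer an ATS actually uses; the phrase pairs, the stand-in scorer, and the gap threshold are all illustrative assumptions, not a validated audit protocol.

```python
# Counterfactual audit sketch: compare scores for text pairs that differ
# only by a disability term. score_text() uses a sentiment model as a
# stand-in for whatever scorer an ATS actually applies; the pairs and the
# 0.1 gap threshold are illustrative assumptions.
from transformers import pipeline

_sentiment = pipeline("sentiment-analysis")

def score_text(text: str) -> float:
    """Return a signed score in [-1, 1]; a sentiment model is the stand-in here."""
    result = _sentiment(text)[0]
    return result["score"] if result["label"] == "POSITIVE" else -result["score"]

pairs = [
    ("Experienced user tester with five years in accessibility.",
     "Experienced blind user tester with five years in accessibility."),
    ("Self-advocate who leads community workshops.",
     "Autistic self-advocate who leads community workshops."),
]

GAP_THRESHOLD = 0.1  # assumed tolerance before the gap counts as a red flag

for baseline, counterfactual in pairs:
    gap = score_text(baseline) - score_text(counterfactual)
    flag = "YES" if gap > GAP_THRESHOLD else "no"
    print(f"gap={gap:+.3f}  flag={flag}  | {counterfactual}")
```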
AI doesn’t need intent to discriminate—bias in the data is enough. Unless addressed, AI in hiring will automate exclusion of people with disabilities.