Contribution · Application — Human Resources
AI Resume Screening (with Bias Risk Awareness)
Resume screening is the canonical 'high-risk AI' use case. AI can extract skills, normalize education, parse experience, and rank candidates, but employment AI is now explicitly high-risk under the EU AI Act (Annex III), requires independent bias audits under NYC Local Law 144, is subject to notice-and-consent rules under the Illinois AI Video Interview Act, and sits squarely within EEOC enforcement guidance under Title VII; Amazon's well-documented 2018 resume-screening bias remains the canonical cautionary tale. Safe deployments treat AI as structured extraction and retrieval, not ranking or rejection, with human recruiters making final shortlists.
Application facts
- Domain
- Human Resources
- Subdomain
- Talent Acquisition
- Example stack
- Claude Sonnet 4.6 for resume parsing and structured extraction · Pydantic schema for normalized skill / education / experience output · pgvector for semantic matching of job-requirement to candidate · Fairlearn or Aequitas for bias auditing across demographic cohorts · ATS integration (Greenhouse, Workday, Lever) for workflow
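The structured-extraction step in the stack above can be sketched with a Pydantic v2 schema. This is a minimal sketch, assuming Pydantic v2's `model_validate` API; the field names are illustrative, not a standard, and the schema deliberately omits attributes that could proxy for protected classes:

```python
from typing import List, Optional

from pydantic import BaseModel, Field


class Education(BaseModel):
    degree: str
    institution: str
    graduation_year: Optional[int] = None


class CandidateProfile(BaseModel):
    """Normalized extraction target. Deliberately excludes name, age,
    photo, and other fields that could proxy for protected classes."""
    skills: List[str] = Field(default_factory=list)
    education: List[Education] = Field(default_factory=list)
    years_experience: Optional[float] = None


# Validate a parsed-resume payload (values are illustrative)
profile = CandidateProfile.model_validate({
    "skills": ["python", "sql"],
    "education": [{"degree": "B.Tech", "institution": "IIT Delhi"}],
    "years_experience": 4.5,
})
```

Validation failures raise before anything reaches the ATS, which keeps malformed model output out of downstream matching.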
Data & infrastructure needs
- Job requisitions with required skills and experience
- Historic hiring outcomes, for auditing only (never as a training signal: training on them perpetuates past bias)
- Skills taxonomy / ontology (O*NET, ESCO, India NSQF)
- Bias-testing cohorts (where legally permitted to infer)
- Candidate-facing disclosures and consent records
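As a sketch of how the skills taxonomy above gets used: raw skill strings are mapped to canonical labels before any matching. The alias table below is illustrative; real deployments map to O*NET, ESCO, or NSQF codes:

```python
# Illustrative alias table; production systems map to O*NET/ESCO/NSQF codes
SKILL_ALIASES = {
    "py": "python",
    "python3": "python",
    "postgres": "postgresql",
    "js": "javascript",
}


def normalize_skill(raw: str) -> str:
    """Lowercase, trim, and collapse known aliases to one canonical label."""
    key = raw.strip().lower()
    return SKILL_ALIASES.get(key, key)


print(normalize_skill("  Python3 "))  # python
```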
Risks & considerations
- Disparate impact on protected classes (US Title VII; India Constitution Art. 16)
- Training data reflecting historic biased hiring (Amazon 2018)
- EU AI Act non-compliance — high-risk category obligations
- NYC LL 144 violations — missing bias audit or notice
- Candidate PII handling under GDPR / DPDPA
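The disparate-impact and LL 144 risks above are typically measured as an impact ratio: each group's selection rate divided by the highest group's rate. Fairlearn and Aequitas compute this for you; the core calculation can be sketched in plain Python (group labels and data below are illustrative):

```python
from collections import Counter


def impact_ratios(outcomes):
    """Selection rate per group, divided by the highest group's rate.

    `outcomes` is an iterable of (group, was_selected) pairs. A ratio
    below 0.8 is the classic four-fifths-rule red flag.
    """
    selected, total = Counter(), Counter()
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}


# Toy cohort: group A selected 8/10, group B selected 4/10
outcomes = (
    [("A", True)] * 8 + [("A", False)] * 2
    + [("B", True)] * 4 + [("B", False)] * 6
)
print(impact_ratios(outcomes))  # {'A': 1.0, 'B': 0.5}
```

Here group B's ratio of 0.5 falls well under the four-fifths threshold and would trigger investigation before deployment.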
Frequently asked questions
Is AI resume screening legal?
Conditionally. Many jurisdictions impose specific requirements: NYC Local Law 144 mandates independent bias audits for automated employment decision tools; Illinois and Maryland regulate AI-assisted video interviews; the EU AI Act classifies hiring AI as high-risk, with documentation, risk-management, and human-oversight obligations. In India, the constitutional guarantee of equality in public employment (Art. 16) and the Equal Remuneration Act apply to employer decisions.
Can AI discriminate even without intent?
Yes: disparate impact is well documented. Amazon's 2018 system was trained on historic resumes that reflected gender-imbalanced hiring, and the model learned to penalize resumes containing the word 'women's' (as in 'women's chess club'). Any production system requires demographic bias auditing across protected classes whose data can lawfully be collected or inferred, counterfactual testing, and ongoing drift monitoring.
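The counterfactual-testing idea can be sketched directly: swap a demographic-coded term, re-score, and flag any score movement. The scorer below is a deliberately biased toy stand-in (not a real model) so the check visibly trips; the swap list and threshold are assumptions:

```python
def counterfactual_variants(text, swaps):
    """One variant per applicable swap of a demographic-coded term."""
    return [text.replace(a, b) for a, b in swaps if a in text]


def counterfactual_gap(score_fn, text, swaps):
    """Largest absolute score change caused by any single term swap."""
    base = score_fn(text)
    return max(
        (abs(score_fn(v) - base) for v in counterfactual_variants(text, swaps)),
        default=0.0,
    )


def toy_scorer(text):
    # Stand-in for a real model score; deliberately biased so the test trips
    return 0.5 - (0.2 if "women's" in text else 0.0)


gap = counterfactual_gap(
    toy_scorer,
    "captain, women's chess club",
    [("women's", "men's")],
)
print(gap)  # 0.2 -> any nonzero gap is grounds to flag the model
```

In a real audit the same harness runs over a large resume corpus and many swap lists, and any systematic gap blocks deployment.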
What deployment pattern is safest?
Use AI for structured extraction (skills, education, years of experience) and for retrieval (finding candidates who match stated requirements), but keep scoring, ranking, and rejection decisions with human recruiters. Record-keeping should cover inputs, outputs, and the identity of the human reviewer. Provide candidate-facing notice consistent with NYC LL 144 and EU AI Act transparency obligations.
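The record-keeping half of this pattern can be sketched as an append-only JSON-lines audit log. This is a minimal sketch under stated assumptions; the field names are illustrative, not mandated by LL 144 or the AI Act:

```python
import datetime
import json
from dataclasses import asdict, dataclass, field


def _utc_now() -> str:
    return datetime.datetime.now(datetime.timezone.utc).isoformat()


@dataclass
class ScreeningRecord:
    candidate_id: str
    extracted_profile: dict   # the AI's structured-extraction output
    matched_requisition: str  # retrieval result, not a ranking or score
    reviewer: str             # the human who made the decision
    decision: str             # 'shortlist' / 'reject' / 'hold'
    timestamp: str = field(default_factory=_utc_now)


record = ScreeningRecord(
    candidate_id="cand-001",
    extracted_profile={"skills": ["python"], "years_experience": 4.5},
    matched_requisition="req-042",
    reviewer="recruiter@example.com",
    decision="shortlist",
)
log_line = json.dumps(asdict(record))  # append to an immutable audit log
```

Note that the record stores no score from the model: the AI contributes the profile and the match, and the decision field is filled only by the named human reviewer.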
Sources
- EEOC — AI and Title VII guidance — accessed 2026-04-20
- NYC Local Law 144 — Automated Employment Decision Tools — accessed 2026-04-20
- EU AI Act — Annex III high-risk categories — accessed 2026-04-20