
Laws and Regulations Governing AI Tools in Hiring
Employers that use AI tools in hiring can violate multiple laws enacted to prevent discrimination. Violations can lead to:
- Government investigations
- Lawsuits and class actions
- Public exposure and reputational damage
- Fines and penalties
- Loss of candidate trust and talent
United States
1. Title VII of the Civil Rights Act of 1964
Prohibits employment discrimination based on race, color, religion, sex, or national origin.
AI Risk: If your hiring algorithm disproportionately rejects candidates from protected groups, it's a violation—even if the bias is unintentional.
2. Americans with Disabilities Act (ADA)
Prohibits discrimination against qualified individuals with disabilities in all employment practices.
AI Risk: AI tools that penalize gaps in work history or nontraditional communication styles may inadvertently screen out disabled applicants.
3. Age Discrimination in Employment Act (ADEA)
Protects individuals aged 40 and over from discrimination in hiring, promotion, and other employment practices.
AI Risk: Algorithms trained on biased data may favor younger candidates, violating ADEA.
4. Equal Employment Opportunity Commission (EEOC) Guidance
The EEOC has clarified that employers are responsible for ensuring that AI hiring tools comply with federal anti-discrimination laws.
AI Risk: Vendors and employers can both be held liable if tools create a disparate impact.
5. New York City Local Law 144 (2023)
Requires employers using automated employment decision tools (AEDTs) for hiring or promotion to:
- Conduct annual bias audits by an independent auditor (a simplified impact-ratio sketch appears after this section)
- Notify candidates of AEDT usage
- Make audit results public
AI Risk: Failure to conduct audits or disclose results can lead to enforcement actions and fines.
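As a simplified illustration of the arithmetic such an audit reports for a scored AEDT, the sketch below assumes the formulation in which each group's "scoring rate" (the share of its candidates scoring above the sample median) is compared with the highest group's rate. The scores and group labels are invented; a real audit must be performed by an independent auditor and covers additional breakdowns such as intersectional categories.
```python
# Simplified illustration of a bias-audit impact ratio for a scored AEDT.
# All scores and group labels are hypothetical.
import statistics

# Hypothetical AEDT scores per candidate, tagged with a demographic group.
scores = [
    ("group_a", 72), ("group_a", 88), ("group_a", 65), ("group_a", 91),
    ("group_b", 58), ("group_b", 70), ("group_b", 61), ("group_b", 83),
]

median = statistics.median(score for _, score in scores)

def scoring_rates(scores, median):
    """Share of each group's candidates scoring above the sample median."""
    totals, above = {}, {}
    for group, score in scores:
        totals[group] = totals.get(group, 0) + 1
        above[group] = above.get(group, 0) + (score > median)
    return {g: above[g] / totals[g] for g in totals}

rates = scoring_rates(scores, median)
top = max(rates.values())
for group, rate in rates.items():
    print(f"{group}: scoring rate {rate:.2%}, impact ratio {rate / top:.2f}")
```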
International
1. General Data Protection Regulation (GDPR) – EU & UK
Regulates the processing of personal data and includes provisions on automated decision-making.
AI Risk: Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that significantly affects them (like hiring). Employers must provide transparency and a way to contest decisions.
2. EU AI Act (Approved 2024, Enforcement 2026)
Sets strict rules on “high-risk” AI systems, including those used in employment. Requires transparency, risk assessments, and human oversight.
AI Risk: Violations may include use of opaque or unexplainable hiring algorithms without documented safeguards.
Read more about the implications of these regulations on the blog.
Use the Compliance Checklist to evaluate whether your company is at risk.
