AI in Hiring Is Now Regulated: How Employers Can Navigate the 2026 State Law Patchwork
Illinois and New York City already regulate AI in employment decisions, and Colorado's AI Act takes effect June 30, 2026. Learn about bias audits, notice requirements, and multi-state compliance strategies for HR teams.
If your organization uses AI-powered tools for recruiting, screening resumes, evaluating employee performance, or making promotion decisions, 2026 is the year compliance can no longer be an afterthought. A wave of state and local laws regulating artificial intelligence in employment has taken effect — or will within months — creating a patchwork of requirements that HR teams must navigate carefully.
Illinois's HB 3773 took effect on January 1, 2026, making it a civil rights violation to use AI in employment decisions that result in discrimination. Colorado's Artificial Intelligence Act (SB 24-205) follows on June 30, 2026, with sweeping requirements for impact assessments, risk management programs, and mandatory reporting of algorithmic discrimination. And New York City's Local Law 144 — the first law of its kind in the nation — continues to enforce annual bias audits for automated employment decision tools.
Meanwhile, the federal government has issued guidance rather than binding rules. The EEOC has made AI a strategic enforcement priority, and CDC/NIOSH published a 2026 science bulletin introducing the concept of "algorithmic hygiene" for managing AI-related workplace hazards. But without a comprehensive federal AI employment law, employers operating across state lines face a genuinely complex compliance challenge.
For HR technology teams, the question is no longer whether AI regulation is coming — it's how to build systems and processes that satisfy the most demanding requirements while maintaining the efficiency gains AI promises.
The State-by-State Landscape
Illinois: AI as a Civil Rights Issue
Illinois HB 3773 is arguably the broadest state AI employment law enacted to date. By amending the Illinois Human Rights Act (IHRA), the law treats discriminatory AI use in employment as a civil rights violation — not just a regulatory infraction. Key provisions include:
- Broad scope: The law covers any machine-based system that generates outputs — predictions, recommendations, or decisions — influencing hiring, promotion, training, discipline, or termination. This includes both predictive analytics and generative AI tools.
- Disparate impact liability: Discrimination need not be intentional. If an AI tool has the effect of disproportionately disadvantaging a protected class (race, sex, age, disability, religion, sexual orientation, pregnancy, military status, and others under the IHRA), the employer may be liable.
- Notice requirements: Employers must notify applicants and employees whenever AI is used to influence or facilitate any employment decision. Employers should also monitor Illinois Department of Human Rights rulemaking for any additional implementation details on notice content and delivery.
- Proxy variable ban: The law specifically prohibits using zip codes or other variables that serve as proxies for protected characteristics.
- Private right of action: Individuals can file complaints through the Illinois Department of Human Rights and pursue civil claims, meaning enforcement isn't limited to a government agency.
Notably, Illinois does not mandate formal bias audits — but the private right of action creates a strong incentive for employers to audit proactively, since discriminatory outcomes can lead directly to civil litigation.
Colorado: The Most Prescriptive Framework
Colorado's Artificial Intelligence Act (SB 24-205) takes effect on June 30, 2026, and imposes the most detailed compliance obligations of any state law. It applies to "deployers" (employers using AI) of "high-risk AI systems" — defined as systems that are a substantial factor in making consequential decisions about employment, including hiring, promotion, and termination. Requirements include:
- Impact assessments: Before deploying any high-risk AI system, employers must conduct a documented impact assessment addressing the system's purpose, data sources, protections against discrimination, and any third-party testing results. These assessments must be updated regularly.
- Risk management programs: Employers must implement a written risk management program governing how AI risks are identified, monitored, and mitigated across all employment decisions.
- Annual reviews: Each deployed high-risk AI system must be reviewed annually for algorithmic discrimination — defined as unlawful differential treatment or adverse impact on the basis of protected characteristics.
- Notice and human appeal: Employers must notify individuals when AI plays a substantial role in a consequential decision about them, inform them of their right to correct personal data used by the system, and provide access to human review of adverse decisions where feasible.
- Mandatory reporting: If an employer discovers algorithmic discrimination in a deployed system, it must notify the Colorado Attorney General within 90 days.
- Penalties: Violations can result in penalties of up to $20,000 per instance, enforced exclusively by the Attorney General.
One important compliance note: employers who adopt recognized AI governance frameworks — such as the NIST AI Risk Management Framework or ISO 42001 — and follow the law's procedural safeguards may establish a rebuttable presumption of compliance, potentially shielding them from liability.
New York City: The Bias Audit Pioneer
New York City's Local Law 144, which has been in effect since July 2023, remains the best-established AI employment regulation in the country. It requires:
- Annual independent bias audits: Any automated employment decision tool (AEDT) used for hiring or promotion must undergo a bias audit by an independent auditor within the prior year. The audit must calculate selection rates and impact ratios across race, ethnicity, and sex categories per EEO-1 Component 1 standards.
- Public disclosure: Employers must publish a summary of the most recent bias audit results on their website.
- Advance notice: Candidates must receive notice at least 10 business days before an AEDT is used in their assessment, with an explanation of the tool's purpose and an opportunity to request an alternative evaluation process.
- Penalties: The NYC Department of Consumer and Worker Protection enforces the law, with fines between $500 and $1,500 per violation per day.
While Local Law 144 applies only within New York City, its influence has been significant — many of the state laws that followed have borrowed its core concepts.
Federal Guidance: The Floor, Not the Ceiling
No comprehensive federal law currently regulates AI in employment decisions, but federal agencies have established a baseline that state laws build on.
EEOC: AI and Title VII
The EEOC has made clear that existing federal anti-discrimination laws — Title VII, the ADA, the ADEA, and GINA — fully apply to AI-driven employment decisions. In its 2023 technical assistance document on AI and disparate impact under Title VII, the agency emphasized several critical points:
- Employer accountability: Employers bear full responsibility for AI tool outcomes, even when the tool is built by a third-party vendor. Vendor assurances of "bias-free" technology do not insulate employers from liability.
- Four-fifths rule: The EEOC applies the four-fifths (80%) rule from the Uniform Guidelines on Employee Selection Procedures to AI selection rates. If a tool selects members of a protected group at less than 80% of the rate for the most favored group, it may indicate adverse impact warranting further scrutiny.
- Business necessity defense: If adverse impact is identified, employers must demonstrate that the AI tool is job-related and consistent with business necessity — and that no less discriminatory alternative is reasonably available.
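The four-fifths calculation above is simple enough to sketch directly. The following is a minimal illustration (with made-up selection numbers, not real audit data) of how selection rates and impact ratios are computed and compared against the 80% threshold:

```python
def impact_ratios(outcomes):
    """Compute selection rates and impact ratios per group.

    `outcomes` maps each group label to (selected, total) counts.
    Each impact ratio divides a group's selection rate by the highest
    group's rate; ratios below 0.8 may indicate adverse impact under
    the four-fifths rule.
    """
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items() if tot}
    top = max(rates.values())
    ratios = {g: r / top for g, r in rates.items()}
    flagged = [g for g, r in ratios.items() if r < 0.8]
    return rates, ratios, flagged

# Illustrative numbers only: 48 of 120 male applicants selected (40%),
# 30 of 110 female applicants selected (~27%)
rates, ratios, flagged = impact_ratios({
    "male": (48, 120),
    "female": (30, 110),
})
# The female selection rate is ~68% of the male rate, below the
# 80% threshold, so the tool would warrant further scrutiny.
```

Note that a ratio below 0.8 is a screening signal, not a legal conclusion; the EEOC treats it as grounds for closer review, after which the job-relatedness and business-necessity analysis applies.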
The EEOC's enforcement posture was underscored by its settlement with iTutorGroup, where the company agreed to pay $365,000 after its AI recruiting software automatically rejected applicants over certain ages.
NIOSH: Algorithmic Hygiene
On the workplace safety side, CDC/NIOSH published a 2026 science bulletin introducing the concept of "algorithmic hygiene" — a framework that adapts traditional industrial hygiene principles to AI-enabled workplaces. The bulletin, authored by John P. Sadowski, Ph.D. and John Howard, M.D., makes several important points for employers:
- AI as a hazard modifier: Rather than treating AI as a standalone hazard category, NIOSH recommends understanding AI as a software-driven modifier of existing workplace hazards. Algorithms don't create new physical hazards directly, but they alter risk profiles of the systems and processes they control.
- Psychosocial risks matter: Algorithm-driven changes can create genuine psychosocial hazards — increased stress from productivity monitoring, reduced job autonomy, greater cognitive load, and intensified work pace. NIOSH recommends these be managed with the same rigor as physical hazards.
- Use existing frameworks: Occupational health professionals can use established hazard identification, exposure assessment, and hierarchy-of-controls methods to evaluate AI-related risks, even when the algorithmic system itself is opaque.
For HR technology teams, this NIOSH guidance is significant because it frames AI workplace risks within an established regulatory vocabulary — meaning OSHA could eventually use these concepts to evaluate employer practices around AI-driven work management.
What Employers Should Do Now
With Illinois already in effect, Colorado taking effect in less than three months, and NYC Local Law 144 continuing to evolve, employers using AI in any aspect of employment need to act now. Here is a practical compliance roadmap:
1. Inventory All AI Tools Used in Employment Decisions
Start with a comprehensive audit of every AI-powered tool your organization uses that touches employment decisions:
- Resume screening and applicant tracking systems
- Video interviewing platforms with AI-driven scoring
- Chatbots that screen or qualify candidates
- Performance evaluation tools using predictive analytics
- Employee monitoring systems that generate productivity scores
- Workforce planning or scheduling tools driven by algorithms
Document each tool's purpose, vendor, data inputs, and which employment decisions it influences. This inventory is the foundation for compliance across all jurisdictions.
2. Implement the "Highest Common Denominator" Approach
If your organization operates in multiple states — or hires remote workers in Illinois, Colorado, or New York City — apply the most stringent requirements across all operations rather than trying to maintain jurisdiction-specific policies. In practice, this means:
- Conduct annual bias audits for all AI employment tools (required by NYC, strongly advisable everywhere)
- Provide notice to every applicant and employee when AI influences their assessment (required by all three jurisdictions)
- Offer human review of adverse AI-driven decisions (required by Colorado, good practice everywhere)
- Document impact assessments for each high-risk system (required by Colorado, valuable evidence of due diligence elsewhere)
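The "highest common denominator" logic amounts to taking the union of each jurisdiction's obligations. A minimal sketch, using simplified requirement labels condensed from the summaries above (the labels themselves are shorthand, not statutory terms):

```python
# Per-jurisdiction obligations, simplified from the article's summaries
REQUIREMENTS = {
    "IL":  {"notice", "proxy_variable_ban"},
    "CO":  {"notice", "impact_assessment", "risk_management_program",
            "annual_review", "human_appeal", "ag_incident_reporting"},
    "NYC": {"notice", "annual_bias_audit", "public_audit_summary",
            "advance_candidate_notice"},
}

def compliance_baseline(jurisdictions):
    """Union of obligations: the 'highest common denominator' policy set."""
    baseline = set()
    for j in jurisdictions:
        baseline |= REQUIREMENTS.get(j, set())
    return baseline

# An employer with workers in all three jurisdictions applies everything
policy = compliance_baseline(["IL", "CO", "NYC"])
```

Running the single combined policy everywhere trades some over-compliance in lighter-touch states for one consistent process, which is usually cheaper to operate than three diverging ones.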
3. Build Vendor Accountability Into Procurement
When selecting or renewing AI vendors, include contractual provisions that address compliance:
- Require vendors to disclose the logic and data sets used by their tools
- Negotiate access to bias audit data and validation studies
- Include indemnification clauses for discriminatory outcomes
- Require vendors to support compliance with specific state laws — including providing the product name, developer information, and data categories that Illinois requires in employee notices
4. Establish an AI Governance Committee
Create a cross-functional team — including HR, legal, IT, and compliance — responsible for:
- Reviewing and approving new AI tool deployments
- Overseeing annual bias audits and impact assessments
- Monitoring regulatory developments across all jurisdictions
- Managing incident response if algorithmic discrimination is discovered (Colorado's 90-day Attorney General notification requirement makes this particularly urgent)
5. Integrate NIOSH's Algorithmic Hygiene Principles
Even if workplace safety isn't the immediate focus, NIOSH's algorithmic hygiene framework provides a structured approach to evaluating AI risks that complements legal compliance:
- Assess how AI tools alter existing job hazards — including pace of work, cognitive demands, and decision autonomy
- Treat psychosocial impacts of algorithmic management (surveillance stress, reduced autonomy) as legitimate occupational health concerns
- Apply the hierarchy of controls: can the AI risk be eliminated, substituted, engineered around, or managed through administrative controls?
- Document all assessments and control measures as part of your broader safety and health program
Looking Ahead
The regulatory trajectory is clear: more states will regulate AI in employment, and the requirements will become more prescriptive. SHRM's State of AI in HR 2026 report found that 62% of organizations are now using AI somewhere in their operations, with recruiting being the dominant use case — exactly the area these laws target most directly.
Employers that build robust AI governance programs now won't just avoid penalties — they'll be positioned to adopt new AI tools faster and more confidently as the technology and regulatory landscape continue to evolve. The organizations that treat compliance as a one-time checkbox exercise will find themselves perpetually scrambling to catch up.
The technology is powerful, and the efficiency gains are real. But so are the risks — and in 2026, the legal consequences of ignoring them are no longer theoretical.
Sources
- Illinois HB 3773 — LegiScan
- Illinois Anti-Discrimination Law to Address AI Goes Into Effect January 2026 — National Law Review
- Colorado SB 24-205 — Colorado General Assembly
- Colorado's Artificial Intelligence Act: What Employers Need to Know — Ogletree Deakins
- Colorado Anti-Discrimination in AI Law (ADAI) Rulemaking — Colorado Attorney General
- Automated Employment Decision Tools (AEDT) — NYC Department of Consumer and Worker Protection
- Enforcement of Local Law 144 — New York State Comptroller
- Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence — EEOC
- What Is the EEOC's Role in AI? — EEOC
- iTutorGroup to Pay $365,000 to Settle EEOC Age Discrimination Suit — EEOC
- Practical Strategies to Manage AI Hazards in the Workplace — CDC/NIOSH
- The State of AI in HR 2026 Report — SHRM
- NIST AI Risk Management Framework — NIST
Frequently Asked Questions
Which jurisdictions regulate AI in employment decisions in 2026?
In 2026, Illinois (HB 3773, effective January 1, 2026) and New York City (Local Law 144, effective since July 2023) already regulate AI used in employment decisions, and Colorado's AI Act (SB 24-205) takes effect on June 30, 2026. Several other states have introduced similar legislation.
What does Illinois HB 3773 require?
Illinois HB 3773 amends the Illinois Human Rights Act to make it a civil rights violation to use AI in employment decisions in a way that discriminates against protected classes — even unintentionally. Employers must notify applicants and employees whenever AI influences hiring, promotion, or termination decisions, including disclosing the AI system's name, developer, and purpose.
What does Colorado's AI Act require of employers?
The Colorado Artificial Intelligence Act (SB 24-205), effective June 30, 2026, requires employers deploying high-risk AI systems in employment decisions to conduct impact assessments, implement risk management programs, provide notice to affected individuals, offer human appeal processes, and report any discovered algorithmic discrimination to the Colorado Attorney General within 90 days.
Are bias audits legally required for AI hiring tools?
It depends on the jurisdiction. New York City's Local Law 144 requires annual independent bias audits for automated employment decision tools before they can be used. Colorado requires annual reviews and impact assessments. Illinois does not mandate formal bias audits, but employers may face liability for discriminatory AI outcomes regardless.
What is NIOSH's "algorithmic hygiene" guidance?
NIOSH published guidance in 2026 introducing an "algorithmic hygiene" framework that encourages employers and safety professionals to assess AI-related workplace hazards using established occupational health methods — covering both physical risks from AI-controlled systems and psychosocial risks like increased monitoring stress and reduced worker autonomy.


