By 2026, using AI to monitor employees or make hiring decisions isn’t just a tech experiment; it’s a legal minefield. Companies that treat AI tools like productivity apps or hiring assistants without understanding the law are already facing fines, lawsuits, and public backlash. This isn’t science fiction. It’s happening right now, in states like Colorado, California, and New York City, where new laws are forcing employers to rethink how they use AI at work.
AI That Makes Decisions Can’t Be Ignored
If your company uses AI to screen resumes, rate employee performance, or decide who gets promoted, you’re now under legal scrutiny. These tools aren’t neutral. They learn from past data, and if that data reflects bias, the AI will too. A hiring algorithm trained on resumes from mostly male engineers might automatically downgrade applications from women. A productivity tracker that flags slow typing speeds might unfairly target older workers or people with disabilities. That’s not efficiency. That’s discrimination.
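To see how bias gets baked in, here’s a deliberately simplified sketch in Python. The data is synthetic and the "hired" labels are hypothetical, not any vendor’s real model: because the historical labels held group B to a higher bar, the trained model scores group B lower even at identical skill.

```python
# Synthetic demonstration: a screening model inherits bias from its labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)        # same skill distribution in both groups

# Historical "hired" labels: group B was held to a higher bar (0.8 vs 0.0).
# This is the bias we pretend lives in the training data.
hired = (skill > np.where(group == 0, 0.0, 0.8)).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Score a matched pair: identical skill, different group.
pair = np.array([[0.5, 0.0], [0.5, 1.0]])
print(model.predict_proba(pair)[:, 1])  # group B scores lower at equal skill
```

Nothing in that code set out to discriminate; the model simply learned the pattern it was shown.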
Under Colorado’s Artificial Intelligence Act (CAIA), any AI system used in hiring, firing, or promotions is classified as a "High-Risk System." That means employers must do three things: audit the system annually for bias, tell workers when AI is involved, and give them a way to appeal decisions. Failure to do any of these can lead to state enforcement actions. And it’s not just Colorado. California’s rules are even broader.
California’s Three-Layer AI Law
In California, employers have to juggle three separate laws that all apply to AI in the workplace. First, the CPPA’s Automated Decision-Making Technology (ADMT) regulations require employers to prove their AI tools don’t violate anti-discrimination laws. No exceptions. If the tool affects hiring, pay, or promotions, it’s covered.
Second, the AI Transparency Act (SB 942) forces companies to clearly label AI-generated content, like deepfakes of employees or automated voice messages. If an AI-generated video is used in a performance review, the system must include a hidden digital watermark showing the tool’s name, version, and when it was used. Tampering with that watermark? That’s a $5,000-per-day fine.
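For a rough sense of what that disclosure metadata involves, here’s an illustrative sketch. Real latent disclosures are embedded in the content itself (often via provenance standards such as C2PA); this sidecar-file version, with made-up field and tool names, only shows the kind of information at stake, not a compliance mechanism.

```python
# Illustrative only: the fields involved in SB 942-style provenance
# (tool name, version, generation time), written to a sidecar file.
# Field names are assumptions, not the statute's schema.
import datetime
import hashlib
import json
import pathlib

def write_provenance(media_path: str, tool: str, version: str) -> None:
    data = pathlib.Path(media_path).read_bytes()
    record = {
        "tool_name": tool,
        "tool_version": version,
        "generated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "sha256": hashlib.sha256(data).hexdigest(),  # ties record to this exact file
    }
    pathlib.Path(media_path + ".provenance.json").write_text(json.dumps(record, indent=2))

pathlib.Path("review_clip.mp4").write_bytes(b"stand-in for real media")
write_provenance("review_clip.mp4", tool="ExampleGen", version="2.1")
```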
Third, the Generative AI Training Data Transparency Act (AB 2013) targets developers. If you’re building an AI tool that’s sold to employers, you have to disclose how it was trained. What data was used? Where did it come from? Did you test it for bias? If you don’t, you can be sued. And if your tool is used to create fake voice clones of employees? You’re liable.
These laws aren’t optional. They’re enforceable by city attorneys, county counsel, and the state AG. And they apply to every employer in California, no matter how small.
New York City: Audits and Public Disclosure
New York City’s Local Law 144 of 2021 has been in effect since July 2023, and it’s one of the strictest. Any employer using automated tools for hiring or promotions must:
- Have an independent auditor test the tool for bias by race/ethnicity and sex every year
- Post the audit summary and deployment date on their careers page
- Notify applicants in advance if AI will be used
- Tell applicants how to request an alternative selection process or a reasonable accommodation
Violations run from $500 to $1,500 each, and they stack: each day an unaudited tool is used, and each applicant who isn’t properly notified, can count as a separate violation. That adds up fast. A company using one AI tool on 500 applicants a month could rack up $60,000 in fines in a year. And that’s before lawsuits from workers who felt unfairly treated.
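A back-of-the-envelope sketch shows how those numbers compound. The violation counts below are hypothetical; the dollar amounts come from the law’s penalty schedule.

```python
# Hypothetical exposure math under Local Law 144. Violation counts are
# made up; the per-violation amounts ($500 for a first violation, $500
# to $1,500 for subsequent ones) come from the penalty schedule.
FIRST_VIOLATION = 500
SUBSEQUENT_MAX = 1_500

def annual_exposure(violations_per_month: int, avg_penalty: int) -> int:
    """Yearly fine total for a steady rate of substantiated violations."""
    return 12 * violations_per_month * avg_penalty

# e.g., five substantiated notice failures a month at roughly $1,000 each:
print(annual_exposure(5, 1_000))  # 60000: the ballpark figure cited above

# An unaudited tool used every business day accrues a separate violation
# per day, so a year of use adds:
print(260 * FIRST_VIOLATION, "to", 260 * SUBSEQUENT_MAX)  # 130000 to 390000
```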
Texas and Utah: The Light Touch
Not every state is cracking down. Texas’s TRAIGA law, effective January 2026, only bans intentional discrimination. No audits. No transparency. No data retention. Employers just have to avoid deliberately using AI to harm protected groups. And they get a 60-day window to fix mistakes before penalties kick in.
Utah’s UAIP law is simpler: if your AI interacts with a job candidate or employee, you must tell them. Plain and simple. "You’re talking to AI right now." And if that AI says something discriminatory? The employer is legally responsible, not the vendor. The AI’s words are your words.
These two states are trying to attract tech companies by avoiding heavy regulation. But for businesses operating across state lines, that creates a nightmare. Do you follow Texas’s light rules everywhere? Then you risk violating laws in California or Colorado. Do you follow California’s rules everywhere? You’re over-complying, but you’re safe.
Monitoring Tools Are Now Regulated Tools
Productivity software that tracks keystrokes, mouse movements, or website visits? If it’s used to decide promotions, bonuses, or terminations, it’s now an AI employment tool. That means it falls under CAIA, CPPA, and Local Law 144. You can’t just install it and forget it.
Imagine a monitoring tool that flags employees who take long breaks. If that tool disproportionately flags workers in certain neighborhoods, or those with caregiving responsibilities, it could be discriminating. Even if the company didn’t mean to, the law still holds it responsible. That’s why audits and bias testing aren’t optional anymore. They’re the new baseline.
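What does that bias testing look like in practice? One common first screen is the EEOC’s four-fifths rule: compare favorable-outcome rates across groups and flag any group whose rate falls below 80% of the best-off group’s. Here’s a minimal sketch with made-up counts, treating "not flagged by the break-time monitor" as the favorable outcome.

```python
# Four-fifths rule sketch with made-up numbers. "passed" = favorable
# outcome (here: not flagged by the break-time monitor).
def impact_ratios(passed: dict[str, int], total: dict[str, int]) -> dict[str, float]:
    rates = {g: passed[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = impact_ratios(
    passed={"group_a": 370, "group_b": 280},  # hypothetical counts
    total={"group_a": 400, "group_b": 400},
)
for group, ratio in ratios.items():
    # Under 0.8 means the group fares worse than 4/5 of the best rate.
    print(group, round(ratio, 2), "NEEDS REVIEW" if ratio < 0.8 else "ok")
```

A ratio below 0.8 doesn’t prove discrimination on its own, but it is exactly the kind of disparity an annual audit is supposed to surface and investigate.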
Worker Rights Are Expanding
Workers today have more power than ever. They now have the right to:
- Know when AI is making decisions about them
- Request a human review of AI-generated outcomes
- Request an alternative selection process instead of an AI-based assessment (in NYC)
- Be protected from AI-generated deepfakes or voice clones made without their consent
- Report algorithmic discrimination without fear of retaliation
In Colorado, if an AI system is found to have discriminated against a group, the employer must report it to the state attorney general within 90 days. In California, workers can sue if their likeness is used in a fake video without permission. These aren’t theoretical rights. They’re enforceable under state law.
What Happens If You Ignore This?
Companies that delay compliance are playing Russian roulette. Fines are just the start. Lawsuits from employees, class actions, negative media coverage, and loss of talent are real risks. A 2025 study by the National Employment Law Project found that companies with untested AI hiring tools had 37% higher turnover among underrepresented groups. That’s not just a diversity issue; it’s a financial one.
And the clock is ticking. By August 2026, every company operating in multiple states will need to have systems in place that meet the strictest standards. Waiting until June 30, 2026, to fix things in Colorado means you’ve already missed deadlines in California and New York.
What Should Employers Do Now?
Here’s what actually works:
- Map every AI tool you use in hiring, promotion, or performance review. Don’t skip the ones from third-party vendors.
- Identify which laws apply based on where your employees live. Colorado, California, and NYC are the big three.
- Start annual bias audits now. Use certified third-party auditors-don’t rely on internal teams.
- Train managers to explain AI use to employees. Transparency builds trust.
- Keep records. Store all AI inputs, outputs, and audit results for at least four years (a logging sketch follows this list).
- Update your employee handbooks. Add clear policies on AI use, appeals, and opt-outs.
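For the record-keeping item above, here’s a minimal sketch of what a decision log could look like. The schema, field names, and tool names are illustrative assumptions, not a statutory format; check each applicable law for exactly what to keep and for how long.

```python
# Illustrative decision log for AI employment tools. Field names and the
# retention period are assumptions, not a statutory schema.
import datetime
import json
from dataclasses import asdict, dataclass, field

RETENTION_YEARS = 4  # the strictest common floor cited in this article

def _now() -> str:
    return datetime.datetime.now(datetime.timezone.utc).isoformat()

@dataclass
class AIDecisionRecord:
    tool_name: str
    tool_version: str
    subject_id: str             # applicant or employee identifier
    inputs_summary: str         # what the tool was given
    output: str                 # what the tool decided or scored
    human_reviewer: str | None  # set when a human review happens
    timestamp: str = field(default_factory=_now)

def log_decision(record: AIDecisionRecord, path: str = "ai_decisions.jsonl") -> None:
    # Append-only JSON Lines: simple to retain and simple to hand to an auditor.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AIDecisionRecord(
    tool_name="ExampleScreener", tool_version="3.2",
    subject_id="cand-0042", inputs_summary="resume text",
    output="advance_to_interview", human_reviewer=None,
))
```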
Don’t wait for a lawsuit to force your hand. The legal landscape isn’t changing; it has already changed. The question isn’t whether you’ll need to comply. It’s whether you’ll comply before you’re penalized.
Can I use AI to monitor my employees’ productivity?
Yes, but only if you comply with local laws. In California and Colorado, you must disclose the tool’s use, test it for bias annually, and allow employees to request human review. In New York City, you must publish a bias audit of the tool and tell candidates how to request an alternative process. If you’re using it to make decisions about pay, promotions, or termination, it’s legally an employment decision tool, and it’s subject to strict rules.
What if I use AI from a third-party vendor like HireVue or Eightfold?
You’re still responsible. Even if the vendor built the tool, the law holds the employer accountable for discrimination, bias, or lack of transparency. Colorado’s CAIA and California’s CPPA rules make this clear: the deployer (you) owns the risk, not the developer. You must audit, disclose, and retain data, even for vendor tools.
Do I need to tell employees if I use ChatGPT for drafting job descriptions?
Only if it affects employment decisions. If ChatGPT is just helping you write a job ad, no disclosure is needed. But if you use it to screen resumes, chat with candidates, or evaluate them, then yes: you must disclose under Utah’s law and California’s transparency rules. The key is whether the AI is making a decision that impacts someone’s job.
What happens if my AI tool accidentally discriminates?
In Colorado and California, you must report it. Colorado requires employers to notify the state attorney general within 90 days of discovering algorithmic discrimination. California allows workers to sue for damages. Ignoring it doesn’t make it go away; it makes your liability worse. The law now expects proactive detection, not reactive damage control.
Is there a federal law on AI in the workplace?
No. There’s no comprehensive federal law yet. That’s why state laws are so important. You must comply with each state where you have employees. This patchwork of rules means most companies are adopting the strictest standards nationwide, like four-year record retention and annual bias audits, to avoid legal risk.
Employment law in 2026 isn’t about how hard people work. It’s about how fairly the tools that judge them work. The companies that succeed won’t be the ones using the most AI. They’ll be the ones using it responsibly.