More on Regulation of AI and Privacy at Work

NEWSLETTER VOLUME 2.15 | April 10, 2024

Editor's Note


Unless the gridlock in Congress ends before the November election (lol), I don't see any of these bills passing before next year, if ever. And during election years, all sides are looking for hook issues to blow out of proportion and introduce legislation about. These bills tend to address the issue poorly because the people behind the legislation aren't actually trying to solve the problem.


What is the problem? In the workplace, it's privacy, fairness, bias, and misplaced reliance on data and computing for decisions that affect people's lives and livelihoods. And the people responsible for solving it are mostly, well, us—the decision makers.


None of this is easy to regulate: rules need to be broad enough to make a difference, but not so broad that they force organizations to redo all their processes and systems. And at least some of the HR Tech companies are concerned about these issues and are working on both the technology and educating users.


We've been so busy figuring out all the cool things we can do with data and computers, we haven't always considered the bigger context and what could go wrong.


It may be too late for some things. The legal test for privacy rights is whether a person would reasonably expect to have privacy under the circumstances. If not, there's no privacy. Now that everyone has a camera and video recorder in their pocket and surveillance systems in their doorbells and "smart" homes, we have thoughtlessly shifted our expectations of privacy in exchange for a false sense of security.


Views on privacy are cultural; sensible and wise people don't always agree. We even have different views on basic things like what counts as appropriate personal space when standing in line. For many, that changed during the pandemic when we taped the floors into six-foot increments.


Opinions about privacy also can change dramatically based on the issues involved. The same person may have very strong views about their rights to prevent people from coming on their property while also believing that it's appropriate to legislate issues regarding the sex lives, gender, and reproductive health of others. It's complicated.

At work in the US, people have privacy rights in bathrooms and locker rooms. Otherwise, employers can generally track employees' movements and productivity, and are free to search anything they control, including devices, computers, and internet search history.


So, we're going to have to rethink our concepts of legal and practical privacy in all contexts. Maybe these bills will spark some discussion and get people focused on the problem. But I won't hold my breath.


- Heather Bussing


Employers that use artificial intelligence – and developers that create AI systems – could be subject to extensive new laws under several bills introduced by federal legislators. While much of the existing legal landscape on AI centers on broad, overarching principles, Congress is now considering bills that home in on more specific issues like the workplace. We’ll outline the three bills that employers should care about most, covering issues ranging from overreliance on automated decision systems – or “robot bosses” – to workplace surveillance – or “spying bosses.”

Existing Federal AI Rules and Initiatives

Over the past several years, the federal government has ramped up its efforts to govern the development, design, and usage of AI. Here’s a sample of the laws, guidance, and standards already in place:

  • The AI in Government Act (enacted in 2020) requires the U.S. Office of Personnel Management to identify the skills and competencies needed for AI-related federal positions.
  • The National AI Initiative Act (enacted in 2021) establishes an overarching framework for a national AI strategy, along with the federal offices and task forces to implement it.
  • The AI Training Act (enacted in 2022) requires the Director of the U.S. Office of Management and Budget to establish or provide an AI training program for the federal acquisition workforce.
  • The EEOC AI and Algorithmic Fairness Initiative (launched in 2021) aims to ensure that AI tools used for hiring and other employment decisions comply with federal equal employment opportunity laws. EEOC guidance issued in 2022 makes clear that employers’ use of software, algorithms, and AI to assess job applicants and employees may violate the Americans with Disabilities Act. And another EEOC technical assistance document released last year warns employers that the agency will apply long-standing legal principles when evaluating possible Title VII violations related to the use of AI in employment-related actions.
  • The Executive Order On Safe, Secure, and Trustworthy AI (issued in 2023) contains new AI standards covering nearly every aspect of our daily lives, including many employment-related items such as initiatives to prevent AI-based discrimination. The executive order built on the White House’s Blueprint for an AI Bill of Rights (released in 2022). We previously covered the key employer takeaways in both the executive order and the blueprint.

Proposed New Rules: Top 3 Bills Employers Should Know About

  1. No Robot Bosses Act – S. 2419, introduced by Sen. Bob Casey (D-PA)

The aptly named “No Robot Bosses Act” would ban employers from relying exclusively on automated decision systems (ADS) to make “employment-related decisions” – which is broadly defined to include decisions at the recruiting stage through termination and everything in between (such as pay, scheduling, and benefits). The bill would protect not only employees and applicants but also independent contractors.

Employers would be barred from even using ADS output to make employment-related decisions unless certain conditions are met, such as the employer independently corroborating that output through meaningful human oversight. The bill would also impose additional requirements on employers (for example, training employees on how to use ADS) and establish a Technology and Worker Protection Division within the Department of Labor.

  2. Stop Spying Bosses Act – S. 262, introduced by Sen. Bob Casey (D-PA)

The “Stop Spying Bosses Act” targets (as its title suggests) invasive workplace surveillance. Technology that tracks employees – from their activity to their location – is growing more common. This bill would require employers that engage in surveillance (such as employee tracking or monitoring) to disclose it to employees and applicants. The disclosure would have to be timely and public, detailing the data being collected and how the surveillance affects the employer’s employment-related decisions.

The bill also would:

  • ban employers from collecting sensitive data, such as data collected while an individual is off duty or data whose collection interferes with union organizing; and
  • establish a new Privacy and Technology division at the Department of Labor to enforce and regulate workplace surveillance.

  3. Algorithmic Accountability Act – S. 2892, introduced by Sen. Ron Wyden (D-OR)

A proposed “Algorithmic Accountability Act” seeks to regulate how companies use AI to make “critical decisions,” including those that significantly affect an individual’s employment. For example, companies would be required to:

  • assess the impacts of automated decision systems when making critical decisions – which would include identifying (among many other factors) any biases or discrimination; and
  • provide related ongoing training and education for all relevant employees, contractors, or other agents.

The Federal Trade Commission (FTC) would be required to create regulations to carry out the purpose of the bill.

What Other Bills Are Under Consideration?

Here’s a sample of other types of bills that have been introduced:

Federal AI Framework

Proposed bipartisan legislation would provide a national framework for bolstering AI innovation while strengthening transparency and accountability standards for high-impact AI systems. Another comprehensive bill would establish guardrails for AI, establish an independent oversight body, and hold AI companies liable – through entity enforcement and private rights of action – when their AI systems cause certain harms, such as privacy breaches or civil rights violations.

AI Labeling and Deepfake Transparency

One bill aims to protect consumers by requiring developers of AI systems to include clear labels and disclosures on AI-generated content and interactions with AI chatbots. Another bill would require similar disclosures from developers and require online platforms to label AI-generated content.

Labeling deepfakes is “especially urgent” this year, according to a press release from one of the bill’s cosponsors, because “at least 63 countries,” representing nearly half the world’s population, “are holding elections in 2024 where AI-generated content could be used to undermine the democratic process.”

AI Cybersecurity and Data Privacy Risks

Several bills target cybersecurity and data privacy issues, including one that would make it an unfair or deceptive practice (subject to FTC enforcement) for online platforms to fail to obtain consumers’ consent before using their personal data to train AI models.

What’s Next

We’ll have a much better view of the chances of any of these proposals becoming law later this summer.
