Maybe We Really Want More and Better AI Monitoring Employment Decisions

NEWSLETTER VOLUME 1.15 | August 17, 2023

Editor's Note


Like many states, Congress is trying to figure out how to regulate the use of AI in employment decisions. The latest is a proposed law cleverly called the No Robot Bosses Act. 

 

Its main provisions require human oversight of employment decisions whenever tools using AI are part of the decision-making process. 

 

I have no idea whether this will pass or what it will look like if it does. But the issue is complex and demonstrates some potentially incorrect assumptions about where the bias is, how AI works and is used, and how employment decisions get made in general.  

 

AI systems are biased and may make harmful predictions and suggestions because humans are biased and make harmful employment decisions, often unintentionally. So requiring biased humans to oversee biased AI systems built on biased data may not be the best approach. 

 

Computers are way better than people at auditing the outcomes of their work because they don't care whether they are right or not. They especially don't care if they get fired. Humans do. 

 

The key to using AI in employment decisions is to require employers to monitor and audit the outcomes of all employment decisions on hiring, promotion, and workforce composition and then report the data. 

 

Transparency and accountability have never been easier, faster, or more useful in reducing discrimination. Multivariate regression analysis and reporting are built into many AI systems, including tools that offer pay equity and diversity audits. When you can see and monitor what's happening in your organization and assess the outcomes of your practices, you can address issues more quickly. 
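The kind of regression-based pay equity audit described above can be sketched in a few lines: regress pay on legitimate factors (tenure, job level) plus a protected-class indicator, then examine the indicator's coefficient. This is a minimal illustration on synthetic data — every number, variable name, and the injected pay gap are invented for the example, not drawn from any real audit tool:

```python
# Hypothetical pay-equity audit via multivariate regression (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 500
tenure = rng.uniform(0, 20, n)   # years of service
level = rng.integers(1, 6, n)    # job level, 1 through 5
group = rng.integers(0, 2, n)    # protected-class indicator (0/1)

# Simulated pay process with a deliberately injected $3,000 gap for the group.
salary = (40_000 + 2_000 * tenure + 8_000 * level
          - 3_000 * group + rng.normal(0, 2_000, n))

# Fit salary ~ intercept + tenure + level + group by least squares.
X = np.column_stack([np.ones(n), tenure, level, group])
coef, *_ = np.linalg.lstsq(X, salary, rcond=None)

# coef[3] estimates the pay difference attributable to group membership
# after controlling for tenure and level; a large magnitude flags a
# disparity worth investigating.
print(f"estimated group pay gap: {coef[3]:,.0f}")
```

The point of the controls is that a raw average pay gap can be explained by tenure or level; the regression isolates the residual gap that those legitimate factors cannot explain, which is what an audit actually needs to surface.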

 

The goal of these laws should be prevention rather than liability. I'd like to see more safe harbor provisions and incentives for employers to not only monitor and report but also to correct any problems. 

 

That probably means more and better AI, not less. But I'm not sure that's where legislators are headed. 

 

- Heather Bussing

 

The No Robot Bosses Act Aims to Regulate Workplace AI

by Adam Forman, Alexander Franchilli, Scarlett Freeman, Nathaniel Glasser, and Frances Green

at Epstein Becker & Green 

 

On July 20, 2023, U.S. Senators Bob Casey (D-PA) and Brian Schatz (D-HI) introduced the “No Robot Bosses Act.” Other than bringing to mind a catchy title for a dystopian science fiction novel, the bill aims to regulate the use of “automated decision systems” throughout the employment life cycle and, as such, appears broader in scope than New York City’s Local Law 144 of 2021, about which we have previously written, and which New York City recently began enforcing. Although the text of the proposed legislation has not yet been widely circulated, a two-page fact sheet released by the sponsoring Senators outlines the bill’s pertinent provisions regarding an employer’s use of automated decision systems affecting employees. The bill would: 

  • prohibit employers’ exclusive reliance on automated decision systems; 
  • require pre-deployment and periodic testing and validation of automated decision systems to prevent unlawful biases; 
  • require operational training; 
  • mandate independent, human oversight before using outputs; 
  • require timely disclosures of use, data inputs and outputs, and employee rights with respect to the decisions; and 
  • establish a regulatory agency at the U.S. Department of Labor (“DOL”) called the “Technology and Worker Protection Division.” 

The bill does not define with specificity “automated systems,” nor does it define or limit the term “employment decision.” The fact sheet, however, sets forth examples of automated systems potentially subject to the “No Robot Bosses Act,” including “recruitment software, powered by machine learning algorithms,” “automated scheduling software,” and “tracking algorithms” applicable to delivery drivers. These examples suggest a broad intended application that could include other types of monitoring technology. But the fact sheet does not provide examples of the nature or scope of “employment decision,” nor does it identify the industries or classes of employees subject to the law. Moreover, at this time, the bill is silent as to enforcement mechanisms, penalties, or fines for violations. 

In addition, Senators Casey and Schatz, joined by Cory Booker (D-NJ), have also introduced the “Exploitative Workplace Surveillance and Technologies Task Force Act of 2023.” Like the “No Robot Bosses Act,” the text of the bill is not yet available, but the Senators released a one-page fact sheet detailing that the proposed legislation would create a “dynamic interagency task force” led by the Department of Labor and the Office of Science and Technology Policy to study a range of issues related to automated systems and workplace monitoring technology. 

While these proposed bills are still in their early stages, lawmakers at the state, local, and federal levels continue to consider methods of regulating employment-related automated systems and artificial intelligence more broadly. At the same time, federal regulators and private plaintiffs are leveraging existing employment laws, including Title VII of the Civil Rights Act, in connection with employers’ use of technology that automates employment decisions. For example, the EEOC recently published a technical assistance memorandum alerting employers to, and assisting them in mitigating, risk when using “automated decision tools” in the workplace. Consequently, it is critical that employers, especially personnel involved in recruiting, hiring, and promotion, identify and assess potential risk in the use of AI tools in employment decision-making by: 

  1. Understanding and documenting the systems and vendors used in making employment decisions throughout the employment life cycle; 
  2. Assessing the need for an artificial intelligence governance framework, or other internal policies and procedures taking into account considerations related to safety, algorithmic discrimination, data privacy, transparency, and human oversight and fallback; 
  3. In conjunction with counsel, conducting impact assessments as to the use of automated systems; and 
  4. Ensuring compliance with all applicable laws governing automated decision systems and artificial intelligence. 
