AI and Unintended Employment Law Consequences

NEWSLETTER VOLUME 2.1 | January 08, 2024

Editor's Note

Machines have been doing "the work" for a long time. There are things we couldn't have without them, like pretty much everything. But until now, a designated human was always required to be involved in the process.

 

After the machines completely take over and humans just have meetings and do emails all day . . . Wait—a bunch of us are there already. Sigh. But hey, the engagement scores are holding steady, so all must be well.

 

Big technological changes to the things we do now always bring new ways of doing things down the line. Possibilities we didn't see before open up. It takes humans a while to adapt and then figure out the next iteration. For example, we still measure engine power in horses even though horses don't power vehicles and haven't in a long time. It was a metaphor that helped people understand something new. The reality is that most of us don't know how either horses or engines work.

 

When technology is changing, it's also easy to get caught up in the details of how to make the latest updates do the things you could do yesterday while losing sight of the bigger picture and questions, like:

 

  • Are we doing the old thing better or just faster?
  • Is this what we want to be doing?
  • Do we even know what we want to be doing?
  • Does the technology we're using help us get to what we want to be doing?

 

We also develop technology because we can. With AI, sometimes we use the data that's available rather than the information we need. Data collected for one purpose may not work for other purposes, even when it's related. Then data creates more data. First, we measure things, then we track those measurements, then we have folders of PowerPoint slides showing the data about the measurements. Lather, rinse, repeat. How do we know what it's telling us when we're not even sure what it was to begin with?

We also decide to measure things based on the data we can get, despite the fact that the information doesn't really tell us what we need to know. For example, there will be data about this blog post that tells someone how many people clicked on it, where the reader found it on the internet, how much time the page was open, and how far in the article someone scrolled. (Hi Someone!) Yet, none of that information tells us whether the reader actually read it, understood it, learned something, or thought it was useful.

As generative AI starts to write blog posts designed for search engine optimization and to induce clicks and scrolls, it will be interesting to see how we adapt. Will someone develop a filter for human-written content? Will it matter? What do we do when AI just makes stuff up and we are flooded with even more misinformation?

 

Maybe critical thinking will become sexy.

 

We're in a time of big technological changes and we need to figure out where we want to go instead of just what we can do with this new thing.

Since employment law is based on the work people do, as that changes with AI, things are going to get a little weird for a while. Ultimately, it's probably time to rethink the relationships between work, time, and pay/value and whether it makes sense to keep doing it this way.

 

In the meantime, here's a great discussion of wage-and-hour and employee-classification questions when AI is doing "the work" and humans aren't involved or play limited roles.

 

- Heather Bussing

 

The increased use of artificial intelligence (AI) in the workplace has already raised issues about working time, proper classification, and discrimination. This alert addresses some of these issues.

Working Time and Classification Issues

The Fair Labor Standards Act (FLSA) has defined the 40-hour workweek and other compensable time for non-exempt employees since The War of the Worlds put Martians on the radio. AI was barely on science fiction’s mind. Now, AI is an everyday reality, and is especially plugged into the workplace. From recording employee productivity to maximizing efficiency in human resources (HR), AI holds tremendous potential for employers. But AI’s capabilities and detriments are also testing the limits of the FLSA and analogous state laws.

Additionally, as AI develops and its workplace presence increases, multiple unanswered questions regarding compensable time and inadvertent discrimination arise. Below, we outline potential legal issues that may occur as AI use expands in the employment network.

Using AI to Surveil and Monitor Employees

The COVID-19 pandemic expedited the use of AI-powered surveillance and monitoring tools for employee computer activity (and impromptu mouse jigglers as workarounds). Now, AI monitors the keystrokes, mouse activity, and/or webcams of many remote and in-person employees. Through that AI surveillance, employers can calculate a non-exempt employee’s compensable time without relying on the traditional clock-in-and-out.

But what happens when the entirety of an employee's job is not wired to a desk or computer? For instance, an AI monitor might not account for an employee's time if the employee prints and reads reports away from their workstation. Similarly, if the employee meets a client in person, the AI may not be able to capture that time.

If the employer relies only on the AI-manicured timesheet, it might assign the employee more at-station work. But once the unaccounted time is reported and added to the at-station tally, that employee's total could pass the 40-hour threshold for overtime pay.

The Band-Aid fix would be to make the employee responsible for recording their off-station time. However, the FLSA generally places the burden on the employer to accurately record all employee hours worked by non-exempt employees. So, as the AI technological and legal landscape evolves, the question remains of how to fully account for an employee’s time when AI monitoring is employed.

Employees Losing Their Exemption Status Due to AI

AI and robots might not be replacing all workers, but they may be changing employees’ exemption statuses. Currently, an employee may be exempt under the FLSA and analogous state laws under some tests if the employee exercises discretion and independent judgment regarding significant matters.

But what happens when the exempt employee only manages AI that chooses the best outcome in its technological wisdom? If the employee retains no discretion, then they may no longer be exempt. Yet, in this early stage of AI, where the AI has no ethical or moral sense, employers might still need an employee in the driver's seat to make appropriate choices. Those choices may be considered discretionary for purposes of exemption under the FLSA and analogous state laws. However, the true amount and level of employee discretion remains unclear.

Other novel questions that affect exemption status likewise remain open: Is it possible to hire or fire a robot for purposes of managerial exemptions? Does an employee "manage" a robot by adding new directives or by performing physical or technical maintenance? Exemption status may also be affected by state and local laws, which further muddles these scenarios.

Changing into Wearable Technology Might Be Compensable

Generally, time spent changing into gear that is integral to the work, such as scrubs at a pork processing plant, is compensable under the FLSA. As wearable technology in the workforce becomes commonplace, the time to put on the tech may become compensable. (For example, the half-hour spent by a warehouse employee to strap on an AI-integrated exoskeleton, especially in the bulky nascent stage of exoskeletons, might be compensable.) However, this raises additional issues. For example, an employee's lunch break is unpaid if the break is uninterrupted by work. Would wearing cumbersome technology be considered an interruption? Or would the employer compensate the employee to change out of, and then back into, the gear?

On the other hand, donning items such as a hardhat or an AI-imbued watch might not be compensable due to the insubstantial time they take to put on.

Turning Commute Time in an Autonomous Vehicle into Compensable Time

Compensable time during an employee’s drive to work was a non-issue with manually driven cars. Employees are generally not compensated for the time spent commuting before the start of the workday and after the end of the workday. For instance, a retail employee’s compensable time generally starts after clocking in at the store. Similarly, an office worker’s compensable time begins after arriving at the office and performing their first task.

With the accelerated development of autonomous vehicles (AVs), however, drivers might one day be legally allowed to take their eyes off the road. And, for some employees, their eyes would fixate on their work during the commute. Meaningful work performed during that commute could then count toward the 40-hour threshold for non-exempt employees. De minimis contributions, on the other hand, might not be compensable. But the line between compensable and non-compensable commute work is not yet clear, leaving the issue open.

Potential for AI’s Algorithmic Discrimination

Gone are the days of dredging through stacks of resumés or, for some companies, synchronous interviews. AI is utilized to weed out candidates who lack the qualifications set by the hirer. But not all HR dreams come true. Without proper data application, parameters, and oversight, AI processes can perpetuate discrimination and violate federal or state law. One blatant example is EEOC v. iTutorGroup, where an AI hiring tool automatically rejected women over 55 and men over 60. Less blatantly, in Baker v. CVS Health Corporation et al., it is alleged that CVS used AI facial recognition software to make credibility assessments akin to a lie detector test, which is illegal in Massachusetts. Another interesting question has recently arisen: Can third-party AI vendors be held liable for discrimination under Title VII? In Mobley v. Workday, Inc., Workday faces a federal suit alleging that the employee-screening AI it sells discriminates against minority and older candidates.

Takeaways

Ultimately, it is still critical for the employer to be wired into the hiring process. In AI's current stage, there is a large potential for algorithmic discrimination if the wrong boxes are checked or the incorrect data is used to control the AI's decision making. If hiring decisions are left entirely to AI, qualified candidates may be screened out, and rejected applicants may bring suit over perceived or actual discrimination.

The future of AI in the hiring process may eliminate these issues if vendors can incorporate oversight and other safeguards. To help address discriminatory issues, the EEOC and other entities have released guidance on using AI in the hiring process. But, for the time being, the types of AI allowed in the hiring process, and even whether applicants must be informed that AI is being used, remain open questions.
