Using AI in Employment Decisions Gets You More Information – And That May Be Biased Too

NEWSLETTER VOLUME 1.4 | June 02, 2023

Editor's Note

Using AI in Employment Decisions Gets You More Information – And That May Be Biased Too

The U.S. Equal Employment Opportunity Commission (EEOC) has issued guidance on artificial intelligence (AI) and the ways it may result in discrimination in employment selection decisions. "Selection" generally means any employment decision that could adversely affect someone because of a protected characteristic.

 

This article explains in detail what the guidance says and what situations it could apply to – basically anything where a machine learning tool makes recommendations or predictions that are used in employment decisions. 

 

The EEOC reminds employers that even when you are relying on a tool designed and developed by someone else, you are still on the hook for your employment decisions, no matter what tools you use to make them. You can't outsource or delegate your duty not to discriminate.

 

There are a couple of other things to keep in mind: 

 

  1. You may not know which tools use AI. The reality is that most of them do, and have for years. Ever since Big Data arrived about 20 years ago, tech has been all about what we can do with all that information. The process of sorting, comparing, finding patterns, and making predictions is basically machine learning or AI.
  2. Many of those tools were built around the Four-Fifths Rule (explained in the article below). This means the tools are not designed to help eliminate discrimination in hiring decisions. They are designed to produce results that will not violate the Four-Fifths Rule. There is a tolerance for discriminatory results built into the tools, which means you will sometimes get suggestions that result in discrimination.

 

You cannot outsource responsibility or liability for employment discrimination. AI can do amazing things. But you cannot rely on AI predictions and recommendations as being more objective or "right." These tools offer opinions, not facts. And they are always missing context, care, and compassion. 

 

AI offers new information. It can help us ask better questions and consider things we may not have otherwise. It does not give us answers. 

- Heather Bussing

 

EEOC Issues Guidance on Use of Artificial Intelligence Tools in Employment Selection Procedures Under Title VII

by James Paretti, Jr.

at Littler

 

On May 18, 2023, the U.S. Equal Employment Opportunity Commission (EEOC or “the Commission”), the federal agency charged with administering federal civil rights laws (including Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA), among others), issued a “technical assistance document” entitled, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” EEOC technical assistance documents are not voted upon or otherwise approved by the full Commission, and are not intended to create new policy, but rather to apply existing law and Commission policy to new or specific fact patterns. They do not have the force and effect of law, and do not bind the public in any way—rather, they purport to provide clarity with regard to existing requirements under the law. This latest technical assistance document, addressing artificial intelligence (AI) employment selection procedures under Title VII, follows on the Commission’s May 2022 guidance on the use of AI tools and the ADA. 

The technical assistance document begins by noting that while Title VII applies to all employment practices, including recruitment, monitoring, evaluation, and discipline of employees, it is intended to address AI issues only with regard to “selection procedures,” such as hiring, promotion, and firing. It defines “artificial intelligence” with reference to the National Artificial Intelligence Initiative Act of 2020 as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments,” and notes that in the employment context, this has typically meant reliance on an automated tool’s own analysis of data to determine which criteria to use when making decisions. The Commission offers a number of examples of AI tools used in employment selection procedures, including:

[R]esume scanners that prioritize applications using certain keywords; employee monitoring software that rates employees on the basis of their keystrokes or other factors; ‘virtual assistants’ or ‘chatbots’ that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements; video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and testing software that provides ‘job fit’ scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived ‘cultural fit’ based on their performance on a game or on a more traditional test. 

The document is expressly focused on potential disparate or adverse impact resulting from the use of such tools and does not address issues of intentional discrimination via the use of AI-driven tools in making employment selection decisions. Generally speaking, adverse or disparate impact may result when an employer uses a facially neutral test or selection procedure that excludes individuals based on protected characteristics such as sex, race, color, or religion in disproportionate numbers.1 An employer can justify the use of a neutral tool that nevertheless has an adverse impact where the use of such a tool is “job-related and consistent with business necessity” and there is no less-discriminatory alternative that is equally effective. The application of disparate impact principles, and the assessment of whether a selection tool is lawful under Title VII, is generally governed by the Uniform Guidelines on Employee Selection Procedures (UGESP) adopted by the EEOC in 1978.

Insofar as it does not create new policy, the scope of the technical assistance is limited. That said, it does include several key takeaways for employers using selection tools that incorporate or are driven by AI: 

  • Liability for Tools Designed or Administered by a Vendor or Third Party. The guidance notes that where an AI-powered selection tool results in disparate impact, an employer may be liable even if the test was developed or administered by an outside vendor. The EEOC recommends that in determining whether to rely on an outside party or vendor to administer an AI selection tool, the employer consider asking the vendor what steps it has taken to evaluate the tool for potential adverse impact. It further notes that where a vendor is incorrect in its assessment (for example, informing the employer that the tool does not result in an adverse impact when in fact it does), the employer may still be liable.
  • The “Four-Fifths Rule” is Not Determinative. UGESP has long provided that the “four-fifths” rule will “generally” be regarded as a measure of adverse impact—but that it is not dispositive. By way of background, the four-fifths rule provides that where the selection rate for any race, sex, or religious or ethnic group is less than 80 percent (four-fifths) of the rate of the group with the highest selection rate, that generally indicates disparate impact. For example, assume an employer uses a selection tool to screen 120 applicants (80 male, 40 female) to determine which advance and receive an interview. The tool determines that 48 men and 12 women should advance to the interview round. The “selection rate” of the tool is 60% for men (48/80) but only 30% for women (12/40). The ratio of the two rates is 50% (30/60). Because 50% is less than 80% (four-fifths), the tool would generally be viewed as having an adverse impact under the four-fifths rule (this calculation is worked through in the short code sketch following this list). The technical assistance document notes that while the four-fifths rule is a useful “rule of thumb,” it is not an absolute indicator of disparate impact—smaller differences in selection rates may still indicate adverse impact where, for example, the tool is used to make a large number of selections, or where an employer may have discouraged certain applicants from applying. The guidance notes that the EEOC may consider a tool that passes the four-fifths test to still generate an unlawful adverse impact if it nevertheless results in a statistically significant difference in selection rates.
  • Employers Should Self-Audit Tools. Finally, the technical assistance urges employers to self-audit selection tools on an ongoing basis to determine whether they have an adverse impact on groups protected under the law, and, where they do, to consider modifying the tool to minimize such impact. While such modification may be lawful going forward, employers are urged to explore this issue closely with counsel, insofar as it may implicate both disparate treatment and disparate impact provisions of Title VII under existing Supreme Court precedent.
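The four-fifths comparison in the guidance's example is simple arithmetic. The short Python sketch below reproduces that calculation using the figures from the example; the function and variable names are illustrative only and are not drawn from the EEOC document or UGESP.

    def selection_rate(selected, applicants):
        # Selection rate = number selected / number of applicants in the group.
        return selected / applicants

    def four_fifths_ratio(rate_a, rate_b):
        # Compare the lower selection rate to the higher one; a ratio below 0.8
        # (four-fifths) is the UGESP rule of thumb for possible adverse impact.
        lower, higher = sorted((rate_a, rate_b))
        return lower / higher

    # Figures from the EEOC example above: 80 male and 40 female applicants,
    # with 48 men and 12 women advancing to the interview round.
    male_rate = selection_rate(48, 80)      # 0.60
    female_rate = selection_rate(12, 40)    # 0.30
    ratio = four_fifths_ratio(male_rate, female_rate)   # 0.50

    print(f"Male rate {male_rate:.0%}, female rate {female_rate:.0%}, ratio {ratio:.0%}")
    if ratio < 0.8:
        print("Below the four-fifths threshold: generally treated as indicating adverse impact.")

As the guidance itself stresses, this ratio is only a rule of thumb: a tool that clears the four-fifths threshold can still produce an unlawful adverse impact if the difference in selection rates is statistically significant, and smaller differences may matter where a tool is used to make a large number of selections.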

While its technical assistance does not offer particularly trenchant insight, it does reflect the attention the EEOC has paid, and will likely continue to pay, to issues of discrimination and artificial intelligence. In late 2021, the agency announced its “Artificial Intelligence and Algorithmic Fairness Initiative,” and its most recent public meeting in January of this year was devoted solely to the issue of AI and potential employment discrimination. We expect this focus will remain, particularly if the Commission obtains a Democratic majority in the future (the agency is currently split along party lines, with two Democratic commissioners, two Republican commissioners, and one vacancy to which a Democrat has been nominated; as a practical matter, this lack of a majority has likely limited the agency’s ability to move forward on controversial or significant policy changes).

Employers using or considering the use of AI-driven tools in recruiting and selecting applicants and employees are advised to keep a close eye on developments, as both the federal government and state and local governments have indicated an intent to regulate in this space. Littler Workplace Policy Institute (WPI) will likewise keep readers informed of relevant developments. 

Footnotes 

​1 In regulatory guidance discussed further below, “adverse impact” is formally defined as “a substantially different rate of selection in hiring, promotion, or other employment decision which works to the disadvantage of members of a race, sex, or ethnic group.” 29 CFR § 1607.16(B). 
