AI Is Only Artificial; Don’t Let It Make Hiring Decisions

NEWSLETTER VOLUME 1.6 | June 14, 2023

Editor's Note


Computers are great at anything that can be matched, measured, or mathified. When they are trained on data about the existing world, they will make more of the existing world, only faster. And often with far less nuance.  


When people make decisions based on an artificial intelligence (AI) prediction or recommendation, the data from those decisions feeds back in and confirms the original prediction. Pretty soon you have a completely self-licking ice cream cone. And a mess. 


AI can give us helpful opinions and useful information. But don't let the computers make employment decisions. 


I loved this article's explanation of the potential issues in using AI for hiring decisions and its review of both state and federal law on where the line is between useful information and discrimination. And flowers to the Bradley firm, which included its summer associate (law student), Meaghan Pickles, as a named author! 

- Heather Bussing


AI, AI, Uh-Oh! Can Artificial Intelligence Programs Put You at Risk for Discrimination Lawsuits?

by Whitney Jackson, Anne Yuengert, and Meaghan Pickles*

at Bradley Arant Boult Cummings LLP


Artificial intelligence (AI) is the best way to save time and make fair decisions, right? Not so fast. As AI becomes more common in our day-to-day lives, we have seen it make mistakes and replicate human shortcomings. For many, it came as a surprise when AI hiring algorithms appeared to replicate human biases as well. If you are an employer using AI hiring algorithms, you may be at risk of liability under federal law. 

The Problem 

Some companies with high-volume hiring enlisted the help of AI to assist in employment-related decisions. At first, this only included basic functions such as scanning résumés for key words. However, as the technology evolved, companies began using tools such as computer-scored video interviews or facial recognition technology to screen applicants. 

Examples of how AI has been used during the hiring process include: 

  • Résumé scanners that prioritize applications using certain key words; 
  • Employee monitoring software that rates employees on the basis of their keystrokes or other factors; 
  • “Virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements; 
  • Video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and 
  • Testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test. 

While having a computer handle all of these tasks was helpful, some companies halted the use of AI in hiring decisions when the technology appeared to screen candidates based on protected statuses. For example, AI recruitment software used by Amazon taught itself to seek out men for technical roles. Additionally, researcher Joy Buolamwini found that facial-recognition software has failed to recognize women and people of color, which means the software may not accurately reflect diverse candidates’ performance during a computer-scored video interview. Furthermore, AI hiring tools might unintentionally screen out applicants with disabilities, even when they could perform the job with a reasonable accommodation. 

Depending on how it is programmed, AI software absorbs the collective attitudes and biases of whatever data it is trained on. Unless it is taught to identify and mitigate bias, it will likely perpetuate it. Employers that use AI, however, can help prevent that. 

EEOC Guidance on the Use of AI in Employment-Related Decisions 

The EEOC recently issued guidance on how employers’ use of AI can comply with the Americans with Disabilities Act (ADA) and Title VII of the Civil Rights Act of 1964. Employers using AI to make employment decisions should review this guidance. 

On May 18, 2023, the EEOC issued guidance to assist employers in “determin[ing] if their tests and selection procedures are lawful for purposes of Title VII disparate impact analysis.” Disparate impact discrimination occurs when a facially neutral policy or practice has the effect of disproportionately excluding persons based on a protected status (unless the procedure is job related and consistent with business necessity). If an employer administers a selection procedure, it may be responsible under Title VII even if the test was developed by an outside vendor; the guidance makes clear that employers are responsible for selection procedures developed by third-party software vendors. 

If you want to have a software vendor develop or administer an algorithmic decision-making tool, ask the vendor, at a minimum, whether it has taken steps to evaluate whether the tool results in a disparate impact based on a characteristic protected by Title VII. If the tool results in a lower selection rate for individuals of a particular protected class, then you need to consider whether it is job related and consistent with business necessity and whether there are alternatives that may have less of an impact. 
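
To make that selection-rate comparison concrete, here is a minimal Python sketch of the “four-fifths rule,” the rule of thumb the EEOC guidance discusses for spotting a possible disparate impact. The group labels and applicant counts below are hypothetical, and a ratio under 0.8 is only a signal that further analysis is needed, not a legal conclusion.

# Hypothetical illustration of the "four-fifths rule" of thumb discussed
# in the EEOC's Title VII guidance. Group labels and counts are invented.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the screening tool selects."""
    return selected / applicants

# Invented outcomes from a hypothetical AI resume screener.
rates = {
    "group_a": selection_rate(selected=48, applicants=100),  # 0.48
    "group_b": selection_rate(selected=30, applicants=100),  # 0.30
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    # A ratio under 0.8 (four-fifths) suggests further review for
    # disparate impact; it is not, by itself, a legal conclusion.
    flag = "review for possible disparate impact" if ratio < 0.8 else "no flag"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")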

On May 12, 2022, the EEOC issued guidance addressing “how existing ADA requirements may apply to the use of” AI in employment decision making. The guidance also “offers promising practices for employers to help with ADA compliance when using A.I. decision making tools.” 

Not surprisingly (and consistent with its May 18, 2023, guidance), the EEOC concluded that an employer who administers a pre-employment test may be responsible for ADA discrimination if the test discriminates against individuals with disabilities, even if the test was developed by an outside vendor. 

Regardless of who developed an algorithmic decision-making tool, the EEOC advises that employers take additional steps during implementation and deployment to reduce the chances that the tool will discriminate against someone because of a disability (intentionally or unintentionally). Suggested steps include: 

  • Clearly indicating that reasonable accommodations, including alternative formats and alternative tests, are available to people with disabilities; 
  • Providing clear instructions for requesting reasonable accommodations; and 
  • In advance of the assessment, providing all job applicants and employees who are undergoing assessment with as much information about the tool as possible, including which traits or characteristics the tool is designed to measure, the methods by which those traits or characteristics are measured, and the disabilities, if any, that might potentially lower the assessment results or cause an applicant to be screened out. 

State and Municipal Laws 

Additionally, states and municipalities are beginning to address the use of discriminatory AI hiring tools. 

In 2020, Illinois enacted the Artificial Intelligence Video Interview Act. This law requires employers that use AI-enabled analytics in interview videos to take the following actions: 

  • Notify each applicant about the use of AI technology; 
  • Explain to the applicant how the AI technology works and what characteristics it uses to evaluate applicants; and 
  • Obtain the applicant’s consent before the interview. 

Upon the applicant’s request, the video must be destroyed within 30 days, and employers must limit distribution of the videos to only those individuals whose expertise is necessary to evaluate the applicant. 

If the employer relies solely on AI to make a threshold determination before the candidate proceeds to an in-person interview, that employer must track the race and ethnicity of the applicants who do not proceed to an in-person interview, as well as those applicants ultimately hired. 

The Illinois law does not include explicit civil penalties. 

In 2020, Maryland passed its own AI employment law, H.B. 1202, which prohibits employers from using facial recognition technology during an interview to create a facial template without consent. Consent requires a signed waiver that states: 

  • The applicant’s name; 
  • The date of the interview; 
  • That the applicant consents to the use of facial recognition; and 
  • Whether the applicant read the consent waiver. 

Like the Illinois law, the Maryland law does not include a specific penalty or fine for a violation of the law. 


Most recently, New York City enacted Local Law Int. No. 1894-A, which requires an independent “bias audit” of AI hiring tools, conducted no more than one year before the tool’s use. The law also requires that information about the audit be publicly available and that the company notify applicants that AI hiring algorithms will be used. The price tag is a $500 to $1,500 penalty for each violation. 

Notably, 1894-A defines an audit as “an impartial evaluation by an independent auditor” used to test the technology for any discriminatory impact based on race, ethnicity, or sex. It is still unclear who is qualified to conduct an audit; so far, law firms are stepping in to offer the service. Employers who need an audit performed should not hesitate to call their attorneys. 

Takeaways 

As with any technology, AI hiring tools will evolve over time. We remain hopeful that these glitches in AI will soon be corrected. For now, however, employers using or planning to use AI hiring tools should ensure that their use of AI follows the law. You should: 

  • Review EEOC guidance to ensure that your AI hiring tools comply with the requirements of the ADA and Title VII. Specifically, ensure that algorithms are not discriminating against individuals based on protected characteristics, including disabilities. 
  • Research whether municipal or state laws require AI audits, restrictions on facial recognition services, or restrictions on AI analysis of video interviews. 
  • Ensure third-party vendors of AI technology are aware of and follow federal, state, and local requirements. 

In all of these discussions and assessments, consider involving your favorite employment lawyer (at least if you want to protect the discussions under attorney-client privilege). 

*Meaghan Pickles is a summer associate at Bradley and is not a licensed attorney.
