Do You Know Where Your AI Is?

NEWSLETTER VOLUME 1.7 | June 23, 2023

Editor's Note

Do You Know Where Your AI Is?

We're seeing more regulation of the use of artificial intelligence (AI) in employment decisions. I will continue to remind everyone that you can't outsource employment decisions to computer systems or programs, even ones that claim to be intelligent.

After reading the article below, I realized that there are probably a lot of companies using AI without knowing it. To find out whether yours is one of them, answer this one question:

Do you use any HR technology? 

If the answer is yes, because of course you do, then it's likely those programs have AI as part of their functionality. If they sort, match, predict, or give percentages in relation to people and work, they almost certainly rely on AI.

It's OK. Just remember: 

  • The tools are only as good as the data they use 
  • All HR data is a representation of something about people, their work, and their lives 
  • All data is missing context and other information that may be important 
  • All predictions, suggestions, and percentages offered by an AI system are opinions, not facts 
  • Opinions may or may not be valid, correct, or useful 
  • Often, they are helpful, especially for asking better questions

When you are making employment decisions, or really, any decision that has a significant effect on people and their lives, remember: you are the decider. You will always have more context, better understanding, and a clearer view of what you are trying to do and what you actually want.

Most importantly, you have a sense of what is fair and right under the circumstances that no AI system can ever match. 

Here's the latest on using AI in HR and some of the new state laws addressing it. 

- Heather Bussing

AI Users Beware: Federal, State and Local Legislators and Regulators to Crack Down on AI-Related Employment Discrimination

by Laura Killalea and Jean Kuei

at Pillsbury Winthrop Shaw Pittman LLP

According to a 2022 survey from the Society for Human Resource Management, approximately one in four organizations use automation and/or AI to support employment-related activities, such as recruitment and hiring. AI tools used in employment decision-making include chatbots that guide applicants through the application process, algorithms that screen resumes and predict job performance, and even facial recognition tools used in interviews to evaluate a candidate’s attention span. For employers, these tools may offer an efficient and effective way to recruit promising talent, but federal, state and local governments are increasingly focused on the potential for discrimination. 

Federal equal employment opportunity laws, including Title VII, prohibit employment discrimination on the basis of race, color, religion, sex (including pregnancy, sexual orientation and gender identity), national origin, age (40 and over), disability and genetic information (including family medical history). These laws prohibit both intentional discrimination and “disparate impact” discrimination, which involves using neutral tests or criteria that have the effect of disproportionately excluding people based on a protected characteristic. If, for example, an employer has a height requirement in hiring, the requirement may have a disparate impact on women applicants and, if so, will run afoul of Title VII. 

At first glance, AI tools might seem like a good alternative to flawed human decision-making. But evidence shows that, in fact, these tools often reproduce or worsen the human biases upon which they are built. This means that even if an AI tool is designed to avoid disparate treatment by protecting certain characteristics, the tool’s assessment of other qualities, such as gaps in work experience or verbiage used in a resume, may lead to discrimination by proxy, also called “algorithmic discrimination.” Discrimination by proxy is a form of disparate impact discrimination, in which a seemingly neutral criterion is used as a stand-in for a protected trait. Proxy discrimination can be intentional—such as using a person’s zip code as a proxy for class or race—or unintentional, such as not selecting individuals with gaps in their resumes and incidentally excluding working parents. 

Whether intentional or not, an employer can be held liable for discrimination that results from the use of AI and algorithmic decision-making tools. 

Updates in AI Regulation 

Federal Efforts: The Equal Employment Opportunity Commission (EEOC)
From releasing the Blueprint for an AI Bill of Rights to announcing heightened enforcement attention on potential bias in AI systems, the Biden administration is following through on its commitment to monitor and regulate AI, including the use of AI in employment decision-making. The EEOC, which has dubbed discrimination in AI and automated systems the “New Civil Rights Frontier,” is among the federal agencies leading the charge. In January, the EEOC published a draft strategic enforcement plan, which for the first time emphasized the pervasiveness of AI and automated systems in employment decision-making. The plan announced the agency’s focus on scrutinizing employment decisions, practices or policies in which the use of technology may contribute to discrimination based on a protected characteristic. 

Since then, the EEOC has repeatedly shown its commitment to ensuring that these new technologies comply with federal EEO laws. In April, the EEOC joined the Consumer Financial Protection Bureau (CFPB), Department of Justice (DOJ) and Federal Trade Commission (FTC) in a joint statement communicating the agencies’ commitment to monitoring and regulating automated systems, including those used in employment recruitment, hiring and firing. In announcing the joint statement, EEOC Chair Charlotte Burrows underscored that the EEOC will utilize its enforcement mechanisms to ensure that “AI does not become a high-tech pathway to discrimination.” 

On May 18, the EEOC issued a new guidance document cautioning employers that AI and algorithmic decision-making tools can run afoul of Title VII. The EEOC warned that “in many cases” an employer will be responsible under Title VII for its use of AI tools or algorithmic decision-making “even if the tools are designed or administered by another entity, such as a software vendor.” To prevent such an outcome, the EEOC recommends that employers ensure that the technology vendor has taken steps to determine whether use of the tool causes a significantly lower selection rate for individuals with a protected characteristic. The agency further suggests that employers conduct regular self-analyses to determine whether their employment practices—whether or not they depend on AI and algorithmic decision-making tools—disproportionately impact individuals of a protected class. 
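
To make the selection-rate comparison concrete, here is a minimal sketch in Python. The group labels and counts are hypothetical, and the four-fifths (80%) threshold below is the EEOC's longstanding rule of thumb for flagging a significantly lower selection rate, not a legal safe harbor.

```python
# Minimal sketch: compare selection rates across applicant groups and
# flag impact ratios below the four-fifths (80%) rule of thumb.
# Group names and counts are hypothetical; a low ratio is a signal to
# investigate further, not a legal conclusion.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group that the tool selected."""
    return selected / applicants

# Hypothetical outcomes from an AI screening tool.
groups = {
    "group_a": {"applicants": 200, "selected": 60},  # 30% selected
    "group_b": {"applicants": 150, "selected": 30},  # 20% selected
}

rates = {name: selection_rate(g["selected"], g["applicants"])
         for name, g in groups.items()}
highest = max(rates.values())

for name, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{name}: selection rate {rate:.2%}, "
          f"impact ratio {impact_ratio:.2f} [{flag}]")
```

Run against real applicant-flow data, this same arithmetic is a reasonable starting point for the vendor questions and periodic self-analyses the EEOC describes.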

State Regulation
In addition to action on the federal level, many states have introduced legislation to regulate AI-based employment tools. This includes California A.B. 331, which would require employers using AI tools to conduct an annual impact assessment and would expose employers to liability if their AI tools result in algorithmic discrimination. Similarly, the District of Columbia’s “Stop Discrimination by Algorithms Act of 2023” would require a third-party bias audit of any AI-based employment tool and would prohibit using algorithmic decision-making “in a discriminatory manner.” Bills are also under consideration in Connecticut, New Jersey, New York and Vermont. Additionally, two states—Illinois and Maryland—have passed laws restricting the use of facial recognition technology in job interviews. 

New York City Local Law 144
On July 5, 2023, New York City will begin enforcing its automated employment decision tools law, making it the first jurisdiction to regulate an employer’s use of artificial intelligence. The New York City Department of Consumer and Worker Protection (DCWP) issued final regulations in early April 2023 to clarify the scope and implementation of the law. The law applies to New York City-based employers and employment agencies that use an “automated employment decision tool” (AEDT). The regulations define an AEDT as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence” that issues a “simplified output,” such as a score or recommendation, that “substantially assist[s] or replace[s]” discretionary decision-making and ultimately has a determinative impact in the hiring or promotion process. Companies that use AEDTs are required to contract with an independent auditor to conduct a bias audit and to meet certain notice requirements. The contours of the law are complex, and we therefore urge New York City-based employers to closely review their AEDTs and consult counsel to ensure compliance. 
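
For AEDTs that output a score rather than a yes/no selection, the DCWP’s final rules describe a “scoring rate”: the share of each category receiving a score above the median score for the full sample, with impact ratios taken against the highest-rate category. Here is a hedged sketch of that calculation; the scores and category labels are made up, and an actual bias audit must be performed by an independent auditor.

```python
# Hedged sketch of a Local Law 144-style "scoring rate" calculation
# for an AEDT that outputs a numeric score. All data is hypothetical;
# this is not a substitute for an independent bias audit.
from statistics import median

# Hypothetical (category, score) pairs produced by an AEDT.
results = [
    ("men", 72), ("men", 55), ("men", 90), ("men", 64),
    ("women", 68), ("women", 83), ("women", 49), ("women", 58),
]

# Scoring rate: share of a category scoring above the sample median.
cutoff = median(score for _, score in results)

scoring_rates = {}
for category in {cat for cat, _ in results}:
    scores = [s for c, s in results if c == category]
    scoring_rates[category] = sum(s > cutoff for s in scores) / len(scores)

# Impact ratio: each category's rate relative to the highest rate.
highest = max(scoring_rates.values())
for category, rate in sorted(scoring_rates.items()):
    print(f"{category}: scoring rate {rate:.2%}, "
          f"impact ratio {rate / highest:.2f}")
```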

Next Steps for Employers
Employers that use AI or automated systems to assist with hiring or other employment decision-making should proactively address potential bias and discrimination. Employers should develop clear, well-defined policies and procedures explaining the extent to which they use AI tools in employment decisions. Those policies should be disclosed in employee handbooks and disseminated to employees. Employers using AI tools should also conduct annual bias audits to determine whether the tools are producing biased or discriminatory employment decisions. 

For employers in jurisdictions that are already enforcing rules against discriminatory AI, consider consulting counsel to ensure compliance. Other employers should not wait to act: any employer using AI tools in employment decision-making should contact their vendors, independent auditors and counsel to assess next steps. 
