AI Doesn’t Eliminate Bias, But It Helps Us See It

NEWSLETTER VOLUME 2.11 | March 18, 2024

Editor's Note

AI Doesn't Eliminate Bias, But It Helps Us See It

AI takes oodles, gobs, heaps, and slews of data and processing power to do, in moments, things that would take our pondering, distracted brains a very long time or be impossible altogether.

 

Data starts out as information about our analog existence that can be categorized and mathified. Then all that computing power can be used to sort, compare, rank, calculate percentages, track changes over time, and make predictions, all of which creates more data. Then you can take the new data and do more cool things. Mathify, compute, repeat.

 

But there's lots of opportunity for bias to creep in and for important information to get left out.

 

Humans are biased. We're made that way. We make sense of the world from our limited, personal perspective, and it takes time, education, and effort to expand those views. Learning, and not knowing, is uncomfortable. We'd rather be certain and, preferably, right. We also tend to see what we expect or want to see. And what we want to see is often what makes us most comfortable.

 

This makes perfect sense. Until you have more information than you can ever understand, and the ability to do things with it whose workings you can never fully know. At that point, you have to get curious and learn to see past your own expectations. This is hard work.

 

But AI makes it possible to see things we would never notice or understand otherwise. It also helps us ask better questions, get to the information we need more quickly, and see what's really going on instead of what we think is going on.

 

Yes. AI can help us with diversity and equity. But it comes with bias, in the data, in the systems, and in ourselves. Before you use AI for employment decisions, vet both the data and the tool to make sure you will get what you need and want from the system. Then be open to the fact that there are issues you need to address because, well, we're human and there will be.

 

- Heather Bussing

 

Artificial intelligence (AI) continues to dominate headlines and even the most recent Super Bowl advertisements. The use of AI in the workplace is rapidly expanding in a wide variety of ways throughout the hiring process, including scanning and filtering resumes and AI-driven video interviews to assess candidates. If appropriately designed and applied, AI can help people find their most rewarding jobs and match companies with their most valuable and productive employees. Equally important, AI has been shown to advance diversity, inclusion, and accessibility in the workplace.

AI tools can help increase diversity in several noteworthy ways. First and foremost, AI tools can remove the human element and thus, at least in theory, the subjectivity involved in hiring and other employment decisions, helping ensure that candidates whose skills and experience best match a role advance through the selection process. Notably, AI can anonymize certain information about candidates, helping ensure that characteristics that may be associated with protected classes, such as a candidate's name, are not considered during the evaluation process. AI tools have also been shown to increase the diversity of candidates interviewed by displaying biographical information only after candidates pass certain skills tests or meet certain metrics. AI-powered virtual interviews can also help increase diversity by standardizing interviews so that bias does not seep into the process: a human is more inclined to deviate from the interview script, whereas an AI tool is more likely to follow it. In addition, AI-powered chatbots help advance diversity by providing consistent answers to applicant questions, so that all candidates are on a level playing field with equal access to critical information and resources during the hiring process. These resources can be invaluable to applicants regardless of their backgrounds.

Despite the benefits of using AI during the hiring process to advance diversity goals, its use is not without challenges. Legal risks arise if AI tools are used to discriminate against protected classes, whether intentionally or unintentionally; in most cases, the risk is unintentional. As noted above, AI tools can remove the human element, but there are concerns that they will simply inherit (or even worsen) existing biases. If an AI tool is trained on data that reflect past discriminatory decision-making, it may unintentionally perpetuate those biases, even if the goal is to promote diversity. This is the "garbage in, garbage out" problem with AI tools. Another potential legal risk is that AI tools may use neutral criteria to evaluate candidates in a way that disproportionately impacts protected classes. For instance, after the COVID-19 pandemic, many employers wanted to recruit applicants who lived closer to the office, on the theory that a shorter commute makes in-office work more likely. But limiting an applicant pool by geographic distance can result in unintentional discrimination: zip codes, for example, are often highly correlated with racial and/or ethnic groups.
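To make the geographic-proxy concern concrete, the sketch below applies a facially neutral commuting-distance screen to a hypothetical applicant pool (the data and group labels are invented for illustration, not drawn from any real audit) and compares the resulting selection rates, in the spirit of the EEOC's "four-fifths" rule of thumb for flagging adverse impact.

```python
def selection_rates(candidates, max_miles):
    """Apply a facially neutral distance screen and compute the
    selection rate for each (hypothetical) demographic group."""
    passed = {}
    total = {}
    for group, miles in candidates:
        total[group] = total.get(group, 0) + 1
        if miles <= max_miles:
            passed[group] = passed.get(group, 0) + 1
    return {g: passed.get(g, 0) / total[g] for g in total}

# Hypothetical pool: group B happens to live farther from the office.
pool = [("A", 5), ("A", 8), ("A", 12), ("A", 20),
        ("B", 15), ("B", 25), ("B", 30), ("B", 40)]

rates = selection_rates(pool, max_miles=15)
# Group A: 3 of 4 pass (0.75); group B: 1 of 4 passes (0.25).
# B's rate is well under four-fifths of A's, even though the
# distance criterion itself never mentions any protected class.
```

The screen is "neutral" on its face, yet the outcome is skewed, which is exactly the pattern the four-fifths rule of thumb is designed to surface.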

Another concern is the evolving legal landscape around AI tools. Indeed, the steady adoption and rapid development of AI tools have led to a growing number of proposals for increased oversight, including measures regulating the use of AI in the employment context at the state and local level. For instance, New York City now has a broad AI employment law regulating employers' use of AI tools for hiring and promotion decisions. The law requires employers that use AI to screen candidates for a job or promotion in New York City to conduct an annual bias audit, publish a summary of the audit results, inform candidates that AI is being used, and give them the option of requesting an accommodation or an alternative selection process. The law is meant to promote transparency and give employers the opportunity to detect and correct unintentional bias in the candidate screening process.
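As a rough illustration of what a bias-audit metric can look like, the sketch below computes an impact ratio of the style associated with the New York City law: each group's selection rate divided by the most-selected group's rate. The data and group names are hypothetical, and a real audit would involve far more than this single calculation.

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Each group's selection rate divided by the most-selected
    group's rate (an "impact ratio" style bias-audit metric)."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening outcomes: (group, 1 if advanced else 0).
outcomes = ([("A", 1)] * 8 + [("A", 0)] * 2 +
            [("B", 1)] * 4 + [("B", 0)] * 6)

ratios = impact_ratios(outcomes)
# Group A advances at 80%, group B at 40%, so B's impact ratio
# is 0.5 relative to A, the kind of disparity a published audit
# summary would surface for the employer to investigate.
```

Publishing numbers like these is what gives candidates, and the employer itself, visibility into how the screening tool actually behaves.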

Employers can take steps to increase diversity in hiring by using AI tools while mitigating employment discrimination risks. They can begin by identifying the specific diversity goals that AI tools can help achieve, such as enhancing the size and diversity of the applicant pool, which will guide the selection of the appropriate tool. The selected AI tool should be one known for its transparency and fairness. Training, not only on how to use the AI tool but also on the limitations involved in its application, can be an effective way to guard against associated risks. Employers can monitor and audit AI uses and processes to proactively identify intentional misuse or potentially discriminatory outcomes. Employers can also track AI legislation and litigation, because this is a rapidly developing area, and situational awareness of the evolving legal environment will become increasingly important.

Finally, employers should consider implementing AI policies and practices to ensure there are proper guardrails in place and that AI is used in a responsible and legally compliant way. The challenge with these policies, practices, and guardrails will be balancing AI's potential to foster a more diverse workplace against the nuances that only human judgment can capture and contextualize. However, a prophylactic, multi-pronged approach that leverages policies, recommended practices, and training is critical to ensuring objectivity and keeping bias from seeping into AI processes unintentionally.
