AI at Work, the Big Picture

NEWSLETTER VOLUME 1.14 | August 9, 2023

Editor's Note


 

AI isn't really intelligent. Systems using machine learning take in lots of data and work to make sense of it based on instructions from the people who design and build the systems. 

 

You get new insights and fast answers to questions, and you can monitor changes as they happen. This is a really good thing. 

 

But the systems are based on data about what's already happened. Generally, more of something gets treated as good. This can cause bias or leave out "outliers" that may matter. 

 

To use data effectively, people have to pick and choose what to measure and decide what matters. 

 

Then, all data and measurement strip away detail and context. 

 

So do words, particularly nouns. "Flower" can mean a whole lot of different things to different people. So can descriptions of skills, work, and any attribute about people. 

 

Then, in order to make the information useful, we define things. To define is to limit. 

 

When we use these systems, we get some things we may not realize or want, like bias, and we don't get some things we may want or really need, like context. 

 

This is a really useful post on the challenges of using AI at work, with excellent analysis of the legal issues, practical approaches, and helpful resources. 

 

- Heather Bussing

 

Revolutionary Change but No Free Lunch: What To Know About Algorithmic Discrimination and AI

by Jason Downs and Greg Sunstrum

at Brownstein Hyatt Farber Schreck

 

The capacity and deployment of artificial intelligence (“AI”) are dizzying. As businesses vet or actively integrate AI into their business processes, it is critical to understand not only AI’s potential but also its potential risks, including inadvertently contributing to systemic discrimination and facing claims under existing legal protections. According to researchers, this technology can yield discriminatory results because the data originally entered into it may be corrupted by human error and bias. 

KEY TAKEAWAY 

Businesses are not shielded from risk by relying upon representations made by the vendor or software developer of an AI program or service they employ. Businesses using AI software can be held directly liable for potential violations of federal or state laws. 

WHAT IS THE BIG DEAL? 

  • Rapid growth and opportunity: According to PwC, AI could contribute $15.7 trillion to the global economy by 2030. 
  • Economywide impacts, especially in several notable industries: health care and medical, financial services, retail, information security and cybersecurity. 
  • Regulatory responses and oversight: Here, Brownstein addresses the Biden administration’s new May 2023 guidance on AI. In addition to potential regulatory efforts to address AI deployment and development, companies should be aware of existing regulatory risks surrounding the use of AI. 

WHAT IS AI AND ALGORITHMIC DISCRIMINATION? 

  • AI: While there is no consensus definition of AI, it is often characterized as machines and automated systems with the capacity to “operate with varying levels of autonomy” to “determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities.” 
  • Per the White House: “Algorithmic discrimination occurs when these systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex, religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.” According to the co-founder of AI Now, Kate Crawford, this can be either: 
    • Allocative harm: “when a system allocates or withholds a certain opportunity or resource,” or 
    • Representation harm: “when systems reinforce the subordination of some groups along the lines of identity.” 
  • Algorithmic discrimination can occur when a computerized model makes a decision or a prediction that has the unintended consequence of denying opportunities or benefits more frequently to members of a protected class than to an unprotected control set. A discriminatory factor can infiltrate an algorithm in a number of ways, but one of the more common is when the algorithm includes a proxy for a protected-class characteristic because unrelated data suggests the proxy is predictive of, or correlated with, a legitimate target outcome (a toy numeric example follows this list). 
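
The proxy mechanism is easier to see with numbers. Below is a minimal, hypothetical Python sketch (the commute-distance factor, the cutoff, and all the data are invented for illustration): a screening rule that never looks at a protected class can still produce starkly different selection rates when a facially neutral factor correlates with group membership.

  # All data here is hypothetical, for illustration only. A "neutral"
  # screening factor (commute distance) acts as a proxy for a
  # protected-class split (groups "A" and "B") because, in this toy
  # data, group B happens to live farther from the office.
  applicants = [
      (3, "A"), (5, "A"), (4, "A"), (6, "A"), (8, "A"),
      (12, "B"), (15, "B"), (9, "B"), (14, "B"), (11, "B"),
  ]

  def screen(commute_miles):
      # The rule never mentions the protected class.
      return commute_miles <= 8  # hypothetical cutoff

  for group in ("A", "B"):
      commutes = [miles for miles, g in applicants if g == group]
      rate = sum(screen(m) for m in commutes) / len(commutes)
      print(f"group {group}: selection rate {rate:.0%}")

  # Prints 100% for group A and 0% for group B: the facially neutral
  # factor produced the entire disparity.

The point is not that any particular factor is unlawful to consider, but that any factor correlated with a protected status deserves scrutiny.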

WHO DOES THIS APPLY TO, AND WHAT KEY AGENCIES ARE INVOLVED? 

  • Automated system designers and developers should proactively protect individuals and communities from algorithmic discrimination as they design and improve systems. 
  • Entities that utilize and deploy automated systems should clearly understand that they are also subject to scrutiny and risk. 
  • Employers should be clear that Title VII applies to AI. 
  • Any entity using algorithmic decision-making tools to assist with hiring and employment-related decisions should be aware of the risks and responsibilities associated with those practices. 
  • In 2021, the EEOC launched an initiative to ensure that AI and other emerging technologies used in hiring and employment decisions comply with federal civil rights laws. 
  • Digital marketers: In August 2022, the Consumer Financial Protection Bureau (CFPB) issued an interpretive rule stating that digital marketers who identify potential customers or place content in ways meant to affect consumer behavior may qualify as service providers under the Consumer Financial Protection Act. If their actions violate federal law, even if the violation stemmed from an algorithmic decision, the marketers can be held legally accountable. 
  • In 2022, the FTC issued a report to Congress warning “that AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance.” 
  • In April 2023, the FTC, DOJ, CFPB and EEOC issued a joint statement pledging to “uphold America’s commitment to the core principles of fairness, equality, and justice” in the use of AI. The agencies “… expressed concerns about potentially harmful uses of automated systems and resolved to vigorously enforce their collective authorities and to monitor the development and use of automated systems.” 

WHAT TO DO 

Human intervention should be the norm. 

  • Companies should start by evaluating the risks that their use of AI could lead to discrimination. 
  • Engaging a law firm to oversee these processes provides a confidentiality safeguard for the results and any recommendations for improvement: 
  1. Ensure you have a sound understanding of the factors and algorithms your AI is considering and how information is utilized throughout processes. This includes vetting potential AI vendors for unbiased datasets and explainable AI decisions. 
  2. Compare those factors to existing federal and state laws. Determine whether the use of those factors, or the process by which your AI systems weigh them, is prohibited by any state or federal law. Ensure that none of the factors serve, even inadvertently, as proxies for inappropriately considering any protected statuses. Consider whether even the factors that are legal present any risks. 
  3. Determine any discriminatory impact by working with appropriate experts; a simplified disparate-impact screen is sketched after this list. 
  4. Designate internal responsibility to an AI lead, information technology official or task force to develop a comprehensive corporate policy on acceptable AI use in the workplace. 
  5. Implement internal processes, training and oversight to continually monitor as AI and regulations evolve. 
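
Step 3 usually begins with a disparate-impact measurement. One widely used screen, discussed in the EEOC guidance listed under Resources below, is the four-fifths rule of thumb: a selection rate for any group that is less than 80% of the highest group's rate warrants closer review. Here is a minimal Python sketch with hypothetical counts; the rule is a screening heuristic, not a legal safe harbor.

  # Minimal disparate-impact screen using the EEOC's "four-fifths"
  # rule of thumb: flag any group whose selection rate falls below
  # 80% of the most-favored group's rate. All counts are hypothetical.
  selections = {
      "group_1": (48, 100),  # (number selected, number of applicants)
      "group_2": (30, 100),
  }

  rates = {g: sel / total for g, (sel, total) in selections.items()}
  best = max(rates.values())

  for group, rate in sorted(rates.items()):
      ratio = rate / best
      flag = "REVIEW" if ratio < 0.8 else "ok"
      print(f"{group}: rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")

  # group_2's impact ratio is 0.30 / 0.48 = 0.625, below 0.8, so it
  # is flagged for closer statistical and legal review.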

RESOURCES 

Consumer Reports:  

Bad Input: Three short films explain how biases in algorithms translate to unfair practices 

Consumer Financial Protection Bureau (CFPB): 

Hiring Technologists to Protect Consumers 

White House Office of Science & Technology:  

White Paper: Blueprint for an AI Bill of Rights 

U.S. Equal Employment Opportunity Commission (EEOC):  

Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 

Federal Trade Commission (FTC): 

Combatting Online Harms Through Innovation 

Stanford University 

The AI Index Report: Measuring Trends in Artificial Intelligence 

Brownstein 

Biden Administration Takes on AI in New Guidance 

Proxy Problems—Solving for Discrimination in Algorithms 

FTC, Department of Justice (DOJ), CFPB and EEOC 

Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems 

Consumer Financial Protection Bureau 

CFPB and Federal Partners Confirm Automated Systems and Advanced Technology Not an Excuse for Lawbreaking Behavior 
