
Client Alert

“AI” in Employment Law: EEOC Issues Title VII Guidance

May 25, 2023

By Kenneth W. Gage, Dan Richards, & Sara B. Tomezsko

The accelerating pace of artificial intelligence innovation, illustrated by ChatGPT, Bard, and other applications that have captured the world’s attention, brings the promise of better and more effective tools on which employers will increasingly rely to make decisions about applicants and employees. Although lawmakers at the federal, state, and local levels are just beginning to grapple with possible new laws concerning this technology, regulators are keenly focused on how current laws apply to it. For example, the use of artificial intelligence and automated systems in employment decision-making is a strategic enforcement priority for the Equal Employment Opportunity Commission (“EEOC”). Last week, the agency issued questions and answers on the use of AI in compliance with Title VII of the Civil Rights Act of 1964 (the “Guidance”). The agency previously issued similar guidance on complying with the Americans with Disabilities Act when using such tools.

Understanding artificial intelligence and automated systems

Employers should understand the Title VII risks associated with the growing number of commercial solutions offering AI and automated-systems capabilities. That understanding begins with the relevant terminology. To that end, the Guidance clarifies that:

  • Software is “information technology programs or procedures that provide instructions to a computer on how to perform a given task or function”;
  • An algorithm is “a set of instructions that can be followed by a computer to accomplish some end”; and
  • Artificial intelligence is a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”

According to the Guidance, an employer’s use of the following software, algorithmic, and AI tools (collectively, “AI”) may implicate issues under Title VII:

  • Résumé scanners that prioritize applications using certain keywords;
  • Employee monitoring software that rates employees on the basis of their keystrokes or other factors;
  • “Virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
  • Video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
  • Testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.

When might AI violate Title VII?

The principal risk under Title VII from using AI for employment decision-making is potential exposure to disparate impact liability if use of the tool results in practically and statistically significant adverse consequences for protected classes of applicants or employees. Accordingly, the EEOC’s Guidance suggests that employers follow the Uniform Guidelines on Employee Selection Procedures (the “UGESP”) to assess the potential disparate impact of AI tools when employers use them to “hire, promote, terminate, or take similar actions toward applicants or current employees.” The EEOC also “encourages” employers to conduct “self-analyses on an ongoing basis,” including to “determine whether their employment practices have a disproportionately large negative effect on a basis prohibited under Title VII or treat protected groups differently.”

In particular, the EEOC says that an AI tool that results in a selection rate for a group of applicants or employees that is “substantially” less than a selection rate for another group constitutes adverse impact. Specifically, “[a] selection rate for any race, sex, or ethnic group which is less than four-fifths (or eighty percent) of the rate for the group with the highest rate will generally be regarded by Federal enforcement agencies as evidence of adverse impact, while a greater than four-fifths rate will generally not be regarded by Federal enforcement agencies as evidence of adverse impact.” 29 C.F.R. § 1607.4(D). In addition to this Four-Fifths Rule, to establish adverse impact, the EEOC (and courts) have traditionally required a measure of statistical significance as a means of showing causation (i.e., that the disparate selection rates are not due to random chance).[1]
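To make the Four-Fifths Rule and the accompanying significance check concrete, here is a minimal sketch in Python using invented selection numbers; the figures and helper functions below are illustrative assumptions, not anything drawn from the Guidance or the UGESP.

```python
# Illustrative only: invented numbers and hypothetical helper functions.
import math

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_ratio(rate_a: float, rate_b: float) -> float:
    """Impact ratio: the lower selection rate divided by the higher one."""
    low, high = sorted((rate_a, rate_b))
    return low / high

def two_proportion_z(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """Pooled two-proportion z-statistic, one common way to test whether a
    difference in selection rates could plausibly be due to random chance."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented example: 48 of 80 group A applicants selected vs. 12 of 40 in group B.
rate_a = selection_rate(48, 80)  # 0.60
rate_b = selection_rate(12, 40)  # 0.30

ratio = four_fifths_ratio(rate_a, rate_b)  # 0.30 / 0.60 = 0.50
print(f"Impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below 0.80: evidence of adverse impact under the four-fifths rule of thumb.")

z = two_proportion_z(48, 80, 12, 40)
print(f"Two-proportion z-statistic: {z:.2f}")  # |z| > 1.96 ~ significant at the 5% level
```

On these invented numbers, the impact ratio of 0.50 falls well below the 0.80 threshold and the z-statistic of roughly 3.10 exceeds the conventional 1.96 cutoff, so the disparity would be both practically and statistically significant under the measures described above.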

The EEOC cautions that, even where the AI tool is administered on employees or applicants by a third-party software vendor, an employer may still be liable under Title VII for disparate impact. The agency further warns that employers cannot rely on the vendor’s representations about the tool if they turn out to be incorrect. Accordingly, the EEOC specifically advises employers to explore with the vendor the extent to which the tool has been tested for disparate impact, and the extent to which it has been validated as a reliable means of measuring job-related characteristics or performance.

Using AI consistently with the law

The EEOC’s Guidance is not law, and some of the positions it advances are debatable. Still, the document reflects the EEOC’s views on enforcement, so employers should take stock of any AI tools used in employment decision-making and consider assessing their adverse impact. Employers should understand how they use AI and which employment decisions may be affected in order to inform strategies around ongoing compliance efforts and transparency obligations that arise not only under Title VII, but also under a growing and diverse patchwork of related state and local laws and legislation. Employers may wish to consult outside counsel for an independent bias audit of these tools and to discuss what other measures may be necessary in this rapidly evolving space.

 

[1] See Questions and Answers to Clarify and Provide a Common Interpretation of the Uniform Guidelines on Employee Selection Procedures at 22 (Mar. 2, 1979), https://www.eeoc.gov/laws/guidance/questions-and-answers-clarify-and-provide-common-interpretation-uniform-guidelines.


Contributors

Kenneth W. Gage
Partner, Employment Law Department

Dan Richards
Associate, Employment Law Department

Sara B. Tomezsko
Partner, Employment Law Department

Felicia A. Davis
Partner, Employment Law Department

Emily R. Pidot
Partner, Employment Law Department

Carson H. Sullivan
Partner, Employment Law Department

Practice Areas

Employment Counseling and Preventive Advice

Employment Law

Employment Litigation

