
EEOC Issues Guidance on Artificial Intelligence Hiring Tools

McGuireWoods Legal Insights, May 23, 2023

Miles Indest

On May 18, 2023, the U.S. Equal Employment Opportunity Commission (EEOC) issued a new technical assistance document titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.”

This technical assistance is part of the EEOC’s 2021 agencywide initiative to ensure that the use of software such as artificial intelligence (AI), machine learning and other emerging technologies in hiring and other employment decisions complies with the federal civil rights laws enforced by the agency. The new guidance builds on the Uniform Guidelines on Employee Selection Procedures (UGESP) adopted by the EEOC in 1978, as well as guidance issued last year addressing the use of artificial intelligence in hiring under the Americans with Disabilities Act.

The technical assistance addresses the potential discriminatory impact of using algorithmic decision-making tools, defined as the computer analysis of data that an employer relies on, in whole or in part, when making employment decisions. The guidance highlights the following examples of such software available to employers:

  • resume scanners that prioritize applications using certain keywords;
  • employee-monitoring software that rates employees on the basis of their keystrokes or other factors;
  • virtual assistants or chatbots that ask job candidates about their qualifications and reject those who do not meet predefined requirements;
  • video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
  • testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills or perceived “cultural fit” based on their performance on a game or on a more traditional test.

How can employers tell if their algorithmic decision-making tools are in danger of violating federal employment discrimination laws? According to the EEOC, a selection tool that produces a substantially lower selection rate for individuals who share one or more protected characteristics may be indicative of discrimination. The technical guidance reminds employers that although AI systems have the appearance of objectivity, they are developed by humans and therefore are subject to the societal and personal biases that can create disparate outcomes in hiring.

The EEOC provides direction on how to evaluate the extent to which bias may permeate an employer’s automated process. The technical assistance directly states that the “four-fifths rule” can be applied to AI tools to help identify disparate impact. This test, described in detail in the UGESP, treats the selection rate for one group as “substantially” different from the selection rate of another group if the ratio of the two rates is less than four-fifths (or 80%). For example, suppose an employer’s hiring tool selects black applicants at a rate of 30% while selecting white applicants at a rate of 60%. Because the ratio of those two rates (30/60, or 50%) is lower than four-fifths, the selection rate for black applicants is substantially different from the selection rate for white applicants and could evidence discrimination against black applicants.
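For readers who want to see the arithmetic, the short sketch below applies the four-fifths rule to the selection rates in the EEOC’s example. The function name and the applicant counts of 100 per group are illustrative assumptions, not figures from the guidance.

    def four_fifths_check(selected_a, total_a, selected_b, total_b):
        """Apply the four-fifths (80%) rule of thumb described in the UGESP.

        Compares the selection rate of group A against group B and flags the
        result if the ratio of the two rates falls below 0.80.
        """
        rate_a = selected_a / total_a
        rate_b = selected_b / total_b
        ratio = rate_a / rate_b
        return ratio, ratio < 0.80

    # The rates from the guidance example: 30% versus 60%, shown here as
    # 30 of 100 applicants selected versus 60 of 100.
    ratio, flagged = four_fifths_check(30, 100, 60, 100)
    print(f"Ratio of selection rates: {ratio:.2f}")       # 0.50
    print(f"Below the four-fifths threshold: {flagged}")  # True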

The EEOC reiterates that the four-fifths rule is a good rule of thumb, but quickly dashes employers’ hopes of calculating their way into compliance with a simple formula. In some situations, the four-fifths rule will not be a reasonable substitute for a test of statistical significance. For example, where a large number of selections is made, a ratio above four-fifths can still mask a meaningful disparity in impact on different protected groups. As with traditional selection processes, employers should subject AI tools to holistic review; compliance with any one test cannot disprove discriminatory outcomes. The EEOC recommends that employers conduct self-analyses and audits on an ongoing basis. However, the EEOC makes clear that employers need not discard their existing AI tools, but should make adjustments to remedy discriminatory selection rates. Because algorithms can be adjusted, failing to do so may expose an employer to liability.
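The guidance does not prescribe a particular statistical test, but one common approach in practice is to test the difference in selection rates on a 2x2 table of selections and rejections by group. The sketch below uses Fisher’s exact test from scipy purely as an illustration; the applicant counts are hypothetical and chosen to show how a tool can satisfy the four-fifths rule while the disparity remains statistically significant.

    from scipy.stats import fisher_exact

    # Hypothetical counts (illustrative only): 300 of 1,000 group A applicants
    # selected (30%) versus 360 of 1,000 group B applicants (36%). The ratio of
    # selection rates is 0.30 / 0.36, roughly 0.83, which satisfies the
    # four-fifths rule, yet with applicant pools this large the gap is still
    # statistically significant.
    table = [
        [300, 700],   # group A: selected, not selected
        [360, 640],   # group B: selected, not selected
    ]

    _, p_value = fisher_exact(table)
    print(f"p-value: {p_value:.4f}")  # well below 0.05 for these counts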

Many employers may hope to circumvent these concerns by outsourcing AI hiring tools to third-party vendors. The technical assistance, however, states that employers may still be liable for their agents’ violations of federal employment discrimination laws. Employers therefore should take steps to determine whether vendors or developers are building and auditing their AI tools for discriminatory impact. The EEOC recommends specifically asking vendors whether they relied on the four-fifths rule, or on other standards recognized by courts, such as tests of statistical significance, when auditing their products.

Tips and Takeaways

The technical assistance urges employers to take a hands-on approach to auditing AI usage in their hiring processes. The following tips may aid employers in that task:

  • Maintain human oversight of AI tools. Employers should ensure automated hiring processes are subject to consistent review, not just to make sure these tools are providing accurate insights, but also to ensure that they are not reflecting existing biases of individuals building and maintaining the tools. Performing self-audits is crucial for employers to prevent discriminatory hiring practices.
  • Do not delegate compliance to AI vendors. Employers should perform due diligence around which AI tools they implement by asking vendors pointed questions about testing and audit practices with a focus on disparate impact. Employers also should review their commercial contracts with AI vendors to ensure that indemnities and other contractual allocation of risk are properly addressed.
  • Continue organizational bias training. Both implicit and explicit bias training are essential to identify potentially discriminatory practices and should form the foundation when building meaningful audit procedures for hiring practices, especially automated decision-making tools.

For questions about how artificial intelligence presents both risks and opportunities for employers, contact the authors of this article.
