Algorithmic bias is now a legal battleground. In Mobley v. Workday, a federal court allowed age discrimination claims to proceed against the HR software provider, framing the case as an early test of how civil rights law applies to AI-driven hiring. The plaintiffs argue that Workday’s screening technology filtered out older candidates across hundreds of job applications, often within minutes, before any human review.
AI is embedded across the hiring landscape—87% of employers now rely on it to evaluate candidates—but statutory frameworks haven’t kept pace with the scale or structure of automated decision-making. Mobley forces the courts to consider a key legal question: When an algorithm drives employment outcomes, can the vendor be held liable under anti-discrimination laws, even if the employer technically controls the final decision?
That question cuts to the heart of accountability in modern hiring, where algorithms don’t just assist. They decide.
Discrimination by Algorithm: What the Workday Suit Alleges
Derek Mobley, a seasoned professional with a degree from Morehouse College and experience across finance, IT, and customer service, submitted more than 100 job applications through Workday’s platform. He was rejected every time—often within minutes, sometimes in the middle of the night—without an interview.
He’s not alone. Four additional plaintiffs, all over 40, have joined the case, describing similar patterns of near-instant rejections and alleging that Workday’s applicant screening tools disproportionately excluded them based on age.
The claim centers on disparate impact. Plaintiffs argue that Workday’s algorithm functionally deprioritized older applicants, effectively excluding a protected class from employment consideration. According to the complaint, the discrimination flowed from the software’s design—ranking, screening, and filtering in ways that penalized age with no human intervention.
Workday denies responsibility. It maintains that the hiring decisions rest with its clients, not with the platform. The company argues that its technology simply compares candidate qualifications to client-defined job requirements and does not identify or act on protected characteristics.
That position hasn’t ended the case. A federal judge in California allowed the lawsuit to proceed as a collective action, rejecting Workday’s attempt to force individualized claims. While intentional discrimination claims were dismissed, the disparate impact allegations survived. The litigation now moves forward with the central premise intact: that exclusion was systemic, not accidental, and that algorithmic tools can trigger liability even when a vendor sits one step removed from the hiring decision.
Disparate Impact in the AI Era
The disparate impact framework under Title VII and the ADEA allows plaintiffs to challenge facially neutral employment practices that disproportionately harm protected groups. When older applicants are consistently screened out by a hiring system—even without explicit bias or intent—that practice can still trigger liability if the impact is significant and unjustified by business necessity.
For decades, this doctrine has functioned as a backstop against systemic exclusion. But AI introduces new friction.
Algorithmic tools often rely on proxy variables, inputs that correlate with protected traits like age or race without naming them outright. Educational background, location, word choice, and even resume formatting can operate as stand-ins for demographic characteristics. The result is a model that appears neutral on its face but consistently favors younger candidates, especially if trained on data reflecting a younger existing workforce.
That training data is central to the issue. When a model is built on past hiring decisions, it replicates whatever bias those decisions reflect.
If older applicants were historically overlooked, the algorithm may infer that youth is a positive hiring signal. And because many of these systems are black boxes—protected as proprietary or too complex to interpret—plaintiffs face an evidentiary wall. They can’t point to a specific exclusionary rule or score threshold; they have only patterns, outcomes, and statistical disparities.
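The mechanism is easy to demonstrate. The sketch below uses entirely synthetic data and hypothetical resume fields (graduation year, years of experience) to show how a model that never sees age can still learn to reward youth when it is trained on historically skewed outcomes; it illustrates the general problem, not any particular vendor's system.

```python
# Synthetic illustration only: hypothetical feature names, made-up numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000
age = rng.integers(22, 65, size=n)
grad_year = 2046 - age                                  # never names age, encodes it exactly
experience = np.clip(age - 22 + rng.normal(0, 2, n), 0, None)

# Historical outcomes that favored younger applicants regardless of qualifications.
hired = (rng.random(n) < np.where(age < 40, 0.30, 0.10)).astype(int)

# The disparity is fully visible through the proxy alone.
recent = grad_year > 2006                               # roughly: applicants under 40
print(f"hire rate, recent grads:  {hired[recent].mean():.2f}")   # ~0.30
print(f"hire rate, earlier grads: {hired[~recent].mean():.2f}")  # ~0.10

# An age-blind model trained on these outcomes learns to reward later graduation years.
X = StandardScaler().fit_transform(np.column_stack([grad_year, experience]))
weight_on_grad_year = LogisticRegression().fit(X, hired).coef_[0][0]
print(f"model weight on grad_year: {weight_on_grad_year:+.2f}")  # positive: younger scores higher
```

The model is never told anyone's age, yet it reproduces the historical skew, which is exactly the evidentiary posture plaintiffs describe: no explicit rule to point to, only outcomes.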
In Mobley, the plaintiffs argue this is enough. They claim the volume and speed of rejections across hundreds of applications support an inference that Workday’s system functionally screened them out based on age. Whether that showing suffices for liability will depend on how courts apply the disparate impact standard to algorithmic systems.
The EEOC has taken steps to clarify that AI tools fall within the scope of Title VII enforcement. In recent guidance, the agency emphasized that employers cannot contract away their obligations simply by outsourcing hiring to algorithmic systems. Earlier cautionary examples, such as Amazon’s abandoned resume screener that penalized female applicants, reinforce that AI systems often replicate, rather than resolve, historical bias.
Platform Liability and the Vendor Defense
Workday’s defense rests on role distinction. The company argues that it provides tools, not decisions. Clients choose whom to interview and hire. The software only executes client-defined criteria and delivers efficiencies at scale. That line may hold in theory, but in practice, courts are increasingly willing to examine where control ends and causation begins.
Plaintiffs in Mobley assert that Workday’s algorithm didn’t just assist employers—it functioned as a gatekeeper. They allege the tool didn’t merely flag candidates but filtered them, scored them, and eliminated them before any human interaction. That level of involvement reframes the software’s role from passive infrastructure to active participant.
Several established doctrines give courts ways to test that role. Joint employer doctrine asks whether a third party exerts meaningful control over employment conditions. Vicarious liability principles evaluate whether an actor contributed to a harm in a foreseeable and proximate way. Even product liability analogies may be relevant here: if a product’s design predictably causes exclusion, should its maker be held accountable?
SaaS contracts often disclaim this kind of responsibility. Vendors like Workday typically assert that the employer controls job descriptions, qualification criteria, and final decisions.
But courts may weigh those disclaimers against operational reality. If the system’s architecture predictably yields discriminatory results, and if the vendor knows or should know that outcome is likely, then disclaimers may carry little weight.
What Employers and Counsel Should Do Today
The Mobley case may not yet define liability boundaries, but it has made one thing unavoidable: legal exposure grows as employers rely on systems they don’t fully understand. Waiting for a ruling won’t protect companies already using automated hiring tools. Counsel should act now to shape how those systems are deployed and defended.
The starting point is visibility. Most companies using third-party screening software haven’t audited how it scores candidates or what traits it may privilege. Legal teams should push for clarity:
● What criteria are being used to rank applicants?
● How does the system weigh experience versus education?
● Are rejection patterns monitored across age, race, or disability? (See the sketch after this list.)
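That last question is the easiest to make concrete. The sketch below uses made-up numbers and illustrative group labels to show a common starting point: comparing selection rates across age bands and flagging any group whose rate falls below four-fifths of the highest group’s, the long-standing rule of thumb regulators use to spot potential adverse impact. It is a screening heuristic, not a legal threshold.

```python
# Illustrative only: made-up counts, simplified grouping, not a compliance standard.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group_label, advanced_bool) pairs from an ATS export."""
    applied, advanced = Counter(), Counter()
    for group, passed in outcomes:
        applied[group] += 1
        advanced[group] += int(passed)
    return {g: advanced[g] / applied[g] for g in applied}

def impact_ratios(rates):
    """Each group's selection rate relative to the highest-rate group (four-fifths rule flags < 0.8)."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes grouped into two age bands.
sample = ([("under_40", True)] * 60 + [("under_40", False)] * 140
          + [("40_plus", True)] * 15 + [("40_plus", False)] * 185)

rates = selection_rates(sample)
for group, ratio in impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

Run against real applicant-tracking data on a recurring schedule, the same calculation gives counsel an early warning before a pattern becomes a complaint.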
Vendor contracts deserve scrutiny. Boilerplate disclaimers that the client retains control carry less weight if the system effectively prevents candidates from reaching a human reviewer. Counsel should assess not just who owns the hiring decision on paper, but who shapes it in practice, and ensure contracts reflect those realities.
Risk mitigation also requires documentation. Employers should preserve records showing how hiring tools are configured, how often settings are reviewed, and when human intervention occurs. Counsel should work with HR and compliance leaders to formalize internal reviews, not as performative audits, but as evidence of diligence.
The role of outside counsel isn’t just to react when clients face litigation. It’s to ensure those clients can explain, with confidence and specificity, how their systems operate. That’s what opposing counsel will demand. That’s what a court will scrutinize. And that’s what this moment requires.
Building a Litigation-Ready AI Hiring Process
The most effective way to manage AI risk is to structure hiring systems with legal scrutiny in mind from the start. That means treating AI not as a replacement for hiring managers or recruiters but as a tool—one that supports decision-making, never substitutes for it.
The core problem in the Workday suit isn’t just the existence of bias—it’s the lack of meaningful human oversight. Plaintiffs allege they were screened out by an automated process that made categorical judgments with no person reviewing qualifications. That kind of delegation invites liability, especially when patterns of exclusion align with protected traits like age.
To mitigate that risk, employers need hiring systems that preserve decision-making authority for humans. That starts with designing workflows where AI performs a supporting role: sorting, flagging, or summarizing, not disqualifying. Any system that scores or ranks candidates should route borderline or outlier profiles for human review, particularly when the inputs or outputs correlate with age, disability, or race. Automating those steps may create efficiency, but it also strips out the discretion that protects employers from disparate impact claims.
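In practice, that principle can be expressed in a few lines of routing logic. The sketch below uses hypothetical thresholds and field names; the important feature is what it lacks: there is no branch that rejects a candidate without a person involved.

```python
# Hypothetical thresholds and field names; the point is the absence of an auto-reject path.
from dataclasses import dataclass

@dataclass
class Candidate:
    candidate_id: str
    score: float                      # produced by whatever screening model is in use

ADVANCE_THRESHOLD = 0.75              # illustrative; real cutoffs should be validated and documented

def route(candidate: Candidate) -> str:
    """The system may surface candidates, but it never rejects one on its own."""
    if candidate.score >= ADVANCE_THRESHOLD:
        return "advance_to_recruiter"       # strong matches still reach a person
    return "queue_for_human_review"         # borderline and low scores get human judgment, not auto-rejection

print(route(Candidate("c-102", 0.31)))      # queue_for_human_review
```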
Legal counsel should push for explainability at every layer of the process. That includes maintaining version histories of models, documenting how inputs are selected, and preserving logs that show how hiring decisions are made.
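One way to operationalize that is to write every automated screening step to a structured audit record that can later be explained, and produced, in discovery. The sketch below is illustrative only; the field names are assumptions, and the point is what gets captured: the model version, the inputs actually evaluated, the output, and whether a human ever looked at the file.

```python
# Illustrative record format; field names are assumptions, not a vendor schema.
import json
from datetime import datetime, timezone

def log_screening_event(candidate_id: str, model_version: str, inputs: dict,
                        score: float, routed_to: str, human_reviewer: str | None) -> str:
    """Return one audit-trail entry for a single automated screening step."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "model_version": model_version,   # which model/configuration produced the score
        "inputs": inputs,                 # the criteria actually evaluated, not just the job posting
        "score": score,
        "routed_to": routed_to,           # what the system did next
        "human_reviewer": human_reviewer, # None means no person looked, which is itself worth knowing
    }
    return json.dumps(record)

print(log_screening_event(
    candidate_id="c-102",
    model_version="screener-2024-06-01",
    inputs={"years_experience": 12, "required_skills_matched": 4},
    score=0.31,
    routed_to="queue_for_human_review",
    human_reviewer=None,
))
```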
Ultimately, defensibility doesn’t come from disclaimers. It comes from structure. Employers that treat AI as a co-pilot are in the strongest position to defend their hiring decisions. And attorneys who help build that structure are raising the standard for what responsible use of AI in employment actually looks like.