Workday faces class-action for AI screening: should Canadian employers be concerned?

'It's not just a program, it's actively engaged in the recruitment process': employment lawyer recommends due diligence, human element to prevent AI discrimination

Workday is facing a proposed class-action suit in the United States alleging that its algorithmic job-applicant screening software is biased.

U.S. District Judge Rita Lin said on Friday that Workday can be considered an employer in the U.S., since its algorithm-based screening software makes decisions about which job applicants its clients should consider for employment and which they should not.

A job-hopeful who had submitted more than 100 applications to Workday-listed postings (for jobs he said he was qualified for) alleged that Workday’s software screened him out because he is Black, over 40, and has anxiety and depression. Workday sought to have the suit dismissed, arguing that its software is not an “employment agency” but a basic tool, and therefore not subject to the human rights laws that govern employers.

The court disagreed and will allow the class action to go forward.

AI job applicant screening tools prone to bias

The suit against Workday is the first proposed class action to challenge the use of an AI screening program, Reuters reported.

And it could have ramifications in Canada since Workday and other third-party AI screening tools are being used with increasing regularity here, says Zoya Alam, employment and labour lawyer at Goulart Workplace Lawyers in Oakville, Ont.

“A lot of companies and HR are beginning to use a lot of AI-powered technology to drive recruitment, which in some ways is beneficial,” she says, “but I think where the problem is arising is that AI uses data that it's trained on.”

The judge in the Workday case addressed this problem, acknowledging that Workday’s software uses the company’s own data to train its screening algorithm, perpetuating any biases that may already exist. Alam adds that the coders themselves may also be inadvertently adding more bias to the algorithms as they build the system.

“As they're putting in these codes, for example, if they're looking at indicators of use of vocabulary or language or past income in their algorithm, then it can systemically discriminate against certain people of certain socioeconomic status, people of certain races, genders, things like that,” she says.

“What AI does is it uses that data, which could have conscious or unconscious biases, and then it starts to produce results.”
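
As an illustration of what Alam describes, here is a minimal sketch in Python (using synthetic data and a hypothetical proxy feature, not anything from the Workday case) of how a screening model trained on past hiring outcomes can absorb bias through a feature such as past income:

    import random

    random.seed(0)

    # Synthetic history: group B applicants were hired less often, and a
    # proxy feature (high prior salary) tracks group membership.
    history = []
    for _ in range(5000):
        group = "A" if random.random() < 0.5 else "B"
        high_salary = random.random() < (0.7 if group == "A" else 0.3)
        hired = random.random() < (0.6 if high_salary else 0.2)
        history.append((high_salary, hired))

    # "Training": measure how much the proxy feature raised the hire rate.
    def hire_rate(rows):
        return sum(1 for _, hired in rows if hired) / len(rows)

    with_proxy = [r for r in history if r[0]]
    without_proxy = [r for r in history if not r[0]]
    weight = hire_rate(with_proxy) - hire_rate(without_proxy)

    # Screening new, equally qualified applicants: the learned weight on
    # the proxy alone produces very different pass rates by group.
    def passes_screen(group):
        high_salary = random.random() < (0.7 if group == "A" else 0.3)
        score = weight if high_salary else 0.0
        return score > weight / 2

    for group in ("A", "B"):
        rate = sum(passes_screen(group) for _ in range(10_000)) / 10_000
        print(f"group {group}: pass rate {rate:.0%}")  # roughly 70% vs 30%

No one told the model to consider group membership; because past salary correlates with group in the synthetic history, equally qualified applicants pass the screen at very different rates.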

AI screening tools making active hiring decisions

“With the ruling, the judge said Workday is not just a tool, as it's actually making active decisions and participating in the process about whether to recommend candidates to move forward while rejecting others,” says Alam.

“So, it's not just a tool, it's not just a program, it's actively engaged in the recruitment process.”

Citing Amazon’s failed AI hiring program, scrapped in 2017 because it was screening out female candidates for tech jobs, Alam points out how existing bias can negatively affect the diversity of a workforce.

In that example, the program was trained on 10 years of applicant history from Amazon’s mostly male tech workforce to model its screening requirements, resulting in an algorithm that penalized resumes containing terms such as “women’s” and downgraded graduates of two women’s colleges, the names of which were not revealed.

Conversely, resumes with words like “executed” and “captured”, which men tend to use, were favoured, according to sources involved in the project.

“What seemed like an innocent training of data actually resulted in discrimination against women applying for technical jobs,” Alam says.
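
To make the mechanism concrete, here is a toy sketch of a keyword scorer whose weights were learned from a male-dominated hiring history. The terms echo the reporting above, but the weights are invented for illustration, not Amazon’s actual model:

    # Hypothetical weights a model might learn from mostly-male hired
    # resumes: terms common in those resumes score high, while "women's"
    # rarely appeared in hired resumes and so picks up a negative weight.
    LEARNED_WEIGHTS = {
        "executed": 1.4,
        "captured": 1.1,
        "collaborated": 0.3,
        "women's": -1.2,
    }

    def score_resume(text: str) -> float:
        """Sum the learned weight of every known term in the resume text."""
        return sum(LEARNED_WEIGHTS.get(word, 0.0) for word in text.lower().split())

    print(score_resume("executed the roadmap and captured market share"))  # 2.5
    print(score_resume("led the women's chess club and collaborated"))     # -0.9

Again, nothing in the code says “penalize women”; the negative weight simply falls out of the historical data the model was trained on.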

In 2022, the Equal Employment Opportunity Commission (EEOC) brought a similar lawsuit against China-based tutoring company iTutorGroup, which agreed last year to pay $365,000 to settle claims that it rejected over 200 job applicants because of their age.

In that case, the company’s software automatically rejected female applicants aged 55 or older and male applicants aged 60 or older, according to the EEOC’s claim.

“These are software, these are tools, but they're engaging actively in this recruitment process,” says Alam, pointing out that employers in Canada could also be liable for choosing to engage the tools in their recruitment process.

Human element lost in AI hiring decisions

In addition to potentially injecting or perpetuating existing biases in job screening algorithms, Alam says that AI recruiting tools take the “human element” out of the process, which can negatively affect hiring outcomes.

“There's no room for human compassion, empathy, or vision of potential when you're using AI for recruitment,” she says. “Because the software is just based off of particular eligibility criteria, you don't have room for movement there, whereas if you have a human sitting on the other end looking at applications, you can look at the application as a whole, outside of just black-and-white algorithms and criteria.”

Alam concedes that AI recruiting tools have become necessary in the current employment landscape, especially in larger organizations, and that they have their place in improving efficiency. But due diligence needs to be employed before the tools are used, she adds. “You can't completely rely on it without doing some background work into it, at least at the forefront.”

Due diligence needed when employing third-party recruiters and AI screeners

An important part of that recommended due diligence is doing thorough background checks on vendors before engaging them and assessing their processes around algorithm training and DEI strategies.

HR and IT should be working together, Alam says, to review and evaluate historical selection rates of vendors and look at assessment reports.
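
One concrete form that review can take, sketched below with invented numbers, is comparing a vendor’s selection rates by group against the EEOC’s “four-fifths rule,” a common U.S. benchmark under which a group’s selection rate below 80 per cent of the highest group’s rate is treated as evidence of adverse impact:

    # Selection-rate audit sketch; the applicant counts are invented.
    def selection_rates(counts):
        """counts maps group -> (applicants, selected); returns rate per group."""
        return {group: selected / applicants
                for group, (applicants, selected) in counts.items()}

    def four_fifths_flags(counts):
        """Flag groups whose rate is below 80% of the best group's rate."""
        rates = selection_rates(counts)
        best = max(rates.values())
        return {group: rate / best < 0.8 for group, rate in rates.items()}

    vendor_history = {"group A": (400, 120), "group B": (350, 60)}
    print(selection_rates(vendor_history))   # group A: 0.30, group B: ~0.17
    print(four_fifths_flags(vendor_history)) # group B flagged (0.17/0.30 < 0.8)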

“You also want to make sure that there's very clear indemnities and liability carve-outs for if something like this does arise,” she says, adding that these indemnities should specify who is responsible if the AI system discriminates against candidates.

In-house due diligence is important too, Alam says: assessing historical and current hiring data as well as closely analysing the hires the AI recruiter recommends. For this she recommends a “secondary” screening stage, after the AI program makes its recommendations for a role but before candidates move to the next step.

“Once Workday has provided those limited applicants that will be proceeding forward, maybe having a secondary check from someone who's in HR, for example, reviewing what the AI software has come up with, just as a way to add checks and balances to what the software is doing,” she says.

“That way you also have insights into what is the software producing for you, in terms of your recruitment tool … even if you don't have access to historical data, then assessing, how is recruitment going for you? What is your workforce looking like, based off of the recruitment coming out of the engagement of these tools?”