New project aims to find out which filters could be most damaging
When it comes to artificial intelligence (AI) and recruitment, the process is not perfect.
Despite the many benefits of this approach, companies such as Amazon have learned the hard way that the data that comes out is only as good as the data that goes in.
To that end, the Inclusive Design Research Centre (IDRC) at Ontario College of Art & Design University (OCAD U) was recently given a planning grant from Kessler Foundation and Microsoft to explore bias in current hiring systems.
Why? Because “current automated profiling is usually based on existing data sets and cultural compatibility, which often discriminate against people with disabilities and other marginalized populations.”
"We hope to prompt a mindset change in hiring and recruitment," says Jutta Treviranus, director of the IDRC and professor in the faculty of design at OCAD U. "This research aims to help counteract the bias against the difference of disability in AI hiring applications and demonstrate AI algorithm alternatives that do not optimize past patterns but encourage the novel, exploratory and divergent.”
For whatever reason, people think these systems are more objective, she says: “We think that it's a way to make things easier, but we forget that they can do a great deal of harm as well.”
Bias against disabilities
While there has been much discussion around the bias found in AI systems when it comes to issues such as race and gender, disability is a lesser-known concern.
“I call this the Achilles heel of AI,” says Treviranus.
“Because the only common data point of disability is sufficient difference from the norm or the average, systems are not made for you, and AI, predictive logic, probability and optimization are inaccurate or wrong. So one unfortunate thing that has happened is that many of the measures, the protections that are put into place against bias, are not considering disability,” she says.
“So there is this dangerous issue that people think, ‘Oh, we have an AI auditing system’ or ‘We have privacy protections, and now we don't need to worry about AI bias anymore,’ because that company has been certified or that HR tool has been certified as having been audited for AI ethics.”
The types of protections that exist against bias, the AI auditing systems, depend upon a comparison between how the majority is treated and how a particular protected identity group is treated, says Treviranus.
“It's dependent upon these data clusters, meaning that how you identify a group is with specific characteristics that you can see in the data... So it's comparing a data cluster regarding someone of a particular race, language, age, gender, etc. But the problem with disability is that there is no such bounded border or classification or cluster because the defining characteristic of disability is diversity and difference,” she says.
“There's no commonality, really, other than you're out at the edge or you're an outlier, a small minority; even though, collectively, disability is one of the most common minorities out there and it touches everybody. So, the systems that we've created to ensure that there isn't discrimination are not catching discrimination against people with disabilities.”
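To illustrate why cluster-based audits can miss this, consider a minimal sketch (not from the IDRC project; the data, column names and threshold below are hypothetical). A standard group-fairness check compares selection rates between labelled groups, but an applicant whose profile simply sits far from every cluster never forms a group the audit can compare.

```python
import pandas as pd

# Hypothetical screening results (invented data): "group" is the protected
# attribute an auditor can see; "distance_from_norm" stands in for how far an
# applicant's profile sits from the historical "typical hire".
df = pd.DataFrame({
    "group":              ["A", "A", "A", "A", "B", "B", "B", "B"],
    "distance_from_norm": [0.1, 0.2, 0.3, 2.5, 0.2, 0.1, 0.4, 2.8],
    "selected":           [1,   1,   1,   0,   1,   1,   1,   0],
})

# A typical group-fairness audit compares selection rates across the labelled
# groups -- and passes, because groups A and B are treated identically.
rates_by_group = df.groupby("group")["selected"].mean()
print(rates_by_group)  # A: 0.75, B: 0.75 -> "no bias detected"

# But the applicants actually being screened out are the outliers, who share
# no common label the audit can cluster on, so the exclusion stays invisible.
outliers = df[df["distance_from_norm"] > 1.0]
print(outliers["selected"].mean())  # 0.0 -- excluded, yet unmeasured
```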
For workers who identify as having a disability, a new site was recently launched to help those prospective employees find meaningful work. The Ontario Disability Employment Network (ODEN) in Whitby, Ont. partnered with a U.S.-based organization, Our Ability, to offer an AI bot that scans an employer’s career portal, brings that information back to the Jobs Ability Canada site and offers job-matching services to visitors.
Machine learning ‘inherently problematic’
There's something inherently problematic in machine learning and deep learning, in that it's based upon big data, says Treviranus.
“When you're using population data, and you're trying to make a competitive decision or a probabilistic predictive decision, it's currently using data from the past. So, you're optimizing, amplifying and accelerating the past… but you're also optimizing the discrimination and the exclusion. So that isn't very adaptive, it's not very future-friendly, and it reduces the type of innovation you need to make a substantive change.”
HR hiring tools or AI hiring apps are not catching the discrimination against disability because, in essence, it is discrimination against difference and diversity, she says.
“While organizations are saying… ‘We want to hire more, we want to increase the diversity, inclusion and equity within our company,’ the AI hiring apps are actually doing the opposite. So, they are doing things like allowing you to filter for culture fit. Well, culture fit means that you are continuing the particular culture within your team or within your organization. And you're restricting anyone that may not fit into the culture,” says Treviranus. “It's propagating towards a monoculture.”
Combatting potential bias
As with any area or demographic that might lead to a bias in AI, disability is an area that needs to be addressed, says Somen Mondal, general manager, talent intelligence, at Ceridian in Toronto.
“You need to audit for results to make sure that there isn't an adverse impact being introduced to a specific demographic,” he says.
“Typically, the problem is that the companies that don't specialize specifically in talent acquisition or HR types of data… don't audit their results. If this is something of a side project in a company, they're just doing it. Factors such as name, age and sex are variables in the data, and when you're putting it into the system, you should remove those kinds of data points.”
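As a minimal sketch of that kind of pre-processing, assuming a pandas DataFrame with hypothetical column names (the article does not describe Ceridian’s actual pipeline), direct identifiers can be dropped before the data reaches any scoring model:

```python
import pandas as pd

# Hypothetical applicant table; column names are illustrative only.
applicants = pd.DataFrame({
    "name":       ["J. Smith", "A. Lee"],
    "age":        [52, 29],
    "sex":        ["F", "M"],
    "experience": [20, 4],
    "skills":     ["python;sql", "java;aws"],
})

# Remove direct demographic identifiers before the data is fed to a model.
DEMOGRAPHIC_COLUMNS = ["name", "age", "sex"]
model_input = applicants.drop(columns=DEMOGRAPHIC_COLUMNS)

print(model_input.columns.tolist())  # ['experience', 'skills']

# Dropping these columns alone doesn't remove proxies (e.g. a graduation year
# correlating with age), which is why auditing the results still matters.
```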
Employers also have to look at the demographics from one talent goalpost to another, says Mondal.
“A talent goalpost could be someone applied to a position and another talent goalpost could be someone was interviewed and someone was hired. You want to look at the demographics from one state or one goalpost to another, and measure them to ensure that an adverse impact is not being introduced into the selection process,” he says.
“Whether it's a human or an algorithm, it's the same method: to make sure that you're not introducing an adverse impact.”
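One common way to put numbers on that check is sketched below; the funnel counts are invented for illustration, and the four-fifths threshold is a widely used rule of thumb rather than anything Mondal specifies. The idea is to compare selection rates between demographic groups at each pair of adjacent funnel stages.

```python
# Hypothetical funnel counts at two "goalposts": applied -> interviewed.
funnel = {
    "group_A": {"applied": 400, "interviewed": 80},
    "group_B": {"applied": 100, "interviewed": 10},
}

def selection_rate(stage_counts: dict) -> float:
    """Share of applicants who advanced to the next goalpost."""
    return stage_counts["interviewed"] / stage_counts["applied"]

rates = {group: selection_rate(counts) for group, counts in funnel.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)                       # {'group_A': 0.2, 'group_B': 0.1}
print(f"Impact ratio: {ratio:.2f}")

# The "four-fifths rule" treats a ratio below 0.8 as a sign of adverse impact;
# here 0.1 / 0.2 = 0.5, so this step of the funnel would warrant a closer look
# -- whether the screening was done by a person or by an algorithm.
```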
There’s also a big misconception that AI is trying to replace the human interaction in hiring, says Mondal.
“Computers are never going to be able to figure out cultural fit and never be able to negotiate or build relationships. So… the human element is extremely important. We'd like to get as much of the top-of-funnel bias out, reduce it as much as possible, and get the most qualified candidates in front of the recruiter so they can spend more time with them.”
AI hiring apps
To better understand the damage done by AI hiring apps with respect to people with disabilities, OCAD U will be looking to answer several questions, says Treviranus. For example, how are people with disabilities filtered out? And what are the types of filters that are most damaging?
“The second part is that we want to identify what is unique about the characteristics an AI engine is discriminating against. There's prior research that looks at race, gender, language and origin within AI hiring systems, and at strategies to ensure there isn't discrimination, or to reduce discrimination against diversity in those areas. But the issue with disability is that the data predictions with respect to ability or skills are also a problem.”
For example, if a person has a disruption in their education due to an episodic disability or the need to go into rehab, that's seen as a bad mark, she says.
“Ability is the last of the areas to see diversification and a different approach to doing something; all of the signs that the AI is looking at in terms of ‘Is this person most capable?’ or ‘Is this person capable?’ are marks against you if you have a disability, even though you are willing and able to do the job that's required.”
In an effort to eliminate prejudice from the candidate-interview process, a remote jobs website is looking to make the procedure fairer. San Francisco-based job site Torre is asking employers to sign its Frank Artificial Intelligence in Recruiting (FAIR) manifesto, which includes five key components: disclose when you’re using AI; make the factors transparent; disclose rankings to candidates; detect bias; and reduce discrimination systematically.