Younger generations consider AI more trustworthy, ethical
Apparently, many job candidates would prefer to be interviewed by a computer instead of a real, live person.
That’s according to a study by Intensions Consulting and futurist speaker Nikolas Badminton that found more than 30 per cent of Canadian workers under the age of 40 would rather be hired or have their work assessed by an unbiased computer program than by a workplace manager.
And 26 per cent believe an unbiased computer program would be more trustworthy and ethical than their current leaders and managers, found the study, based on a survey of 2,299 people.
The number was significantly higher among people aged 20 to 39 because that group shows greater acceptance of technology overall, said Nick Black, managing partner of Intensions Consulting in Vancouver.
“In the younger end of generation X and all of generation Y, there’s a much higher level of acceptance of technology,” he said. “If you trust technology to potentially find your spouse in online dating or to navigate you across an entire city, why couldn’t it also assess your job performance? Navigating you through the world is a fundamentally more difficult technological task than assessing the outcomes of your employment.”
And while 31 per cent of workers under 40 feel an unbiased computer program is more trustworthy and ethical than a workplace manager, that doesn’t mean 69 per cent would prefer the traditional interview format.
“There’s approximately 40 per cent that disagree and 30 per cent that sit right in the middle that are neutral,” said Black.
“I find that very interesting, especially when you’re looking at the question of trust and ethics and, essentially, on the one hand trusting artificial intelligence and on the other hand trusting a human being. It’s not like human beings are really winning this. There’s 30 per cent right in the middle. And... whenever you see those neutral responses, they’re the ones that through marketing or through doing a very good job with the product, you can shift pretty easily into another area of the spectrum. So, as a researcher, if I was stepping into the future a little bit with the data, I think that 30 per cent is ripe to be moved over into the agree area.”
The study’s results were particularly interesting in the context of employees’ relationship with the workplace, he said.
“When people reflect on their workplace, they find that there is racism, there’s sexism, there’s nepotism, there’s ego and power-playing out there. So the idea that these people with all these traits and biases could all of a sudden just put all of that aside and impartially assess you as a candidate or impartially assess your performance, that in no way is there any of these biases at play is a fallacy. I think, for younger Canadians, they’re seeing this.”
And the technology to hire and assess performance already exists, said Black, citing a 2014 study from researchers at the University of Minnesota, the University of Toronto and the Educational Testing Service. That analysis of 17 studies of applicant evaluations found a simple equation outperformed human hiring decisions by at least 25 per cent, regardless of whether the job was on the front line, in middle management or in the C-suite.
“Irrespective of the data that’s being put forward to an interviewer and all the things that are telling you one thing or another thing about an individual, human beings are incredibly bad at overcoming their own subjective biases,” said Black.
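As a rough sketch of what such a "simple equation" looks like in practice, the snippet below combines a few candidate scores with fixed weights. The predictors and weights are hypothetical stand-ins for illustration, not the model from the 2014 study.

```python
# Rough sketch of a "simple equation" for applicant evaluation: a fixed,
# weighted sum of standardized predictor scores. The predictors and
# weights here are hypothetical, not those from the 2014 study.

WEIGHTS = {
    "cognitive_test": 0.40,        # assumed weight
    "structured_interview": 0.35,  # assumed weight
    "work_sample": 0.25,           # assumed weight
}

def mechanical_score(scores):
    """Apply the same fixed equation to every candidate."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

candidates = {
    "A": {"cognitive_test": 0.8, "structured_interview": 0.6, "work_sample": 0.9},
    "B": {"cognitive_test": 0.5, "structured_interview": 0.9, "work_sample": 0.7},
}

# The ranking depends only on the scores, never on an interviewer's impression.
ranking = sorted(candidates, key=lambda c: mechanical_score(candidates[c]), reverse=True)
print(ranking)  # ['A', 'B']
```

The appeal of the mechanical approach is consistency: the same weights apply to every candidate, which is precisely what Black says human interviewers struggle to do.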
However, computers present issues of their own, particularly around transparency.
“Not all AI algorithms are equally transparent. There is a wide recognition in the field that AI systems need to be designed with human ethics in mind, and transparency is a key aspect of ethical AI,” said Gabriel Murray, assistant professor at the University of the Fraser Valley in Abbotsford, B.C.
“AI and machine learning systems excel at learning patterns and making predictions from large amounts of data. Some of these patterns may not be obvious to humans.”
Some algorithms — such as neural networks — provide accurate predictions or recommendations but the process behind the prediction is incredibly opaque. Decision trees, in contrast, are not capable of the same performance but are very transparent.
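A minimal sketch of that contrast, using scikit-learn on synthetic data (the feature names are invented stand-ins, not real HR records): the decision tree's logic prints as readable if/else rules, while the neural network's learned weights offer no comparable explanation.

```python
# Sketch of the transparency gap on synthetic data, using scikit-learn.
# The feature names are invented stand-ins for HR-style records.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
features = ["tenure", "test_score", "peer_rating", "output"]

# A shallow decision tree prints as human-readable if/else rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))

# A neural network may predict as well or better, but its "explanation"
# is just matrices of learned weights, opaque to a human reviewer.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0).fit(X, y)
print([w.shape for w in net.coefs_])
```

The trade-off is exactly the one described above: the tree gives up some predictive power in exchange for rules a person can audit.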
“You can draw a parallel with the way statistics and data analysis have revolutionized sports such as baseball and hockey… deriving player rankings that are often very different from traditional rankings. These same types of techniques from sports analytics can be employed in human resources analytics,” said Murray.
“AI and machine learning can help the HR industry establish best practices and predictors of employee success by combining human expertise with data-driven analysis.”
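To see the sports parallel concretely, here is a toy example with invented player stats: a rate-based ranking (points per 60 minutes) reorders a traditional ranking by raw point totals.

```python
# Toy version of the sports-analytics parallel: a rate-based ranking
# (points per 60 minutes) reorders a traditional ranking by raw points.
# All player stats are invented for illustration.
players = [
    {"name": "P1", "points": 60, "minutes": 1500},
    {"name": "P2", "points": 48, "minutes": 900},
    {"name": "P3", "points": 55, "minutes": 1400},
]

traditional = sorted(players, key=lambda p: p["points"], reverse=True)
analytic = sorted(players, key=lambda p: 60 * p["points"] / p["minutes"], reverse=True)

print([p["name"] for p in traditional])  # ['P1', 'P3', 'P2']
print([p["name"] for p in analytic])     # ['P2', 'P1', 'P3']
```

The same idea carries into HR analytics: score output relative to context rather than by raw totals.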
Human expertise needed
Maintaining that human expertise is crucial, said Rowan O’Grady, president of Hays Canada in Toronto.
“If you only rely on technology to make hiring decisions, you lose that personal connection, that gut feeling,” he said. “The interviewing process is completely riddled with bias. People who are interviewing, the whole exercise is to rule certain people in and out. That is a bias. Interviewing is almost defined by bias.”
There is, however, room for this type of technology, said O’Grady.
“Any kind of technology that brings objectivity to the process is a good thing. Anything that can produce something quantifiable is very useful.”
One aspect of hiring AI may never be able to address, however, is fit, he said.
“Fit is a huge influencing factor on hiring and retention,” said O’Grady. “Having technology that helps you screen candidates, match candidates, assess candidates is fantastic but, at the end of the day, you have an individual who’s working for another person.”
As much as an algorithm may remove bias from the hiring or performance management process, there’s still the issue of the candidate to address, he said.
“Candidates don’t assess fit themselves. They frequently get it wrong and things don’t work out.”