'The technology is emerging so quickly and decisions are being made so quickly that we need to get in front of them before they cause harm'

Artificial intelligence (AI) assistants will soon be available to many workers, and employers must urgently establish ethical guidelines around their use, according to one Canadian academic.
“We need ethical guidelines because the technology is emerging so quickly and decisions are being made so quickly that we need to get in front of them before they cause harm,” says Isabel Pedersen, professor in the Faculty of Social Science and Humanities and director of the Digital Life Institute at Ontario Tech University.
“And the harm could be to individuals.”
Currently, two-thirds (66 per cent) of professionals feel comfortable leveraging AI in their roles, according to a previous report.
What can AI assistants do?
Advanced AI assistants are artificial agents with a natural language interface, Pedersen explains. This means they can exchange questions and answers in a conversation with a human subject.
While all AI chatbots – like ChatGPT – can already do that, “advanced AI assistants go to the next step and can plan and execute sequences of actions on the user's behalf, within a domain, and in line with expectations.”
She notes that advanced AI assistants haven't emerged yet, but they're “being planned for”.
A previous report from OntarioMD (OMD) noted that with AI scribe technology, doctors and nurse practitioners spend 70 to 90 per cent less time doing paperwork.
Asked about the pace of employer adoption of AI assistants, Pedersen declined to give specifics. However, she points to data around the early adoption of ChatGPT.
“ChatGPT gained a million users in five days after the launch,” she says.
Meanwhile, it took Netflix 3.5 years to reach one million users, Airbnb 2.5 years, and Facebook 10 months to accomplish the same feat, she says, citing data from Statista.
In January 2023, ChatGPT reached 100 million monthly active users just two months after launch. That made OpenAI’s product the fastest-growing consumer application in history, Reuters reported.
Agents are “the next level of maturation or evolution of the digital assistants,” says Keith Bigelow, chief product officer at Visier, in a previous interview with Canadian HR Reporter.
What are the concerns about AI assistants?
The evolution of AI technology also poses new threats to users and organizations.
In the case of AI assistants, users would have to provide more personal information.
“They are going to be geared more to personalized interaction with the user, so they're planned around the user's personal life [and] may have more information about what a user does in their everyday life,” says Pedersen.
“There's more investment on the part of the user giving information to this agent, or assistant, in order for that agent to be more able to help.”
Many workers enter sensitive company information into public AI platforms, potentially exposing their employers to harm, according to a previous KPMG report.
How can businesses ensure the ethical use of AI?
Ethics is a big issue when it comes to using AI tools, but Pedersen notes that the concept of ethical work is nothing new.
“Corporate social responsibility or corporate responsibility has long been in place for businesses, so I don't think it's a new concept for companies to have ethical guidelines,” says the Ontario Tech University professor.
“We've had business ethics, communication ethics, in place, and so it would be in keeping with the way businesses operate now.”
But when it comes to ethics in AI use, employers may need help. She notes that there is growing research about ethical AI alignment, the process of “encoding human values and goals into large language models”.
“There are new practices, such as algorithmic assessment tools. So before a company decides to use an AI algorithm, there are assessment tools which are being deployed…
“There are organizations that are helping businesses understand AI literacy, so that they will be able to understand what AI alignment is, and how we can avoid using AI technologies that could lead to bias or discrimination.
“The Canadian government is also starting to publish and make available assessment tools for its own operations, for public service and business. Employers need to align with these kinds of value systems and practices.”
Over-reliance on GenAI is the top ethical concern for workers, according to a previous report.