Should Canadian employers ban BYOAI?

'People will find workarounds': MIT researcher explains why enterprise-wide genAI solutions work best

Individual productivity AI tools, such as the conversational agent ChatGPT and other digital assistants, are rapidly being adopted by employees across most industries. But due to security risks, some organizations are banning any unsanctioned use of “bring your own AI” (BYOAI).

However, recent research from MIT’s Center for Information Systems Research (MIT CISR) suggests this isn’t the best tactic.

Canadian HR Reporter spoke with MIT CISR research scientist Nick van der Meulen to find out how employers should be approaching the use of various AI tools in their workplaces.

The risks of employee BYOAI use for individual productivity

The current focus of genAI use in the workplace places much of the onus on the worker to get it right, van der Meulen explains, and that means workers must also have the critical thinking skills required to discern whether results are appropriate, correct or ethical.

The inconsistent nature of individual use means productivity gains may not be worth the risk, he explains: while genAI tools can boost individual productivity, their broad, generalist training means workers must invest effort in crafting precise prompts and context, and even then the output may not be usable.

Because of the “broad nature” of BYOAI tools, employees risk exposing their employers to data loss, intellectual property leakage, copyright infringement, and security breaches.

“They’re trained to be generalists, not specialists,” says van der Meulen. “Even though you have a good prompt and good context, still [there is] output that is off, that is not quite right. But often the onus is on the employee to make sure that this is usable.”

Banning BYOAI can lead to shadow genAI

Some employers, as well as governments, attempt to solve these issues by banning BYOAI tools outright – for example, the Canadian government recently announced a ban on DeepSeek on government devices, joining Taiwan, Italy, Thailand and the U.S. Several large firms, including Samsung, Amazon and a number of major banks, have also banned employee use of ChatGPT following data leaks.

However, the MIT researchers warn against that tactic; as van der Meulen states, “Banning is not effective or not even desirable, because people will find workarounds. And then you kind of drive it underground, and then you have no idea of what the risks are.”

The research demonstrates that in a competitive environment where employees increasingly rely on genAI for creative and productivity-related tasks, forbidding its use could drive unsanctioned “shadow genAI” initiatives.

And it’s not just individual employees – repressive AI rules in an organization can result in whole departments or geographical regions participating in shadow genAI.

“Imagine, you're a large, global organization. You have different verticals that you operate in, and you have one part in a specific geography, in a unit, where people are like, ‘Hey, I can just reach out to this technology vendor, and they can help spin something up, and we can create this solution that is just for us,’” van der Meulen explains.

“And in different parts of the company, this might be happening, and you don't have an overarching governance framework, you don't have all of your legal and your privacy and your ethical and compliance teams involved in this.”

What the MIT research suggests is that by taking the “onus of success” away from individual employees and placing it into the hands of the organization, productivity can be increased while mitigating risks.

“It’s focused on strategic objectives,” van der Meulen says.

“And the goal there is, if you integrate it into your processes, into your data, into your workflows, etc., then you're able to get better results as an organization.”

Provide choice and clear guidance for safe employee genAI use

As an alternative to bans, the research advocates a more nuanced approach: clear, enforceable guidelines and lots of options. Organizations can designate enterprise-approved platforms that meet security and compliance standards, reducing the risk of data breaches and intellectual property violations.

But don’t stop there, van der Meulen says. Providing options for employees to tailor their own experiences, within those clear guidelines, is key. For instance, creating an internal “app store” of approved genAI tools can streamline access and provide a central repository of vetted applications for employees.

Create a dedicated internal governance team, he adds, to oversee progress but also to temper the pace of adoption and prevent backtracking: if individual teams race ahead with innovation and implement too quickly, it can set back the whole organization.

“They're not there to slow down experimentation and development. They're there to help by creating standards, by creating these guidelines and standards for teams to follow,” van der Meulen says.

“Sometimes you need individual teams to just go slightly slower, for the organization as a whole to go faster, so that you're not continuously reinventing the wheel and having to go through similar things over and over again, and not wasting resources too.”

Policies to implement genAI company-wide

For instance, one executive in the research mentioned that their organization’s genAI policy categorizes usage into “always okay” and “never okay” zones – “always okay” for publicly available information, and “never okay” for sensitive data such as personally identifiable information or proprietary content.

To implement AI technology solutions at scale, it’s important to start small and build slowly, says van der Meulen: initially, organizations should pilot sanctioned genAI tools in low-risk environments to build familiarity and assess impact.

This controlled rollout allows employers to identify potential pitfalls and adjust policies before broader deployment, he says.

“If you want to do this and you want to create scale, and you want to create this within a safe and secure environment for your organization, specifically, also, if you're trying to create offerings to customers, you want to do it at an enterprise level, rather than kind of in pockets.”

Invest in enhanced AI employee training

MIT has identified what it calls “AI direction and evaluation practices” (AIDE).

As van der Meulen explains, this is a training program that organizations can implement across the workforce, focused on improving domain-specific knowledge and critical thinking skills: “Things like knowing that the generative AI models, how they work, is non-deterministic, so every time you put the same prompt in, you're going to get a slightly different output. Knowing that the data that it’s been trained on can be biased, and that there are risks around that.”

In addition to AIDE-type training, employers should prioritize offering employees enterprise-level tools.

An internal genAI app store can be powerful for this strategy, van der Meulen points out, because purchasing “seats” to a single tool for employees who will rarely use it is not only cost-ineffective, it also drives continued BYOAI use.

“You buy a seat for someone in finance, and they use it once to write a birthday card … they're like, ‘It didn't even do great at that. So why should I keep using it?’” he says.

“And meantime, you're just paying tens of dollars each month for that seat.”

Once pilot programs demonstrate positive outcomes, companies can expand genAI tool usage to support more complex functions, aligning with clearly defined strategic objectives and company values.

But maintaining a dialogue with employees – the ones actually using the tech – is vital, as is collaborative use and testing on an accessible, open platform, such as an internal genAI “app store” that everyone participates in and uses.

“It's not just like, ‘Here is [the] learning module that you'll have to take, and you check off and now you know how to do this really well,’” says van der Meulen.

“It's something that you have to try yourself. You have to apply it in your own context to see what it can and cannot do … it’s a muscle that you have to create.”
