A third of GenAI projects will be dumped by 2025: Gartner report

Which GenAI tech staff should employers be hiring?

At least 30% of generative AI (GenAI) projects currently underway will be dropped by 2025, according to new projections by Gartner.

Poor data quality and inadequate risk controls are among the reasons organizations are scrambling to demonstrate the value of GenAI projects to shareholders, and “the financial burden of developing and deploying GenAI models is increasingly felt,” Gartner VP Analyst Rita Sallam said at a summit last month.

The unpredictability of GenAI costs and the intangible, long-term nature of returns on investment are both contributing to “impatient” executives who may pull the plug on projects after proof of concept, Gartner stated in a press release.

Preparing for the streamlining of GenAI projects

These issues are compounded by the rush to implement AI solutions without fully understanding their potential impact, says Kevin Lee, assistant professor of human resources at the Sauder School of Business.

They also highlight the potential disconnect between organizational goals and the roles assigned to AI professionals brought on to help ramp up GenAI adoption.

"It sounds almost as if many of these new grads are being hired and then being given free reign with the broad notion of ‘Let's save the organization,’" says Lee. "But if they're new grads, they have technical expertise, they may know a lot about this technical product, but as to how it’s going to be useful, or what kind of analyses they should run... these are things that managers themselves need to actually think through a little bit."

Finding “synergy points” between those skills and the business needs of a company is essential when hiring tech and AI talent, says Lee, who recommends HR professionals take their time when hiring these individuals and have more one-on-one conversations about expectations.

“[It’s about] what are those synergy points, and specifically, how can we then make this a job where it's not just ‘Please save the organization’ but rather ‘You have these skills, what can those skills specifically do, in the context of the various different challenges that our organization is facing?’”

Avoid ‘symbolic’ GenAI and tech hires

As companies grapple with the realities of AI integration, there is an increasing trend toward creating roles that serve more as symbols of innovation than as functional necessities, says Lee. He points to what he calls “symbolic occupations,” such as chief AI officers and data scientists, where job descriptions are often vague and the practical impact of the role within the organization is uncertain.

"We're seeing the emergence of these symbolic occupations where, frankly, the job descriptions are vague, and the expectations are high," he says. "The symbolic value of having these roles in the organization is often more about signalling to external stakeholders that the company is forward-thinking, rather than having a clear, functional need for these positions."

In parallel, new roles focused on the ethical implementation of AI are emerging, indicating a growing awareness among employers of the potential risks associated with AI, including bias, privacy concerns, and the broader societal impact of AI technologies. According to a new report by Deloitte, positions related to ethics and policy are becoming increasingly important to organizations.

As Lee explains, roles such as “algorithmic brokers” are tasked with ensuring that AI systems are not only effective but also aligned with ethical standards and regulatory requirements.

An algorithmic broker would be someone who can translate between the technical aspects of AI and the practical, ethical needs of the organization, Lee says. Such a broker would likely have deep knowledge of the organization as well as a strong handle on how the technology is being implemented.

“[The] translation work that the broker does can also be super useful,” he says.

This role would require a deep understanding of both technology and business, ensuring that AI initiatives not only align with the company’s objectives but also adhere to ethical guidelines.

Disconnect between technical skills of GenAI hires and organizational needs

The Deloitte research identified “AI researcher”, “policy analyst” and “AI compliance manager” as the top three roles related to AI ethics that organizations have hired or are planning to hire.

A critical issue in the hiring of AI talent is the disconnect between technical expertise and organizational context. Many new hires, particularly recent graduates with strong technical backgrounds, often struggle to align their skills with the practical needs of the organization, says Lee.

This challenge is exacerbated by the pressure to meet both performance and ethical standards in AI implementation, he says.

“They don't have any work experience. They may know something about how to do data stuff, but as to how that actually translates into anything that might be valuable for an organization, or what organizations generally might be looking for, this is not really usually a part of their training.”

The Gartner report identifies the difficulty of justifying the high costs of GenAI deployment to stakeholders, without a clear translation into profits, as a significant barrier to long-term investment.

"Before they hire a technical expert, managers maybe should think about ‘What is the purpose of doing this?’" Lee advises. "That seems kind of important, right? What would be the purpose of hiring somebody like this, beyond just this vague notion that data and AI is kind of important nowadays?"