Who should be responsible for AI ethics?

Survey finds 'radical shift' in accountability, with CEOs stepping up

There’s been a “radical shift” when it comes to the roles responsible for leading and upholding AI ethics at organizations, according to a recent survey.

When asked which function is primarily accountable for AI ethics, 80 per cent of respondents pointed to a non-technical executive, such as a CEO, as the primary "champion" for AI ethics — compared to just 15 per cent in 2018.

CEOs (28 per cent) are also viewed as being most accountable for AI ethics, followed by board members (10 per cent), general counsel (10 per cent), privacy officers (eight per cent), and risk and compliance officers (six per cent), finds the survey by IBM.

"As many companies today use AI algorithms across their business, they potentially face increasing internal and external demands to design these algorithms to be fair, secured and trustworthy; yet, there has been little progress across the industry in embedding AI ethics into their practices," said Jesus Mantas, global managing partner at IBM Consulting.

"Building trustworthy AI is a business imperative and a societal expectation, not just a compliance issue. As such, companies can implement a governance model and embed ethical principles across the full AI life cycle."

Making a difference

More than three-quarters of the business leaders surveyed agree that AI ethics are important to their organizations, up from about 50 per cent in 2018, finds IBM, which surveyed 1,200 executives in 22 countries across 22 industries.

At the same time, 75 per cent believe ethics are a source of competitive differentiation, and more than 67 per cent of respondents that view AI and AI ethics as important indicate their organizations outperform their peers in sustainability, social responsibility, and diversity and inclusion.

More than half of the leaders say their organizations have taken steps to embed AI ethics into their existing approach to business ethics, finds IBM, and more than 45 per cent of respondents say their organizations have created AI-specific ethics mechanisms, such as an AI project risk assessment framework and auditing/review process.
