New report documents the business benefits of ‘responsible AI’

As companies embrace artificial intelligence to drive business strategy, the topic of responsible AI implementation is gaining traction.

A new global research study defines responsible AI as “a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact.”

The study, conducted by MIT Sloan Management Review and Boston Consulting Group, found that while AI initiatives are surging, responsible AI is lagging.

While most firms surveyed said they view responsible AI as instrumental to mitigating the technology's risks, including issues of safety, bias, fairness, and privacy, they acknowledged a failure to prioritize responsible AI (RAI). This gap increases the possibility of failure and exposes companies to regulatory, financial, and customer satisfaction risks.

The MIT Sloan/BCG report, which includes interviews with C-level executives and AI experts alongside survey results, found significant gaps between companies' interest in RAI and their ability to execute RAI practices across the enterprise.

Conducted during the spring of 2022, the survey analyzed responses from 1,093 participants representing organizations in 22 industries across 96 countries, each reporting at least $100 million in annual revenue.

The majority of survey respondents (84%) believe RAI should be a top management priority, yet only slightly more than half (56%) say it has achieved that status, and only a quarter report having a fully mature RAI program in place.

Just over half of respondents (52%) said their firms conduct some level of RAI practices, but 79% of those admitted their implementations were limited in scale and scope.

Why are companies having so much trouble walking the talk when it comes to RAI? Part of the problem is confusion over the term itself, which overlaps with ethical AI; 36% of survey respondents cited this as a hurdle, acknowledging that there is little consistency in how RAI is defined because the practice is still evolving.

Other factors contributing to limited RAI implementations fall into the bucket of general organizational challenges:

  • 54% of survey respondents said they struggle to find RAI expertise and talent.
  • 53% cited a lack of training or knowledge among staff members.
  • 43% reported limited prioritization and attention from senior leaders.
  • Insufficient funding (43%) and limited awareness of RAI initiatives (42%) also hamper RAI maturity.

With AI becoming further entrenched in business, there is mounting pressure on companies to bridge these gaps and to prioritize and execute RAI successfully, the report stated.

“As we navigate increasing complexity and the unknowns of an AI-powered future, establishing a clear ethical framework isn’t optional — it’s vital for its future,” said Riyanka Roy Choudhury, a CodeX fellow at Stanford Law School’s Computational Law Center and one of the AI experts interviewed for the report.

Getting RAI done right

Those companies with the most mature RAI programs — in the case of the MIT Sloan/BCG survey, about 16% of respondents, which the report called “RAI leaders” — have a number of things in common. They view RAI as an organizational issue, not a technical one, and they are investing time and resources to create comprehensive RAI programs.

These firms also take a more strategic approach to RAI, guided by corporate values and an expansive view of responsibility toward a wide range of stakeholders and society as a whole.

Taking a leadership role in RAI translates into measurable business benefits such as better products and services, improved long-term profitability, and even enhanced recruiting and retention. Forty-one percent of RAI leaders confirmed they have realized some measurable business benefit, compared with only 14% of companies less invested in RAI.

RAI leaders are also better equipped to deal with an increasingly active AI regulatory climate — more than half (51%) of RAI leaders feel ready to meet the requirements of emerging AI regulations compared to less than a third of organizations with nascent RAI initiatives, the survey found.

Companies with mature RAI programs adhere to some common best practices. Among them:

Make RAI part of the executive agenda. RAI is not merely a “check the box” exercise but rather part of the organization’s top management agenda. For example, some 77% of RAI leader firms are investing material resources (training, talent, budget) in RAI efforts, compared with 39% of respondents overall.

Instead of product managers or software developers directing RAI decisions, there is clear messaging from the top that implementing AI responsibly is a top organizational priority.

“Without leadership support, practitioners may lack the necessary incentives, time, and resources to prioritize RAI,” said Steven Vosloo, digital policy specialist in UNICEF’s Office of Global Insight and Policy, and one of the experts interviewed for the MIT Sloan/BCG survey.

In fact, nearly half (47%) of RAI leaders said they involve the CEO in their RAI efforts, more than double the rate of their counterparts.

Take an expansive view. Beyond top management involvement, mature RAI programs also include a broad range of participants in these efforts: an average of 5.8 roles at leading companies versus only 3.9 roles at non-leaders, the survey found.

The majority of leading companies (73%) are approaching RAI as part of their corporate social responsibility efforts, even considering society as a key stakeholder. For these companies, the values and principles that determine their approach to responsible behavior apply to their entire portfolio of technologies, systems, and processes, including RAI.

“Many of the core ideas behind responsible AI, such as bias prevention, transparency, and fairness, are already aligned with the fundamental principles of corporate social responsibility,” said Nitzan Mekel-Bobrov, chief AI officer at eBay and one of the experts interviewed for the survey. “So it should already feel natural for an organization to tie in its AI efforts.”

Start early, not after the fact. The survey shows that it takes three years on average to begin realizing business benefits from RAI. Therefore, companies should launch RAI initiatives as soon as possible, nurturing the requisite expertise and providing training. AI experts interviewed for the survey also suggest increasing RAI maturity ahead of AI maturity to prevent failures and significantly reduce the ethical and business risks associated with scaling AI efforts.

Given the high stakes surrounding artificial intelligence, RAI needs to be prioritized as an organizational mandate, not just a technology issue. Companies able to connect RAI to their mission to be a responsible corporate citizen are the ones with the best outcomes.

Read the report
