Dive Brief:
- Even as they lead a gold rush into generative artificial intelligence, only 58% of executives in industries such as health care, technology and financial services have completed a preliminary assessment of AI risks, PwC said.
- Roughly three out of four executives (73%) said they use or plan to use AI, including generative AI, with a slightly larger share planning to focus the technology solely on operational systems, PwC said.
- “Companies slow to make AI an intrinsic part of their businesses may find themselves so far behind that it will be difficult to catch up,” PwC said, describing results from an April survey of 1,001 executives at private and public organizations. While seizing on the promising new technology, business leaders have begun to grapple with the challenge of how to “manage risk and preserve the incremental value that these solutions create.”
Dive Insight:
Rapid advances in AI are outpacing efforts to contain the technology’s business risks, the World Economic Forum said, describing results from a survey of chief risk officers at large companies and international organizations.
Three out of four CROs said use of AI poses a risk to their organization’s reputation, while nine out of 10 favor stricter regulation of the technology’s development and use, the forum said in a July 2023 report. Nearly half of the respondents believe that advances in AI should be slowed or paused until the risks are better understood.
The “opaque” inner workings of AI create the hazard of unintentionally sharing personal data, or the possibility that AI algorithms will trigger biased decisions, CROs told the forum.
Also, AI is becoming an increasingly powerful tool for creating false information, the forum said in its Global Risk Report 2024.
“No longer requiring a niche skill set, easy-to-use interfaces to large-scale artificial intelligence models have already enabled an explosion in falsified information and so-called ‘synthetic content,’ from sophisticated voice cloning to counterfeit websites,” the forum said in January.
“Synthetic content will manipulate individuals, damage economies and fracture societies in numerous ways over the next two years,” the forum predicted.
President Joe Biden late last year issued an executive order aimed at speeding AI advances while shielding consumers, workers and businesses from its potential hazards.
Under the order, developers of advanced AI systems must report AI safety test results and other information to the Commerce Department. Nine federal agencies have also submitted assessments of the risks in the use of AI in the electric grid and other critical infrastructure.
AI also intensifies the threat to businesses from cyberattacks, the U.K. Department for Science, Innovation and Technology said in a May report.
“While AI technologies have advanced and enhanced efficiency and productivity, they remain susceptible to an ever-growing number of security threats and vulnerabilities,” the U.K. agency said, having looked at every phase of the “AI lifecycle” — from design and development to deployment and maintenance.
Curbing AI risks is a long-term effort, PwC said. “It’s an ongoing commitment that needs to be woven into every step of developing, deploying, using and monitoring AI-based technologies.”