Dive Brief:
- Generative artificial intelligence is poised to substantially transform the corporate finance function in coming years, but the technology is currently running into adoption hurdles, including data privacy risks, according to a recent survey commissioned by finance software vendor OneStream.
- Eighty percent of corporate finance leaders responding to the survey said that AI has the potential to increase productivity in the office of finance. Another 66% said generative AI in particular will become a core component of financial processes over the next five years. However, some respondents expressed concerns about implementation challenges such as employee training (32%) and data privacy regulations and procedures (31%).
- “The interest in AI is certainly high, and there’s a lot of positive feedback from finance leaders in terms of what they think AI can bring for them, but there have been a few challenges,” Tiffany Ma, senior product marketing manager for OneStream AI Services, said in an interview.
Dive Insight:
The research aligns with other recent studies examining CFOs’ attitudes toward AI.
In a survey unveiled last month by Paro, a technology-based finance talent pool provider, 83% of finance leaders viewed AI as a technology that is crucial to the future of finance, but 42% had not yet adopted it. The study cited hurdles such as regulatory risks and talent shortages.
“Everybody agrees that this is the future, but there are quite a few concerns about how to implement and govern it,” Paro CEO Anita Samojednik said in an interview.
The highly sensitive data handled by corporate finance teams is likely one reason why their rate of AI adoption is low compared with their high level of interest, Ma said. OneStream’s research found that 43% of finance leaders are seeking to improve their data security posture, while another 43% have implemented, or are considering, new software tools designed to address AI challenges.
The poll of 800 financial decision-makers around the world was conducted in August and September by Hanover Research on behalf of OneStream.
The rapid rise of generative AI this year has sparked a debate in the U.S. and across the globe about the need for guardrails to address the technology’s potential risks.
“Look, privacy is not the only right at risk,” President Joe Biden said at the White House in October just before signing a sweeping AI executive order. “Without the right safeguards in place, AI can lead to discrimination, bias, and other abuses.”
U.S. lawmakers, both Republicans and Democrats, are focused on the issue as well. A comprehensive bill (S. 3312) introduced Nov. 15 by a bipartisan group of senators would, among other provisions, require companies deploying “critical-impact” AI technology to perform detailed risk assessments.
“Artificial intelligence comes with the potential for great benefits, but also serious risks, and our laws need to keep up,” Sen. Amy Klobuchar (D-Minn.), one of the bill sponsors, said in a press release.
Overall, dozens of targeted bipartisan AI bills are pending in Congress, including more than half a dozen that have progressed through their committees of jurisdiction, according to a report published last month by law firm Covington & Burling.
Meanwhile, AI legislative efforts in the EU are further ahead. Earlier this month, European Union officials reached a landmark deal on a comprehensive proposal dubbed the EU AI Act. According to a report published by the Washington Post, the legislation is poised to become the world’s “most ambitious law to regulate artificial intelligence, paving the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.”
Violators could face fines up to €35 million, or 7% of global turnover, depending on the infraction and the company’s size.