Dive Brief:
- A large majority of businesses have already taken steps to ensure responsible generative artificial intelligence adoption as they gear up for larger-scale investments in the technology, according to the results of a survey by Big Four accounting firm KPMG.
- Seventy percent of surveyed CEOs said they believe their organization is ready to navigate ethical concerns that may arise from GenAI use, according to a report on the findings released Thursday. Ongoing education and training topped the list of responsible GenAI use initiatives already being deployed, with 95% of respondents citing it. The research also pointed to measures such as regular audits and monitoring (82%), as well as human oversight (71%).
- Responsible GenAI adoption is “something that is a top of mind for organizations, and I think it will be an even bigger focus over the next six months to a year,” Sanjay Sehgal, head of markets at KPMG, said in an interview.
Dive Insight:
Business demand for GenAI has skyrocketed since November 2022, when Microsoft-backed OpenAI grabbed the world’s attention with the introduction of its groundbreaking ChatGPT model.
A study released by KPMG in March showed that 97% of organizations are planning GenAI investments over the next 12 months, with 43% expecting an allocation of $100 million or more, as previously reported by CFO Dive.
When considering their organization’s GenAI strategy for the next 12-18 months, 39% of CEOs say they will be scaling efforts, moving from pilots to industrialization across multiple functions or business units, according to the latest report. Forty-one percent of companies plan to increase their investment in GenAI next year, while 56% anticipate flat spending, the research found.
“CEOs see GenAI as central to gaining a competitive advantage and are working to rapidly advance deployment of the technology across their enterprises in a responsible way,” the report said.
CEOs indicated plans for additional responsible GenAI initiatives this year on top of those that are already in place. Eighty-one percent of the respondents said their organization planned to use “watermark” disclosures to alert consumers about content made with the assistance of GenAI, for example.
The focus on responsible GenAI use comes as the technology’s rapid rise is prompting concerns and scrutiny from governments around the world.
The issue has been a high priority for the White House. President Joe Biden last October signed a sweeping executive order directing the National Institute of Standards and Technology within the Commerce Department to establish guidelines to promote “safe, secure, and trustworthy” AI systems. He also ordered the Labor Department to lead the development of a set of standards to guide companies in addressing AI’s potential harm to their workers, including eroded privacy and job displacement.
In March, the Office of Management and Budget issued landmark AI risk mitigation guidelines for the entire federal government.
“I believe that all leaders from government, civil society, and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its full benefit,” Vice President Kamala Harris said in a press call at the time.
Under the guidelines, federal agencies will be required by Dec. 1 to implement concrete safeguards when using AI in a way that could impact Americans’ rights or safety, including actions to reliably assess, test and monitor the technology’s impacts on the public, mitigate the risks of algorithmic discrimination and provide the public with transparency into how the government uses AI.
To ensure accountability, the OMB policy requires federal agencies to take steps such as designating a chief AI officer to coordinate the use of AI across the organization.
While the guidance applies to the government, it has important implications for the private sector as well, according to Nikki Bhargava, a partner at law firm Reed Smith whose practice areas include emerging technologies.
“I think it’s important for the private sector to take a look at the framework that has been set up,” Bhargava said in an interview. “It’s one of few AI guidance documents that have been issued at the federal level.”
The guidance could also directly affect private sector entities developing AI technology that may be procured by the federal government, she added.
Meanwhile, EU lawmakers last month approved sweeping AI legislation that is expected to impact businesses globally. Among other requirements, companies that adopt AI for “high-risk” uses including in the context of critical infrastructure, employment, and essential private and public services, must take steps to assess and reduce risks; maintain use logs; be transparent and accurate; and ensure human oversight.
The new rules also forbid “emotion recognition” in the workplace and schools, as well as predictive policing when it is based solely on profiling individuals or assessing their characteristics.
Violations of the banned practices provision carry penalties of up to 35 million euros or 7% of a company’s total worldwide annual turnover, whichever is higher.