Dive Brief:
- Cybersecurity experts remain skeptical about the wisdom of entrusting ChatGPT with confidential business data, such as trade secrets and sensitive financial details, despite new safeguards being rolled out by the popular AI tool’s creator.
- Last week, Microsoft-backed startup OpenAI announced that users of its public ChatGPT model can now turn off chat history within the tool. The company also said it was working on a new “ChatGPT Business” subscription for professionals who need more control over their data, as well as for enterprises seeking to manage their end users.
- While the announcement represents a step in the right direction, OpenAI still has a lot of work to do before ChatGPT can be widely viewed as a safe business tool, especially when it comes to use cases involving sensitive data, according to cybersecurity experts contacted by CFO Dive. “For the foreseeable future, organizations’ best bet is to advise employees not to put sensitive information — such as personal data, intellectual property, financial data, customer data, encryption keys — into generative AI applications until sufficient controls can be implemented and verified to adequately protect that data,” said Nathan Wenzler, chief security strategist for Tenable, a cybersecurity company based in Columbia, Md.
Dive Insight:
Samsung Electronics announced Tuesday that it has banned its employees from using “generative AI” tools such as ChatGPT after discovering that such services were being misused, according to news reports. Other companies that have taken such measures include JPMorgan Chase and Verizon.
ChatGPT can interact in a conversational way, producing human-like responses to questions across a wide range of subjects.
After launching in November 2022, the generative AI technology quickly outpaced both Instagram and Spotify to become the fastest-growing consumer application in history, according to KPMG.
OpenAI is now looking to translate this success into revenue. “We will have to monetize [ChatGPT] somehow at some point,” Sam Altman, the company’s CEO, tweeted in December. “The compute costs are eye-watering.”
Nearly three out of five corporate executives responding to a Salesforce survey released in March said generative AI is a potential game-changing technology, while one-third viewed it as over-hyped. In addition, two-thirds said they were prioritizing business use of the technology in the next 18 months, with one-third calling it a top priority.
However, the technology also prompted concerns. Seven in 10 executives said enterprise use of generative AI means exposing company data to new security risks. A skills gap and integration with the existing tech stack were among other top implementation concerns.
Privacy issues surrounding ChatGPT have frequently been in the news lately, including when Samsung employees reportedly put sensitive corporate data into the tool.
Corporate CFOs could be particularly slow to embrace some uses of ChatGPT, at least in the short term, according to Jack McCullough, founder and president of the CFO Leadership Council.
“They’re not going to be the early adopters on this sort of thing, partly because of temperament — they’re cautious by nature,” McCullough said in an interview. “The other thing is that privacy, data security and protection of assets are big issues for them. They’re not going to be on the cutting edge of turning certain things over from a technological standpoint until they see how it works with other functional groups.”
OpenAI’s latest changes show that it understands such privacy concerns and takes them seriously, according to Sergey Shykevich, threat intelligence group manager at Check Point, a cybersecurity firm headquartered in Israel. Still, he said companies would be wise to remain “very cautious” about uploading any sensitive or proprietary data to ChatGPT.
“At the end of the day, it is a third-party platform and not part of a protected asset of your corporation,” he told CFO Dive.