Because CFOs oversee their companies’ financial accounts, they can be attractive targets for “deepfake” attacks by cybercriminals. That makes it essential for finance leaders to take proactive steps to minimize the risk.
The term, first coined in 2017, refers to a “specific type of synthetic media where a person in an image or video is swapped with another person’s likeness,” according to an MIT Sloan report. It also covers audio: a person’s voice can be cloned and appropriated in deepfake recordings.
Left unchecked, deepfake attacks can have grave consequences for companies. In one of the most recent high-profile incidents, British engineering group Arup lost about $25 million after scammers used AI-manipulated “deepfakes” to falsely pose as the group’s CFO and request transfers be made to bank accounts in Hong Kong, CFO Dive previously reported.
Such manipulations are a growing threat to C-suite leaders. Bolstered by generative AI and other tools, deepfake attempts increased 31-fold between 2022 and 2023, according to identity verification firm Onfido.
“You've taken that labor out of the steps for the threat actor,” said Michael Gray, chief technology officer at Foxborough, Mass.-based security services company Thrive. Generative AI, he noted, makes it easy to engage potential victims in conversations and allows scammers to manage hundreds of deepfake campaigns at the same time.
In addition, the rise of remote work during the pandemic normalized the use of video at work, laying the groundwork for fraudsters to hijack video conferences and audio calls for their own fraudulent purposes. Here are four ways CFOs can curtail deepfake threats:
Think hard before posting on social media and joining audio and video calls.
CFOs need to be cautious about material they post about themselves on social media, since any of this content can be used to build deepfakes.
“I put a lot of videos out, but with my CFO, you will see no videos out, and there’s a reason for that,” said Avani Desai, CEO of cybersecurity assessment firm Schellman. “I don't want a lot of her information available online, including her voice.”
It’s important to minimize the number of publicly available informal photos and to carefully guard who can see your personal social media. In one recent instance, scammers used a deepfake of a CEO to contact a CFO with a request, claiming the executive didn’t have time to take care of a task before going on holiday, Desai said.
“You can restrict your privacy settings on social media…the more vanilla the better,” she said. Executives may also want to add checks to verify that a communication is authentic, such as a “safe word” protocol to confirm a speaker’s identity before proceeding with a call, or a separate email sent to the purported speaker, experts say.
Still, given the reality of modern life and business, it might be challenging to keep photos and voices private because audio and video meetings may be a core part of CFOs’ jobs. The risk landscape around regular business activities will change over time, said Matthew Miller, principal of cybersecurity services at KPMG. For now, executives need to evaluate each potential vulnerability on a case-by-case basis.
“We do some risk assessments in terms of how much surface area they have about themselves out on the web that could be used to [create deepfakes],” he said. “It's going to evolve over time and become a bigger risk.”
Use encrypted channels to communicate.
Desai suggests CFOs use encrypted communications tools — including messaging and video platforms — to cut down on the risk of deepfake compromises. Digital watermarks, or tamper-resistant metadata, can be added to digital content to help verify its authenticity and minimize the risk of manipulation.
“It embeds a unique identifier in the media making it easier to trace and authenticate,” she said.
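As a rough illustration of how tamper-resistant metadata can work, the sketch below binds a unique identifier to a file’s exact bytes using an HMAC signature stored in a sidecar file. The key handling, file layout and function names are assumptions made for illustration, not the workings of any particular watermarking product; true digital watermarks embed the identifier inside the media itself.

```python
import hashlib
import hmac
import json
import secrets

# Illustrative only: a real deployment would use a managed, long-lived
# signing key, not one generated fresh on each run.
SIGNING_KEY = secrets.token_bytes(32)

def tag_media(path: str) -> None:
    """Bind a unique identifier to the media file's exact bytes."""
    with open(path, "rb") as f:
        media = f.read()
    media_id = secrets.token_hex(16)  # the embedded "unique identifier"
    mac = hmac.new(SIGNING_KEY, media_id.encode() + media, hashlib.sha256)
    with open(path + ".sig", "w") as f:
        json.dump({"id": media_id, "mac": mac.hexdigest()}, f)

def verify_media(path: str) -> bool:
    """Recompute the signature; any edit to the media invalidates it."""
    with open(path + ".sig") as f:
        sig = json.load(f)
    with open(path, "rb") as f:
        media = f.read()
    mac = hmac.new(SIGNING_KEY, sig["id"].encode() + media, hashlib.sha256)
    return hmac.compare_digest(mac.hexdigest(), sig["mac"])
```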
Review processes.
CFOs should assess which parts of their business are most likely to be targeted by deepfake scams and adjust processes accordingly, said Miller. That means identifying where this type of fraud could land and which workflows attackers might breach.
In many organizations, CFOs authorize payments, but that may not be the case for every type of payment, he said.
“They may have some fairly old processes in place,” he said. Organizations may be relying on legacy operating procedures and dated workflow tools, and may not require multiple authentication steps. Payment authorizations may also hinge on relationships between particular individuals.
Adding multi-factor authentication, particularly for payments, is a crucial part of any deepfake prevention strategy, said Desai. This could involve a password along with some other type of identity verification, including use of authenticator apps. This step can apply to both work and personal accounts, she said.
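For a sense of what an authenticator app is actually checking, here is a minimal sketch of the time-based one-time password (TOTP) scheme standardized in RFC 6238, which most authenticator apps implement. It is illustrative only; production systems should rely on a vetted identity platform rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at: int | None = None,
         digits: int = 6, step: int = 30) -> str:
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = (int(time.time()) if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0]
            & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret_b32: str, code: str, step: int = 30) -> bool:
    """Accept the current window plus one on either side for clock drift."""
    now = int(time.time())
    return any(hmac.compare_digest(totp(secret_b32, at=now + drift * step), code)
               for drift in (-1, 0, 1))
```

Because the code changes every 30 seconds and derives from a secret that never crosses the wire, a scammer who deepfakes a voice or face still cannot produce a valid code.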
Requiring physical security keys to complete authentication adds an additional layer of security. Security keys are small pieces of hardware users connect to their devices during the authentication process.
“Our CFO, to approve large transactions over a certain amount, has to use a hardware key,” she said.
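A policy like the one Desai describes can be expressed as a simple tiered rule. In the sketch below, `assert_key_touch` is a hypothetical callback standing in for a real FIDO2/WebAuthn check performed by an identity provider, and the dollar threshold is an assumption for illustration.

```python
HARDWARE_KEY_THRESHOLD = 100_000  # illustrative cutoff, in dollars

def approve_payment(amount: float, password_ok: bool, assert_key_touch) -> bool:
    """Allow a payment only if the factors required for its size are present."""
    if not password_ok:
        return False  # first factor is always required
    if amount >= HARDWARE_KEY_THRESHOLD:
        # Large transfers demand a physical, unphishable second factor.
        return assert_key_touch()
    return True
```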
Deploy AI tools to evaluate questionable content.
Deepfake detection software can help companies review communications for fakes, potentially evaluating and authenticating transaction requests.
Fraudsters can use any available voice and video content as raw material for deepfakes that mimic executives, but AI tools can detect anomalies in pictures, videos and audio files to help identify possible fakes, said Desai. Some of these tools offer a confidence score.
“Maybe facial features are going to be different. Maybe your head movements are going to be different. Your body and face coordinations may not be the same…there’s voice inconsistency,” she said. “If you're pulling all this data from when I was speaking at a TEDx conference versus when I'm talking on my Christmas video, you're going to have some voice inconsistencies.”
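In practice, a detector’s confidence score feeds a routing decision. The sketch below shows one plausible triage flow; `run_detector` is a hypothetical stand-in for whichever commercial detection API a firm licenses, and the thresholds are assumptions for illustration.

```python
from typing import Callable

PASS_THRESHOLD = 0.90    # assumed: scores near 1.0 mean "likely authentic"
REVIEW_THRESHOLD = 0.50  # assumed: below this, treat as likely fake

def triage(media_path: str, run_detector: Callable[[str], float]) -> str:
    """Route incoming media based on the detector's authenticity score."""
    score = run_detector(media_path)  # 0.0-1.0 authenticity confidence
    if score >= PASS_THRESHOLD:
        return "pass"           # proceed with normal processing
    if score >= REVIEW_THRESHOLD:
        return "manual-review"  # human check plus an out-of-band callback
    return "block"              # treat as likely deepfake; escalate to security
```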
The cost of deepfake detection software varies, but it likely isn’t going to be very different from other subscription-based tools, possibly “thousands of dollars per year,” Desai said. It’s hard to pinpoint how much firms need to invest in prevention and detection technology because the tools and risk landscape are constantly evolving, said Miller.