Dive Brief:
- Artificial intelligence is quickly gaining traction among criminals, putting companies at heightened risk of exposure to sophisticated payment fraud scams, according to Baptiste Collot, CEO of fraud prevention company Trustpair.
- Tactics such as voice phishing or “vishing,” a scam in which fraudsters try to manipulate the victim over the phone, are now much easier to pull off thanks to AI tools, Collot said in an interview. Audio “deepfake” technology can help a criminal duplicate — with a high degree of accuracy — the voice of a trusted person at an organization as part of a payment fraud scam.
- For criminals, members of a company’s corporate finance team are ideal targets for such scams, “because they are in charge of managing the money and sending payments to vendors,” he said.
Dive Insight:
AI tools capable of generating audio or video deepfakes are not new, but the use of such technologies has dramatically increased, according to a report published last month by the National Security Agency, the Federal Bureau of Investigation, and the Cybersecurity and Infrastructure Security Agency.
“Making a sophisticated fake with specialized software previously could take a professional days to weeks to construct, but now, these fakes can be produced in a fraction of the time with limited or no technical expertise,” the agencies said. “This is largely due to advances in computational power and deep learning, which make it not only easier to create fake multimedia, but also less expensive to mass produce.”
In addition, the market is now flooded with free, easily accessible tools that have become widely available to “adversaries of all types, enabling fraud and disinformation to exploit targeted individuals and organizations,” according to the report.
Vishing attacks hit 69% of companies in 2021, up from 54% in 2020, according to risk advisory firm Kroll. Hybrid vishing — which incorporates both email and telephone communication — spiked more than 266% from 2021 to 2022, according to cyber threat intelligence company PhishLabs.
Software company Retool, which offers a platform for building custom business tools, disclosed in a blog post last month that it suffered an August cybersecurity breach that involved vishing. The scheme allowed hackers to take over 27 of the company’s cloud customer accounts.
In the first stage of the attack, company employees received texts claiming that a member of IT was reaching out about an account issue that could prevent participation in the company's open enrollment for health care coverage. One employee fell for the scam and logged in through the link provided by the attackers.
After logging into the fake portal, the employee received a phone call from an attacker, who again posed as an IT team member. With the help of deepfake technology, the attacker tricked the employee into supplying an additional piece of information: a multi-factor authentication code.
“This situation was challenging,” Snir Kodesh, head of engineering at Retool, said in the blog post. “It’s embarrassing for the employee, disheartening to cybersecurity professionals, and infuriating for our customers. For those reasons, these kinds of attacks are difficult to talk about, but we believe that they should be addressed in the open.”
Key steps to avoid falling victim to such scams include training employees and investing in technology that can help detect when a payment is at risk of going to a fraudulent bank account, according to Collot.
“Both need to work together,” he said.