Dive Brief:
- The rapid adoption of artificial intelligence tools is potentially making them “highly valuable” targets for malicious cyber actors, the National Security Agency warned in a recent report.
- Bad actors looking to steal sensitive data or intellectual property may seek to “co-opt” an organization’s AI systems to achieve such ends, according to the report. It recommends defensive measures such as promoting a “security-aware” culture to minimize the risk of human error and hardening the organization’s AI systems to close security gaps and vulnerabilities.
- “AI brings unprecedented opportunity, but also can present opportunities for malicious activity,” NSA Cybersecurity Director Dave Luber said in a press release.
Dive Insight:
The report comes amid growing concerns about potential abuses of AI technologies, particularly generative tools like Microsoft-backed OpenAI’s wildly popular ChatGPT.
In February, OpenAI said in a blog post that it had terminated the accounts of five state-affiliated threat groups that were using the startup’s large language models to lay the groundwork for malicious hacking efforts. The company acted in collaboration with Microsoft threat researchers, the post said.
“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft said in a separate blog post. “On the defender side, hardening these same security controls from attacks and implementing equally sophisticated monitoring that anticipates and blocks malicious activity is vital.”
The threat activity uncovered by OpenAI and Microsoft could be just a precursor to state-linked and criminal groups rapidly deploying generative AI to strengthen their attack capabilities, cybersecurity and AI analysts said in a previous report by CFO Dive sister publication Cybersecurity Dive.
Malicious actors targeting AI systems may use attack vectors unique to AI, as well as standard techniques used against traditional information technology systems, the NSA said.
“In the end, securing an AI system involves an ongoing process of identifying risks, implementing appropriate mitigations, and monitoring for issues,” the agency’s report said.
The NSA said its guide, while intended for national security purposes, “has application for anyone bringing AI capabilities into a managed environment, especially those in high-threat, high-value environments.”
The report was developed in partnership with other government agencies across the world, including the U.S. Cybersecurity and Infrastructure Security Agency (CISA), as well as the U.K. National Cyber Security Centre and the Canadian Centre for Cyber Security.