Dive Brief:
- API security startup Traceable said Tuesday that its platform has been updated with new capabilities designed to reduce the risks of integrating generative artificial intelligence technologies into critical application programming interfaces.
- The platform now features a first-of-its-kind dedicated dashboard that allows organizations to gain insights into the security posture of APIs that power connections between generative AI models and other application services, according to a press release. APIs enable two software components to communicate with each other using a set of definitions and protocols.
- “Ensuring the security of applications powered by Generative AI and Large Language Models is crucial in today’s organizations,” Sanjay Nagaraj, Traceable’s chief technology officer, said in the release. “With the introduction of our Generative AI API Security capabilities, we are helping enterprises to embrace the potential of AI technologies while securing their API ecosystem.”
Dive Insight:
The announcement comes as CFOs are seeking to balance the rewards of AI with its risks and costs, with many finance chiefs planning cautious investments.
APIs are becoming increasingly prominent attack vectors for cybercriminals, according to Traceable, whose platform is designed to detect malicious activity targeting such interfaces.
In a survey released by Traceable last September, 60% of respondents said their organizations had suffered a data breach due to API vulnerabilities in the past two years, leading to intellectual property theft and financial losses, among other challenges.
“APIs, if left vulnerable, can be the Achilles' heel of an organization,” said a report on the findings.
The rapid adoption of generative AI technologies has further exposed APIs to cybersecurity risks, according to Traceable’s Tuesday release. The company said its platform is now equipped to detect attacks that exploit unique characteristics of AI, such as “prompt injection,” where hackers feed generative AI systems malicious inputs that appear to be legitimate user prompts in an effort to bypass security controls.
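To illustrate the kind of attack described above: a prompt injection hides adversarial instructions inside what looks like an ordinary user request. The sketch below is a minimal, hypothetical input screen placed in front of a generative AI API call; the pattern list and function names are illustrative assumptions, and real detection systems such as Traceable's use far more sophisticated analysis than keyword matching.

```python
import re

# Hypothetical patterns resembling common prompt-injection phrasing.
# A production system would not rely on a static keyword list.
INJECTION_PATTERNS = [
    r"ignore (all\s+)?(previous\s+)?instructions",
    r"disregard .* system prompt",
    r"reveal .* (password|secret|api key)",
]

def looks_like_prompt_injection(user_input: str) -> bool:
    """Flag inputs that resemble known prompt-injection phrasing."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def guard_api_call(user_input: str) -> str:
    """Screen input before forwarding it to a generative AI API endpoint."""
    if looks_like_prompt_injection(user_input):
        return "blocked"   # stop the request before it reaches the model
    return "forwarded"     # pass the request through to the AI service
```

The point of the sketch is the placement: the check sits at the API layer, between the application and the model, which is where Traceable says its platform now monitors generative AI traffic.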
The new capabilities come on the heels of a May 1 announcement by Traceable AI that it secured $30 million in funding from a group of investors, including Citi Ventures, the venture arm of Citigroup.