ChatGPT exploit could cause data breaches in healthcare, experts warn
A well-known ChatGPT vulnerability is being used by cybercriminals to attack artificial intelligence (AI) systems, including those used by healthcare organizations.
The flaw, which the National Institute of Standards and Technology rates as “medium” severity, is being abused by nefarious actors, who can gain access to systems connected to the AI through an API integration.
According to a report by cybersecurity firm Veriti, more than 10,000 attack attempts using the tactic were recorded in a single week against organizations worldwide, with financial institutions, healthcare entities and governments the prime targets.
The threat, tracked as CVE-2024-27564, is a server-side request forgery (SSRF) flaw that lets unauthorized users “inject malicious URLs into input parameters, forcing the application to make unintended requests on their behalf,” Veriti said.
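The quoted description is the classic SSRF pattern: a URL supplied in a request parameter is fetched by the server itself, letting an attacker reach internal services the server can see. The sketch below is a minimal, hypothetical guard (the host allowlist and function name are illustrative, not from the Veriti report) showing how such input would typically be validated before an application fetches it.

```python
from urllib.parse import urlparse
import ipaddress

# Hypothetical allowlist of hosts the application is permitted to fetch from.
ALLOWED_HOSTS = {"images.example.com"}

def is_safe_url(url: str) -> bool:
    """Return True only for URLs the server should be allowed to fetch.

    Blocks common SSRF payloads: non-HTTP schemes, literal internal
    IP addresses, and hosts outside the allowlist.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # e.g. file:// or gopher:// payloads
    host = parsed.hostname
    if host is None:
        return False
    try:
        ip = ipaddress.ip_address(host)
        # Literal IPs pointing at internal or local services are rejected.
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    except ValueError:
        pass  # host is a DNS name, not a literal IP address
    return host in ALLOWED_HOSTS
```

A real deployment would also resolve the hostname and re-check the resulting IP, since DNS names can point at internal addresses.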
Through such requests, hackers can ultimately access data fed into the AI, such as health data, financial transactions and more: anything the systems were tasked with analyzing.
The firm said most of the attacks it recorded targeted U.S. organizations in both the government and the private sector.
The American Hospital Association released a statement on the vulnerability, saying it could lead to data breaches and reputational damage for targets in healthcare.
“This could allow an attacker to steal sensitive data or impact the availability of the AI tool,” Scott Gee, AHA deputy national advisor for cybersecurity and risk, said in the statement. “This highlights the importance of integrating patch management into a comprehensive governance plan for AI when it is implemented in a hospital environment.”
“The fact that the vulnerability is a year old and a proof of concept for exploitation has been published for some time is also a good reminder of the importance of timely patching of software,” he added.
Currently, no patch or workaround is available to block the tactic; the vulnerability was originally deemed relatively low risk and an unlikely attack vector, so a fix was never prioritized.
“A once-ignored vulnerability can quickly become a favorite attack vector. Automated attacks scan for weaknesses, not severity scores; misconfigurations create easy entry points, and even well-secured systems remain vulnerable when IPS or WAF rules are incorrectly set,” the cybersecurity firm wrote.
For now, they recommend security teams monitor network logs for attack attempts from suspicious IPs and prioritize “AI-related security gaps” in risk assessments of cybersecurity capabilities.
The full report is available on Veriti's website.