HIMSS survey report: Beware insider threats to cybersecurity involving AI
Almost one in three healthcare organizations allow their people to use AI without formal restrictions. Half permit AI use as long as the models have been OK’d by management. Only 16% prohibit AI outright.
The findings are from a new survey conducted by the Healthcare Information and Management Systems Society, aka HIMSS. The project’s focus was not limited to AI—the researchers were interested in uncovering the broader landscape of cybersecurity across healthcare. But AI ends up consuming a considerable patch of real estate in the survey report.
The survey drew representative responses from 273 healthcare cybersecurity professionals working not only for providers (50%) but also for vendors (18%), consulting firms (13%), government entities (8%) and other organizations (11%).
Respondents ranged from C-suite leaders (50%) to non-executive management (37%) to non-management (13%).
What all had in common was holding some level of responsibility for day-to-day cybersecurity operations or cybersecurity activities.
Here are highlights from the AI section of the report.
AI use cases.
37% of respondents reported using AI for technical tasks such as support and data analytics, 35% for clinical services such as diagnostics, and 34% each for cybersecurity and for administrative tasks. HIMSS comments:
‘More AI technology use cases are anticipated for the future as AI becomes more prevalent.’
AI guardrails.
Nearly half of respondents, 47%, indicated that their organizations have approval processes for AI technologies, while 42% reported that they do not. An additional 11% were unsure whether such processes exist within their organizations. The authors remark:
‘An approval process serves as a proactive guardrail by vetting AI technologies before adoption, reducing the likelihood of unauthorized or inappropriate use. Meanwhile, monitoring AI usage functions as a reactive guardrail, providing ongoing oversight of AI activities to identify and address potential misuse, compliance issues or security risks.’
Active monitoring of AI.
31% of respondents reported their organizations actively monitor AI usage across systems and devices, while 52% said they do not and 17% did not know. HIMSS points out:
‘The lack of monitoring poses risks such as data breaches and others. There is a need for robust monitoring strategies to ensure safe and responsible use of AI technologies.’
Acceptable use policies.
42% of respondents stated that their healthcare organizations have written AUPs for AI, 48% indicated they do not, and 10% did not know. HIMSS notes:
‘An acceptable use policy sets clear guidelines for the safe and responsible use of technology, including AI, and can be standalone or integrated into a general policy based on the organization’s AI adoption.’
Future cybersecurity concerns involving AI.
75% of respondents cited data privacy as a top concern, followed by data breaches (53%) and bias in AI systems (53%). Nearly half expressed concerns about intellectual property theft (47%) and lack of transparency (47%), while 41% highlighted patient safety risks. HIMSS writes:
‘These findings underscore the need for robust safeguards, ethical frameworks and proactive measures to address the risks.’
Insider threat and AI.
A small percentage of respondents reported negligent insider threat activity (5%), malicious insider threat activity (3%), or both negligent and malicious insider threat activity (3%). HIMSS states:
‘While these numbers may seem small, it is likely that many organizations have not yet implemented monitoring specifically for AI-driven insider threats, leaving potential risks undetected.’
Amplifying the implied call to arms against insider threats, the authors write:
‘The growing reliance on AI tools and systems introduces new opportunities for both negligent and malicious insider activity, which can amplify risks to sensitive data and operational integrity.’
HIMSS doesn’t specify which geographic regions it included in the survey, but the group operates in North America, Europe, the U.K., the Middle East and Asia-Pacific.
Download the full report here.