Artificial intelligence, including subsets such as machine learning and “deep fakes,” is among the top five risks facing healthcare providers in the coming year, according to a report released in January by Kodiak Solutions.
The substantial benefits of these technologies, including automated clinical documentation, faster and more accurate diagnosis, and improved patient and customer experience, will also create novel areas of risk involving privacy violations, hacking and ransomware, and the potential for bias, the report noted.
Current incarnations of AI pose several risks to healthcare systems that are significantly broader and more far-reaching than the well-known security challenges associated with maintaining high volumes of protected health information, said Kodiak senior vice president Dan Yunker, MBA. They include:
• Use of AI by criminals in breach support. “The AI models allow criminals to automate the breach process,” Mr. Yunker explained. “They use the AI to create a huge number of attack attempts, then read the responses to those attempts to create a whole new set of attempts. In this way, the work of months or years of manual probing can be performed in days, weeks or months.”
• Use of public AI. “Researchers have already shown that, through effective training, a public AI tool can be tricked into revealing secrets about itself or any of its users,” Mr. Yunker said.
• Misuse of an organization’s AI. “By injecting false training data into an organization’s AI, a criminal can command the system to generate false, misleading or incriminating results,” Mr. Yunker said.
• Mimicking humans with deep fake technology. “AIs are becoming increasingly effective at imitating humans using text (email, text messaging), verbal (telephony) and visual (video calls) methods,” Mr. Yunker said. “These imitations increasingly include individuals who are close to the human that is the target.”
Beyond the intrusions of bad actors, AI also can be compromised by errant results when the data used are of poor quality or contain hidden or purposeful biases. “It is incumbent on the humans to catch and find the root of these errors,” Mr. Yunker said. “As humans become dependent on the AI and lose their understanding of how the results are created, these errant results will move forward unnoticed.”
All these pressures increase the likelihood of a breach. “Once the domain of only nation-state hackers, these advances in AI tools are now used by the professional criminals and are working their way down to the amateur-class threat actor,” Mr. Yunker said. “This alone is driving the frequency of attack and the likelihood of a successful attack and breach to levels unthinkable a few years ago.”
Third-party partners, such as vendors and payors, are also potential avenues for an AI-facilitated security breach. In healthcare, one well-publicized manifestation of these attacks involves criminals breaking into a vendor’s system to attack a hospital system directly or to harm it by taking a valued service offline—as recently occurred when a cyberattack on UnitedHealth Group’s technology arm, Change Healthcare, prevented many pharmacies from transmitting patients’ insurance claims. “The second is that we see criminals take residence in a smaller hospital’s IT [information technology] environment and use it to gain access or attack other bigger hospitals, or the state health department,” Mr. Yunker said.
He also noted that AI cannot simply be put into place and left to perform without regular review, and that clinicians should not blindly follow an AI’s guidance without exercising their own judgment. “A few years ago, an AI that was used to predict diagnosis codes became so reliable that it was integrated with the organization’s [electronic health record]—a doctor had to approve the diagnosis with a click,” he said. “Somewhere along the line the data that was used to train the AI changed it, and it began sending messages that suggest incorrect codes. Having become used to the system’s accuracy, many of the physicians clicked through the message and applied the wrong codes.”
When adding protections against AI-related breaches, health systems should remember the common axiom that humans are much easier to hack than machines, Mr. Yunker suggested. “At the heart of almost all problems presented with AI, there are humans and the mistakes they make,” he said.
Therefore, health systems must continue to train their workforce to become as resilient in the face of these attacks as possible.
“AI also significantly increases the speed with which everything bad happens,” Mr. Yunker said. From phishing campaigns to changing an AI’s training data to alter its decision-making process, “this is all happening faster than our current human processes can comprehend.”
For instance, a team of cyberthreat hunters may take several weeks to determine that an attack is underway, whereas a hacker equipped with AI “will be in and out of their environment before anyone notices,” he said. “Therefore it is imperative that we move to a more profound level of constant monitoring of the IT environment.”
—Gina Shaw
Mr. Yunker reported no relevant financial disclosures.