Why Threats Against AI-Enabled Medical Devices Are Real And Alarming

Founder, Blue Goat Cyber | MedTech Cybersecurity Leader | Speaker & Author | 24x Ironman | Securing Innovation & Patient Safety.
AI is transforming healthcare. While it’s a powerful tool that supports everything from automation to predictive analysis, its inherent risks also give pause. Simply put, AI is a cybersecurity risk.
In response, the Food and Drug Administration (FDA) issued guidance in January 2025 on the use of AI and the potential threats to AI-enabled medical devices. The guidance seeks to help device manufacturers navigate the entire product life cycle concerning AI. It comes after a spike in submissions for devices that use the technology; the agency has already authorized more than 1,000 AI-enabled devices.
The FDA appears to be following the evolution of technology with this guidance and its concerns about AI and cyberattacks. The bigger question to some may be, “Why?” Why would a hacker want to manipulate AI? What’s the motivation? As someone dedicated to and fully immersed in the world of medical device cybersecurity, I’m sharing my insights and conclusions.
Why is the guidance in a TPLC framework?
The foundation of the FDA recommendations is a total product life cycle (TPLC) approach. This matters because AI plays a role in every part of the device’s life cycle. Just as cybersecurity is an ongoing risk, so is AI.
Starting the guidance from a TPLC framework is intended to promote transparency and control for bias. Protection from the manipulation of AI goes much further than breaching protected health information (PHI) for profit. It’s a serious risk to patient safety.
The FDA document discusses specific AI threats. Should an incident occur, the consequences are far-reaching. Why even take the risk?
AI can enhance medical devices, but is it worth the risk?
AI’s application in any industry has universal appeal. It can speed up analysis, make predictions, track real-time health metrics and streamline workflows. All of these things benefit providers and patients.
The adoption of AI comes at a time when the healthcare workforce shortage continues to grow. The American Hospital Association (AHA) cited a Mercer report projecting that the industry will face a shortage of 100,000 critical healthcare workers by 2028.
This shortage and the growth of healthcare deserts are impacting access and availability. AI-enabled medical devices could help solve these two problems. Their use cases in diagnosis and treatment plans could save lives or improve the quality of life for patients.
The risk seems to be worth it. However, we can’t yet fully trust AI or understand its repercussions. AI turns from friend to foe when hackers leverage it.
Why do hackers want to manipulate AI in medical devices?
Proactive cybersecurity starts with understanding malicious actors’ motives, and one of the leading incentives is financial.
Selling PHI on the black market or launching a ransomware attack can be profitable. Hacking a medical device via AI could be a backdoor into a hospital’s network. Once inside, cybercriminals have leverage. They can extort organizations; many have paid ransoms to ensure patient safety.
The promise of financial gain isn’t the only reason hackers are eager to manipulate AI. Their end game may be chaos. In looking at the FDA’s guidance, the agency calls out key risks that would fit this category, including:
Data Poisoning
Injecting corrupted or inauthentic data into training datasets compromises the integrity of an AI model. The impact could be inaccurate diagnoses, which could cost lives and break down trust in the healthcare system.
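To make the risk concrete, here’s a minimal sketch, assuming a synthetic dataset and a scikit-learn logistic regression standing in for a diagnostic model; the 40% flip rate is an illustrative assumption, not a figure from the FDA guidance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "patient" dataset: two features determine a binary diagnosis.
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a model trained on clean labels.
clean = LogisticRegression().fit(X_train, y_train)

# Attack: flip 40% of the positive ("disease") labels to negative,
# quietly teaching the model to miss the condition.
y_poisoned = y_train.copy()
positives = np.where(y_train == 1)[0]
flipped = rng.choice(positives, size=int(0.4 * len(positives)), replace=False)
y_poisoned[flipped] = 0
poisoned = LogisticRegression().fit(X_train, y_poisoned)

# Sensitivity (recall) on clean test data drops for the poisoned model.
print("clean model sensitivity:   ", round(recall_score(y_test, clean.predict(X_test)), 3))
print("poisoned model sensitivity:", round(recall_score(y_test, poisoned.predict(X_test)), 3))
```

The attacker never touches the deployed device, yet the model it ships with has quietly learned to miss true cases.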
Model Evasion
What if adversarial inputs were crafted specifically to fool a trained model at the point of diagnosis? That’s the gist of this technique, and it could affect how AI-enabled medical devices diagnose. The results could be wrong, endangering patients and potentially wreaking havoc across the healthcare technology ecosystem.
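A minimal sketch of the idea, again on synthetic data: for a linear model, the gradient of the decision score with respect to the input is just the weight vector, so a single signed step is enough to flip a prediction (the fast-gradient-sign idea specialized to linear models). The model, data and perturbation budget are all illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy "diagnostic" classifier on two synthetic features.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Pick the correctly classified negative sample closest to the decision
# boundary -- the easiest target for an evasion attack.
scores = model.decision_function(X)
candidates = np.where((y == 0) & (model.predict(X) == 0))[0]
x = X[candidates[np.argmax(scores[candidates])]]

# One small step along the sign of the weight vector flips the prediction.
w = model.coef_[0]
epsilon = 0.25  # attacker's perturbation budget (illustrative)
x_adv = x + epsilon * np.sign(w)

print("original prediction:   ", model.predict([x])[0])
print("adversarial prediction:", model.predict([x_adv])[0])
```

A perturbation that small could hide inside ordinary sensor noise, which is what makes evasion hard to spot in the field.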
Model Inversion
In this scenario, hackers exploit the AI model itself to reconstruct sensitive information. If successful, attackers can employ model inversion against predictive diagnostic models, violating patient privacy and undermining trust.
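Here’s a toy illustration of the query-only flavor of this attack: with nothing but repeated prediction calls, an attacker optimizes an input until the model treats it as a “typical” member of a target class. The two-feature synthetic model and the step sizes are assumptions for the sketch:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy model: class 0 and class 1 patients differ in two "lab value" features.
X0 = rng.normal(loc=-1.0, size=(500, 2))
X1 = rng.normal(loc=+1.0, size=(500, 2))
model = LogisticRegression().fit(np.vstack([X0, X1]), [0] * 500 + [1] * 500)

# Model inversion: using only prediction queries, nudge a neutral input
# toward maximum confidence in the target class, recovering an input the
# model associates with a "typical" class-1 patient.
x = np.zeros(2)
step, eps = 0.1, 1e-4
for _ in range(300):
    # Finite-difference gradient of P(class = 1 | x), from queries alone.
    grad = np.array([
        (model.predict_proba([x + eps * np.eye(2)[j]])[0, 1]
         - model.predict_proba([x - eps * np.eye(2)[j]])[0, 1]) / (2 * eps)
        for j in range(2)
    ])
    x += step * grad

print("reconstructed class-1 input:", x.round(2))
print("class-1 training mean:      ", X1.mean(axis=0).round(2))
```

The attacker never sees the training data, yet the optimized input lands in the region the model learned from real patients.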
Performance Drift
A model’s predictive accuracy on new inputs “drifts” from its performance during training as real-world data shifts. In short, the model becomes less accurate, possibly resulting in erroneous diagnoses and unnecessary treatments.
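Drift is also one of the more tractable risks to monitor postmarket. Below is a minimal sketch of a drift check using the population stability index (PSI), a common distribution-shift heuristic; the feature distributions and the rule-of-thumb thresholds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def psi(expected, actual, bins=10):
    """Population Stability Index between the validation-time distribution
    of a feature and its live, postmarket distribution. A common rule of
    thumb: < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 significant drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

baseline = rng.normal(0.0, 1.0, 5000)     # sensor readings at validation time
live_same = rng.normal(0.0, 1.0, 5000)    # field data, same population
live_shift = rng.normal(0.7, 1.3, 5000)   # field data after the inputs shift

print("PSI, stable population: ", round(psi(baseline, live_same), 3))
print("PSI, shifted population:", round(psi(baseline, live_shift), 3))
```

A check like this runs on inputs alone, so it can flag trouble even when ground-truth diagnoses arrive weeks later or not at all.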
Bias
When AI models are trained on datasets that aren’t diverse, the resulting devices may not diagnose safely across all populations. Misdiagnosis or failure to recognize conditions could lead to further mistrust from marginalized communities.
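A short sketch of how this failure mode shows up: when one group is underrepresented in training and its clinically meaningful baseline differs, a single pooled model can post strong overall numbers while missing most true cases in the minority group. Everything here, including the synthetic “biomarker,” is an assumption for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

def make_group(n, offset):
    """Synthetic patients: the same biomarker predicts the condition in
    both groups, but its healthy baseline differs by `offset`."""
    x = rng.normal(offset, 1.0, (n, 1))
    y = (x[:, 0] > offset).astype(int)  # condition threshold is group-relative
    return x, y

# Group A dominates the training data; group B is badly underrepresented.
xa, ya = make_group(1900, offset=0.0)
xb, yb = make_group(100, offset=-2.0)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Sensitivity (true positive rate), evaluated separately for each group.
for name, offset in [("group A", 0.0), ("group B", -2.0)]:
    x, y = make_group(1000, offset)
    tpr = (model.predict(x)[y == 1] == 1).mean()
    print(f"{name} sensitivity: {tpr:.2f}")
```

Aggregate accuracy would hide this gap entirely, which is why subgroup evaluation belongs in any TPLC monitoring plan.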
All of these risks could lead to patient harm and mistrust. Should such incidents become more than anomalies, they threaten AI’s use in every area of healthcare, including medical devices.
MedTech must turn guidance into guardrails for safe innovation.
AI is a transformational force in healthcare, but without cybersecurity as its foundation, it becomes a high-stakes gamble. The FDA’s guidance provides an essential framework, but guidance alone isn’t enough. It’s how we act on it that will determine whether AI-enabled devices enhance care or compromise it.
From my experience advising MedTech innovators, I’ve seen how the TPLC approach isn’t just theory—it’s a practical necessity. Threats evolve. Devices update. Attackers innovate. So must we. It’s not just about securing a device for FDA approval—it’s about securing it for its entire lifespan in the wild.
What ties all of the AI manipulation threats together is their potential to erode the most critical asset in healthcare: trust. Trust that a diagnosis is accurate. Trust that a device won’t be weaponized. Trust that patient data won’t be sold to the highest bidder.
To the industry leaders reading this: We cannot afford to wait for mandates to secure our technologies. We must get ahead of the threat with secure development practices, robust risk management and a culture of cyber vigilance. This means building cybersecurity into the DNA of innovation—from R&D to postmarket surveillance.
The FDA’s guidance is a call to action. If we answer it thoughtfully, collaboratively and urgently, we can harness the power of AI without sacrificing patient safety. And we can preserve the public’s trust—something no algorithm can restore once it’s lost.