Shadow AI in Healthcare: What Mid-Market Clinics Must Do Before Regulators Act

Written By Sara Renfro

Clinicians at small and mid-sized practices are already using AI tools their employers haven't approved. Documentation assistants, inbox drafters, summary generators. Staff reach for whatever cuts the admin backlog. Most of these tools sit outside any governance framework, and for a growing number of clinics the clear way forward is private AI for healthcare organisations, deployed within their own infrastructure so that clinical data never leaves their walls.

This isn’t a fringe issue. A December 2025 survey by Wolters Kluwer found that over 40% of healthcare workers have encountered unauthorised AI tools in their workplace, and nearly one in five admitted to using one themselves.

The Problem Isn’t Rebellion: It’s Absence

When a GP spends forty minutes after clinic finishing referral letters that could take ten with a summarisation tool, they’re going to find something that helps. If the practice hasn’t provided an approved option, the gap gets filled by whatever is available. ChatGPT on a personal phone, a browser extension nobody vetted, a free transcription app with unclear data handling.

That’s not malice. Half the respondents in the Wolters Kluwer survey said they used shadow AI simply because it made their workflow faster. Another third said approved tools either didn’t exist or lacked the functionality they needed.

The real risk sits with the organisation, not the clinician. When patient data enters an external AI platform, it leaves the practice’s control entirely. No BAA. No audit trail. No way to know whether that data ends up in a training set. In the US, average healthcare breach costs now exceed $7 million. In the UK, ICO enforcement actions for data handling failures are climbing year on year.

Why Mid-Market Clinics Are More Exposed

Large hospital systems have dedicated AI governance committees, enterprise licences for ambient scribes, and IT departments that can monitor network traffic for unauthorised tools. A regional ophthalmology group with eight locations does not.

Mid-market clinics, the ones with twenty to two hundred staff, face the same compliance obligations as a major health system but with a fraction of the infrastructure. HIPAA doesn’t scale its expectations based on headcount. Neither does the NHS Data Security and Protection Toolkit. The rules apply equally, but the resources to meet them don’t.

And this is precisely where shadow AI hits hardest. Staff in smaller practices wear more hats. The practice manager handling operations, compliance, and HR simultaneously isn’t going to run monthly audits of browser extensions. The gap between what’s required and what’s realistic creates the conditions for ungoverned AI use to spread quietly.

What Regulators Are Doing Now, and What’s Coming Next

In the US, HHS has proposed the first major update to the HIPAA Security Rule in over twenty years. The draft language makes clear that patient data processed by AI systems, including training data, model outputs, and algorithmic predictions, falls squarely under HIPAA protection. The distinction between “required” and “addressable” safeguards is being removed. Civil penalties now exceed $2 million per violation category annually.

At state level, Texas already requires written AI disclosure to patients as of January 2026. California, Illinois, Ohio, and Pennsylvania are pursuing similar legislation.

In the UK, the MHRA established a National Commission on clinical AI regulation in early 2026, tasked with producing a regulatory framework this year. The UK Sovereign AI Unit launched in April with £500 million in funding and an explicit mandate around domestic data processing. Meanwhile, 5.5 million patients have exercised the National Data Opt-Out, a signal that public tolerance for casual data handling is thinning fast.

For clinics that haven’t addressed shadow AI internally, the window for voluntary action is narrowing. Waiting for enforcement to force the issue is considerably more expensive than getting ahead of it.

Four Moves That Actually Help

Policy-only responses don’t work. You can circulate a memo telling staff not to use ChatGPT, but if the alternative is forty minutes of typing after a twelve-hour shift, the memo loses. What works is making the governed option easier than the ungoverned one.

Run a quiet audit first

Before writing any policy, understand what staff are actually using. Check browser extensions, app store activity on work devices, and network logs. You can’t govern what you can’t see, and the findings are usually a surprise.
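
As a rough illustration of what that audit can look like, here is a minimal Python sketch that scans an exported proxy or DNS log for traffic to well-known public AI endpoints. The CSV column names, the log file path, and the domain watchlist are all assumptions; adjust them to whatever your firewall or resolver actually exports.

```python
import csv
from collections import Counter

# Hypothetical watchlist: public AI endpoints worth flagging in egress logs.
AI_DOMAINS = {
    "chat.openai.com", "api.openai.com", "claude.ai",
    "gemini.google.com", "api.anthropic.com", "otter.ai",
}

def audit_proxy_log(path: str) -> Counter:
    """Count requests per (device, domain) pair that hit a watched AI domain.

    Assumes a CSV export with 'device' and 'host' columns; rename these
    to match your proxy or DNS resolver's actual export format.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower().strip()
            # Match the domain itself or any subdomain of it.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["device"], host)] += 1
    return hits

if __name__ == "__main__":
    for (device, host), count in audit_proxy_log("proxy_export.csv").most_common(20):
        print(f"{device:<20} {host:<25} {count} requests")
```

Even a crude count like this usually surfaces which teams are leaning on unapproved tools, which is exactly the map you need before writing any policy.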

Pick one workflow and fix it properly

Don’t try to solve everything at once. If the biggest pain point is clinical correspondence, deploy a compliant documentation tool for that workflow. One working, governed AI tool that genuinely saves time does more to reduce shadow AI than ten policies.

Build governance alongside delivery, not after it

Access controls, audit logging, prompt versioning, and incident response procedures need to exist from day one of any AI deployment. Retrofitting governance onto a tool people are already using is slower, messier, and far more expensive.
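
To make that concrete, here is a minimal sketch of what a day-one audit layer might look like: a wrapper that records who called the model, when, and under which prompt version, storing only hashes of the clinical text rather than the text itself. The function, file, and version names are illustrative, not any particular product's API.

```python
import hashlib
import json
import time
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit.jsonl"              # append-only audit trail
PROMPT_VERSION = "referral-letter/v1.2"   # hypothetical versioned prompt template

def audited_call(user_id: str, prompt: str, model_fn):
    """Run an AI call through a minimal audit layer.

    Logs who called, when, which prompt version, and hashes of the input
    and output, never the raw clinical text. `model_fn` stands in for
    whatever client function your deployment actually uses.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt_version": PROMPT_VERSION,
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    start = time.monotonic()
    output = model_fn(prompt)
    record["latency_s"] = round(time.monotonic() - start, 3)
    record["output_sha256"] = hashlib.sha256(output.encode()).hexdigest()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return output
```

The design choice worth noting is the hashing: you get a tamper-evident trail for incident response without creating a second copy of patient data that itself needs protecting.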

Make the infrastructure decision early

The fundamental architectural question, whether your AI processes data inside or outside your organisation's boundaries, determines everything else. Cloud-based AI with a BAA covers some of the risk. But for clinics handling sensitive clinical data across multiple locations, keeping AI within your own infrastructure removes the most complex compliance variable entirely.
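
In practice, the in-boundary option is less exotic than it sounds. Many self-hosted model servers (vLLM and Ollama, for example) expose an OpenAI-compatible API, so the application code barely changes; only the endpoint does. A sketch, assuming the openai Python package (v1+) and a hypothetical internal URL and model name:

```python
from openai import OpenAI  # OpenAI-compatible client; many self-hosted servers speak this API

# Hypothetical in-house endpoint: an on-premises model server behind your
# firewall exposing an OpenAI-compatible API. With this setup, patient data
# never leaves the network segment you control.
client = OpenAI(
    base_url="https://ai.internal.example-clinic.nhs.uk/v1",  # placeholder URL
    api_key="internal-only-key",  # issued by your own gateway, not a vendor
)

response = client.chat.completions.create(
    model="local-clinical-model",  # whatever model your server actually hosts
    messages=[{"role": "user", "content": "Summarise this referral letter: ..."}],
)
print(response.choices[0].message.content)
```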

The Regulatory Clock Is Running

Shadow AI in healthcare is not going away. Clinicians will keep using tools that save them time, regardless of policy memos. The question for practice leaders is whether those tools operate inside a governed, compliant environment or outside it entirely.

Mid-market clinics that get ahead of this are doing three things: deploying approved alternatives, building governance into their workflows, and investing in compliant AI infrastructure. Private AI that keeps data within organisational boundaries puts them on solid ground when the new rules arrive. The ones that wait will be retrofitting under pressure, at higher cost, and with considerably less control over the outcome.

The regulatory direction is clear on both sides of the Atlantic. What matters now is whether your organisation acts before it has to.
