Frank Ramos on Healthcare AI Governance That Scales
A practical expansion of Frank Ramos's post on healthcare AI governance, model risk questions, and upcoming regulation and reimbursement.
Frank Ramos recently shared something that caught my attention: "The era of 'AI as a feature' is ending," and, looking into 2026, AI is becoming "the system of decision support, navigation and automation" across telehealth, remote monitoring, care coordination, and utilization workflows.
That framing is worth sitting with. It suggests a shift from "nice-to-have" AI add-ons to AI that quietly shapes operational reality: what gets flagged, routed, escalated, documented, and reimbursed. And as Frank pointed out, that is exactly why HHS is "focusing directly on how AI should be regulated, reimbursed and supported."
Below is my take on what this means for health-tech leaders, compliance teams, and the attorneys who will be asked (often late in the game) to make the risk legible.
The shift: from feature AI to infrastructure AI
When AI is a feature, it is relatively easy to contain.
- It sits inside a single workflow.
- A product manager can describe it in a sentence.
- A user can ignore it.
- A failure is irritating, not existential.
When AI becomes infrastructure, it is different.
- It influences clinical communications and operational decisions.
- It can trigger downstream actions across multiple systems.
- It can be embedded in care pathways, utilization review, scheduling, triage, and documentation.
- It can create institutional dependency, where the human process is no longer viable without the model.
Frank’s post connects that reality to the regulatory and reimbursement conversation. If AI participates in care delivery decisions (even indirectly), payers and regulators will ask: Who is accountable? What evidence supports safety and effectiveness? What gets billed, and under what assumptions?
The "without hesitation" questions are the real moat
Frank highlighted a key point from Maguregui and Hennessy: the fastest-growing health-tech outfits will be those that can answer certain questions "without hesitation." I agree, and I would go further: these are not just due diligence questions. They are the questions that determine whether your organization can scale AI beyond a pilot.
The winners are not the teams with the flashiest demo. They are the teams that can explain their model, data rights, monitoring, and human oversight in plain English.
Let’s break down the core questions Frank listed and why each one is operational, legal, and reputational.
1) What does the model do, exactly, and what does it not do?
If you cannot draw a clean boundary around model behavior, you cannot govern it.
In healthcare, this is especially dangerous because ambiguity gets interpreted as clinical capability. A model that "suggests" might be treated as one that "decides." A tool that "summarizes" might be treated as one that "documents." And a model that "flags risk" might be treated as "diagnosing."
Practical guidance:
- Write an "intended use" statement in non-technical language.
- Write an equally explicit "non-intended use" statement.
- Map every downstream workflow dependency: who sees the output, when, and what they do next.
2) What data trained it, and do you have defensible rights to use that data for that purpose?
This is where the conversation goes from "our vendor said it is fine" to "show me the paper trail."
In health-tech, training data questions show up in multiple forms:
- Patient data and consent scope (including HIPAA-adjacent issues, de-identification claims, and data use agreements).
- Third-party datasets with restrictive licenses.
- Web-scraped content or clinical text sources with unclear provenance.
- Fine-tuning on customer data in a way that accidentally creates cross-customer leakage risk.
Defensible rights are not just about avoiding lawsuits. They are about maintaining continuity of service. If a training dataset becomes unusable due to a contractual dispute, your model lifecycle and your product roadmap can collapse.
Practical guidance:
- Maintain a dataset register: source, license, permitted uses, restrictions, retention, and deletion triggers.
- Ensure your model card references the dataset register, not vague descriptions.
- In vendor contracts, explicitly address training, fine-tuning, and whether customer data may be used to improve models.
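A dataset register can start as something very simple. Here is a minimal sketch of one register entry as a Python dataclass; all field names and example values (dataset name, DUA reference, permitted uses) are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    # One row of the dataset register: where the data came from
    # and what you are contractually allowed to do with it.
    name: str
    source: str
    license_ref: str
    permitted_uses: set
    restrictions: list = field(default_factory=list)
    retention_days: int = 365
    deletion_triggers: list = field(default_factory=list)

    def permits(self, use: str) -> bool:
        """Return True only if the proposed use is explicitly listed."""
        return use in self.permitted_uses

# Example entry (all values hypothetical)
record = DatasetRecord(
    name="deidentified-claims-2024",
    source="Payer X data use agreement",
    license_ref="DUA-2024-017",
    permitted_uses={"model-training", "validation"},
    restrictions=["no re-identification", "no cross-customer fine-tuning"],
    deletion_triggers=["DUA termination", "customer offboarding"],
)
```

The design choice worth noting: `permits` is allow-list only. If a use is not explicitly written down, the register says no, which mirrors how a contract reviewer reads a data use agreement.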
3) How do you monitor drift, bias, and safety issues post-deployment?
Healthcare environments change. Patient populations shift. Clinical guidelines evolve. Seasonality changes symptom distribution. Coding practices change. Even UI tweaks can change user behavior and therefore model inputs.
Drift monitoring is not optional if AI is part of decision support. And bias monitoring is not a press release line. It is an ongoing measurement problem.
Practical guidance:
- Define drift metrics tied to clinical or operational outcomes, not just statistical distance.
- Set monitoring cadence and thresholds that trigger investigation.
- Build a post-deployment incident process similar to security: detect, triage, mitigate, document, and learn.
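The threshold-triggers-investigation step above can be sketched in a few lines. This is a deliberately simplified example, assuming the monitored metric is something clinically meaningful like a weekly escalation rate; real drift detection would use richer statistics, but the triage logic looks the same.

```python
def check_drift(baseline: float, current: float, tolerance: float) -> str:
    """Return a triage status based on relative change from baseline.

    `baseline` and `current` are any clinically meaningful rate (e.g. the
    model's weekly escalation rate); `tolerance` is the relative change
    that should trigger investigation. All names are illustrative.
    """
    if baseline == 0:
        return "investigate"  # no valid baseline to compare against
    change = abs(current - baseline) / baseline
    return "investigate" if change > tolerance else "ok"

# Weekly escalation rate moved from 4% to 7%: a 75% relative change,
# well past a 25% tolerance, so this run should open an investigation.
status = check_drift(baseline=0.04, current=0.07, tolerance=0.25)
```

The point is not the arithmetic; it is that "investigate" is a defined state with an owner, not an informal judgment call.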
4) What is the human oversight and escalation path when AI influences care or clinical communications?
Frank’s post repeats this oversight question, and that repetition feels appropriate. Oversight is where many AI programs become brittle: everyone assumes someone else is watching.
Human oversight needs to be concrete:
- Who owns the workflow?
- Who can override the model?
- When must the AI output be reviewed before action?
- What gets logged for auditability?
- How are clinicians trained on limitations and failure modes?
Practical guidance:
- Establish a "human-in-the-loop" policy per AI tier (more on tiers below).
- Define escalation paths for patient safety concerns, adverse events, and suspected bias.
- Require documentation when AI meaningfully influences a decision or patient-facing message.
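That last documentation requirement is easiest to enforce when every AI-influenced decision produces one auditable record. A minimal sketch, with field names that are illustrative assumptions, might look like this:

```python
import datetime

def log_ai_influence(decision_id, model, model_version,
                     output_summary, reviewer, overridden):
    """Build one auditable record of an AI-influenced decision.

    The schema here is hypothetical; the point is that every
    AI-influenced action leaves a reviewable trail naming the
    human who could (and whether they did) override the model.
    """
    return {
        "decision_id": decision_id,
        "model": model,
        "model_version": model_version,
        "output_summary": output_summary,
        "reviewer": reviewer,      # the human who owned the decision
        "overridden": overridden,  # whether the reviewer overrode the model
        "logged_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

entry = log_ai_influence("case-1041", "triage-router", "2.1",
                         "flagged as urgent", "Dr. Example",
                         overridden=False)
```

Capturing the model version alongside the reviewer is what makes the record usable later, when someone asks which model said what and who was watching.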
Why HHS regulation and reimbursement matter now
Frank noted that HHS is focusing on "how AI should be regulated, reimbursed and supported." This is the hinge point: reimbursement is behavior-shaping.
Once reimbursement policies reflect AI-enabled workflows, organizations will standardize around those workflows. That increases the stakes of governance because the model is no longer experimental. It is part of the institution’s economic engine.
Even before specific rules mature, you can anticipate pressure in three areas:
- Evidence: demonstrating performance and safety for the intended population.
- Accountability: documenting roles, oversight, and audit trails.
- Transparency: being able to explain what the system did and why, especially when outcomes are questioned.
The action steps that turn governance into a growth enabler
Frank shared that Maguregui and Hennessy recommend steps like creating an enterprise AI inventory, setting a tiered governance model, and contracting for your AI position. Those three moves are foundational because they replace ad hoc decisions with repeatable controls.
Build an enterprise AI inventory (and keep it alive)
An AI inventory is not a spreadsheet you make once for a board deck. It is a living map of where models exist and what risk they create.
Include:
- Model name, version, owner, vendor, and deployment locations.
- Intended use, users, and affected patient populations.
- Data inputs and outputs, including whether PHI is involved.
- Monitoring plan, audit logs, and incident history.
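To make the "living map" idea concrete, here is a minimal inventory sketch, with two entirely hypothetical entries, plus the query a regulator or payer will ask first: which models touch PHI?

```python
# Minimal illustrative inventory; model names, owners, and vendors
# are invented for the example.
inventory = [
    {
        "model": "triage-router", "version": "2.1", "owner": "Clinical Ops",
        "vendor": "internal", "deployments": ["ED intake"],
        "intended_use": "route inbound messages by urgency",
        "phi": True, "tier": 3,
        "monitoring": "weekly drift report", "incidents": [],
    },
    {
        "model": "appointment-reminder", "version": "1.0", "owner": "IT",
        "vendor": "VendorCo", "deployments": ["patient portal"],
        "intended_use": "draft reminder messages",
        "phi": False, "tier": 1,
        "monitoring": "monthly spot check", "incidents": [],
    },
]

def models_touching_phi(entries):
    """The first question an auditor asks: which models see PHI?"""
    return [e["model"] for e in entries if e["phi"]]
```

Whether this lives in a database, a GRC tool, or a reviewed spreadsheet matters less than whether it is versioned, owned, and queryable.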
Use tiered governance, not one-size-fits-all approvals
Not all AI needs the same level of review. A tiering approach lets you move fast where risk is low and slow down where risk is high.
Example tiers:
- Tier 1: Administrative automation (low clinical impact).
- Tier 2: Operational decision support (indirect care impact).
- Tier 3: Clinical communications and clinical decision support (high impact).
Each tier should have required controls: testing, monitoring, clinical review, and documentation.
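A tiered control scheme can be expressed as a simple mapping plus a gate. The control names below are assumptions for illustration, not a regulatory checklist; the structural point is that higher tiers include every lower tier's controls, and deployment is blocked until all required controls are complete.

```python
# Illustrative tier-to-controls mapping; control names are assumptions.
TIER_CONTROLS = {
    1: {"pre-deployment testing", "change log"},
    2: {"pre-deployment testing", "change log", "drift monitoring"},
    3: {"pre-deployment testing", "change log", "drift monitoring",
        "clinical review", "human-in-the-loop sign-off", "audit logging"},
}

def required_controls(tier: int) -> set:
    """Return the control set for a tier (higher tiers are supersets)."""
    return TIER_CONTROLS[tier]

def approve(tier: int, completed: set) -> bool:
    """Approve deployment only when every required control is done."""
    return required_controls(tier) <= completed
```

A Tier 1 tool with testing and a change log sails through; a Tier 3 clinical tool with only those two controls does not, which is exactly the "move fast where risk is low" behavior the tiering is meant to produce.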
Contract for your AI position (before you need it)
Contracts define reality when something goes wrong. If you are relying on a vendor, your governance program is only as strong as your negotiated rights.
Key terms to negotiate:
- Clear description of model behavior and limitations.
- Data use rights and restrictions, including training and retention.
- Audit rights and transparency commitments.
- Incident notification timelines and cooperation obligations.
- Indemnities and liability allocation aligned to your risk tier.
A simple litmus test for readiness
If a regulator, payer, partner health system, or plaintiff’s counsel asked you to answer Frank’s highlighted questions tomorrow, could you respond quickly with documentation?
If the honest answer is "not yet," the fix is not to slow down AI. The fix is to operationalize governance so scaling becomes safe, explainable, and durable.
In 2026, the competitive edge is not just having AI. It is being able to prove you are in control of it.
This blog post expands on a viral LinkedIn post by Frank Ramos, a Miami trial lawyer focusing on commercial, products, catastrophic personal injury, and AI matters, and Best Lawyers "Lawyer of the Year" for Personal Injury Litigation, Defendants (Miami, 2025) and Product Liability Defense (Miami, 2020 and 2023).