
Frank Ramos on Deepfake Coverage Gaps in Insurance

Cyber Insurance

Frank Ramos spotlights deepfake "hybrid" incidents and why cyber, D&O, and media policies can leave gaps, making brokers essential.


Frank Ramos recently shared something that caught my attention about deepfake risk and insurance: "Many deepfake incidents involve hybrid techniques... While existing lines cover parts of the exposure, coverage gaps may still occur. That's why brokers have such an important role." I keep coming back to that word: hybrid.

Deepfakes are not just a tech novelty anymore. They are becoming a practical tool for fraud, extortion, harassment, market manipulation, and reputational damage. And when an incident blends multiple tactics (social engineering plus malware plus public misinformation), the insurance response can feel equally blended, sometimes in a good way, sometimes in a frustratingly incomplete way.

In this post, I want to expand on Frank's point and make it actionable: where existing policies might respond, where the gaps tend to show up, and how to work with a broker and counsel to reduce surprises at claim time.

Deepfakes are "hybrid" by design

A classic cyber event used to look like one of a few buckets: ransomware, business email compromise, data theft, or system outage. Deepfake incidents often refuse to stay in one bucket because they combine:

  • Voice or video impersonation (a "person" risk)
  • Social engineering (a "process" risk)
  • Account takeover or malware (a "technology" risk)
  • Public distribution of false content (a "media" risk)
  • Governance consequences (a "board and management" risk)

"While existing lines cover parts of the exposure, coverage gaps may still occur."

That is the core warning. A single deepfake can trigger multiple losses at once, and each part may map to a different line of insurance, with different triggers, exclusions, sublimits, and notice requirements.

Where coverage might respond (and what it often misses)

Frank referenced three common lines that can come into play: cyber, D&O, and media liability. Here's a practical look at each.

1) Cyber insurance: the first stop, but not always the last

Cyber policies can potentially respond to:

  • Incident response costs (forensics, breach counsel, crisis consultants)
  • Cyber extortion (depending on wording and conditions)
  • Network interruption and extra expense
  • Privacy liability and regulatory proceedings (if personal data is involved)
  • Social engineering or funds transfer fraud endorsements (sometimes)

Common deepfake-related friction points:

  • Social engineering is frequently sublimited or requires very specific verification procedures
  • Some policies require a "security failure" or "network security failure" trigger, while a deepfake may be purely human manipulation
  • Reputational harm and loss of future revenue are often limited, excluded, or hard to quantify
  • If the deepfake is posted publicly, the harm can look like a media or defamation issue more than a traditional cyber incident

Example: A finance employee receives a deepfake voice call that sounds like the CEO and approves a wire. No malware, no system intrusion, just impersonation plus urgency. If the policy treats it as a crime or social engineering loss with a small sublimit, the company may learn too late that its main cyber limit does not apply the way it expected.

2) D&O insurance: governance fallout, not the immediate loss

D&O can potentially respond when a deepfake incident leads to:

  • Shareholder claims alleging mismanagement of cyber risk
  • Derivative demands tied to weak controls, inadequate disclosures, or lax oversight
  • Securities claims following a market-moving deepfake (for example, false statements attributed to an executive)

But D&O is not designed to reimburse the company for the operational loss itself (like a stolen wire) in the same way cyber might. D&O is about claims against directors and officers, and the entity coverage is often limited to certain claim types.

Common friction points:

  • Conduct exclusions if fraud is established (timing and final adjudication matter)
  • Disputes over what counts as a "claim" and when it was first made
  • Disclosure issues if the organization had known control weaknesses but did not address them

3) Media liability: the public-facing content risk

Media liability (sometimes packaged as part of a broader E&O program) can potentially respond to:

  • Defamation, libel, or trade disparagement claims
  • Invasion of privacy or misappropriation claims in a content context
  • Certain IP claims related to content distribution

Deepfakes can create a media-style loss even when the company is the victim. For instance, if a fake video appears to show a company spokesperson making discriminatory remarks, the company may face third-party claims, employment fallout, and partner disputes.

Common friction points:

  • "Knowing falsity" exclusions if the insured is accused of intentional publication
  • Questions about whether the insured "published" the content or merely failed to stop it
  • Limitations on first-party crisis costs unless specifically endorsed

The coverage gaps Frank is pointing to

When Frank shared that brokers can help identify "what types of protection" a business needs, I read that as a reminder that deepfake losses can fall between policies. The gaps I see most often are:

1) Social engineering gaps and sublimits

Many insureds assume a deepfake-triggered wire loss equals "cyber." In reality it is often treated like a voluntary transfer induced by deception. Coverage may depend on:

  • Whether the policy has a specific social engineering insuring agreement
  • Whether verification procedures were followed exactly
  • Whether the fraud involved impersonation of a vendor, executive, or bank

2) Reputational harm and crisis spend

The most immediate cost after a viral deepfake may be communications, PR, brand monitoring, and customer reassurance. Unless the policy includes explicit crisis coverage, reimbursement can be limited.

3) Bodily injury and property damage exclusions

If a deepfake causes real-world harm (think: a fake safety instruction video that leads to injuries, or a false emergency directive that creates a panic), cyber policies often have bodily injury and property damage exclusions. That can push the loss into other lines, or nowhere at all.

4) AI and synthetic media ambiguity

Many policies were drafted before synthetic media was a common threat. Definitions matter: "computer system," "security failure," "publication," "privacy event," and even "wrongful act" can become battleground terms.

The broker's role: translating hybrid risk into a coherent program

Frank's point that brokers "can guide businesses" is not just general advice. With deepfakes, program design is a coordination exercise.

A good broker (working with coverage counsel as needed) can help you:

  • Map realistic deepfake scenarios to specific insuring agreements
  • Negotiate endorsements that close known gaps (social engineering, crisis, reputational harm)
  • Align notice requirements across policies so you do not jeopardize coverage
  • Clarify which policy is primary for which type of loss
  • Avoid silent overlaps where two insurers each argue the other should pay first

The goal is not to buy every product. The goal is to remove unpleasant surprises.

Practical steps to reduce deepfake risk before renewal

Here are concrete actions that improve both risk and insurability:

1) Build verification into money movement

  • Call-back procedures using known numbers, not numbers provided in an email or message
  • Dual approval for wires and changes to payment instructions
  • Out-of-band confirmation for urgent requests
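The controls above can be sketched as a simple approval gate: no wire releases without two approvers and a completed callback to a number already on file. This is an illustrative sketch only; the function and field names are hypothetical, and real controls live in your payment platform.

```python
# Illustrative sketch: dual approval plus an out-of-band callback
# before a wire is released. All names here are hypothetical.
from dataclasses import dataclass, field

# Directory of verified phone numbers on file -- never numbers taken
# from the request itself.
KNOWN_NUMBERS = {"ceo@example.com": "+1-305-555-0100"}

@dataclass
class WireRequest:
    requester: str
    amount: float
    approvals: set = field(default_factory=set)
    callback_verified: bool = False

def verify_by_callback(req: WireRequest, number_dialed: str) -> None:
    # Out-of-band check: the dialed number must match the directory,
    # not a number supplied in the email or call making the request.
    if KNOWN_NUMBERS.get(req.requester) == number_dialed:
        req.callback_verified = True

def approve(req: WireRequest, approver: str) -> None:
    req.approvals.add(approver)

def may_release(req: WireRequest) -> bool:
    # Dual approval AND a completed callback are both required.
    return len(req.approvals) >= 2 and req.callback_verified

req = WireRequest(requester="ceo@example.com", amount=250_000)
approve(req, "controller")
print(may_release(req))  # False: one approval, no callback yet
approve(req, "treasurer")
verify_by_callback(req, "+1-305-555-0100")
print(may_release(req))  # True: two approvals plus verified callback
```

The point of the gate is that a convincing voice alone changes nothing: the deepfake would also have to defeat the callback to a known number and a second human approver.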

2) Harden identity and access

  • Phishing-resistant MFA for email and finance tools
  • Tight controls on executive calendars and public-facing contact details
  • Monitoring for lookalike domains and executive impersonation accounts
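Lookalike-domain monitoring from the last bullet can start very simply: generate common typosquat and homoglyph variants of your own domain and feed them to a watch list. A minimal sketch, with a deliberately tiny substitution table (a real program would query DNS or a brand-monitoring feed for each variant):

```python
# Generate common lookalike variants of a domain for monitoring.
# The homoglyph table is intentionally small and illustrative.
HOMOGLYPHS = {"o": ["0"], "l": ["1", "i"], "e": ["3"], "i": ["l", "1"]}

def lookalike_variants(domain: str) -> set:
    name, dot, tld = domain.rpartition(".")
    variants = set()
    # Character-substitution lookalikes (e.g. "examp1e.com")
    for pos, ch in enumerate(name):
        for sub in HOMOGLYPHS.get(ch, []):
            variants.add(name[:pos] + sub + name[pos + 1:] + dot + tld)
    # Character-drop typos (e.g. "examle.com")
    for pos in range(len(name)):
        variants.add(name[:pos] + name[pos + 1:] + dot + tld)
    return variants

print(sorted(lookalike_variants("acme.com")))
```

Even a crude list like this, checked daily against new domain registrations, catches many impersonation setups before they are used in a deepfake campaign.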

3) Prepare an "authenticity" incident playbook

Deepfakes move fast. Your playbook should include:

  • Who can declare content false and authorize public statements
  • Evidence preservation steps (download, hash, document)
  • Rapid escalation to platform takedown processes
  • Coordination among legal, PR, HR, and security
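The "download, hash, document" step above can be scripted so it happens the same way every time. A minimal sketch, assuming you have already captured the content bytes; the field names and URL are illustrative:

```python
# Preserve deepfake evidence: hash the exact captured bytes and
# record a timestamped provenance entry you can later produce.
import hashlib
import json
from datetime import datetime, timezone

def preserve_evidence(content: bytes, source_url: str) -> dict:
    # Hash the exact bytes captured; any later edit changes the digest.
    digest = hashlib.sha256(content).hexdigest()
    return {
        "source_url": source_url,
        "sha256": digest,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "size_bytes": len(content),
    }

record = preserve_evidence(b"<captured fake video bytes>",
                           "https://example.com/clip")
print(json.dumps(record, indent=2))
```

The hash matters because platforms take content down quickly: once the original is gone, the digest plus the saved copy is how you prove what circulated and when you preserved it.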

4) Stress test your insurance language

At renewal, ask targeted questions:

  • Does our program cover deepfake-enabled social engineering? What are the sublimits and conditions?
  • Is there coverage for crisis and reputational response costs? Are they first-party or only tied to a third-party claim?
  • If a deepfake triggers securities volatility, how do cyber and D&O interact on notice and allocation?
  • Are there AI-related exclusions or broad "fraudulent instruction" carve-outs?

Closing thought

Frank Ramos highlighted a reality that many organizations are learning the hard way: deepfakes create hybrid incidents, and hybrid incidents expose the seams in an insurance program. Cyber, D&O, and media liability may each respond to pieces of the loss, but pieces are not the same as protection.

If you take one action, make it this: sit down with your broker and run three deepfake scenarios that feel uncomfortably plausible for your business. Then match each cost to an insuring agreement. Where you cannot confidently point to coverage, you have found the gap while you still have time to fix it.

This blog post expands on a viral LinkedIn post by Frank Ramos, a Miami trial lawyer focused on commercial, products, catastrophic personal injury, and AI matters, recognized by Best Lawyers as Lawyer of the Year for Personal Injury Litigation - Defendants (Miami, 2025) and for Product Liability Defense (Miami, 2020 and 2023).