
Frank Ramos on California’s New AI Verification Rule

AI Regulation in Law

A deeper look at Frank Ramos’s viral post on California’s AI bill and what verification duties mean for lawyers and arbitrators.

Tags: LinkedIn content, viral posts, content strategy, AI regulation, legal ethics, court filings, California legislation, arbitration, social media marketing

Frank Ramos recently shared something that caught my attention: "A bill passed on Thursday by the California Senate would require lawyers in the state to verify the accuracy of all materials produced using artificial intelligence, including case citations and other information in court filings." That single sentence captures the moment the legal profession is in right now.

We are past the phase where AI in law is a novelty. It is embedded in research, drafting, discovery workflows, and client communications. Frank’s post highlights a simple idea with big consequences: if you use AI, you still own the output.

In the same post, Frank noted that the measure would move to the State Assembly, and he also flagged an important companion issue: the bill would limit how arbitrators can use generative AI, including prohibiting them from delegating decision-making to AI and restricting reliance on AI-generated information outside the case record unless parties are informed.

Below is my take on why these guardrails matter, what they signal about where regulation is headed, and how legal teams can adapt without overreacting.

What California is trying to solve

Generative AI is excellent at producing fluent text quickly. The problem is that fluency can mask errors. In litigation, an error is not just embarrassing. It can be sanctionable, can damage credibility with the court, and can harm a client.

The most visible risk has been "hallucinated" case citations or mischaracterized holdings. But the broader risk is subtler: AI can produce plausible factual assertions, procedural summaries, or quoted language that looks real but is wrong, incomplete, or out of context.

Key idea from Frank Ramos: if AI helps create it, a lawyer must still verify it before it reaches a court.

This is not a new ethical concept. It is a modernization of existing duties: competence, diligence, candor to the tribunal, and supervision of work product. AI simply changes the speed and scale at which mistakes can be generated.

Verification is the product, not the prompt

When people talk about "using AI" in practice, they often focus on prompts, tools, and efficiency. Frank’s post pulls the focus back to verification, which is the real professional service.

Verification means more than quickly scanning for typos. In the context of court filings, it usually involves:

  • Confirming citations exist and are accurate
  • Checking that quoted language matches the source
  • Validating that the cited authority supports the proposition
  • Ensuring procedural and factual statements are grounded in the record
  • Making sure the filing complies with local rules and disclosure requirements

If AI generates a draft with ten citations, the lawyer’s job is not to admire the draft. The job is to independently confirm each citation and the way it is used.

A practical example

Imagine an AI-drafted motion that says a case stands for a broad rule on admissibility, and it provides a citation that looks correct. Verification means pulling the case, reading the relevant section, and confirming the rule applies in your jurisdiction and to your fact pattern. AI might be directionally helpful, but it is not a substitute for that legal judgment.

Why the bill’s arbitration provision is just as important

Frank also pointed out that the bill addresses arbitrators, not just lawyers. That matters because arbitration has traditionally been more private, more flexible, and less transparent than court proceedings. AI adds a new layer of opacity if a decision-maker uses it in ways the parties cannot see.

The bill, as Frank described it, would:

  • Prohibit arbitrators from delegating decision-making to generative AI
  • Prohibit relying on AI-generated information outside the case record without first telling the parties involved

These are due process concerns as much as technology concerns.

Delegation versus assistance

An arbitrator can use tools to manage workflow, but decision-making is different. If an arbitrator asks a model to decide credibility, weigh evidence, or select a damages number, parties lose the ability to test the reasoning. Even if the arbitrator remains "in the loop," there is a risk that the AI’s output becomes a de facto decision engine.

The "outside the record" problem

AI systems can inject information that was never admitted into evidence. For example, a model might supply an industry statistic, a medical generalization, or a summary of "typical" outcomes. Even when it is accurate, it is still outside the record unless introduced and tested. Frank’s highlight about notifying parties is a procedural safeguard: it lets parties respond, object, or contextualize.

Where regulation is headed

Frank called this measure "one of the first pending in a state legislature on the use of AI by lawyers." Whether or not it is ultimately among the very first, the direction is clear: regulation is moving from general ethics guidance to specific process requirements.

I expect to see more of the following across jurisdictions and professional bodies:

  • Explicit verification duties for AI-assisted filings
  • Clear accountability rules: the lawyer signs, the lawyer owns it
  • Disclosure requirements in limited settings (especially for evidence, declarations, and expert-adjacent work)
  • Governance rules for neutrals (arbitrators and mediators) to preserve fairness
  • Recordkeeping expectations: what tool was used, for what task, and what checks were performed

Importantly, verification rules do not prohibit AI. They set a standard for safe use.

A simple workflow lawyers can adopt now

If you want to operationalize the spirit of what Frank shared, here is a lightweight approach that does not require a massive tech overhaul.

1) Categorize AI use by risk

Not all AI tasks are equal.

  • Low risk: formatting, summarizing your own notes, generating checklists
  • Medium risk: drafting arguments based on sources you already have
  • High risk: generating citations, stating facts, summarizing the record, predicting outcomes

High risk tasks require the strictest verification.

2) Use a "source-first" rule for citations

If the tool can link citations directly to primary sources, use that feature. If it cannot, treat every citation as untrusted until verified in an authoritative database.

3) Add a verification checklist to every filing

Before anything is filed, someone must confirm:

  • Every citation exists
  • Pin cites are correct
  • Quotes match the source
  • The proposition is supported by the authority cited
  • Factual statements match the record cite

This can be done by the drafter or a second reviewer, but it must be done.

4) Document what you checked

You do not need a novel. A short note in the file like "All citations and quotations verified against Westlaw/Lexis and record cites reviewed" can help demonstrate diligence if questions arise.

5) Train the team on failure modes

AI errors are often confident, not obvious. Train lawyers and staff to recognize patterns like:

  • Citations that do not resolve
  • Overbroad statements of law
  • Missing exceptions and jurisdictional nuances
  • "Too perfect" quotes that cannot be found in the source

The bottom line

Frank Ramos’s post resonates because it is not anti-AI. It is pro-responsibility. The legal system runs on trust: trust that citations are real, that facts match the record, and that decision-makers rely on what the parties can see and challenge.

If California requires verification of AI-produced materials, it will not be imposing a new burden so much as making an old duty explicit in a new environment. And if arbitrators are limited in how they can use generative AI, that is a reminder that fairness is not compatible with invisible inputs.

Used well, AI can make lawyers faster and more consistent. Used carelessly, it can erode credibility and due process. Frank’s core message points to the right standard for the profession: innovation is welcome, but accountability is non-negotiable.

This blog post expands on a viral LinkedIn post by Frank Ramos, a Miami trial lawyer recognized by Best Lawyers as Lawyer of the Year for Personal Injury Litigation, Defendants (Miami, 2025) and for Product Liability Defense (Miami, 2020 and 2023). View the original LinkedIn post.