
Richard Tromans on Claude Plugins for Legal Tech

Legal Tech AI

Expanding Richard Tromans's viral post on Midpage's Claude MCP connection and Anthropic legal plugins, and what it means for research.

LinkedIn content · viral posts · content strategy · legal tech · legal research · Anthropic · Claude · AI plugins · social media marketing

Richard Tromans, Founder at Artificial Lawyer, recently shared something that caught my attention: "The intersection between Anthropic and legal tech is busy this Monday...." He then pointed to two connected moves: Midpage launching an MCP connection with Claude to combine a general-purpose LLM with a legal research product, and Anthropic signaling a shift toward distinct legal tech skills via plugins.

That short update contains a big idea worth unpacking: the legal AI market is moving from "chat with an LLM" toward composable systems where a general model is orchestrated through specialized tools, product workflows, and domain guardrails. In other words, the center of gravity is shifting from the model alone to the integration layer and the legal product that wraps it.

"Midpage, the startup focused on legal research, has launched an MCP connection with Claude, to fuse a general LLM with a legal-specific product." - Richard Tromans

Legal research is a high-stakes, citation-sensitive workflow. The job is not merely to produce fluent text. The job is to:

  • Find authoritative sources and distinguish binding from persuasive authority
  • Trace how a case has been treated and whether it is still good law
  • Quote precisely and attribute correctly
  • Build an argument that is auditable by another lawyer
  • Keep client and matter data confidential

General LLMs are strong at language and reasoning patterns, but legal research demands provenance and repeatability. That is why, as Richard Tromans implied, the real innovation often happens at the intersection: a general LLM plus a legal-specific research layer that handles retrieval, citation, and workflow.

What an MCP connection signals (in plain English)

When Richard Tromans mentioned an "MCP connection with Claude," the key takeaway is interoperability. In practice, an integration of this kind aims to let the model interact with a legal research system as a tool rather than relying on the model's internal memory.

Instead of:

  • Lawyer asks Claude a question
  • Claude answers based on general knowledge and whatever it can infer

You get:

  • Lawyer asks a question in Claude
  • Claude calls Midpage (or another legal tool) to search, retrieve, and structure results
  • Claude drafts with citations grounded in retrieved sources
  • The system can log what was pulled, from where, and when

This is a step toward making LLM outputs more defensible. Not perfect, but materially better than a purely open-ended chat.
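The tool-grounded flow above can be sketched in a few lines. This is purely illustrative: `LegalTool`, `search_cases`, and the sample case are invented names standing in for whatever a real MCP server would expose, not Midpage's or Anthropic's actual API.

```python
from dataclasses import dataclass

@dataclass
class Retrieval:
    source: str        # e.g. a case citation
    passage: str       # the passage actually relied on
    retrieved_at: str  # when it was pulled

class LegalTool:
    """Stands in for a research product exposed to the model as a tool."""
    def search_cases(self, query: str) -> list[Retrieval]:
        # A real MCP server would query the vendor's research database here.
        return [Retrieval("Smith v. Jones, 123 F.3d 456",
                          "a relevant passage", "2024-06-03T10:15Z")]

def answer_with_grounding(question: str, tool: LegalTool) -> dict:
    hits = tool.search_cases(question)           # the model calls the tool
    draft = f"Based on {len(hits)} retrieved source(s)."
    # The audit trail: what was pulled, from where, and when.
    audit_log = [(h.source, h.retrieved_at) for h in hits]
    return {"draft": draft,
            "citations": [h.source for h in hits],
            "log": audit_log}

result = answer_with_grounding("Is the clause enforceable?", LegalTool())
```

The point of the sketch is the shape of the return value: the draft travels together with its citations and a log, rather than arriving as unattributed text.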

Richard Tromans also noted that Anthropic is moving into offering distinct legal tech skills via plugins, describing it as a major moment if a leading model provider starts packaging domain capabilities.

Conceptually, this is the difference between:

  • A model that can talk about legal topics

and

  • A model that can do legal tasks through tools, guardrails, and structured actions

In a plugin-style world, "legal" becomes a set of capabilities: research, citation checking, document comparison, clause extraction, timeline building, privilege filtering, and matter-specific drafting. The model becomes an orchestrator that routes work to the right tool and then explains the result in natural language.
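That orchestration idea can be made concrete with a toy router. The capability names below are assumptions drawn from the list above, not actual plugin identifiers.

```python
# Toy "model as orchestrator": route a request to a named legal capability.
CAPABILITIES = {
    "research": lambda q: f"search + citator results for: {q}",
    "cite_check": lambda q: f"validated citations in: {q}",
    "clause_extraction": lambda q: f"extracted clauses from: {q}",
}

def route(task: str, payload: str) -> str:
    handler = CAPABILITIES.get(task)
    if handler is None:
        raise ValueError(f"no capability registered for {task!r}")
    return handler(payload)

answer = route("research", "limitation of liability, Delaware")
```

The design choice worth noticing: unknown tasks fail loudly instead of falling back to free-form generation, which is exactly the guardrail behavior a plugin model enables.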

If you are going to introduce tool-enabled legal skills, research is a logical starting point because the value is obvious and measurable.

A typical research workflow includes steps that tools can standardize:

  1. Formulate the issue and jurisdictions
  2. Retrieve primary and secondary sources
  3. Validate authority (citator behavior, treatment, status)
  4. Extract key holdings and tests
  5. Draft a memo with quotes and pinpoint cites

An LLM integrated with a research product can accelerate steps 1, 4, and 5 while the research system strengthens steps 2 and 3. This is precisely the "fusion" Richard Tromans described.

When legal teams evaluate AI today, it is tempting to compare models head-to-head. But the trend Richard Tromans surfaced suggests a different evaluation frame:

1) Workflow fit becomes the differentiator

If the model is increasingly accessible through multiple vendors, then the legal-specific layer matters more: source coverage, citation quality, filters, jurisdiction controls, export formats, and collaboration.

2) Provenance and auditability become table stakes

A tool-connected workflow can produce a record: what queries were run, what documents were retrieved, and which passages were used. That is essential for legal quality control and for managing risk.
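A minimal sketch of what such a record could look like, assuming invented field names rather than any vendor's actual log schema:

```python
import json
from datetime import datetime, timezone

def audit_record(query: str, documents: list[str], passages_used: list[str]) -> str:
    """Emit one provenance entry per research query as a JSON line."""
    record = {
        "query": query,
        "documents_retrieved": documents,
        "passages_used": passages_used,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

entry = audit_record(
    "enforceability of limitation of liability",
    ["Case A", "Treatise B §4.2"],
    ["Case A at 12"],
)
```

Because each entry is a self-describing JSON line, it can be exported for audit or replayed during quality review without touching the model at all.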

3) Security and data boundaries become more explicit

Plugins and integrations raise real questions:

  • What data leaves your environment?
  • Is matter content sent to the model provider, the legal tool, or both?
  • How is it stored, and for how long?
  • Who can access logs?

The more "agentic" the system becomes, the more governance matters.

A practical example: research memo with tool grounding

Imagine an associate needs a short memo on whether a particular contractual limitation of liability clause is enforceable under a given state's law.

In a tool-fused setup:

  • The associate asks Claude for the rule and factors.
  • Claude uses the Midpage connection to retrieve relevant cases and treatises.
  • Claude drafts a memo that includes:
    • A short rule statement
    • A list of key cases with citations
    • Quotations with pinpoint cites
    • A reasoned application to the facts (with clear assumptions)

The human lawyer still reviews, verifies, and edits. But the time spent on first-pass synthesis and formatting can drop sharply.
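The memo structure above is mechanical enough to sketch. The inputs here are placeholders, not real research output, and the section layout is one plausible house style among many.

```python
def build_memo(rule: str, cases: list[str], quotes: list[str], application: str) -> str:
    """Assemble the four memo sections from retrieved material."""
    sections = [
        "RULE\n" + rule,
        "KEY CASES\n" + "\n".join(f"- {c}" for c in cases),
        "QUOTATIONS\n" + "\n".join(f"> {q}" for q in quotes),
        "APPLICATION\n" + application,
    ]
    return "\n\n".join(sections)

memo = build_memo(
    rule="Limitation-of-liability clauses are generally enforceable, subject to exceptions.",
    cases=["Case A, 123 F.3d 456 (2020)"],
    quotes=['"Such clauses are enforceable absent unconscionability." Case A at 460'],
    application="On these assumed facts, the clause is likely enforceable.",
)
```

This is the "first-pass synthesis and formatting" the associate no longer types by hand; the legal judgment in each section still comes from the review step.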

What to watch next: where the market could go

Richard Tromans's brief "busy Monday" observation hints at several near-term developments.

Composability will beat monoliths

Rather than one vendor doing everything, we will likely see stacks: a model (Claude), a research layer (Midpage or others), and additional tools for drafting, redlining, knowledge management, and e-discovery.

Product moats shift upward

Model quality matters, but the durable advantage often becomes:

  • Exclusive or deeply curated legal content
  • Superior citation and validation workflows
  • UX designed for lawyers, not generic chat
  • Integration into firm systems (DMS, KM, billing, matter management)

The procurement conversation changes

Instead of buying "an LLM," legal buyers increasingly buy:

  • A set of skills (research, cite check, drafting)
  • With defined data handling
  • With measurable accuracy and coverage
  • With integration commitments

If you are assessing a Claude-plus-legal-tool approach, here are practical questions to put in front of vendors:

  1. What sources are used, and how often are they updated?
  2. Can the system show supporting passages for every key claim?
  3. How does it handle negative treatment and superseded authority?
  4. What is the citation format, and can it match firm standards?
  5. What gets logged, and can logs be exported for audits?
  6. Where does data flow, and what are the retention defaults?
  7. Can we restrict jurisdictions, practice areas, or source sets?
  8. How do we evaluate performance beyond anecdotes (benchmarks, test sets, red teaming)?
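Question 8 deserves a concrete shape. A minimal sketch of benchmark-style evaluation, assuming a toy stand-in for the real Claude-plus-tool pipeline and an invented two-item test set:

```python
# Score the system on citation recall over a small test set, not anecdotes.
TEST_SET = [
    {"question": "Is Case A still good law?", "expected_citation": "Case A"},
    {"question": "What is the rule for limitation clauses?", "expected_citation": "Case B"},
]

def toy_system(question: str) -> list[str]:
    # Stand-in for the real model + research-tool pipeline under test.
    return ["Case A"] if "Case A" in question else ["Case C"]

def citation_recall(test_set, system) -> float:
    hits = sum(item["expected_citation"] in system(item["question"])
               for item in test_set)
    return hits / len(test_set)

score = citation_recall(TEST_SET, toy_system)
```

Even a harness this small turns "the demo looked good" into a number that can be tracked across vendors and model versions.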

My takeaway from Richard Tromans's post

Richard Tromans was not just sharing a product update. He was pointing at an architectural shift: general LLMs are becoming platforms, and legal value is increasingly delivered through domain tools connected via standardized interfaces. Midpage connecting to Claude is one example of that "fusion" in action, and Anthropic exploring plugin-based legal skills suggests the model providers are leaning into this direction rather than leaving it entirely to third parties.

For lawyers and legal ops teams, the opportunity is real: faster research, better first drafts, and more consistent outputs. But the responsibility grows too: integration governance, verification workflows, and clear policies for what can be automated.

This blog post expands on a viral LinkedIn post by Richard Tromans, Founder, Artificial Lawyer. View the original LinkedIn post →