The $285 Billion Misunderstanding

Last week, software stocks - legal tech in particular - crashed. RELX, Wolters Kluwer, Thomson Reuters, and others: over $285 billion in market value evaporated within hours.

What triggered the panic? Anthropic announced a legal plugin for Claude. Legal LinkedIn's interpretation: "Foundation models are coming for legal tech. This is the beginning of the end for SaaS vendors."

The reality: Anthropic just open-sourced six prompt templates that demonstrate capabilities developers have had access to since February 2025. Nothing changed technologically. What changed is who can access it.

This isn't a competitive threat from Anthropic. It's foundation layer commoditization, which means the actual competitive dynamics in legal tech just shifted dramatically. But not in the direction everyone thinks.

The Evolution: From Coding Agent to Legal Plugin

To understand what actually happened, you need to see how we got here. This didn't start with a legal plugin announcement. It started in February 2025 with Anthropic's release of Claude Code, which is, as of today, probably the most capable and most popular coding agent among developers.

And what eventually emerged wasn't just better models. It was an agentic ecosystem: a stack of interoperable tools - foundation models, agent harnesses, protocol standards, and reusable workflows - that together enable sophisticated automation without proprietary platforms.

Claude Code

Claude Code is not a chatbot that suggests code. It is an agent with terminal access, file manipulation, multi-step workflows, and actual execution capability. Claude Code can:

  • Read and modify entire codebases
  • Execute commands directly in your terminal
  • Build multi-file projects from scratch
  • Debug, refactor, and test code iteratively
  • Connect to external systems and data sources

Developers loved it immediately, and other providers as well as the open source community followed with their own coding agents (e.g., OpenAI's Codex, Google's Gemini CLI, or the open source tool OpenCode).

Model Context Protocol

One reason for Claude Code's versatility and extensibility is its MCP (Model Context Protocol) integration. MCP, also developed by Anthropic, is a standardized way for AI agents to connect to external data sources and tools.

Before MCP, every integration was custom-built. Want your AI to access Google Drive? Build a connector. Slack? Another integration. Your document management system? Start from scratch. This created the "M×N problem": each of M AI tools needed its own integration with each of N data sources.

MCP solved this by creating a universal standard. Now anyone can build an MCP server that exposes data or tools, and any AI application can connect automatically. Build one integration, use it everywhere. And there are already hundreds, if not thousands, of MCP servers out there, built by companies, open source projects, and individual developers. Look on GitHub and you'll find MCP servers for everything from databases to CRMs to document management systems, plus curated lists collecting them.

All major providers and coding agents adopted MCP. The protocol became an open standard. The plumbing connecting AI to real-world data commoditized overnight.

Why does this matter? Your contract database, your DMS, your client matter system - wrap them in MCP servers, and suddenly any AI agent (Claude Code, ChatGPT, whatever comes next) can access them through the same interface.
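To make the idea concrete, here is a sketch in plain Python of the kind of tool an MCP server exposes. This is a simplified stand-in, not the actual MCP SDK or protocol; the contract data and function names are hypothetical:

```python
# Simplified sketch of what an MCP server exposes: a named tool with a
# description and a typed handler. Real servers use an MCP SDK and speak
# the protocol over stdio or HTTP; the data here is hypothetical.
CONTRACTS = {
    "nda-001": "Mutual NDA with Acme Corp, governed by Delaware law.",
    "msa-002": "Master services agreement with Globex, 30-day termination.",
}

def search_contracts(query: str) -> list[str]:
    """Tool handler: return IDs of contracts whose text mentions the query."""
    q = query.lower()
    return [cid for cid, text in CONTRACTS.items() if q in text.lower()]

# An MCP server registers handlers like this under a tool name plus a
# description, and any MCP-aware agent (Claude Code, Cowork, ...) can call it.
TOOL_REGISTRY = {
    "search_contracts": {
        "description": "Full-text search over the contract database",
        "handler": search_contracts,
    }
}
```

The point of the standard is that this one registration works for every MCP-aware agent: build the integration once, use it everywhere.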

Skills

Then came Skills. Yet another mechanism developed by Anthropic for packaging reusable agent workflows.

A skill is a template consisting of pre-configured instructions, rules, and tool integrations that teach Claude how to perform specific tasks your way. In essence, it is a markdown file (shouldn't surprise you if you have read my previous posts) that defines a structured workflow for a particular task, including:

  • Instructions: Step-by-step guidance on when to use the skill (i.e., when to load it as a prompt into the LLM's context) and how to approach the task (the prompt template so to speak)
  • Resources: Reference materials, templates, or examples to guide the agent's reasoning
  • Scripts: Pre-written code snippets that the agent can use or modify to execute specific functions
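For illustration, a minimal skill file might look like this. The frontmatter layout follows the pattern used in Anthropic's skills repo; the NDA workflow itself is a hypothetical example, not one of the published skills:

```markdown
---
name: nda-triage
description: Triage incoming NDAs against our standard positions.
---

# NDA Triage

Use this skill when the user asks to review or triage an NDA.

1. Read the NDA and extract the parties, term, and governing law.
2. Compare each clause against the checklist in resources/playbook.md.
3. Flag any non-standard clause and suggest fallback language.
4. Summarize the result as standard / negotiate / escalate.
```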

Similar to MCP, skills became an open standard, and other coding agents support them. And since skills are just files and folders, they are portable and shareable. Save them. Modify them. Share them across your organization. Publish them for others to use. Again, there are already hundreds, if not thousands, of skills out there, free to use, adjust, and plug into your agent.

What is important to understand here: skills are just prompts paired with the right tools and data. You don't need to write code. Anyone can create a skill just by defining the workflow as a prompt in a markdown file. To be clear here: I do not want to downplay the value of skills by calling them "just prompts." They are a powerful way to encode complex workflows in a reusable format. But they are not "proprietary technology" or "deep domain expertise." They are structured instructions that anyone can write.

Pairing skills with MCP integrations in a powerful agent harness like Claude Code creates a system where you can define complex workflows that access real-world data and tools and execute them systematically. So, given its capabilities, the name Claude Code was actually a misnomer. It's not just for coding. It's an agent that can do anything you can do on a computer: write scripts, manipulate files, process documents. And if it can't do something directly, you can use Claude Code to write a script that does it.

Now, if you think about how much of your legal work you do on a computer, it should not surprise you that this ecosystem has at least the potential to be used for legal work.

Claude Opus 4.5

Claude Code was already good. But it was just the harness, i.e., a set of tools and capabilities that still required a powerful LLM to navigate and execute all those tools and instructions reliably. That model arrived when Anthropic released Claude Opus 4.5 at the end of November 2025. From that day on, Claude Code became exceptional.

Opus 4.5 marked a huge step forward in agentic coding capabilities and elevated Claude Code's functionality dramatically. It felt like Superman had just been put in the Batmobile (if that metaphor makes any sense). The model's ability to understand complex instructions, reason through multi-step problems, and generate accurate outputs improved dramatically. Tasks that previously required multiple iterations and manual guidance could now be executed in one go.

The model alone is impressive. But the model in the harness with access to tools, data connections (e.g., via MCP), extensible skills, and the ability to execute complex multi-step workflows becomes transformative.

Point Claude Code (or any other coding agent with access to a highly capable LLM) at a complex feature request, and it builds it systematically. Give it access to your contract database, and it can analyze patterns across thousands of documents. Define a skill for your compliance workflow, and it will execute it.

This is the ecosystem that matters: not just a powerful model, but the connections between the model, the agent harness, customizable skills, internal or external data connections, and execution tools.

Claude Cowork

Claude Code is a command-line tool and is aimed at developers. If you are not familiar or comfortable with command-line interfaces, you have probably never heard of Claude Code before. So in January 2026, Anthropic released Cowork, essentially "Claude Code for non-developers." Fun fact: according to Anthropic, Claude Cowork was built in less than two weeks using Claude Code.

Claude Cowork has the same agentic capabilities (file access, multi-step workflows, tool execution), but wrapped in a normal interface instead of a terminal. Designed explicitly for enterprise workflows and "knowledge work", Cowork can create or edit PDFs, Excel files, PowerPoint decks, etc., and process all sorts of documents or connect to all kinds of data sources - all of these capabilities are just skills or MCP integrations you can find on GitHub (see, for example, Anthropic's skills repo).

This was the accessibility unlock. Non-technical users could now use the same agent capabilities that previously required developer skills or at least some technical expertise. No terminal. No command-line knowledge. Just describe the workflow, and Cowork executes it.

The Legal Plugin

Which brings us to the announcement that crashed legal tech stocks.

Anthropic released 11 new knowledge work plugins on GitHub - pre-configured skill and MCP packages for different industries: marketing, finance, customer support, sales... and legal.

The legal plugin includes:

  • NDA triage workflows
  • Contract review against standard playbooks
  • Compliance checks
  • Legal risk assessment
  • Meeting preparation
  • Legal response drafting

That's it. A publicly available GitHub repository containing a handful of skills paired with some MCP integrations (e.g., for Microsoft 365 or Slack). Or, in other words, just six prompts that define some legal workflows in very broad strokes. When you look at the skill instructions, you will see there is no secret sauce. No proprietary legal knowledge. No deep domain expertise baked in. Just templates defining basic workflows that anyone with legal knowledge and some technical skill could have written.

This is Superman on a bicycle.

You have one of the world's most powerful AI models - capable of sophisticated reasoning, complex analysis, and nuanced judgment - equipped with a narrow set of basic legal workflow instructions that barely scratch the surface of what legal work actually involves.

The skills themselves aren't impressive. "Review this NDA against a standard playbook" is undergraduate-level legal work. The ecosystem that makes those skills possible - the LLM, the agent harness, MCP integrations, extensible skill framework - that's what's impressive.

The threat isn't the legal plugin itself. The threat is that the ecosystem built by Anthropic commoditizes the foundation layer that legal tech vendors have been selling as proprietary value. If your legal tech product's core value proposition is "we use AI to do basic document analysis," you're not competing with Anthropic's legal plugin. You're competing with what any competent developer can now build in a few days by just using open source tools, skills and integrations.
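To see how thin that commodity layer is, here is a sketch of the core of "AI contract review": assemble a prompt from a playbook and a contract, then send it to any foundation model API. The playbook contents and function names are illustrative, not taken from the plugin:

```python
# Sketch of the commodity core of "AI contract review": build a prompt from
# a playbook and a contract, then send it to any foundation model API.
# The playbook rules and function names are illustrative assumptions.
PLAYBOOK = [
    "Term must not exceed 3 years.",
    "Governing law must be Delaware.",
    "No unilateral assignment by the counterparty.",
]

def build_review_prompt(contract_text: str, playbook: list[str]) -> str:
    """Combine playbook rules and contract text into a single review prompt."""
    rules = "\n".join(f"- {rule}" for rule in playbook)
    return (
        "Review the contract below against this playbook. "
        "For each rule, answer: compliant, non-compliant, or not addressed.\n\n"
        f"Playbook:\n{rules}\n\nContract:\n{contract_text}"
    )

prompt = build_review_prompt(
    "This Agreement is governed by the laws of Delaware...", PLAYBOOK
)
# Send `prompt` to any model API (Anthropic, OpenAI, an open-weight model).
# The prompt is essentially the whole "product" - which is the point.
```

Swap in a different playbook, wire the input to a DMS via MCP, and you have rebuilt the plugin's contract review skill in an afternoon.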

What the Panic Gets Wrong: Four Critical Misunderstandings

1. This Isn't New Capability, It's Packaging

Everything the legal plugin does was already possible with Claude Code. The only thing that changed is accessibility.

Think of it this way: When Apple released the iPhone, they didn't invent touchscreens or mobile internet. They packaged existing technology in a form regular people could use. Revolutionary? Yes. But not because the underlying technology was new.

The legal plugin packages existing capabilities and makes them accessible to the average user. It is packaging, not innovation.

2. Shallow Automation ≠ Competitive Moat

The plugin handles surface-level tasks:

  • Triage NDAs by standard vs. non-standard clauses
  • Compare contracts against a playbook

This is the easy part of legal tech. It's table stakes, not differentiation.

Real legal tech value lives in:

  • Proprietary data: Jurisdiction-specific analysis, firm-specific precedents
  • Specialized workflows: Complex due diligence, regulatory compliance tracking, litigation management
  • Compliance infrastructure: Audit trails, conflict checking, ethical walls, security certifications
  • Long-term knowledge management: Case outcomes, negotiation history, client preferences
  • Deep domain integration: Matter management systems, billing integration, court filing automation

The legal plugin does none of this. It's a demo of what foundation models paired with powerful agentic tools could enable, not a replacement for what (serious) legal tech vendors actually deliver.

3. My Guess: Anthropic Won't Specialize (And Shouldn't)

Look at the announcement again: 11 starter plugins across industries. Legal is one of them. Not the only one. Not even the primary focus.

Anthropic is an infrastructure company. Their business model is:

  • Sell API access to foundation models
  • License enterprise deployments
  • Provide tools that make their models more useful across industries

Building vertical legal tech would require:

  • Deep legal domain expertise (hiring lawyers, compliance specialists)
  • Industry-specific partnerships (bar associations, courts, regulators)
  • Ongoing product support (customer service, training, updates)
  • Sales infrastructure (enterprise legal sales cycles are 12-18 months)
  • Regulatory navigation (jurisdiction-specific requirements, ethical rules)

That's a completely different business. Anthropic isn't staffed for it. They're not funded for it. And most importantly: they don't need it.

Their market is every industry that uses AI. Why would they narrow to legal tech specifically? The plugin isn't a product. It's a reference implementation showing what customers can build.

4. The Real Threat: Foundation Layer Commoditization

Here's what legal tech vendors should actually worry about: The bar just rose for what counts as differentiation.

When basic document processing, contract analysis, and clause extraction become things anyone can build in a weekend... what are you selling?

If your legal tech product's core value is "we use AI to review contracts" and that capability is nothing more than a GitHub template plus API calls, you don't have a moat. You have a countdown timer.

The threat isn't "Anthropic will replace legal tech vendors." It's "legal tech vendors need to deliver value above what commodity AI infrastructure provides."

Who Should Actually Be Worried (And Who Just Got Safer)

Legal LinkedIn panicked about the wrong companies. Let me explain:

The Publishers: The Only Real Moat Just Got Deeper

Everyone assumed LexisNexis, Westlaw, or - for the German readers - BeckOnline should be terrified. I think they just secured the next decade.

They own the data. Not just "data" - the actual legal corpus. Case law, statutes, regulations, secondary sources. This is the training data, the reference material, the validation source for any legal AI.

Anthropic's legal plugin can review contracts. But where does it get jurisdiction-specific case law analysis? Where does it find relevant precedents for novel legal issues? Where does it validate that its interpretation aligns with current doctrine?

Publishers sit on the only true data moat in legal tech. And here's the critical insight: agentic workflows make that data more valuable, not less.

When AI agents can actually execute legal research workflows instead of just answering questions, they need better data, not less data. They need structured, validated, current legal information with proper citations and context.
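What "structured, validated, current legal information with proper citations" might mean in practice can be sketched as a record format. The fields and the filter function here are hypothetical illustrations, not any publisher's actual schema:

```python
# Hypothetical sketch of an "agent-ready" case-law record: structured fields,
# a citation, and enough metadata for an agent to check currency and scope.
from dataclasses import dataclass
from datetime import date

@dataclass
class CaseRecord:
    citation: str          # e.g., a neutral or reporter citation
    jurisdiction: str
    decided: date
    headnote: str
    still_good_law: bool   # editorial validation - the publisher's real product

def current_authorities(records: list[CaseRecord],
                        jurisdiction: str) -> list[CaseRecord]:
    """Filter to validated authority in the agent's target jurisdiction."""
    return [r for r in records
            if r.jurisdiction == jurisdiction and r.still_good_law]
```

The fields an agent needs are exactly the editorial work publishers already do; exposing that work in machine-readable form is the "data layer" play.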

The publishers who understand this will win. The ones who panic and try to build their own competing agents will waste resources fighting the wrong battle.

What publishers should do:

Make your data agent-ready and agent-accessible. Build robust integrations that expose your legal databases to any AI tool, whether that's Claude, ChatGPT, or whatever comes next.

Stop trying to build "AI-powered legal research tools" yourself. You're not software companies. You're data companies. Your competitive advantage is the corpus, not the interface.

Let legal tech vendors build specialized tools. Let law firms build custom workflows. Let Anthropic and OpenAI compete on foundation models. You just need to ensure every one of them requires access to your data to function properly.

The publishers who get this right will become the data layer for legal AI - licensing data access to every tool, every agent, every workflow. The ones who try to compete with vertical applications will find themselves fighting on too many fronts.

Legal Tech Vendors: The Squeeze Is Real

This is where the actual vulnerability lives.

Legal tech vendors who built their business on shallow automation - basic contract review, simple document assembly, templated legal analysis - are facing commoditization.

If you only sell software, you have no moat. The moat is domain expertise, specialized workflow integrations, proprietary data, compliance infrastructure, and the trust, integrity, and relationships you build with your clients. If you don't have that, you're just selling repackaged API calls. And when those API calls become free or easily replicable, your business model collapses.

Law Firms: The Strategic Position

And if I were a law firm? I would make my data accessible to any AI agent implementation.

This means:

Don't lock yourself into vendor ecosystems where your data, workflows, and institutional knowledge are trapped in proprietary formats.

Build on open standards. Use tools that support data export, API access, and standard protocols (like MCP). Ensure you can switch vendors without losing everything you've built.

Make your data agent-ready. Clean, structured, well-documented legal information that any AI tool can process. This isn't preparation for a specific vendor's product, it's preparation for a world where you can plug any tool into your systems.

The Conclusion: Foundation Layer ≠ Competition

Legal LinkedIn panicked because it mistook the foundation layer for competition. Anthropic didn't enter the legal tech market with its six simple markdown files containing shallow prompt instructions. Rather, it commoditized the base capabilities of any agentic workflow, and in doing so, raised the bar for what counts as real legal tech differentiation.

I think legal tech needs to solve the data problem before the AI problem and this announcement just proved that point. The ecosystem is built. The models are highly capable. The tools exist. What matters now is what you build on top of it.