The Number That Doesn't Make Sense
Anthropic has 19 million users. ChatGPT has 800 million. Google has 2.2 billion active devices feeding the most sophisticated data flywheel ever constructed.
And yet.
Anthropic's enterprise market share went from 24% to 40% in under a year, overtaking both OpenAI and Google. Claude Code — a terminal-based coding tool that most consumers have never heard of — crossed $1.1 billion in annualized recurring revenue within 18 months of launch. The company's valuation went from $61.5 billion to $350 billion in ten months.
If you're evaluating these companies on the metrics that usually matter — compute resources, data assets, distribution channels, consumer reach — Anthropic shouldn't be winning. On a structural moat analysis, they score a 6 out of 15. Google scores a perfect 15. OpenAI has 42 times more users.
Something doesn't add up. Unless the standard metrics are measuring the wrong thing.
The Red Ocean Everyone's Fighting In
In 2005, W. Chan Kim and Renée Mauborgne published Blue Ocean Strategy, a framework that has since become canon in business schools and boardrooms. The core thesis is elegant: most companies compete in "red oceans" — crowded market spaces where rivals fight over the same customers using the same value propositions. The winners create "blue oceans" — entirely new market spaces where competition becomes irrelevant because you've changed what game is being played.
The AI industry is drowning in a red ocean right now. And the water is getting redder.
The dimensions everyone competes on are obvious: benchmark scores, parameter counts, multimodal capabilities, consumer user growth, and — more recently — reasoning chain performance. These are the metrics that dominate headlines, that VCs ask about in pitch meetings, that get debated on X at 2 AM.
The problem is that on every single one of these dimensions, the structural advantages belong to companies that are simply too large to out-resource.
Google owns its compute (custom TPUs plus GCP), its data (Search, YouTube, Gmail — decades of the world's information), its distribution (Android, Chrome, Workspace — 2.2 billion active devices), and its talent (DeepMind alone). In a previous analysis, I scored Google 15 out of 15 on the five structural dimensions that determine AI survival. There is no company on Earth better positioned on traditional competitive metrics.
Microsoft owns enterprise relationships (365, Azure, Windows), distribution to 1.5 billion desktop users, and hedged its model bet by investing $13 billion in OpenAI while simultaneously partnering with Anthropic and running its own Copilot stack. Microsoft doesn't need to build the best model. It needs to own the surface where every model gets used.
OpenAI owns consumer mindshare. "ChatGPT" became a verb. 800 million weekly active users, 2.5 billion daily prompts, 92% Fortune 500 penetration. They defined the category and they own the brand.
Now look at Anthropic's position on these same dimensions. No owned compute — they rent from AWS, Google Cloud, and now Azure. No consumer distribution channel. No proprietary data moat. A user base that's roughly 2% of ChatGPT's.
If this is the game, Anthropic loses. Every time.
So they stopped playing it.
The Blue Ocean: What Anthropic Actually Built
While every competitor fought over who has the smartest model this quarter, Anthropic asked a different question entirely: What does the operating system for AI agents look like?
The strategic logic was counterintuitive at every step. Go language-only when everyone goes multimodal. Go developer-deep when everyone goes consumer-wide. Build infrastructure when everyone else builds features. Don't try to be the biggest. Try to be the layer everything else runs on.
There's a precedent for this exact play. In 2007, Apple had a fraction of Nokia's market share. They didn't try to outsell Nokia on phones. They built the platform that made "phone" the wrong category entirely. The iPhone launched without an App Store — Steve Jobs actually opposed opening to third-party developers. Usage demanded it. The App Store launched a year later. By 2022, the iOS ecosystem facilitated $1.1 trillion in annual commerce. Apple never had the most market share. They had the most valuable ecosystem.
Anthropic is running the same playbook. And most people are still comparing benchmark scores.
The Stack: A Technical Walkthrough
What follows is a layer-by-layer breakdown of what Anthropic actually built, in the order they built it. Each layer does two things: it adds capability, and it creates a switching cost that has nothing to do with model quality. Understanding this stack is the difference between seeing Anthropic as a model company and seeing it as a platform company.
Anthropic's Platform Stack
Layer 1: Claude Chat — The iPhone
The base product. Conversational AI. This is the layer where every competitor plays and where differentiation is thinnest. Any model can chat. Any user can switch between Claude, ChatGPT, and Gemini in the time it takes to open a new tab.
Switching cost at this layer: near zero. Anthropic knew this.
Layer 2: Claude Code — The Native Apps
This is where Anthropic stopped being a model company and became a tooling company.
Claude Code is a terminal-based agentic coding tool with full filesystem access and command-line execution. The critical architectural decision was operating directly in the terminal rather than through an IDE plugin. This meant Claude Code worked everywhere — not locked to VS Code or any specific editor. Developers didn't need to learn a new interface. They worked in the environment they already used.
But the real insight came from what happened next. Anthropic expected developers to use Claude Code for programming. Instead, they started using it for everything — vacation research, spreadsheet management, email cleanup, file organization. The tool outgrew its name.
Switching cost created: workflow muscle memory. Once your instinct is to type a natural language command in the terminal instead of manually navigating files and applications, the habit becomes the lock-in.
Layer 3: CLAUDE.md — The Configuration Layer
This is the layer most observers underestimate.
CLAUDE.md is a project-level instruction file that persists across sessions. It defines architecture, conventions, constraints, and context that any Claude session can read. Think of it as the onboarding document for an employee who doesn't exist yet.
What makes CLAUDE.md structurally important is that it transforms Claude from a general-purpose model into a specialist for your specific context. A well-written CLAUDE.md contains hours of iteration about how your project works, what conventions matter, what mistakes to avoid. It's institutional knowledge encoded into a format that agents can consume.
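To make this concrete, here is a minimal sketch of what such a file might contain. The project, paths, and conventions below are invented for illustration:

```markdown
# CLAUDE.md

## Project overview
Billing service, Go monorepo. HTTP handlers live in /internal/api;
business logic lives in /internal/billing.

## Conventions
- Money is always int64 cents. Never use floats for currency.
- Every new endpoint needs a table-driven test in the same package.
- Run `make lint test` before proposing any commit.

## Known pitfalls
- The legacy /v1/charge endpoint is frozen. New features go in /v2 only.
```

None of this is exotic. It's the same content a senior engineer would put in an onboarding doc, which is exactly the point.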
Boris Cherny, the creator of Claude Code, described CLAUDE.md as "shared, durable memory for teams." The agent becomes increasingly intelligent specifically for that team's codebase and workflows.
Switching cost created: institutional knowledge. Your CLAUDE.md represents accumulated context that doesn't transfer to GPT or Gemini. Moving to a competitor means re-teaching everything from scratch.
Layer 4: Hooks — The Event System
Hooks are automatic triggers that execute actions at specific points in Claude's workflow. If CLAUDE.md tells the agent what to know, hooks tell it when to act and how to constrain itself.
The technical architecture supports multiple event types:

- PreToolUse fires before any tool runs — use it to validate, approve, or deny operations.
- PostToolUse fires after a tool completes — use it to auto-format, lint, or test.
- SessionStart fires when a session begins — use it to initialize logging or load context.
- Stop fires when Claude finishes — use it for cleanup or notification.
Hooks can be command-based — executing a shell script — or prompt-based, where Claude itself evaluates something before proceeding. A PreToolUse hook on Write operations can validate file safety before any code is written. A PostToolUse hook can auto-format every file after every edit. This creates predictable, constrained agent behavior.
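Hooks are declared in settings files. A minimal sketch of the two hooks just described might look like this; the script path and formatter command are placeholders, and the exact schema may vary by version:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write",
        "hooks": [
          { "type": "command", "command": "./scripts/check-file-safety.sh" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}
```

The key property is that a blocking result from the PreToolUse command can deny the operation outright, which makes the constraint enforceable rather than advisory.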
The practical impact: hooks turn a creative, sometimes unpredictable AI into a reliable, rule-following system. In enterprise environments where predictability matters more than creativity, this is the difference between a toy and a tool.
Switching cost created: operational trust. Once you've built hooks that make your agent workflow predictable and safe, you've created a system you trust. Trust is earned over iterations. It doesn't transfer to another platform on day one.
Layer 5: Skills — The Recipe Book
Skills are folders containing instructions, scripts, and resources that Claude autonomously invokes based on task context. This is the layer that separates Claude from every other model.
Unlike slash commands, which require manual invocation, skills activate automatically when their description matches the current task. The architecture uses progressive disclosure: YAML frontmatter tells Claude when to use the skill, the SKILL.md body gives instructions, and linked files provide reference materials. Claude loads only what's relevant — a context-efficient approach that lets you bundle unlimited specialized knowledge without burning the context window.
In practical terms: you can build a skill called property-contract-analysis that contains instructions for parsing lease agreements, extracting rent escalation clauses, and generating comparison spreadsheets. When you point Claude at a contract, it identifies the relevant skill automatically and follows the recipe.
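A SKILL.md for that hypothetical skill might read as follows. The frontmatter fields follow the pattern described above; the steps and the linked checklist file are invented for illustration:

```markdown
---
name: property-contract-analysis
description: Parse lease agreements, extract rent escalation clauses,
  and generate comparison spreadsheets for commercial property contracts.
---

# Property contract analysis

1. Identify the parties, lease term, and base rent.
2. Extract every escalation clause and normalize it to an annual % increase.
3. Check clause coverage against reference/clause-checklist.md.
4. Output a comparison table: one row per lease, one column per clause type.
```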
This is the layer where non-technical domain expertise becomes programmable. A legal team's knowledge of what matters in a contract, a finance team's framework for evaluating a deal, an engineering team's standards for code review — all of it becomes a skill that any agent can execute. This is the part most AI transformations get wrong — they buy the technology without encoding the expertise that makes it useful.
Switching cost created: codified expertise. Each skill represents organizational IP that lives exclusively in the Claude ecosystem. The more skills you build, the more your team's knowledge is embedded in a platform you can't easily leave.
Layer 6: Plugins — The App Store
Plugins are distributable bundles of commands, hooks, skills, agents, and MCP servers. Install with a single command. Share with teammates. Publish to marketplaces.
This is the layer that transforms Claude Code from a tool into a platform. The plugin architecture follows a standard structure: a plugin.json for metadata, directories for commands, agents, skills, and hooks, plus optional MCP configuration for external tool connections.
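A sketch of that layout, with an invented plugin as the example (directory names follow the structure described; exact paths may vary by version):

```text
code-review-plugin/
├── .claude-plugin/
│   └── plugin.json          # name, version, description
├── commands/
│   └── review.md            # adds a /review slash command
├── agents/
│   └── security-reviewer.md # a specialized subagent
├── skills/
│   └── style-guide/
│       └── SKILL.md         # auto-invoked coding standards
├── hooks/
│   └── hooks.json           # e.g. run tests after every edit
└── .mcp.json                # optional external tool connections
```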
Anyone can build a plugin. Anyone can distribute one. Anthropic has open-sourced reference implementations and launched marketplaces where developers can discover and install community-built plugins.
The parallels to Apple's App Store are structural, not metaphorical. Apple didn't build every app — it built the platform and the distribution mechanism, then let third-party developers create the value. The App Store launched with a handful of Apple-built apps and grew to facilitate $1.1 trillion in commerce because developers built on top of Apple's infrastructure.
Anthropic is at the equivalent of 2008 — the App Store just launched. The ecosystem is young. But the architecture is in place.
Switching cost created: ecosystem dependency. When your team's workflow relies on five plugins that exist only in the Claude ecosystem, you don't switch models. You might switch between Opus and Sonnet for cost optimization, but you don't leave the platform.
Layer 7: MCP — The Universal Adapter
The Model Context Protocol is an open standard for connecting AI systems to external tools and data sources. Anthropic created it, open-sourced it, and then donated it to the Linux Foundation's Agentic AI Foundation — co-founded by Anthropic, Block, and OpenAI, with backing from Google, Microsoft, AWS, and Cloudflare.
Over 10,000 active MCP servers are in operation. The protocol has been adopted by ChatGPT, Gemini, Microsoft Copilot, Cursor, VS Code, and dozens of other platforms. NVIDIA's CEO Jensen Huang said MCP "completely revolutionized the AI landscape."
Here's the strategic move that most people miss: MCP is model-agnostic. It works with every competitor's product. On the surface, this seems to weaken Anthropic's moat — why give your competitors a standard you designed?
The answer is the same logic that made USB-C valuable for the companies that helped design it. By making MCP the universal standard, Anthropic ensured that the protocol layer of agentic AI runs on their architecture, regardless of which model sits on top. They gave away the standard and kept the best implementation. Every MCP server built by any developer, for any model, strengthens the protocol that Anthropic designed and understands most deeply.
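Concretely, the same server declaration works across clients. A typical MCP configuration entry looks roughly like this (the GitHub server shown is one of the early reference servers; the token is a placeholder):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<token>" }
    }
  }
}
```

Whether that block lives in Claude's config, Cursor's, or another client's, the server and the protocol underneath are identical, which is exactly the dynamic described above.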
Switching cost created: industry-level protocol lock-in. Even if you use GPT or Gemini, you're probably connecting them to tools via a protocol Anthropic created. The protocol becomes invisible infrastructure — and invisible infrastructure is the hardest kind to replace.
Layer 8: Cowork — The Platform for Everyone
In January 2026, Anthropic launched Cowork — Claude Code's capabilities made accessible to non-technical users through a desktop application. File management, document generation, data processing, all through natural language. It was built in approximately ten days, largely using Claude Code itself.
Cowork is significant not for what it does, but for what it opens. Plugins that developers built for Claude Code carry directly into Cowork. Skills transfer. MCP connections transfer. The entire ecosystem built by and for developers is now accessible to every knowledge worker in the organization.
Two weeks after Cowork launched, Anthropic brought plugins into it — 11 open-sourced templates for finance, legal, sales, and engineering workflows. By February, they launched an enterprise agents program with private marketplaces, controlled data flows, and department-specific customization.
Boris Cherny defined the shift precisely: Claude isn't a chatbot anymore. It's a "doer" — an agent that touches files and applications directly, taking action rather than just producing text. That changes the value proposition from information retrieval to operational automation.
Switching cost created: organizational adoption. When both developers and non-technical teams are working within the same ecosystem — sharing plugins, skills, and workflows across the organization — the switching cost becomes institutional, not individual.
The Value Curve Inversion
The Flywheel
Each layer didn't just add functionality. It enabled the next layer to be built faster.
Claude Code was built on Claude. Cowork was built using Claude Code. Plugins built for Claude Code carried into Cowork without modification. Boris Cherny shipped over 300 pull requests in December 2025 — his most productive month in 18 months at Anthropic — running five or more agents simultaneously.
Map the flywheel: a better model produces better developer tools, which drives developer adoption, which creates more plugins and skills, which enriches the ecosystem, which attracts more enterprise customers, which generates more revenue, which funds a better model.
Now compare that to the flywheel every competitor is running: a better model drives more consumers, which generates more API revenue, which funds a better model.
Anthropic's flywheel has more gears. That means more friction to start — which is why they appeared to be behind for two years. But it also means more momentum once it's spinning. The evidence says it's spinning: 300,000+ business customers, up from fewer than 1,000 two years ago. Enterprise share from 24% to 40%. Claude Code from zero to $1.1 billion ARR. One in five businesses on Ramp now pays for Anthropic, up from one in 25 a year ago. And 79% of OpenAI's paying customers also pay for Anthropic — enterprises are buying both, which means Anthropic is in the consideration set for every serious AI deployment.
The flywheel is compounding. And it's recursive — each product accelerates the development of the next one, because Anthropic uses its own tools to build its own tools.
What the Bears Get Wrong
This analysis only holds if I address the obvious objections. There are three worth taking seriously.
"Google can just copy all of this."
Google can copy individual features. They launched Gemini Code Assist. They adopted MCP. They have their own cloud AI platform. But copying individual layers is not the same as replicating a compounding stack. Google adopted MCP — which means they're building on a protocol Anthropic designed. Google launched coding tools — but without the plugin ecosystem, the skills architecture, or the developer community that's already building on Claude's infrastructure. The App Store's moat was never any single app. It was the ecosystem. That's much harder to replicate than a feature.
"Anthropic burns cash and has no path to profitability."
True: Anthropic is spending aggressively. But the runway question is functionally answered. A $15 billion investment from Microsoft and NVIDIA, plus $30 billion in committed Azure compute capacity, provides years of operational security. The real question isn't whether they can survive — it's whether the flywheel generates positive unit economics before the capital runs out. Claude Code's trajectory ($1.1B ARR in 18 months) suggests the product-led revenue model is working faster than projected. Anthropic told investors it would be cash-flow positive by 2028. Claude Code's growth rate suggests it could hit that target 12–18 months early.
"Models will commoditize, and then none of this matters."
This is actually the strongest argument for Anthropic's strategy, not against it. If models commoditize — which the convergence of benchmark scores strongly suggests they will — then differentiation moves entirely to the ecosystem, tooling, and infrastructure layer. Which is exactly what Anthropic spent three years building while everyone else optimized for model performance.
Model commoditization doesn't kill Anthropic's moat. It is Anthropic's moat thesis.
What This Means If You're Making a Bet
The standard way to evaluate AI companies is model capability × distribution × compute resources. On those metrics, Google and Microsoft win, and Anthropic is a long shot.
But these metrics assume the game is about who builds the smartest model. If the game shifts to who builds the operating system that agents run on — the orchestration layer between intelligence and action — then the evaluation inverts entirely.
Anthropic is building the one layer that becomes more valuable as models commoditize. The switching costs they've created are not technical (any model can be plugged in) but architectural (your workflows, your institutional knowledge, your team's skills and plugins all live in one ecosystem). I explored this pattern in "The AI Company That Wins Won't Be the Smartest" — the thesis is the same, but the evidence has compounded since.
The question for investors is not "Is Claude smarter than GPT?" That question has a shelf life of about six weeks.
The question is: When every model is roughly as smart as every other model — and that day is approaching faster than most people think — who owns the operating system?
That's the blue ocean. And right now, Anthropic is swimming in it alone.
This is the first in a series examining AI industry structure. Next: what the shift from prompting to agent training means for anyone building AI inside a traditional company.
Framework for strategic thinking. Company positioning reflects opinion-based analysis. Not investment advice.