---
title: "Cross-AI as a structural moat"
url: https://mdfy.app/1tyw7rGT
updated: 2026-05-14T18:15:49.480Z
source: "mdfy.app"
---
# Cross-AI as a structural moat

> The argument distilled, because I keep getting this question.

## The claim

AI companies will not build cross-AI memory — not for technical reasons, but structural ones. The cross-AI position is structurally available only to a non-AI company.

## The reasoning

Every AI company has the same revenue equation: users staying inside the product. ChatGPT's memory works in ChatGPT because making it work in Cursor would teach users that they don't need ChatGPT to have memory. Claude's projects work in Claude for the same reason. Cursor's project context works in Cursor.

This is **not** a technical limitation. Anthropic could ship "Claude memory works in OpenAI's API" tomorrow. They won't, because doing so cannibalises the business model that funds the model training that *makes Claude worth using.*

## The implication

The layer **above** the AI providers is structurally available to a player whose revenue doesn't depend on holding the user inside any one wall. That's us.

## Two pushbacks I've heard

> "But the AI companies will partner with each other."

Possibly on tool-calling and inference primitives. Not on memory. Memory is the stickiest thing a chatbot has, and sharing it means surrendering your stickiest user behaviour. None of them will do that voluntarily; they'll interoperate only when forced to.

> "But Anthropic is the AI company *and* the platform — they have MCP, they could ship a hub-shaped product."

MCP is genuinely good and we use it. But MCP standardises *how* tools are called, not *what* context is preserved between sessions. Anthropic could ship a memory product. It would only work inside Claude. The Cursor + ChatGPT + Codex user wouldn't benefit. That's the wedge — we're not chasing the Anthropic-loyal user; we're chasing the user who's already in three AI tools.

## What this thesis commits us to

- **Never bet on a single AI vendor.** Every integration we ship has to work across Claude / OpenAI / Gemini / open-source models. Where there's a quality gap (capture extraction, reranking), we pick the best per-task model, but never the *same* one for everything.
- **The URL is the contract.** Not an SDK, not a plugin. Anything we ship that requires a vendor-specific runtime betrays the thesis.
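The first commitment can be made concrete as a routing table with an invariant check. This is an illustrative sketch only — the task names, vendor names, and model identifiers are placeholders, not a real configuration:

```python
# Hypothetical per-task model routing: the best model per task,
# never the same vendor for everything. All names are placeholders.
ROUTING = {
    "capture_extraction": {"vendor": "anthropic", "model": "claude-placeholder"},
    "reranking": {"vendor": "openai", "model": "gpt-placeholder"},
    "summarisation": {"vendor": "google", "model": "gemini-placeholder"},
}

def pick_model(task: str) -> str:
    """Return the model configured for a given task."""
    return ROUTING[task]["model"]

# Enforce the thesis mechanically: the table must never collapse
# onto a single vendor. A check like this could run in CI.
vendors = {cfg["vendor"] for cfg in ROUTING.values()}
assert len(vendors) > 1, "routing table has collapsed onto a single vendor"
```

The point of the invariant is that "never bet on a single vendor" stops being a slogan and becomes something a build can fail on.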

## The risk if I'm wrong

If the AI companies *did* meaningfully interoperate, our edge shrinks. But the fallback is the authoring layer — users authoring what they want remembered is its own value, independent of cross-AI portability. So the downside is "we become a smaller product", not "we have no product."
