Why I’m building mdfy
I built mdfy in one month while doing other work. Now I’m going full-time.
This is why.
The state of AI memory today
Every day, millions of people pour their thinking into ChatGPT, Claude, Gemini, and Cursor. We ask hard questions. We get back genuinely useful answers — strategies, code, frameworks, insights that took experts decades to develop.
Then we close the tab.
That answer is gone. Not literally — it sits in some chat history we’ll never search. But functionally gone. We can’t find it. We can’t reuse it. We can’t build on it.
The next day, we ask similar questions. We get similar answers. We close the tab again.
This is happening at civilizational scale. Trillions of tokens of high-quality, AI-assisted thinking, evaporating into chat histories nobody returns to. The world’s most expensive forgetting machine.
The industry’s response so far is what I call extracted memory — services like Mem0 and Letta that watch your conversations and extract facts the AI thinks are important. “Sarah is vegetarian. Sarah lives in Seoul. Sarah is interested in LLM evaluation.”
Mem0 and Letta are excellent at what they do. They solve a real problem. But they answer a different question than the one I want to answer.
They ask: what should the AI remember about you?
I want to ask: what do you want to remember?
These are not the same question. The first is about inference. The second is about authorship.
Why authorship matters
Memory is not just data. Memory is identity.
What you remember shapes who you become. What an organization remembers shapes what it can do. This was true in the age of paper, true in the age of databases, and it is truer than ever in the age of AI.
When you let an AI extract your memory, you let an AI define what mattered. You let an algorithm decide which thread of yesterday’s thinking is worth carrying forward, which insight to compress into a fact, which piece of yourself to keep.
That’s a strange thing to outsource.
Some people will outsource it gladly. The convenience is real. But for those of us who think carefully about what we want our future selves to know — for those of us who treat our knowledge as a craft, not a byproduct — there should be another option.
That option is mdfy.
What mdfy is today
If you visited mdfy.app right now, you’d see what looks like a markdown publishing tool — and it is.
You can capture markdown from anywhere: ChatGPT, Claude, Gemini (via Chrome extension), GitHub repos, your terminal (cat README.md | mdfy), VS Code, your Mac clipboard. You can edit it in a beautiful WYSIWYG editor — no syntax friction, no install required. You can share it with a permanent URL that anyone can read in the browser, that any AI can fetch as context.
It’s a publishing tool. It works. People can use it today.
In one month — built nights and weekends — I shipped:
- A Rust markdown engine (mdcore, open source)
- A web editor with WYSIWYG
- A Chrome extension for any AI chat
- A VS Code extension
- A Mac desktop app
- A CLI
- An MCP server
I shipped this fast because I had a clear primitive: the markdown URL. Every surface points to the same thing. Every surface composes with the others.
The bigger bet
The bigger bet is that markdown URLs are the right substrate for AI-era knowledge.
Not as a publishing tool. As infrastructure.
LLMs read and write markdown natively. It’s the lingua franca they were trained on. When ChatGPT outputs structured information, it outputs markdown. When you paste context into Claude, you paste markdown. When agents communicate with each other, the natural format is markdown. This is not changing. It’s compounding.
Humans also read markdown natively. Lightly formatted plain text is how we’ve taken notes for centuries. It’s how we’ll keep taking them. No proprietary format will displace it.
URLs are the simplest possible interface. Anyone can paste them. Any agent can fetch them. They cross every boundary — operating systems, applications, AIs, time zones, decades.
If LLMs write markdown, humans read markdown, and URLs cross every boundary, then the natural primitive for AI-era knowledge is a markdown document at a URL.
What’s coming next
The next eight weeks of building are about turning mdfy from a publishing tool into a memory layer.
You should be able to take what you’ve authored and deploy it as context to any AI, anywhere.
The Memory Bundle is the deployment unit. Take five mdfy URLs that together describe your project — the spec, the design decisions, the recent meeting notes, the customer interview, the open questions — and bundle them into a single URL. Paste that one URL into Cursor, Claude, ChatGPT. Your AI now has the full context, in your words, organized your way.
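The Bundle spec hasn’t been published yet, so here is only a sketch of the idea: one plausible shape is a markdown document that lists its member URLs, which any human or AI can open and follow. The `make_bundle` helper and the URLs below are illustrative assumptions, not the real format.

```python
# Hypothetical sketch of a Memory Bundle: a single markdown document
# that points at each member URL. The real Bundle spec is unpublished;
# this shape and the make_bundle helper are illustrative assumptions.

def make_bundle(title: str, urls: list[str]) -> str:
    """Compose one markdown document that lists each member URL."""
    lines = [f"# {title}", "", "This bundle collects the following documents:", ""]
    lines += [f"- {url}" for url in urls]
    return "\n".join(lines) + "\n"

bundle = make_bundle(
    "Project Context",
    [
        "https://mdfy.app/d/spec",             # hypothetical URLs
        "https://mdfy.app/d/design-decisions",
        "https://mdfy.app/d/meeting-notes",
    ],
)
print(bundle)
```

Pasting the one bundle URL into an AI would then stand in for pasting all five documents individually.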
Bundle versioning lets you snapshot moments. The spec as it was when the project started. The spec as it is now. The diff between them.
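Because everything is plain markdown, snapshot-and-diff can work like diffing any text. As a sketch, using Python’s standard `difflib` on two invented versions of a spec:

```python
import difflib

# Two invented snapshots of the same spec: as it was when the project
# started, and as it is now. Bundle versioning would keep both and let
# you inspect the difference between them.
spec_v1 = "# Spec\n\n- Support markdown capture\n- Ship a CLI\n"
spec_v2 = "# Spec\n\n- Support markdown capture\n- Ship a CLI\n- Add semantic search\n"

diff = "\n".join(
    difflib.unified_diff(
        spec_v1.splitlines(),
        spec_v2.splitlines(),
        fromfile="spec@v1",
        tofile="spec@v2",
        lineterm="",
    )
)
print(diff)
```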
Semantic search lets you find by meaning, not just keyword. “What did Claude tell me about LLM memory architecture?” returns results even if the words don’t match exactly.
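The mechanics behind find-by-meaning: documents and the query are embedded as vectors, and results rank by similarity rather than exact keyword overlap. The tiny hand-made vectors below are stand-ins for a real embedding model, which mdfy’s actual implementation would use.

```python
import math

# Toy illustration of semantic search: rank documents by cosine
# similarity to a query vector instead of by keyword match.
# The vectors here are invented stand-ins for learned embeddings.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

docs = {
    "notes on LLM memory architecture": [0.9, 0.1, 0.2],
    "grocery list": [0.0, 0.95, 0.1],
    "agent context windows": [0.8, 0.05, 0.4],
}
query_vec = [0.85, 0.1, 0.3]  # embedding of "What did Claude tell me about LLM memory?"

ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
print(ranked[0])
```

The top result matches on meaning even though the query and the document share few exact words.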
The five beliefs
Markdown is the right primitive for AI-era knowledge.
Not Notion blocks. Not proprietary formats. Plain markdown — what LLMs speak, what humans read.
URLs are the right interface.
Not SDKs. Not vendor lock-in. A URL — pastable, fetchable, openable by any human, by any AI.
Memory is something you author, not something extracted.
mdfy lets you write it, edit it, decide what stays.
Memory should be deployable.
Storage isn’t the goal. A memory you can’t paste back into an AI as context isn’t doing the work memory is supposed to do.
Open by default.
mdcore is open source. The Bundle spec will be published openly. Open formats are how durable infrastructure gets built.
Why now
For the past two years, the industry has been building closed AI memory systems — OpenAI Memory inside ChatGPT, Google’s Memory Bank inside Gemini. Each one is trying to own your memory inside their walls.
In another two years, either the closed systems will have won, or an open standard will have emerged. I’m betting on the second outcome. I’m betting that markdown URLs become the open standard for AI memory the way HTTP became the open standard for documents.
mdfy exists to make that outcome more likely.
The roadmap
- Markdown publishing tool. Capture from any AI, edit in WYSIWYG, share with permanent URLs. Free during beta.
- Memory Bundle, Semantic Search, Bundle Versioning. The transition from publishing tool to memory layer.
- MCP write access for AI agents. Team workspaces. Open Bundle Spec v1.0.
- Bundle marketplace. Enterprise self-host. Standard-setting consortium.
An open invitation
If you use AI daily
Try mdfy. The Chrome extension is the fastest entry. Beta is free.
If you build AI agents or tools
Look at the MCP server. Write access is coming in Phase 2.
If you care about open standards
The Bundle spec is coming. I want feedback before it ships.
If you’re an investor
I’m not raising now. I will when the metrics justify it. I care about open infrastructure.
mdfy is built by Hyunsang at Raymind.AI.
The mdcore engine is open source on GitHub.
The Bundle spec will be published before Phase 2 ships.
Reach me at hi@raymind.ai.