---
title: "AI memory architectures: a Claude conversation"
url: https://mdfy.app/yt2ZZqyI
updated: 2026-05-14T17:52:48.410Z
source: "mdfy.app"
---
# AI memory architectures: a Claude conversation

> Captured from a working session with Claude Opus, 2026-03-12. Cleaned, structured, and saved as a permanent URL so the next AI session can pick up where we left off.

## The question

What architecture should a personal memory layer use? Three patterns are in production today:

1. **Vector recall** — every message is run through an embedding model, stored, and retrieved on demand by cosine similarity. ChatGPT's memory beta works this way.
2. **Episodic snapshots** — full conversation transcripts are stored verbatim, indexed by date and topic. Claude Projects does this.
3. **Hub-shaped memory** — the user authors structured notes; the AI reads them as URL-addressable resources.
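
Pattern 1 is the most algorithmic of the three, so here is a minimal sketch of it. The bag-of-words `embed` is a toy stand-in for a real embedding model, and the `VectorMemory` class name is illustrative, not any product's actual API:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorMemory:
    """Pattern 1: store every message, retrieve by similarity."""

    def __init__(self) -> None:
        self.store: list[tuple[Counter, str]] = []

    def remember(self, message: str) -> None:
        self.store.append((embed(message), message))

    def recall(self, query: str, k: int = 1) -> list[str]:
        # Rank all stored messages by similarity to the query.
        q = embed(query)
        ranked = sorted(self.store, key=lambda e: cosine(q, e[0]), reverse=True)
        return [msg for _, msg in ranked[:k]]
```

Note the trade the document describes: nothing is ever curated out, so recall breadth grows while precision depends entirely on the similarity metric.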

## What Claude argued

> Vector recall trades precision for breadth. Episodic snapshots trade concision for fidelity. Hub-shaped memory trades automation for author-control.

The third pattern wins for one reason: **the human stays the author**. Vector and episodic memory both let the record drift: once stored, the user can't easily edit or curate it without leaving the AI's UI. Hub-shaped memory puts the artifact in a place the user already lives (a document), and the AI reads from there.
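The hub-shaped contract can be sketched as a pure read: the AI parses a user-authored note into sections it can cite, and never writes back. The `## `-heading convention and the `parse_hub_note` helper are assumptions for illustration, not part of any documented mdfy contract:

```python
def parse_hub_note(markdown: str) -> dict[str, str]:
    """Split a user-authored markdown note into sections keyed by '## ' heading.

    Each section is a curated, user-editable memory entry; editing the
    document *is* editing the memory. The AI only ever reads."""
    sections: dict[str, str] = {}
    current = None
    for line in markdown.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return {key: body.strip() for key, body in sections.items()}
```

Because retrieval is just a dictionary lookup on headings the user wrote, there is no drift: what the AI sees is exactly what the author last saved at the URL.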

## Takeaway for mdfy

This matches mdfy's existing direction. Worth checking against the spec page: `/spec` already documents the URL contract. No code change needed; this conversation simply validates the choice.

## Related concepts

- Vector recall, episodic snapshot, hub-shaped memory
- Forgetting as a feature
- URL-addressable knowledge
