---
title: "GPT-5 on context-budget rituals"
url: https://mdfy.app/EyW4HU4-
updated: 2026-05-14T18:15:49.480Z
source: "mdfy.app"
---
# GPT-5 on context-budget rituals

> ChatGPT-5 share link, 2026-04-09. I'd been losing context across sessions and wanted a framework, not another "use Projects" answer.

## The prompt

I dumped four things: the v6 AGENTS.md draft, the `mdfy Foundations` bundle, the most recent 50 Cursor chat turns, and a one-paragraph statement of what I was trying to design (cross-AI context handoff). Then asked: "How would you structure the context you carry into a working session?"

## What it built

A worksheet, not a heuristic. The structure was:

| Slot | Allocation | Source |
|---|---|---|
| Project context | 30% | AGENTS.md / CLAUDE.md / hub URL |
| Recent edits | 30% | Last N file diffs |
| Live conversation | 40% | This turn + last 4-6 |

The split is the part I actually use. Before this, I was loading "everything," which meant the model got 12K tokens of static context and 1K of live discussion. The flip (live conversation weighted above project context) sounds backwards, but it explains why "the AI keeps losing the thread mid-conversation": the thread had no room.
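The table can be read as a budgeting function. A minimal sketch: the slot names and 30/30/40 shares are from the worksheet; the token estimate (~4 chars/token) and the tail-trimming helper are my own illustrative assumptions, not from the chat.

```python
# 30/30/40 split from the worksheet, as a context builder.
SPLIT = {"project": 0.30, "recent_edits": 0.30, "live": 0.40}

def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token for English prose.
    return len(text) // 4

def trim_to_budget(text: str, max_tokens: int) -> str:
    # Keep the tail: the newest edits and turns matter most.
    return text[-(max_tokens * 4):]

def build_context(project: str, edits: str, live: str, budget: int = 13_000) -> str:
    # Allocate the total token budget across slots, trimming each source to fit.
    slots = {"project": project, "recent_edits": edits, "live": live}
    return "\n\n".join(
        trim_to_budget(slots[name], round(budget * share))
        for name, share in SPLIT.items()
    )
```

The point of writing it down as code: the trimming is per-slot, so a bloated AGENTS.md can never eat the live-conversation allocation.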

## The caveat

The worksheet overfits to **coding** sessions. For research or writing, the ratios shift the other way: live conversation shrinks (you're not iterating on a turn-by-turn loop) and project context expands (you're loading more reference material).
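One way to carry the caveat around is a per-mode split table. The coding row is the worksheet's 30/30/40; the research numbers below are placeholder guesses of my own (the chat gave only the direction of the shift, not figures).

```python
# Per-mode context splits. "coding" is from the worksheet;
# "research" values are illustrative placeholders only.
SPLITS = {
    "coding":   {"project": 0.30, "recent_edits": 0.30, "live": 0.40},
    "research": {"project": 0.55, "recent_edits": 0.15, "live": 0.30},
}

# Sanity check: every mode's shares must sum to 1.
assert all(abs(sum(s.values()) - 1.0) < 1e-9 for s in SPLITS.values())
```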

## What I'm doing

- Default `?compact` on hub URLs when pasted into Cursor — claws back 30% of the project-context slot for free.
- Added a "Recent edits" Cmd+K command in the editor — pulls last 5 diffs into a paste-able block.
- For research, swap to `?full=1` on bundle URLs — the whole bundle as context, no compaction.
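The "Recent edits" command isn't shown in the post; a hedged reconstruction of the idea with plain git, assuming the last 5 diffs means the last 5 commits:

```python
import subprocess

def recent_edits(n: int = 5) -> str:
    # git log -p prints each commit's message plus its full diff;
    # --no-color keeps the output clean for pasting into a chat.
    result = subprocess.run(
        ["git", "log", "-p", f"-{n}", "--no-color"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

If "diffs" means uncommitted work instead, `git diff` over the working tree would be the analogous call; the worksheet only needs *some* recent-edits source filling its 30% slot.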
