---
title: "Claude prompts I keep reusing"
url: https://mdfy.app/wzixoDcN
updated: 2026-05-14T18:15:49.480Z
source: "mdfy.app"
---
# Claude prompts I keep reusing

> Personal pack. Not authoritative, just what's working for me as of 2026-05.

## Re-read mode

> "Read this URL as context. Summarise the key claims in 3 bullets. Then ask me one question that exposes the weakest claim in the summary."

Why it works: the summary forces compression; the one question forces critical engagement. The combination prevents the "Claude says yes to everything" failure mode.

## Decision mode

> "You're an opinionated engineering manager. The context is at <URL>. I'm trying to decide whether to <decision>. Recommend an action, then explain the single biggest tradeoff. No hedging."

Why it works: "opinionated EM" anchors the voice. "Single biggest tradeoff" forces ranking. "No hedging" closes the escape hatch.
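I keep templates like this in a snippets file and fill the angle-bracket slots programmatically. A minimal sketch (the `fill` helper and its placeholder convention are my own, not anything official):

```python
# Tiny filler for prompt templates that use <name> placeholders,
# matching the angle-bracket style of the prompts above.
# Hypothetical helper, not part of any API.

def fill(template: str, **slots: str) -> str:
    """Replace each <name> placeholder with the matching keyword value."""
    for name, value in slots.items():
        template = template.replace(f"<{name}>", value)
    return template

decision_template = (
    "You're an opinionated engineering manager. The context is at <URL>. "
    "I'm trying to decide whether to <decision>. Recommend an action, "
    "then explain the single biggest tradeoff. No hedging."
)

prompt = fill(
    decision_template,
    URL="https://example.com/design-doc",  # placeholder URL for the sketch
    decision="split the service in two",
)
print(prompt)
```

Plain `str.replace` is enough here; I avoid `str.format` so literal braces in a prompt never need escaping.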

## Audit mode

> "List every assumption in this doc that isn't backed by evidence elsewhere in the hub. For each, suggest what evidence would make the assumption checkable."

Why it works: the second clause keeps it from being a list of nitpicks. Each item ends with a concrete way to validate.

## Reading list mode

> "From <bundle URL>, what's the ideal reading order if I have only 15 minutes? Order them by 'most value gained from reading first'."

Why it works: bundles have reading orders built in (the analyser produces one), but those orders optimise for complete understanding, not for the best 15 minutes. Asking for the time-boxed order is a genuinely different request.

## Counterpart mode

> "You're the smartest person who disagrees with this thesis: <thesis>. The context is at <URL>. Write the strongest version of their argument."

Why it works: forcing the opposite framing reveals which parts of my thesis are durable and which parts only hold because I haven't seen the counter-argument.

## Pre-write mode

> "I'm about to write a doc with the working title <title>. The audience is <person>. The context is at <URL>. Before I write, what are three questions you'd want me to answer in the doc, in priority order?"

Why it works: question-driven docs land harder than topic-driven ones. The pre-write prompt produces the questions before I commit to a structure.

## What I'm not using

- "Be concise." Useless: Claude already calibrates length to context. Better to say "Answer in <X> sentences" with a specific number.
- "Be honest." Implies the default is dishonesty. The model is mostly honest already; ask better questions instead.
- Roleplay prompts that anchor to a famous person. They make the answer feel persona-shaped instead of evidence-shaped.
