Product Management
Specs, user research, and prioritisation with Claude.
PMs spend the day translating: customer language into engineering tasks, vague leadership desires into a written spec, "we should do X" into "here's what X means and what it costs". Claude is a serious accelerator on every one of those — when you keep the thinking and let it do the typing.
By the end of this module you'll have
- A working PRD scaffolder that turns rough notes into a reviewable draft
- A user-research synthesiser that doesn't lose nuance from your interview transcripts
- A trade-off framer that helps stakeholders argue from the same set of facts
Time: about 1 hour for the basics, ~5 hours with all three notebooks.
Prerequisites: Modules 3 (prompt basics), 11 (data analysis), 13 (content creation), 15 (advanced reasoning).
What to delegate, what to keep
| Delegate to Claude | Keep yourself |
|---|---|
| Drafting the PRD body once you know the answer | Deciding the answer |
| Summarising 8 user interviews into themes | Choosing which themes to act on |
| Writing the FAQ from a draft of the spec | Knowing which questions to anticipate |
| Producing a stakeholder update from your notes | The judgement of what to say (and what not to) |
| Reformatting one stakeholder's brain dump into a one-pager | The relationship that lets you push back on it |
The pattern: anything where the value is in the choice, you keep. Anything where the value is in the words, Claude can help.
Recipe 1 · PRD scaffolder
```python
from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()
client = Anthropic()

PRD_SYSTEM = """\
You are drafting a product requirements document. Use this structure exactly:
1. Problem (one paragraph; concrete, not "users want better tools")
2. Who it's for (one sentence; specific role)
3. Why now (two sentences; what's changed)
4. What we're building (3–6 bullets; verbs first; avoid solutions in problem space)
5. Out of scope (3 bullets; the things people will ask about)
6. Success metric (one number; what would prove this worked in 90 days)
7. Open questions (3–5 bullets; the things you'd refuse to ship without answering)

Voice rules: plain English, no marketing words ('leverage', 'streamline', 'unlock'),
no hedging ('possibly', 'might'). If a section can't be filled from the input, write
[NEEDS INPUT: <what to ask>] — do NOT make things up.
"""

rough_notes = """\
- engineers complain that test failures take ages to root-cause
- hypothesis: surface flaky tests automatically, not via memory
- conversation with Sara (eng lead) suggests biggest pain is "is this test flaky or actually failing"
- internal tools team has bandwidth Q3
- success would be: 80% of devs say they trust the CI green/red signal
"""

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=900,
    system=PRD_SYSTEM,
    messages=[{"role": "user", "content": "Draft from these notes:\n" + rough_notes}],
).content[0].text
print(response)
```
The [NEEDS INPUT: ...] rule is the trick. The draft surfaces what you don't yet know instead of papering over it with confident prose. You can hand the draft to your team with the open questions still marked.
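Because the markers follow a fixed shape, you can also pull them out programmatically and hand your team a ready-made list of open questions. A minimal sketch; the `open_questions` helper and the sample draft below are illustrative, not part of the recipe above:

```python
import re

def open_questions(draft: str) -> list[str]:
    """Pull every [NEEDS INPUT: ...] marker out of a PRD draft."""
    return re.findall(r"\[NEEDS INPUT: (.*?)\]", draft)

draft = """\
1. Problem
Engineers cannot tell flaky test failures from real ones.
6. Success metric
[NEEDS INPUT: how will the 80% trust figure be measured?]
"""

for q in open_questions(draft):
    print("-", q)
```

Run it on each draft before review and you get the agenda for your next stakeholder conversation for free.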
Recipe 2 · User-research synthesiser
You have 8 interview transcripts. You need themes. Claude is good at this — if you keep it grounded.
```python
def synthesise(transcripts: list[str]) -> str:
    bundle = "\n\n---\n\n".join(
        f"INTERVIEW {i + 1}:\n{t}" for i, t in enumerate(transcripts)
    )
    return client.messages.create(
        model="claude-opus-4-7",
        max_tokens=1200,
        system=(
            "Synthesise the interviews below into themes. Output:\n"
            "1) 3–5 themes, each with: name, one-sentence definition, count of interviews mentioning it\n"
            "2) Two memorable verbatim quotes per theme (with interview number)\n"
            "3) The single most important contradiction across interviews\n"
            "Use only quotes present in the transcripts. Do not paraphrase quotes."
        ),
        messages=[{"role": "user", "content": bundle}],
    ).content[0].text

# print(synthesise([open(f"transcript_{i}.txt").read() for i in range(1, 9)]))
```
Two things make this honest:
- Counts force the model to point at evidence. "Mentioned in 6/8 interviews" is harder to fake than "many users said".
- Verbatim quotes are checkable: open the transcript, find the line.
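The quote check itself can be scripted rather than done by eye. A minimal sketch, assuming the synthesis wraps quotes in straight double quotes; the helper name and the sample data are made up for illustration:

```python
import re

def unverified_quotes(synthesis: str, transcripts: list[str]) -> list[str]:
    """Return quotes from the synthesis that appear in no transcript verbatim."""
    corpus = "\n".join(transcripts)
    quotes = re.findall(r'"([^"]+)"', synthesis)
    return [q for q in quotes if q not in corpus]

transcripts = ["I never trust a red build on Fridays.", "CI feels random."]
synthesis = (
    'Theme: distrust of CI. '
    '"I never trust a red build on Fridays." "CI is broken daily."'
)
print(unverified_quotes(synthesis, transcripts))
```

Any quote that comes back from this check is either paraphrased or invented, and the synthesis should go back for a redo before it reaches your team.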
Recipe 3 · Trade-off framer
When stakeholders argue, half the time they're not actually disagreeing — they're using different definitions of the same word. Claude can help expose that.
```python
prompt = """\
We're debating whether to ship the redesign before or after the migration.
Frame this as a structured trade-off doc. Output:
- The decision in one sentence
- Two named options (Option A and Option B), with what each would mean concretely
- A 3-row table with columns: dimension, Option A, Option B (rows: user impact, eng cost, risk)
- The tightest possible question that would unblock the decision
Do not pick a side. Do not invent constraints we haven't mentioned.
"""

framing = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=600,
    messages=[{"role": "user", "content": prompt}],
).content[0].text
print(framing)
```
The output isn't an answer — it's a shared frame. Bring that to the meeting; people argue about the cells in the table instead of the topic in the air.
Stakeholder updates without the marketing voice
The most common PM complaint about LLM writing: it sounds too positive. Drop this into your system prompt:
- No words: "excited", "delighted", "leverage", "streamline", "unlock", "world-class".
- No phrases: "we're thrilled to announce", "going forward", "in light of".
- If a sentence contains two adverbs, rewrite it with zero.
- If you'd write 'great progress', either give the number or rewrite the sentence.
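You can also enforce the banned list mechanically before an update goes out. This lint helper is a sketch, with the ban list copied from the rules above:

```python
BANNED = [
    "excited", "delighted", "leverage", "streamline", "unlock", "world-class",
    "we're thrilled to announce", "going forward", "in light of",
]

def marketing_lint(text: str) -> list[str]:
    """Return the banned words and phrases present in a draft, case-insensitively."""
    lowered = text.lower()
    return [b for b in BANNED if b in lowered]

update = "We're thrilled to announce that we will leverage the new CI signal."
print(marketing_lint(update))
```

An empty list means the draft passes; anything else tells you exactly which sentences to rewrite.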
It's blunt. It works. Your CTO will quietly stop fixing your updates.
What Claude can't replace
- Saying no to stakeholders. Even when the no is right, the relationship is yours.
- Sensing room temperature. A PRD doesn't tell you that ops is burnt out and shipping this in Q4 will break them.
- Knowing the customer. The model didn't sit on the call where the user said the actual revealing thing.
- Owning the call. When the data is ambiguous, the decision is yours. The model isn't accountable.
Try changing one thing
- Run the PRD scaffolder on a feature you've already shipped. Compare to the doc you actually wrote. Notice what each got right.
- Synthesise transcripts you've already coded by hand. Compute agreement on themes. Calibrate your prompt.
- Build a "stakeholder voice" model: from a few real updates, generate one in their voice. Useful for drafting; never ship it as theirs.
- Wire the trade-off framer to a slash-command in your team's chat. Decisions get framed in seconds.
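For the second exercise, agreement can start as a simple Jaccard overlap between your hand-coded theme set and Claude's. This `theme_agreement` helper is a sketch, not a prescribed metric, and the sample themes are invented:

```python
def theme_agreement(hand_coded: set[str], model_themes: set[str]) -> float:
    """Jaccard overlap between hand-coded themes and model-generated themes."""
    if not hand_coded and not model_themes:
        return 1.0
    return len(hand_coded & model_themes) / len(hand_coded | model_themes)

mine = {"trust in CI", "flaky tests", "slow feedback"}
claude = {"trust in CI", "flaky tests", "alert fatigue"}
print(round(theme_agreement(mine, claude), 2))  # → 0.5
```

In practice you'll need to normalise theme names first (the model rarely uses your exact labels), but even a rough score tells you whether a prompt change helped or hurt.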
Going deeper: open the notebooks
- `notebooks/01_introduction.ipynb` — PRDs, synthesis, trade-off framing (~1.5–2h)
- `notebooks/02_intermediate.ipynb` — discovery interviews, prioritisation frameworks (~2–3h)
- `notebooks/03_advanced.ipynb` — exec comms, board-level updates, board-deck drafting (~1.5–2.5h)
Module checklist
- [ ] You've drafted a PRD where the [NEEDS INPUT] markers exposed real gaps
- [ ] You've synthesised real interviews and verified the quotes against the source
- [ ] You can name three things in your job that Claude cannot replace
- [ ] You have a "no marketing words" snippet you reuse for every external comm
Curriculum complete
That's it — twenty-six modules from your first prompt to specialist applications. You've earned a real skill set: reasoning about prompts, building apps, calling tools, grounding in data, scaling across teams, and proving quality with evidence.
What's next is whatever you build with this. The fastest way to consolidate everything above is to ship one small Claude-backed thing this week — a CLI summariser, a Slack bot, an internal eval. Pick something whose absence currently annoys you. Use the patterns from this curriculum. Iterate.
When you do, open a discussion and tell us what you built. We'll learn from it too.