Most prompt advice is either too generic (“be specific”) or too gimmicky (“use this magic phrase”). This newsletter is a practical pattern library: repeatable ways to structure prompts so you get consistent, usable results across writing, analysis, planning, and execution.

Use it as a reference or a quick course. Let’s start with Part 1.
Part 1: The Foundation — Role, Goal, and Context
1) Persona (Role + Goal + Constraints)
What it is: Assign a role to shape tone, priorities, and vocabulary.
When to use: You want consistent output (e.g., “staff engineer review”, “PM summary”, “security mindset”).
Copy/paste prompt
Act as a {role}.
Your goal: {goal}.
Constraints: {constraints}.
Output format: {format}.
Simple example
Act as a Staff Software Engineer.
Your goal: review my design for pitfalls.
Constraints: be blunt, focus on scalability and failure modes.
Output format: 8 bullets + 3 questions.
Design: We stream events from Kafka to a Postgres table for analytics dashboards.
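If you reuse the same persona often, the four slots above can be filled programmatically. A minimal sketch; the `build_persona_prompt` helper and its parameter names are illustrative, not from any library:

```python
def build_persona_prompt(role: str, goal: str, constraints: str, fmt: str, body: str) -> str:
    """Assemble a Persona prompt: role, goal, constraints, output format, then the task."""
    return (
        f"Act as a {role}.\n"
        f"Your goal: {goal}.\n"
        f"Constraints: {constraints}.\n"
        f"Output format: {fmt}.\n\n"
        f"{body}"
    )

prompt = build_persona_prompt(
    role="Staff Software Engineer",
    goal="review my design for pitfalls",
    constraints="be blunt, focus on scalability and failure modes",
    fmt="8 bullets + 3 questions",
    body="Design: We stream events from Kafka to a Postgres table.",
)
```

Keeping the slots as function arguments makes it easy to swap personas while holding the structure constant.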
2) Ask for Input (Clarifying Questions First)
What it is: Force the model to ask questions before producing an answer.
When to use: Requirements are vague, or the answer depends on context (audience, constraints, timeline).
Copy/paste prompt
Before answering, ask {N} clarifying questions.
After I answer, produce the final output in {format}.
Simple example
Before answering, ask 4 clarifying questions.
After I answer, draft a 1-page project kickoff email.
Context: new data pipeline migration.
3) Question Refinement (Rewrite the Prompt, Then Answer)
What it is: Use the model to turn your messy request into a high-quality prompt.
When to use: You’re unsure what to ask, or you keep getting generic answers.
Copy/paste prompt
Rewrite my request into a precise prompt (include audience, scope, constraints, and definition of done).
Then answer using that improved prompt.
My request: {your rough request}
Simple example
Rewrite my request into a precise prompt (include audience, scope, constraints, and definition of done).
Then answer using that improved prompt.
My request: “Summarize this incident for leadership.”
4) Cognitive Verifier (Sub-Questions → Answers → Synthesis)
What it is: Break a big decision into smaller verifiable questions, answer them, then synthesize.
When to use: Decisions, architecture, tradeoffs, “should we do X?”, anything with hidden assumptions.
Copy/paste prompt
To answer the question, do this:
1) List 5–8 sub-questions that must be answered.
2) Answer each sub-question briefly.
3) Provide a final recommendation.
Question: {question}
Context: {context}
Simple example
To answer the question, do this:
1) List 6 sub-questions that must be answered.
2) Answer each sub-question briefly.
3) Provide a final recommendation.
Question: Should we move from batch ETL to streaming?
Context: 2TB/day, 15 downstream consumers, SLA is 15 minutes, team of 4.
5) Alternative Approaches (Generate Options + Tradeoffs)
What it is: Ask for multiple approaches and compare them explicitly.
When to use: Design choices, prioritization, roadmap decisions, or “how do we reduce X by Y?”
Copy/paste prompt
Propose 3 approaches to achieve {goal}.
For each: summary, pros, cons, complexity, risks, and when I should choose it.
End with a recommendation based on these constraints: {constraints}.
Simple example
Propose 3 approaches to reduce CI time by 30%.
For each: summary, pros, cons, complexity, risks, and when I should choose it.
End with a recommendation based on these constraints: monorepo, 2000 tests, limited parallel runners.
6) Fact Check List (Assumptions + What to Verify)
What it is: Make the model state assumptions and output a verification checklist.
When to use: Any output that may be forwarded, implemented, or used in a decision.
Copy/paste prompt
Give your answer.
Then list:
- Assumptions you made
- What facts I should verify
- What would change your recommendation
Simple example
Is it okay to share this doc externally?
Then list:
- Assumptions you made
- What facts I should verify
- What would change your recommendation
Doc summary: “Architecture overview of customer event tracking with vendor names and internal URLs.”
Part 2: Make Outputs Usable: Structure, Steps, and Repeatable Formats
If Part 1 was about thinking patterns (clarify, decompose, compare), Part 2 is about shipping patterns.
Most “bad LLM output” isn’t wrong; it’s just not actionable. It’s a blob. These patterns turn blobs into artifacts you can use: runbooks, PRDs, plans, and finished drafts.
7) Template (Force Structure for Repeatability)
What it is: Constrain output into a reusable format so results are consistent and scannable.
When to use: Status updates, PRDs, incident summaries, decision records, meeting notes.
Copy/paste prompt
Fill this template exactly:
Title:
Context:
Goal:
Non-goals:
Options considered:
Recommendation:
Risks & mitigations:
Next steps:
Open questions:
Topic: {topic}
Simple example
Fill this template exactly:
Title:
Context:
Goal:
Non-goals:
Options considered:
Recommendation:
Risks & mitigations:
Next steps:
Open questions:
Topic: Choose between ClickHouse vs BigQuery for near-real-time product analytics.
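Because the template's section labels are fixed, you can lint a model response for missing sections before anyone reads it. A small sketch using the labels from the template above (the helper is an assumption for illustration):

```python
REQUIRED_SECTIONS = [
    "Title:", "Context:", "Goal:", "Non-goals:", "Options considered:",
    "Recommendation:", "Risks & mitigations:", "Next steps:", "Open questions:",
]

def missing_sections(response: str) -> list[str]:
    """Return the template labels that the model's response failed to include."""
    return [s for s in REQUIRED_SECTIONS if s not in response]

# A partial draft that skipped most of the template:
draft = "Title: ClickHouse vs BigQuery\nContext: near-real-time analytics\nGoal: pick one\n"
gaps = missing_sections(draft)
```

If `gaps` is non-empty, re-prompt with "You omitted these sections: …" instead of regenerating from scratch.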
8) Recipe Pattern (turn advice into an executable runbook)
What it is
A structured, repeatable procedure: prerequisites → steps → decision points → pitfalls → final checklist.
When to use it
- You want a process your team can actually follow
- You’re writing a runbook, an SOP, onboarding steps, or a migration plan
- You keep getting high-level “best practices” instead of actions
Copy/paste prompt
Create a recipe for: {task}.
Output must include:
1) Goal (1–2 lines)
2) Inputs / prerequisites
3) Tools / access needed
4) Numbered steps (each step starts with an action verb)
5) Decision points (if/then)
6) Common pitfalls + how to avoid
7) Validation steps (how to confirm it worked)
8) Final checklist (5–10 checkboxes)
Constraints:
- Keep it concrete and tool-agnostic unless I specify tools
- Prefer bullets and short steps over paragraphs
Context: {context}
Simple example
Create a recipe for: migrating a Kafka consumer from v1 schema to v2 schema.
Constraints:
- Zero downtime
- Must support dual-read for 2 weeks
- Team uses Terraform + Kubernetes
Context: Consumer reads from topic orders.events and writes to Postgres.
9) Outline Expansion Pattern (control direction before the model “fills in”)
What it is
A two-phase workflow:
1) Generate a detailed outline first.
2) Expand each section into the final deliverable.
You’re essentially preventing the model from sprinting in the wrong direction.
When to use it
- PRDs, tech specs, proposals, stakeholder updates, postmortems
- You care about structure and coverage more than “clever writing”
- Multiple stakeholders need to agree on headings before details
Copy/paste prompt
We will write a {deliverable}. Do it in 2 phases.
Phase 1: Create a detailed outline (headings + bullet subpoints).
- Include any missing sections you think are necessary.
- Ask me up to 5 clarifying questions if needed.
Stop after the outline and wait for my approval.
Phase 2 (after I approve): Expand each section.
Constraints for expansion:
- Keep it concise, scannable, and decision-oriented
- Use bullets where possible
- Include open questions and risks
Topic: {topic}
Audience: {audience}
Context: {context}
Simple example
We will write a design doc. Do it in 2 phases.
Topic: Add idempotency to our payments API to prevent double charges
Audience: Backend engineers + PM + SRE
Context: We see duplicate charges during retries and timeouts. We use Postgres and a Go service.
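The two-phase gate can be enforced in code as well as in the prompt. A sketch in which `llm` is a stub standing in for whatever client you actually use; the function, its canned replies, and the `approve` callback are all assumptions for illustration:

```python
def llm(prompt: str) -> str:
    """Stub for a real LLM client call; returns canned text for the demo."""
    if "Phase 1" in prompt:
        return "OUTLINE:\n1. Problem\n2. Proposal\n3. Risks"
    return "Expanded sections..."

def outline_then_expand(topic: str, approve) -> str:
    """Phase 1 produces an outline; Phase 2 runs only after a human checkpoint."""
    outline = llm(f"Phase 1: Create a detailed outline for {topic}. Stop and wait.")
    if not approve(outline):
        return outline  # hand the outline back for revision instead of expanding
    return llm(f"Phase 2: Expand each section of this outline:\n{outline}")

result = outline_then_expand("idempotency design doc", approve=lambda o: "Risks" in o)
```

In practice `approve` would pause for a human; the point is that expansion cannot start until the outline passes the gate.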
10) Tail Generation Pattern (finish the last 20% without rewriting the first 80%)
What it is
A continuation-only prompt: the model must match tone/structure and only generate what comes next.
This is the opposite of “rewrite my draft.” It’s “don’t touch my draft—just finish it.”
When to use it
- You already have a solid draft and don’t want it “creatively improved”
- You want consistent voice (especially if you’re writing for leadership)
- You’re stuck on conclusions, next steps, executive summaries, or FAQs
Copy/paste prompt
Continue writing from where I stopped.
Rules:
- Do NOT rewrite or summarize earlier text
- Only add the next {N} sections/paragraphs
- Match the existing tone, formatting, and level of detail
- If you need assumptions, list them at the end as “Assumptions”
What to generate next: {what’s missing}
Text so far:
{paste your draft}
Simple example
Continue writing from where I stopped.
Rules:
- Do NOT rewrite earlier text
- Only add the next 3 sections
What to generate next: Risks, Rollout Plan, and Open Questions
Text so far:
[Paste a half-finished PRD/design doc here]
Part 3: Advanced Leverage: Reasoning, Examples, and Smart Workflows
Part 1 was about getting better answers.
Part 2 was about getting usable outputs.
Part 3 is about getting leverage: workflows that make an LLM feel less like “chat” and more like a reliable co-pilot.
11) Few-shot (teach the pattern with examples)
What it is
You give a few input → output examples so the model learns the format, tone, and decision policy you want.
Use when
- Output must be consistent (status updates, ticket triage, review comments)
- You have a house style (short bullets, specific headers, terminology)
- You’re tired of “almost right” formatting
Copy/paste prompt
You will follow the pattern shown in the examples.
Rules:
- Match the formatting exactly
- Match tone and concision
- If information is missing, add a “Missing Info” section with questions
Examples:
Input: {example_input_1}
Output: {example_output_1}
Input: {example_input_2}
Output: {example_output_2}
Now do the same for:
Input: {your_input}
Simple example
You will follow the pattern shown in the examples.
Examples:
Input: Weekly project update for “Data Pipeline”
Output:
- Progress: Migrated 3/5 jobs to new scheduler
- Risks: Delay due to access approvals
- Next: Complete remaining 2 jobs, add monitoring
Input: Weekly project update for “API Modernization”
Output:
- Progress: Deprecated v1 endpoints (read-only)
- Risks: Partner adoption lag
- Next: Publish migration guide, office hours
Now do the same for:
Input: Weekly project update for “Identity Service”
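Few-shot prompts are tedious to maintain by hand; storing the examples as data and rendering them keeps the pairs in one place. A minimal sketch (the `few_shot_prompt` helper is illustrative):

```python
def few_shot_prompt(examples: list[tuple[str, str]], new_input: str) -> str:
    """Render input/output example pairs, then the new input for the model to complete."""
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return (
        "You will follow the pattern shown in the examples.\n"
        f"Examples:\n{shots}\n"
        f"Now do the same for:\nInput: {new_input}\nOutput:"
    )

p = few_shot_prompt(
    [("Weekly update for \"Data Pipeline\"", "- Progress: 3/5 jobs migrated"),
     ("Weekly update for \"API Modernization\"", "- Progress: v1 endpoints deprecated")],
    "Weekly update for \"Identity Service\"",
)
```

Ending the prompt with a bare `Output:` nudges the model to continue the pattern rather than comment on it.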
12) Chain of Thought (guided reasoning, but keep it usable)
What it is
You ask for a structured reasoning trace in a usable format (tradeoffs, assumptions, and a recommendation). The goal is not “hidden magic”—it’s to make the thinking auditable.
Use when
- You’re making a decision with tradeoffs (architecture, prioritization, vendor choice)
- You want to see assumptions and risks, not just a confident conclusion
- You need something you can paste into a decision record
Copy/paste prompt
Help me decide on {decision}. Use this structure:
1) Clarifying questions (max 5). If none, say “None”.
2) Assumptions (explicit list).
3) Options (at least 3).
4) Evaluation criteria (5–8 criteria).
5) Tradeoff analysis (table preferred).
6) Recommendation + why.
7) Risks and mitigations.
8) What would change your recommendation?
Context: {context}
Constraints: {constraints}
Simple example
Help me decide on an approach for idempotency in a payments API.
Context: Retries can cause double charges. We use Postgres. Latency budget is tight.
Constraints: Must be backward compatible for 60 days.
13) ReAct (Reason + Act loop for investigation)
What it is
A loop: reason about what you know → decide what to do next → execute an action (ask a question, request a snippet, propose an experiment) → revise.
Use when
- Debugging and incident investigation
- Root cause analysis with incomplete data
- Research synthesis where you need iterative refinement
Copy/paste prompt
We will solve this using an iterative loop.
At each iteration, output:
- Current hypothesis
- What evidence supports/contradicts it
- Next action (choose ONE): ask me a question / request data / propose an experiment / propose a fix
- Expected outcome if the hypothesis is true
Stop after the next action and wait for my response.
Problem: {problem}
What I know so far: {facts}
Simple example
Problem: API latency spiked after a deployment.
What I know so far: P95 went from 180ms to 600ms. Error rate unchanged. CPU normal.
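The loop can be driven from code by parsing the single "Next action" line out of each reply and pausing for the human to supply what it asks for. A sketch; `llm` is a stub and the reply format is the one requested by the prompt above, not a guaranteed model behavior:

```python
import re

def llm(prompt: str) -> str:
    """Stub: a real call would return the model's current hypothesis + next action."""
    return "Hypothesis: connection pool exhausted\nNext action: request data"

def react_step(problem: str, facts: list[str]) -> tuple[str, str]:
    """One Reason+Act iteration: send the context, parse the single next action."""
    reply = llm(f"Problem: {problem}\nKnown: {'; '.join(facts)}\nOutput one Next action.")
    action = re.search(r"Next action: (.+)", reply).group(1)
    return reply, action  # the caller supplies the requested data, then loops

reply, action = react_step("API latency spiked", ["P95 180ms to 600ms", "CPU normal"])
```

Each iteration appends the new evidence to `facts`, so the hypothesis sharpens instead of resetting.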
14) Flipped Interaction (make the model interview you)
What it is
Instead of “here’s my messy ask, please write a perfect document,” you force a quick interview first. This prevents rewrites.
Use when
- Requirements are fuzzy
- Multiple stakeholders have opinions
- You want speed without losing accuracy
Copy/paste prompt
Before you produce any output, interview me.
Rules:
- Ask up to {N} questions.
- Ask them in the best order (highest information gain first).
- After I answer, summarize requirements and confirm understanding.
- Only then produce the deliverable.
Deliverable: {deliverable}
Audience: {audience}
Context: {context}
Simple example
Deliverable: One-page rollout plan
Audience: SRE + product
Context: Rolling out a new auth middleware next week
Ask up to 7 questions.
15) Menu Actions (keep control in long workflows)
What it is
The model offers a small “menu” of next moves. You pick one. This prevents the model from running ahead and keeps you in control.
Use when
- Multi-step tasks (docs, investigations, planning)
- You want predictable collaboration instead of surprises
- You’re working with stakeholders and need checkpoints
Copy/paste prompt
Work in checkpoints.
After each response, provide a “Next Actions” menu with 4–6 options.
Do not proceed until I pick an option.
Task: {task}
Context: {context}
Simple example
Task: Create a migration plan from service A to service B
Context: Must support dual-run for 30 days and minimize risk
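The checkpoint discipline can also live in a thin wrapper: split each reply into its content plus the menu, and feed nothing back until a choice is made. A sketch; the `llm` stub and the "Next Actions:" parsing convention are assumptions for illustration:

```python
def llm(prompt: str) -> str:
    """Stub: a real reply would end with the numbered menu the prompt requested."""
    return "Step 1 drafted.\nNext Actions:\n1. Expand risks\n2. Draft timeline\n3. Stop"

def parse_menu(reply: str) -> tuple[str, list[str]]:
    """Split a checkpointed reply into its content and the numbered menu options."""
    content, _, menu = reply.partition("Next Actions:\n")
    options = [line.split(". ", 1)[1] for line in menu.splitlines() if ". " in line]
    return content.strip(), options

content, options = parse_menu(llm("Task: migration plan. Work in checkpoints."))
```

The caller presents `options` to the user and sends the chosen one back as the next prompt, so the model never runs ahead.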
16) Meta Language Creation (invent a tiny shared protocol)
What it is
You define a lightweight “language” (tags + meanings) to speed up repeated collaboration. Think: shorthand for priority, risk, confidence, and next steps.
Use when
- You repeatedly do the same kind of work (weekly updates, incident comms, reviews)
- You want outputs to be easy to scan
- Multiple people will consume the output
Copy/paste prompt
Create a compact meta-language for our collaboration.
Include:
- 8–12 tags (e.g., [RISK], [DECISION], [ASSUMPTION])
- Definition for each tag
- Rules for usage (where tags appear, frequency)
Then apply it to the content I provide.
Goal: {goal}
Content: {content}
Simple example
Goal: Standardize incident updates in Slack
Content: “Login failures started at 10:12. We rolled back. Monitoring looks better…”
17) Combining Patterns (compose a reliable pipeline)
What it is
Patterns become powerful when you combine them intentionally, like Lego blocks.
Use when
- You want repeatable “factories” for common deliverables
- You want outputs that are both structured and high quality
Common combos
- Flipped Interaction → Outline Expansion → Tail Generation (fast + accurate docs)
- Few-shot → Recipe (consistent runbooks in a house style)
- Menu Actions → ReAct (investigate step-by-step with checkpoints)
Copy/paste prompt
Use this pipeline:
1) Flipped Interaction: ask me up to {N} questions.
2) Produce a detailed outline and wait for approval.
3) Expand section-by-section.
4) Finish with a Recipe-style checklist for execution.
Deliverable: {deliverable}
Audience: {audience}
Context: {context}
Constraints: {constraints}
Simple example
Deliverable: Design doc for adding rate limiting
Audience: Backend + SRE
Context: We’re seeing abuse and cost spikes
Constraints: Minimal latency impact; phased rollout required
Ask up to 6 questions.
18) Game Play (stress-test with roles, constraints, and scoring)
What it is
You turn a plan into a “game”: assign roles (red team vs blue team), define rules and scoring, then simulate scenarios to uncover blind spots.
Use when
- You want to stress-test an architecture or rollout plan
- You need to discover risks you’re not thinking about
- You’re preparing for stakeholder objections
Copy/paste prompt
Run a structured role-play simulation.
Roles:
- Blue Team: proposes the plan
- Red Team: attacks the plan (failures, edge cases, abuse)
- Referee: scores risks by impact/likelihood and proposes mitigations
Rules:
- 3 rounds
- Each round ends with: top risks, mitigations, and an updated plan
- Keep it practical and realistic
Plan: {plan}
Context: {context}
Constraints: {constraints}
Simple example
Plan: Roll out a new authentication middleware behind a feature flag to 5% → 25% → 100%
Context: Multi-region service, peak traffic at 9am local
Constraints: No downtime; rollback must be < 5 minutes
19) Semantic Filter (Transform While Removing/Restricting Content)
What it is: Rewrite text while removing sensitive details or enforcing rules.
When to use: External sharing, anonymization, tone changes, or “keep only what matters.”
Copy/paste prompt
Rewrite the text for {audience}.
Remove or mask: {items to remove}.
Keep: {items to keep}.
Constraints: {tone/length rules}.
Text: {text}
Simple example
Rewrite the text for an external audience.
Remove or mask: names, customer identifiers, internal URLs, exact dates, dollar amounts.
Keep: the problem, the approach, and the outcome.
Constraints: 120 words max, neutral tone.
Text: “John at ACME signed on 12 Jan for $120k. We deployed feature flags at intranet/wiki/...”
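For the mechanical part of the masking, a pre-pass in code catches the easy categories before the model ever sees the text. A sketch only: these regexes are assumptions matched to the example above, and real redaction needs human review because patterns alone will miss variants:

```python
import re

# Assumed masking rules for the example categories above (amounts, short
# dates, internal URLs). Names and customer identifiers still need the
# model pass or a human pass — regexes cannot reliably catch them.
RULES = [
    (re.compile(r"\$[\d,]+k?"), "[AMOUNT]"),
    (re.compile(r"\b\d{1,2} [A-Z][a-z]{2}\b"), "[DATE]"),
    (re.compile(r"\bintranet/\S+"), "[INTERNAL-URL]"),
]

def semantic_filter(text: str) -> str:
    """Apply each masking rule in order, replacing matches with a placeholder."""
    for pattern, mask in RULES:
        text = pattern.sub(mask, text)
    return text

out = semantic_filter("Signed on 12 Jan for $120k. Docs at intranet/wiki/arch.")
```

Running the deterministic pass first means the LLM rewrite only has to handle judgment calls, not rote scrubbing.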
That’s the library.
If you take one thing away: don’t hunt for “perfect prompts”; reuse patterns. Pick the one that matches the job, paste your context, and iterate once.
