Prompt Patterns That Scale

Why Patterns?

Prompts are software. When they’re structured, repeatable, and testable, you get consistent outputs that improve over time. Below are patterns I use when building assistants and production workflows.

Core Building Blocks

RTCF

Role · Task · Constraints · Format. Explicitly set the model’s role, the requested task, the limits and guardrails, and the exact output format (e.g., JSON or Markdown).

Few-Shot

Provide 2–5 high-quality examples. Favor small, diverse samples over one giant example.

Rubric

Define how an answer will be judged. The model optimizes to the scoring criteria you give it.

Critic → Refine

Two-pass loop: generate a draft, then have a “critic” score it against the rubric and produce a revision that addresses the gaps.

Extraction

Bind the output to a JSON schema with strict keys & types for downstream reliability.

Step Budget

Bound complexity: “Think in ≤5 numbered steps, then produce final output only.”

Reusable Templates

1) RTCF Instruction

You are a {ROLE}. 
TASK: {TASK — single sentence}.
CONSTRAINTS:
- Audience: {AUDIENCE}
- Tone: {TONE}
- Sources: {SOURCE_RULES or "No external web access"}
- Safety: refuse if {BOUNDARIES}
OUTPUT FORMAT:
- Return {FORMAT e.g., Markdown with H2s, or JSON schema below}
SUCCESS CRITERIA:
- {RUBRIC bullets}
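
A minimal sketch of rendering this template from code, using only the standard library; the field values and the string.Template approach are illustrative, not a required API:

# Illustrative only: fill the RTCF template with string.Template.
from string import Template

RTCF = Template("""You are a $role.
TASK: $task.
CONSTRAINTS:
- Audience: $audience
- Tone: $tone
- Sources: $sources
- Safety: refuse if $boundaries
OUTPUT FORMAT:
- Return $output_format
SUCCESS CRITERIA:
$criteria""")

prompt = RTCF.substitute(
    role="senior release-notes editor",
    task="summarize the attached release notes for customers",
    audience="non-technical admins",
    tone="plain, confident, no hype",
    sources="No external web access",
    boundaries="asked for legal or security guarantees",
    output_format="Markdown with H2s",
    criteria="- Every claim traceable to the notes\n- Under 200 words",
)
print(prompt)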

2) JSON Extraction (Schema-bound)

Extract the fields using this JSON schema:
{
  "type":"object",
  "properties":{
    "title":{"type":"string"},
    "summary":{"type":"string"},
    "tags":{"type":"array","items":{"type":"string"}}
  },
  "required":["title","summary","tags"],
  "additionalProperties":false
}
Return ONLY JSON.
INPUT:
{{DOCUMENT}}
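
To make the schema binding stick downstream, parse and validate the reply before using it. A sketch, assuming a placeholder call_model client and the jsonschema package as one possible validator:

# Illustrative only: validate the model's reply against the schema above.
# call_model is a stand-in for whatever LLM client you use.
import json
from jsonschema import validate  # pip install jsonschema

SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "summary": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "summary", "tags"],
    "additionalProperties": False,
}

def extract(document: str, call_model) -> dict:
    prompt = (
        "Extract the fields using this JSON schema:\n"
        f"{json.dumps(SCHEMA)}\n"
        "Return ONLY JSON.\nINPUT:\n"
        f"{document}"
    )
    data = json.loads(call_model(prompt))   # fails loudly on non-JSON replies
    validate(instance=data, schema=SCHEMA)  # fails loudly on missing or extra keys
    return data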

3) Critic → Refine

# DRAFT
{{MODEL_OUTPUT}}

# RUBRIC
- Accuracy: facts align with provided material (0–2)
- Clarity: concise, avoids fluff (0–2)
- Actionability: clear next steps (0–2)

Act as a critic. Score each dimension 0–2 and list concrete fixes.
Then produce a REVISED version that addresses the fixes.
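
Wired into code, the loop is just two (or more) model calls. A sketch with a placeholder call_model client; the combined critique-and-revise call is a simplification you can split into separate critic and reviser prompts:

# Illustrative only: generate, critique against the rubric, revise.
CRITIC_PROMPT = """# DRAFT
{draft}

# RUBRIC
{rubric}

Act as a critic. Score each dimension 0–2 and list concrete fixes.
Then produce a REVISED version that addresses the fixes."""

def critic_refine(task_prompt: str, rubric: str, call_model, rounds: int = 1) -> str:
    draft = call_model(task_prompt)
    for _ in range(rounds):
        draft = call_model(CRITIC_PROMPT.format(draft=draft, rubric=rubric))
    return draft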

4) Few-Shot Style Guide

STYLE: Conversational, confident, no hype.
EXAMPLES:
Q: "Summarize for execs"
A: "In 3 bullets, each ≤14 words. Add 1 risk and 1 action."
Q: "Explain for staff"
A: "Write 120–160 words, plain language, end with a checklist."

5) Guarded Action (Tools)

When you need data, call the tool:
- find_doc(query): returns {title, url, snippet}
RULES:
- Cite URLs you use.
- Never invent links.
- If no relevant doc, ask for clarification.
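
If your provider supports function or tool calling, the same contract can be declared in code. A sketch; the spec shape loosely mirrors common JSON-schema tool definitions and will need adapting to your provider:

# Illustrative only: the guarded tool and its usage rules as data.
FIND_DOC_TOOL = {
    "name": "find_doc",
    "description": "Search internal docs; returns title, url, and snippet.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

TOOL_RULES = (
    "Cite the URLs you use. Never invent links. "
    "If no relevant doc is found, ask for clarification."
)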

Patterns by Outcome

Content & SEO

  • Brief → Outline → Draft → Critique → Final (a pipeline beats one mega-prompt; sketch after this list).
  • Term sheet of target keywords, competitors, and banned claims.
  • Channel adapters: one canonical draft, then adapters for email/LinkedIn/blog/help-center.
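
A sketch of the staged pipeline from the first bullet, with a placeholder call_model client; each stage gets its own prompt so it can be tested and swapped independently:

# Illustrative only: brief → outline → draft → critique → final as discrete stages.
STAGES = ["brief", "outline", "draft", "critique", "final"]

def run_pipeline(topic: str, prompts: dict, call_model) -> dict:
    """prompts maps each stage name to a template using {topic} and {previous}."""
    outputs, previous = {}, topic
    for stage in STAGES:
        previous = call_model(prompts[stage].format(topic=topic, previous=previous))
        outputs[stage] = previous
    return outputs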

Support & Policy-Aware Answers

  • Retrieve → Cite → Answer: include links and snippets used.
  • Policy filters (deny/allow lists) before final answer.
  • Escalation rule: if confidence < threshold → suggested reply + assign to human (routing sketch after this list).
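
A sketch of the escalation rule; the threshold value and where the confidence score comes from (a self-rating, a retrieval score, a classifier) are assumptions you will need to define:

# Illustrative only: confidence-gated routing for support replies.
CONFIDENCE_THRESHOLD = 0.7  # assumed value; tune against your own data

def route_answer(answer: str, confidence: float, citations: list[str]) -> dict:
    if confidence < CONFIDENCE_THRESHOLD or not citations:
        return {"action": "escalate", "suggested_reply": answer, "assignee": "human"}
    return {"action": "send", "reply": answer, "citations": citations}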

Analytics & BI Copilot

  • Metric dictionary: names, definitions, caveats, owner.
  • Explain-then-act: 2 reasons + 1 recommended action with risk.
  • SQL pattern: describe the table schema, ask for the query + a justification (prompt sketch after this list).
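
A sketch of the SQL pattern as a prompt builder; the wording and the JSON return shape are illustrative:

# Illustrative only: schema in, query plus justification out.
def sql_prompt(schema: str, metric_dictionary: str, question: str) -> str:
    return (
        "You are a careful analytics engineer.\n"
        f"TABLE SCHEMA:\n{schema}\n"
        f"METRIC DEFINITIONS:\n{metric_dictionary}\n"
        f"TASK: {question}\n"
        'Return ONLY JSON: {"sql": "<query>", "justification": "<why it answers the question>"}\n'
        "Do not reference columns that are not in the schema."
    )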

Engineering & Code Review

  • Diff rubric: security, performance, clarity, tests, docs (review-prompt sketch after this list).
  • Deprecation guide: preferred libs/versions + migration notes.
  • Refactor brief: constraints (no API changes, keep public types), tests required.
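
A sketch of the diff rubric as a review prompt with a fixed output shape; the categories match the bullet above, while the JSON verdict shape is an assumption:

# Illustrative only: rubric-bound code review with a structured verdict.
def review_prompt(diff: str) -> str:
    return (
        "Review the diff below against this rubric, scoring each dimension 0–2:\n"
        "security, performance, clarity, tests, docs.\n"
        "CONSTRAINTS: no public API changes; flag any you find.\n"
        "Return ONLY JSON:\n"
        '{"scores": {"security": 0, "performance": 0, "clarity": 0, "tests": 0, "docs": 0},\n'
        ' "blocking_issues": [], "suggestions": []}\n'
        f"DIFF:\n{diff}"
    )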

Anti-Patterns

  • Everything in one prompt — split into stages and test each.
  • Vague success — always include a rubric or KPI.
  • Unbounded outputs — specify length, sections, and format.
  • Hallucination-prone asks — require citations or add a “don’t know” path.

Operationalizing Prompts

  • Version prompts like code; keep a changelog.
  • Golden tasks for regression testing (evals on accuracy/helpfulness/safety/latency); minimal harness sketch after this list.
  • Telemetry: capture input category, prompt version, model, tokens, cost, and feedback.
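
A minimal sketch of a golden-task run that emits telemetry-style records; call_model is a placeholder and the substring check is a toy grader standing in for real accuracy/helpfulness/safety evals:

# Illustrative only: regression run over golden tasks with per-call records.
import time

GOLDEN_TASKS = [
    {"id": "exec-summary-001",
     "input": "Summarize for execs: Q3 revenue grew 12%, churn rose 1 point.",
     "must_include": ["risk", "action"]},
]

def run_eval(prompt_version: str, model: str, call_model) -> list[dict]:
    records = []
    for task in GOLDEN_TASKS:
        start = time.time()
        output = call_model(task["input"])
        records.append({
            "task_id": task["id"],
            "prompt_version": prompt_version,
            "model": model,
            "passed": all(term in output.lower() for term in task["must_include"]),
            "latency_s": round(time.time() - start, 2),
        })
    return records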

Need help templatizing your prompts?

I can turn your ad-hoc prompts into tested playbooks with rubrics, tools, and evals—so teams get repeatable results.

Train your team →  Design a custom GPT →