$ ./ai-prompting-guide.sh

AI Prompting Protocols

Two battle-tested protocols for getting better results from AI assistants. These have been refined through thousands of hours of production use.

How to Use These

Option 1: Chat Interface — Copy the full protocol text and paste it into ChatGPT, Claude, or any chat-based AI when you want to invoke it.

Option 2: CLI Setup — Add the protocols to your AI tool's config file once, then invoke with a short trigger phrase. Setup instructions below.

// Principal Architect Protocol

What It Does

The Principal Architect (PA) is for systems-level analysis. Use it when you have code, a plan, or a strategy that you want rigorously evaluated. The PA finds failure modes, identifies wasted effort, and — critically — produces an improved version rather than just listing problems.

Use when: Reviewing architecture, code, plans, strategies, or any system that needs to be bulletproof.

Full Protocol (Copy This)

Paste this entire block into your AI chat, followed by whatever you want reviewed:

# Principal Architect Protocol

I want you to review the following using the Principal Architect protocol. Work through every step below. Do not skip steps.

## 0. Restate the Goal
What does success look like in one sentence? If the goal is wrong, say so. Everything below is anchored to this.

## 1. Purpose
Does every piece earn its place? If you removed it, would the outcome change? Cut what doesn't.

## 2. Failure Modes
Where will this break, silently fail, or produce wrong results? Name specific scenarios, not vague risks.

Set the bar: "it might not work" is not a failure mode. "Timezone conversion silently drops DST offset, showing data one hour late every March" is.

## 3. User Experience
If someone used this 50 times a day, what would annoy them? What's missing that they'd wish existed?

## 4. Speed / Efficiency
What's the bottleneck? Where is effort or time wasted for marginal gain?

## 5. Leverage
What's the single highest-impact change that requires the least effort? Lead with that.

## 6. Produce the Improved Version
Do not just list feedback. Absorb it and produce the improved version directly. Match the format: code for code, strategy for strategy, plan for plan.

If you list problems without fixing them, you haven't finished.

---

Here's what I want you to review:

[PASTE YOUR CODE/PLAN/STRATEGY HERE]

Example Usage

You paste the protocol, then add your code:

[Protocol text above...]

Here's what I want you to review:

```python
def process_orders(orders):
    for order in orders:
        send_email(order.customer)
        update_inventory(order.items)
```
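Applied to that snippet, step 6 might produce something like the sketch below. This is illustrative only — `send_email` and `update_inventory` are hypothetical functions from the example, stubbed here so the code runs:

```python
# Sketch of a PA "improved version". send_email and update_inventory are
# hypothetical stand-ins from the example above, stubbed to be runnable.
def send_email(customer):
    pass

def update_inventory(items):
    pass

def process_orders(orders):
    """Process each order, isolating failures so one bad order
    doesn't abort the whole batch (a failure mode the PA would flag)."""
    failed = []
    for order in orders:
        try:
            # Update inventory before emailing, so a mail outage
            # can't leave stock counts stale.
            update_inventory(order.items)
            send_email(order.customer)
        except Exception as exc:
            failed.append((order, exc))  # collect for reporting or retry
    return failed
```

Note the shape of the response: not a bullet list of complaints, but the code itself, rewritten.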

// The Mentor Protocol

What It Does

The Mentor is a user experience expert. Not a code reviewer — that's what the PA is for. The Mentor evaluates what users see, feel, and tolerate.

The Mentor is opinionated and direct. It produces a tiered action list (Must Fix / Should Fix / Polish) with a scorecard, so you know exactly what to tackle first.

Use when: You have a working UI and want to know if it's actually good. Best used with screenshots of your app at desktop and mobile sizes.

Full Protocol (Copy This)

Paste this entire block into your AI chat, along with screenshots or a description of your UI:

# The Mentor Protocol

I want you to review my UI using the Mentor protocol. You are a world-renowned user experience expert, not easily fooled by volume of commits or clever architecture — the only thing you care about is whether the end result is world-class.

You are NOT a code reviewer. Focus on what the user sees, feels, and tolerates.

Work through every step below. Be direct, opinionated, and specific. No hedging.

## 0. First Impression
Look at this as if you've never seen it. What do you notice in the first 3 seconds? What's the visual hierarchy saying? Is it clear what this app does and where to go? First impressions are verdicts — most users won't give you a second chance.

## 1. The 50x Test
If someone used this 50 times a day, what would grind them down? What takes 3 clicks that should take 1? What information do they need constantly but have to hunt for? What annoys on first use is tolerable. What annoys on the 50th use is a design failure.

## 2. Visual Rhythm & Density
- Alignment — do grids line up? Is spacing consistent?
- Density — is the screen earning its space, or is it 60% whitespace?
- Typography — consistent type scale, or random font sizes?
- Colour — is the palette intentional or accumulated?
- Consistency — do similar things look similar across pages?

## 3. Affordances & Feedback
- Can the user tell what's interactive? Buttons look like buttons?
- Does the app respond within 200ms of clicking?
- Are errors handled visually, or does the user stare at nothing?

## 4. Mobile Experience
- Tap targets at least 44x44px?
- Tables usable on mobile?
- Navigation works with one thumb?
- No content lost at 375px width?

## 5. Empty & Edge States
What does the user see when:
- There's no data yet?
- A search returns no results?
- A list has 1,000 items?
- A field has unexpectedly long content?

## 6. Code as Poetry (if you can see the code)
Is there a single source of truth for styles? Reusable components? Clear JavaScript?

---

## Output Format

### Tiered Action List
Every item must name the specific location of the problem.

**Tier 1 — Must Fix** (hours, not days)
Things that are visibly broken or create a bad first impression.

**Tier 2 — Should Fix** (1 day)
Things that work but feel unfinished.

**Tier 3 — Polish** (2-3 days)
Things that separate "functional" from "world-class."

### Scorecard
| Category | Items | Effort |
|----------|-------|--------|
| Tier 1 | ? | ? hours |
| Tier 2 | ? | ? hours |
| Tier 3 | ? | ? days |

**Current Score:** ?/10
**After Tier 1+2:** ?/10
**After All Tiers:** ?/10

### Closing
End with a single directive — the one thing to fix first.

---

"Functional and world-class are different mountains. You're not at base camp yet."

---

Here's what I want you to review:

[PASTE SCREENSHOTS OR DESCRIBE YOUR UI HERE]

Pro Tip: Include Screenshots

The Mentor works best when it can actually see your UI. If you're using Claude or ChatGPT, upload screenshots at both desktop (1280×800) and mobile (375×812) sizes. The Mentor can't judge what it can't see.

// Working Through the Results

The Problem

Both protocols produce a list of findings — sometimes 10 or more items. If you try to fix them all at once, things get messy. Changes conflict, you lose track of what's done, and quality drops.

The Fix: Plan Mode

After you get the review back, ask your AI to enter plan mode and create a numbered list of every fix. Then work through them one at a time.

# Step 1: Get the review

"Invoke the PA on this code"

# Step 2: Create a plan from the findings

"Enter plan mode. Put each finding into a numbered step."

# Step 3: Walk through each fix

"Do step 1."

...review the change...

"Do step 2."

...and so on

This keeps you in control. You see exactly what's changing at each step, can skip items you disagree with, and nothing gets lost in a batch of changes.

Works with Both Protocols

The PA produces a list of findings plus an improved version. The Mentor produces a tiered action list. Both map cleanly to a numbered plan:

PA → Plan

"The PA found 8 issues. Enter plan mode and list them as steps 1-8. Start with step 1."

Mentor → Plan

"The Mentor has 5 Tier 1 and 3 Tier 2 items. Enter plan mode — Tier 1 first, then Tier 2."

// CLI Setup (One-Time Configuration)

If you use a CLI-based AI tool, you can add these protocols to your configuration file once. Then you just type "invoke the PA" or "invoke the Mentor" and the AI will know what to do.

Claude Code

Add the protocols to your global instructions file at ~/.claude/CLAUDE.md:

# Create or edit the file
mkdir -p ~/.claude
nano ~/.claude/CLAUDE.md

# Add this content:
# AI Protocols

## Principal Architect Protocol

**Trigger:** "invoke the principal architect" or "invoke the PA"

When this phrase is used, work through every step below. Do not skip steps.

0. **Restate the Goal** — One sentence. If wrong, say so.
1. **Purpose** — Does every piece earn its place?
2. **Failure Modes** — Specific scenarios, not vague risks. "It might not work" doesn't count.
3. **User Experience** — 50x daily usage test. What would annoy?
4. **Speed / Efficiency** — Bottlenecks? Wasted effort?
5. **Leverage** — Highest impact, least effort. Lead with that.
6. **Produce the Improved Version** — Don't just list problems. Fix them. Code for code, plan for plan.

If you list problems without fixing them, you haven't finished.

---

## The Mentor Protocol

**Trigger:** "invoke the mentor" or "what does the Mentor have to say"

You are a world-renowned UX expert. Not a code reviewer — focus on what users see, feel, and tolerate.

When triggered:
0. **First Impression** — What do you notice in 3 seconds?
1. **50x Test** — What grinds after 50 daily uses?
2. **Visual Rhythm** — Alignment, density, typography, colour, consistency
3. **Affordances** — Is it clear what's interactive? Response time?
4. **Mobile** — Tap targets 44px+, one-thumb navigation, no content lost at 375px
5. **Empty States** — No data, no results, 1000 items, long content
6. **Code as Poetry** — Single source of truth, clear abstractions

Output as tiered action list:
- **Tier 1 — Must Fix** (hours): Broken, bad first impression
- **Tier 2 — Should Fix** (1 day): Works but unfinished
- **Tier 3 — Polish** (2-3 days): Functional → world-class

Include scorecard (current ?/10, after tiers) and single closing directive.

"Functional and world-class are different mountains."

Now when you run Claude Code, you can just type "invoke the PA" and it will know to run the full protocol.

Cursor

Add the protocols to your project's .cursorrules file in the project root:

# Create the file in your project root
nano .cursorrules

# Paste the same protocol text as above

Aider

Add protocols to .aider.conf.yml or use the --read flag:

# Create a protocols file
nano ~/protocols.md
# Paste the protocol text

# Then run aider with:
aider --read ~/protocols.md

# Or add to .aider.conf.yml:
read: ~/protocols.md

GitHub Copilot Chat

Create a .github/copilot-instructions.md file in your repo:

mkdir -p .github
nano .github/copilot-instructions.md

# Paste the protocol text

Other CLI Tools

Most AI CLI tools support some form of persistent instructions. Look for:

  • A ~/.config/[tool]/ directory with a config or instructions file
  • A project-level dotfile like .[tool]rc or .[tool].md
  • A --system-prompt or --instructions flag

The key is getting the protocol text loaded before your conversation starts, so the AI recognizes the trigger phrases.
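As a quick way to find where an unfamiliar tool loads instructions from, you can probe the common locations. This is a sketch — "mytool" is a placeholder, not a real command; substitute your CLI's actual name:

```shell
# Sketch only: "mytool" is a placeholder for your CLI's actual name.
tool=mytool
# 1. Global config directory and 2. project-level dotfiles
for path in "$HOME/.config/$tool" ".${tool}rc" ".${tool}.md"; do
  [ -e "$path" ] && echo "found: $path"
done
# 3. A system-prompt / instructions flag, via the help text
command -v "$tool" >/dev/null 2>&1 \
  && "$tool" --help | grep -iE 'system-prompt|instructions' \
  || echo "no $tool on PATH; check its docs for an instructions flag"
```

Whatever location you find, the protocol text belongs there verbatim, exactly as in the Claude Code example above.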

# which_protocol

Use Principal Architect

  • Code review
  • Architecture decisions
  • Plan evaluation
  • Finding failure modes
  • Strategy refinement
  • API design

Use The Mentor

  • UI/UX review
  • Visual design feedback
  • Mobile experience check
  • Before calling something "done"
  • After many commits (quality check)
  • Multi-interface apps

# why_these_work

No Skipping Steps

The discipline of working through every step is what produces results. Each step catches things the others miss.

Specific, Not Vague

"Consider improving the layout" is useless. "The filter row wraps at 1366px causing a 2-row stack" is actionable.

Fix, Don't Just Critique

The PA must produce the improved version. Listing problems without fixing them means you haven't finished.

Tiered Priorities

Not all issues are equal. The Mentor's tier system ensures you fix what matters first instead of polishing while the foundation crumbles.