The Verification Checklist
One-liner: Build a personal AI verification system (a checklist you'll actually use) and stress-test it against real AI outputs to find its limits. 25 minutes.
== 🔧 Jump in (Tinkerers start here) ==
You're going to build a verification checklist, then immediately try to break it.
Step 1 — Generate something to verify. Ask AI to produce a piece of content you might actually use in your work:
Write a [deliverable type — e.g., client email, project proposal, market analysis, technical recommendation] about [topic relevant to your work]. Make it detailed and specific. Include data points, recommendations, and reasoning.
Step 2 — Build your checklist. Before reading the output carefully, write your own verification checklist. Start with these categories and add your own:
{| class="wikitable"
! Check !! Question !! Pass/Fail
|-
| Factual claims || Are specific numbers, dates, or statistics verifiable? ||
|-
| Sources || Could I find the original source for any cited information? ||
|-
| Reasoning || Does the logic hold? Are there hidden assumptions? ||
|-
| Completeness || What important perspective or consideration is missing? ||
|-
| Tone/audience || Is the tone appropriate? Would the intended audience trust this? ||
|-
| Actionability || Are the recommendations specific enough to actually follow? ||
|-
| Your domain check || [Add a check specific to your field] ||
|-
| Your domain check || [Add another check specific to your field] ||
|}
Step 3 — Apply the checklist. Go through the AI output line by line using your checklist. Mark each check as pass or fail. For every fail, note what the issue is.
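If you'd rather work in a file than on paper, the same loop can be scripted. Here is a minimal Python sketch of Steps 2 and 3: the check names mirror the table above, but the script name, prompts, and output format are illustrative assumptions, not part of the exercise.

<syntaxhighlight lang="python">
# checklist_runner.py -- a minimal sketch of Steps 2-3 (illustrative, not prescribed).
# Check names mirror the table above; add your own domain-specific checks.

CHECKS = [
    ("Factual claims", "Are specific numbers, dates, or statistics verifiable?"),
    ("Sources", "Could I find the original source for any cited information?"),
    ("Reasoning", "Does the logic hold? Are there hidden assumptions?"),
    ("Completeness", "What important perspective or consideration is missing?"),
    ("Tone/audience", "Is the tone appropriate? Would the intended audience trust this?"),
    ("Actionability", "Are the recommendations specific enough to actually follow?"),
    # Step 2 asks you to add at least two checks specific to your field here.
]

def run_checklist() -> list[tuple[str, bool, str]]:
    """Walk each check, record pass/fail, and capture a note for every fail."""
    results = []
    for name, question in CHECKS:
        verdict = input(f"{name}: {question} [p/f] ").strip().lower()
        passed = verdict.startswith("p")
        note = "" if passed else input("  What's the issue? ")
        results.append((name, passed, note))
    return results

if __name__ == "__main__":
    results = run_checklist()
    fails = [(name, note) for name, passed, note in results if not passed]
    print(f"\n{len(results) - len(fails)}/{len(results)} checks passed")
    for name, note in fails:
        print(f"  FAIL {name}: {note}")
</syntaxhighlight>

The fail notes are the useful output: they become the raw material for Step 4.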
Step 4 — Stress-test the checklist. Now deliberately ask AI to produce something harder to verify:
Write the same type of [deliverable] but on a topic I'm less familiar with: [topic outside your expertise]. Make it equally detailed and authoritative.
Apply your checklist again. Where does it fail to catch problems? What check do you need to add?
Step 5 — Finalize. Update your checklist based on what you learned. Save it where you'll actually use it — bookmark it, pin it, print it, whatever works.
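Step 5's "save it where you'll actually use it" can also be a generated file. A sketch, assuming the same CHECKS list as the snippet above and a Markdown file you can pin (the file name is arbitrary):

<syntaxhighlight lang="python">
# save_checklist.py -- writes the finalized checklist to a Markdown file.
# The file name and the truncated CHECKS list are illustrative; use your full Step 5 version.
from pathlib import Path

CHECKS = [
    ("Factual claims", "Are specific numbers, dates, or statistics verifiable?"),
    ("Sources", "Could I find the original source for any cited information?"),
    ("Reasoning", "Does the logic hold? Are there hidden assumptions?"),
    # ...the rest of your finalized checks go here.
]

lines = ["# AI output verification checklist", ""]
lines += [f"- [ ] **{name}**: {question}" for name, question in CHECKS]
Path("verification_checklist.md").write_text("\n".join(lines) + "\n", encoding="utf-8")
print("Saved verification_checklist.md")
</syntaxhighlight>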
== 📋 Plan first (Planners start here) ==
Here's what you're about to do:
- Generate test content — Ask AI to produce a work-relevant deliverable. This gives you realistic material to verify.
- Draft your checklist — Build a structured verification list covering factual accuracy, reasoning quality, completeness, tone, and domain-specific concerns.
- Apply to familiar territory — Use the checklist on AI output about a topic you know. This lets you calibrate how well the checklist catches real errors.
- Apply to unfamiliar territory — Use the checklist on AI output about a topic you don't know well. This exposes gaps in your process — the errors you can only catch with domain knowledge.
- Iterate and save — Update the checklist based on what it missed. Save it in a format you'll actually reach for.
"Done" looks like: A tested, refined verification checklist (8-12 items) saved in a usable format, with evidence of at least one error it caught and one gap you identified and fixed.
== 🧭 Why this matters (Strategists start here) ==
In [[The Fact-Check Habit|EP-Basic-01]], you built a simple 3-question verification prompt. Here, you're building a '''systematic process''' — a checklist that works regardless of topic, catches both factual and reasoning errors, and is tuned to your specific work context. The community's 75% Ethical Prompting score means most people ''intend'' to verify AI output but lack a consistent method. A checklist turns good intentions into reliable behavior. At the advanced level, you'll scale this into a governance framework for a team; this exercise builds the individual practice first.
== Reflection ==
- Which check caught the most problems? Which was least useful?
- How did your verification experience change between the familiar topic and the unfamiliar one?
- Is your checklist something you'd actually pull up before sending an AI-generated deliverable? What format makes it most likely you'll use it?
- 💬 Trade checklists with a colleague. Have them apply yours to an AI output from their work — their feedback will reveal blind spots specific to your domain. (Social Learners)
== ⬆️ Level up ==
Ready for more? Try [[The AI Governance Playbook|EP-Advanced-01]] — where you'll design an AI governance framework for a team or project.
Back to [[Pillars/Ethical Prompting|Ethical Prompting & Judgment]]