Ethical Prompting & Judgment
Responsible AI use, verification, and critical thinking. This pillar covers knowing when to trust AI output, how to verify it, and how to build systems for accountability. It is the pillar where confidence is highest but systematic practice matters most.
Community average score: 75% — the highest of all five pillars. Most people know they should verify AI output, but there's a consistent gap between "I know I should check" and "I actually have a system for checking." This pillar exists to close that gap.
== Why this pillar matters ==
AI produces confident-sounding text that is sometimes completely wrong. It doesn't flag its own uncertainty, it doesn't distinguish between well-supported facts and plausible guesses, and it will never tell you "I'm not qualified to answer this." That responsibility falls on you.
For generalists, this is especially critical. You work across domains where you're not always the subject-matter expert. When AI generates output about a topic you know well, you can spot errors. When it generates output about a topic you're learning, those same errors become invisible — unless you have a verification process.
The community's 75% average score is encouraging but misleading. People score well on awareness questions ("AI can hallucinate" — yes, most people know this) but lower on practice questions ("I have a systematic way to verify AI output before sharing it"). Confidence without rigor is the most dangerous pattern in AI use.
== What this looks like at each level ==
=== Basic — Building the verification instinct ===
You're learning not to accept AI output at face value. The core skill is simple: before using AI output for anything that matters, check it. You're developing the habit of asking "how do I know this is accurate?" after every AI interaction.
What it feels like: You catch your first AI hallucination. You ask the AI to fact-check itself and discover it readily admits to uncertainty when prompted. You build a 3-question verification prompt you start using regularly.
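A minimal sketch of what such a verification prompt might look like; the three questions below are illustrative, not the exercise's canonical wording:

<pre>
Before I rely on your previous answer, audit it:
1. Which specific claims in your answer could be wrong, and why?
2. Which claims should I verify against a primary source before using them?
3. What relevant information is missing from your answer?
</pre>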
=== Intermediate — Systematic verification ===
You've moved from "I should check this" to "I have a checklist for checking this." You've built a verification process tuned to your work — covering factual accuracy, reasoning quality, completeness, and domain-specific concerns. You can evaluate AI output on unfamiliar topics using your process, not just your domain knowledge.
What it feels like: You have a saved verification checklist you actually reach for. You notice when AI reasoning has hidden assumptions. You can explain to a colleague why you trust one AI output and not another.
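A condensed sketch of what such a checklist might contain; the items mirror the four dimensions described above, but the wording is illustrative rather than the exercise's canonical version:

<pre>
[ ] Factual accuracy: are names, numbers, dates, and citations verifiable?
[ ] Reasoning quality: do the conclusions follow, or are there hidden assumptions?
[ ] Completeness: what did the AI leave out that the task required?
[ ] Domain concerns: would a specialist flag anything here as wrong or risky?
</pre>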
=== Advanced — Governance and accountability ===
You're designing systems for teams, not just yourself. You can map AI risk levels across a workflow, create tiered verification processes for different stakes, and write practical guidelines that people actually follow. You think about transparency, attribution, and escalation — not as compliance requirements but as trust infrastructure.
What it feels like: You've written an AI usage guideline that your team references. You've red-teamed your own framework and found the edge cases. You can explain AI-assisted decisions to stakeholders who didn't see the process.
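To make the tiered idea concrete, here is a minimal Python sketch of a risk-to-verification mapping. The tier names, example stakes, and checks are hypothetical assumptions for illustration; a real playbook would define its own.

<syntaxhighlight lang="python">
# Hypothetical tiered verification policy: map each risk tier to the
# checks an AI-assisted deliverable must pass before it ships.
VERIFICATION_TIERS = {
    "low": [                       # e.g. internal brainstorms
        "spot-check one claim",
    ],
    "medium": [                    # e.g. internal reports
        "fact-check key claims",
        "review reasoning for hidden assumptions",
    ],
    "high": [                      # e.g. client-facing or published work
        "verify every claim against a primary source",
        "second-person review",
        "record which sources were used",
    ],
}

def required_checks(risk_tier: str) -> list[str]:
    """Return the verification steps a given risk tier demands."""
    if risk_tier not in VERIFICATION_TIERS:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")
    return VERIFICATION_TIERS[risk_tier]
</syntaxhighlight>

A team could surface this mapping in review templates, or call <code>required_checks()</code> as the first step of an automated pipeline.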
== Common mistakes ==
- "I always double-check." Intention isn't process. Without a consistent method, you check when you remember and skip when you're busy. That's when errors get through.
- Over-verifying low-stakes output. Not everything needs the same scrutiny. Brainstorming ideas don't need fact-checking; a client-facing report does. Calibrating effort to risk is itself a judgment skill.
- Trusting AI's self-assessment. When you ask AI "are you sure?" it will often say yes — or hedge unconvincingly. The Fact-Check Habit exercise teaches you to use structured self-audit prompts that produce genuinely useful uncertainty signals.
== How this connects to other pillars ==
Ethical Prompting is foundational — without it, speed and automation become liabilities. It directly supports:
* '''[[Workflow Automation]]''' — every automated AI step needs a quality gate (see the sketch after this list)
* '''[[Insight Synthesis]]''' — synthesizing AI output requires knowing which parts to trust
* '''[[Agent Collaboration]]''' — more autonomous AI requires clearer accountability frameworks
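A minimal Python sketch of what a quality gate around an automated AI step could look like. The function names and the specific checks are assumptions for illustration; real gates should be tuned to each step's risk level.

<syntaxhighlight lang="python">
def run_ai_step(prompt: str) -> str:
    """Placeholder for whatever call your workflow makes to an AI model."""
    raise NotImplementedError

def passes_quality_gate(output: str) -> bool:
    """Cheap, automatic checks; anything that fails goes to a human."""
    return (
        bool(output.strip())              # not empty
        and "as an AI" not in output      # no refusal boilerplate
        and "TODO" not in output          # no unfinished placeholders
    )

def gated_step(prompt: str) -> str:
    output = run_ai_step(prompt)
    if not passes_quality_gate(output):
        # Escalate instead of silently passing unverified output downstream.
        raise ValueError("AI output failed the quality gate; needs human review")
    return output
</syntaxhighlight>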
== Exercises ==
{| class="wikitable"
! Level !! Exercise !! Time !! What you'll build
|-
| Basic || The Fact-Check Habit || 15 min || A reusable verification prompt
|-
| Intermediate || The Verification Checklist || 25 min || A systematic verification process
|-
| Advanced || The AI Governance Playbook || 40 min || A team-level AI usage framework
|}