The Fact-Check Habit
''Catch an AI making a confident mistake and build a simple verification process you'll use every time. 15 minutes.''

<blockquote>'''One-liner:''' Catch an AI making a confident mistake, and build a simple verification process you'll use every time.</blockquote>

----

== Jump in (Tinkerers start here) ==

Pick a topic you know well: your industry, your hobby, your area of expertise. Something where you can spot errors.

'''Step 1: Get a confident answer.''' Send this prompt:

<blockquote>Give me a detailed overview of '''[topic you know well]'''. Include specific facts, statistics, and examples. Be thorough and authoritative.</blockquote>

Read the output carefully. '''Find at least one claim that feels off.''' It might be a statistic that seems too round, a date that feels wrong, a name that's slightly off, or a causal claim that oversimplifies reality.

'''Step 2: Make the AI check itself.''' Send this:

<blockquote>Look at your previous response. I want you to fact-check yourself. For each specific claim, statistic, or example you cited:

1. Rate your confidence (high / medium / low)
2. Flag anything you might have fabricated or estimated
3. Identify which claims are most likely to be wrong and why

Be ruthlessly honest. I'd rather know what you're uncertain about than have you defend everything.</blockquote>

'''Step 3: Verify.''' Pick the 2–3 claims the AI flagged as lowest confidence. Google them. Were they accurate, close but wrong, or completely fabricated?

'''Step 4: Build your check.''' Based on what you just learned, write a three-question "verification prompt" you can append to any AI output:

<blockquote>Before I use this, tell me:

1. Which specific claims are you least confident about?
2. What did you estimate or approximate vs. know with certainty?
3. What should I verify independently before sharing this?</blockquote>

Save this somewhere you'll see it. Use it as a default follow-up to any AI output you plan to rely on.
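If you work with a model through an API rather than a chat window, the same habit can be scripted so the follow-up is never forgotten. A minimal Python sketch; the helper name and wiring are illustrative assumptions, not part of the exercise:

```python
# Illustrative sketch: bake the exercise's three-question verification
# follow-up into every prompt you send through an API.

VERIFICATION_FOLLOW_UP = (
    "Before I use this, tell me:\n"
    "1. Which specific claims are you least confident about?\n"
    "2. What did you estimate or approximate vs. know with certainty?\n"
    "3. What should I verify independently before sharing this?"
)


def with_verification(request: str) -> str:
    """Return the original request with the standing follow-up appended."""
    return request.rstrip() + "\n\n" + VERIFICATION_FOLLOW_UP


prompt = with_verification("Give me a detailed overview of my topic.")
print(prompt)
```

Whether you append the questions in the same message or send them as a second turn is a judgment call; a separate turn lets the model commit to its first answer before auditing it.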
----

== Plan first (Planners start here) ==

Here's what you're about to do:

# '''Choose a familiar topic:''' You need to be able to spot errors, so pick something in your area of knowledge. Don't use an unfamiliar topic; you won't know what to verify.
# '''Generate an authoritative-sounding response:''' Ask the AI for a detailed, factual overview. The more specific and confident the output, the more likely it contains subtle errors.
# '''Ask the AI to fact-check itself:''' Use the self-audit prompt to force the AI to rate its own confidence and flag potential fabrications.
# '''Independently verify:''' Pick the lowest-confidence claims and check them against reliable sources. Track what was right, close, and wrong.
# '''Create your verification template:''' Build a reusable three-question follow-up that you'll use after any AI output you plan to act on.

'''"Done" looks like:''' You've caught at least one AI error, you understand ''why'' the AI got it wrong, and you have a saved verification prompt you can use going forward.

----

== Why this matters (Strategists start here) ==

The community's Ethical Prompting score is 75%, the highest of all five pillars. Most people ''know'' they should verify AI output, but few have a systematic process for doing so. This exercise closes the gap between awareness and practice by giving you a concrete, reusable tool. The verification prompt you build here becomes a habit: a 30-second step that catches errors before they become problems. At the intermediate level, you'll build a more comprehensive verification checklist; this exercise establishes the baseline behavior.

----

== Reflection ==

* What type of error did the AI make: a fabricated fact, a wrong date, or a subtle logical leap? Does the category matter for how you'd catch it?
* Did the AI's self-assessment match what you found when you verified manually? Was it too confident, too cautious, or well calibrated?
* Will you actually use your verification prompt going forward?
* What would make it stick as a habit vs. something you forget about?
* ''Share a specific AI error you caught with a colleague. Ask them how they currently verify AI output; you may discover they don't.'' (Social Learners)

== Level up ==

Ready for more? Try [[The Verification Checklist|EP-Intermediate-01]], where you'll build a comprehensive verification checklist and stress-test it against real AI outputs.

Back to [[Pillars/Ethical Prompting|Ethical Prompting & Judgment]]

[[Category:AI Fluency Playbook]]
[[Category:Exercises]]
[[Category:Ethical Prompting Exercises]]