Ethical Prompting & Judgment
== What this looks like at each level ==

=== Basic – Building the verification instinct ===

You're learning not to accept AI output at face value. The core skill is simple: before using AI output for anything that matters, check it. You're developing the habit of asking "how do I know this is accurate?" after every AI interaction.

'''What it feels like:''' You catch your first AI hallucination. You ask the AI to fact-check itself and discover it readily admits to uncertainty when prompted. You build a three-question verification prompt you start using regularly.

=== Intermediate – Systematic verification ===

You've moved from "I should check this" to "I have a checklist for checking this." You've built a verification process tuned to your work, covering factual accuracy, reasoning quality, completeness, and domain-specific concerns. You can evaluate AI output on unfamiliar topics using your process, not just your domain knowledge.

'''What it feels like:''' You have a saved verification checklist you actually reach for. You notice when AI reasoning has hidden assumptions. You can explain to a colleague ''why'' you trust one AI output and not another.

=== Advanced – Governance and accountability ===

You're designing systems for teams, not just yourself. You can map AI risk levels across a workflow, create tiered verification processes for different stakes, and write practical guidelines that people actually follow. You think about transparency, attribution, and escalation not as compliance requirements but as trust infrastructure.

'''What it feels like:''' You've written an AI usage guideline that your team references. You've red-teamed your own framework and found the edge cases. You can explain AI-assisted decisions to stakeholders who didn't see the process.
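The "tiered verification processes for different stakes" idea can be sketched in code. This is a minimal illustration, not content from the playbook: the tier names and checklist questions below are invented examples, and the key design choice is that higher-stakes work inherits every lower-tier check rather than replacing it.

```python
# Illustrative tiered verification checklist. Tier names and questions are
# examples only; a real checklist would be tuned to your own domain.
CHECKS = {
    "low": [
        "Does the output directly answer the question asked?",
        "Are there obvious factual errors I can spot from domain knowledge?",
    ],
    "medium": [
        "Can each factual claim be traced to a source I can verify?",
        "What assumptions does the reasoning rely on, and are they stated?",
    ],
    "high": [
        "Has a second person reviewed the output?",
        "Is the AI's role documented for stakeholders who didn't see the process?",
    ],
}

TIER_ORDER = ["low", "medium", "high"]


def checklist(stakes: str) -> list[str]:
    """Return all checks up to and including the given stakes tier.

    Higher-stakes work inherits every lower-tier check, so "high" gets
    the low- and medium-tier questions as well as its own.
    """
    if stakes not in TIER_ORDER:
        raise ValueError(f"unknown stakes tier: {stakes}")
    idx = TIER_ORDER.index(stakes)
    return [q for tier in TIER_ORDER[: idx + 1] for q in CHECKS[tier]]
```

A low-stakes task gets just two questions; a high-stakes one gets all six, which keeps the guideline short enough that people actually follow it while still scaling scrutiny with risk.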