== When it's most likely to go wrong ==

Hallucinations aren't random. They follow predictable patterns:

'''Specific facts, numbers, and dates.''' Ask AI for a general explanation of how photosynthesis works and it'll be accurate. Ask it for the exact year a specific obscure paper was published and it might invent one. The more specific and verifiable the claim, the more you need to check it.

'''Citations and sources.''' AI is particularly bad at this. It will confidently produce author names, paper titles, journal names, and URLs that look real but don't exist. Never trust an AI-generated citation without verifying it (see the sketch after this list).

'''Recent events.''' AI models have a training cutoff date. If you ask about something that happened after that date and the AI doesn't have search access, it may either say it doesn't know (good) or generate a plausible-sounding answer (dangerous).

'''Niche or specialized domains.''' AI performs best on topics that appeared frequently in its training data. Mainstream topics in English have dense coverage. Obscure or specialized topics, especially in other languages, have less, so the model has fewer patterns to draw from and is more likely to fill gaps with plausible-sounding fabrications.

'''When you push it.''' If you insist the AI answer a question it's uncertain about, or tell it "you must provide an answer," it will comply by generating something. AI tools generally don't have a strong instinct to say "I don't know." Some are better than others, but the pressure to produce output is built into the system.
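Because fabricated citations are so common, it helps to make verification a habit rather than a judgment call. Below is a minimal sketch of checking an AI-supplied paper title against Crossref's public metadata API (https://api.crossref.org); the example title, the helper function name, and the exact workflow are illustrative assumptions, not a standard procedure, and the same idea applies to any bibliographic database you trust.

<syntaxhighlight lang="python">
import requests


def find_on_crossref(title: str, rows: int = 5) -> list[dict]:
    """Search Crossref for works whose bibliographic data matches the title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()  # fail loudly on HTTP errors
    return resp.json()["message"]["items"]


# Hypothetical title an AI assistant might produce; verify before citing.
claimed_title = "Attention Is All You Need"
matches = find_on_crossref(claimed_title)
if matches:
    top = matches[0]
    print("Closest match:", top.get("title", ["<no title>"])[0])
    print("DOI:", top.get("DOI"))
else:
    print("No match found; the citation may be fabricated.")
</syntaxhighlight>

A title match alone isn't conclusive: compare the authors, venue, and year the AI gave you against the returned record, and remember that a miss doesn't always mean fabrication, since some preprints, reports, and books aren't indexed by Crossref.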