== The simplest explanation that's still true ==

Here's what happens when you send a message to ChatGPT, Claude, Gemini, or any other AI chat tool:

# '''Your message gets broken into tokens.''' The AI doesn't read words the way you do. It splits your text into small chunks called [[Glossary|tokens]]: sometimes whole words, sometimes pieces of words. The word "tokenization" might become "token" + "ization." This is why AI tools have limits on how much text you can send; they're counting tokens, not words. (For a deeper look at this, see [[Tokenization & Context Windows]].)
# '''Each token gets converted into numbers.''' The AI represents every token as a list of numbers (called an embedding) that captures its meaning and relationships. Words with similar meanings end up with similar numbers: "happy" and "joyful" are close together, while "happy" and "wrench" are far apart.
# '''The model predicts what comes next.''' This is the core of it. The AI looks at all the tokens in your message, weighs how they relate to each other (this is the "attention" mechanism you might have heard about), and predicts the most likely next token. Then it predicts the next one. Then the next. One token at a time, until it has a complete response.

That's it. No understanding. No reasoning in the human sense. Pattern matching at a scale and sophistication that produces remarkably useful output, but pattern matching nonetheless. The three sketches below make each of these steps concrete.
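'''Step 1, in code.''' One way to see tokenization for yourself is OpenAI's open-source <code>tiktoken</code> library. This is a minimal sketch, assuming <code>tiktoken</code> is installed (<code>pip install tiktoken</code>); Claude, Gemini, and other models use their own tokenizers, but they behave analogously:

<syntaxhighlight lang="python">
# Tokenization demo with tiktoken: text in, integer token IDs out.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # an encoding used by several OpenAI models

text = "Tokenization splits text into chunks."
token_ids = enc.encode(text)

# Show the raw text chunk behind each token ID.
for tid in token_ids:
    print(tid, enc.decode_single_token_bytes(tid))

# This is why limits are counted in tokens, not words.
print(f"{len(text.split())} words -> {len(token_ids)} tokens")
</syntaxhighlight>

Notice that common words come out as single tokens, while rarer words like "Tokenization" get split into pieces.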
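'''Step 2, in code.''' The vectors below are made up for illustration (real embeddings are learned and have hundreds or thousands of dimensions), but they show how "similar numbers" is typically measured in practice, with cosine similarity:

<syntaxhighlight lang="python">
# Toy embeddings: hand-picked 3-dimensional vectors standing in for
# learned embeddings. Similar meanings point in similar directions.
import numpy as np

embeddings = {
    "happy":  np.array([0.90, 0.80, 0.10]),
    "joyful": np.array([0.85, 0.75, 0.15]),
    "wrench": np.array([0.10, 0.05, 0.90]),
}

def cosine_similarity(a, b):
    """Close to 1.0 means near-identical direction; near 0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["happy"], embeddings["joyful"]))  # high, ~1.0
print(cosine_similarity(embeddings["happy"], embeddings["wrench"]))  # much lower
</syntaxhighlight>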
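'''Step 3, in code.''' Real models use attention over the whole context rather than just the previous token, so the toy "model" below (a bigram frequency table) is a deliberate simplification. The generation loop around it, though, really is this simple: predict, append, repeat:

<syntaxhighlight lang="python">
# A toy autoregressive loop: a bigram table stands in for the neural network.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": count which token follows which.
next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def predict_next(token):
    # Greedy decoding: always pick the single most likely next token.
    return next_counts[token].most_common(1)[0][0]

# Generate one token at a time, feeding each prediction back in.
token = "the"
output = [token]
for _ in range(6):
    token = predict_next(token)
    output.append(token)

print(" ".join(output))  # -> "the cat sat on the cat sat"
</syntaxhighlight>

The repetitive output is typical of a pattern matcher this small; scale the same predict-the-next-token idea up by many orders of magnitude and you get the fluent responses described above.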