GW AI Fluency Wiki
Agent Collaboration
== What this looks like at each level ==

=== Basic – Role-based prompting ===

You're learning to give AI a defined role and perspective, rather than asking it as a generic assistant. The core skill: framing AI as a specific expert and comparing perspectives to make better decisions.

'''What it feels like:''' You run your first "AI team meeting" – asking AI to respond as two different experts on the same problem. You notice that the dual-role output is more nuanced than a single generic answer. You start using role-based prompts by default.

=== Intermediate – Managing separate agents ===

You've moved from simulating multiple roles in one chat to actually running separate AI sessions with isolated contexts. Each session has a specific role, limited scope, and doesn't see the other's work. You act as the orchestrator – the person who manages the handoffs and synthesizes the results.

'''What it feels like:''' You run two AI chats in parallel (strategist and executor), manually pass information between them, and produce a plan that neither could have generated alone. You notice where context gets lost in handoffs and design better transfer summaries.

=== Advanced – Agent architecture ===

You're designing complete multi-agent workflows with defined roles, inputs, outputs, handoff triggers, and feedback loops. You think about agent architecture before writing prompts – deciding how to decompose work, what each agent needs to know, and where human checkpoints belong.

'''What it feels like:''' You design a 4-agent pipeline (researcher, drafter, critic, editor), run it end to end on a real project, and produce output that's measurably better than a single "do everything" prompt. You can explain ''why'' you split the work the way you did.
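The 4-agent pipeline described above can be sketched in a few lines of Python. Everything in this sketch is illustrative rather than prescriptive: <code>ask_model</code> is a hypothetical stand-in for whichever chat API you actually use (it is stubbed here so the orchestration structure is visible and runnable on its own), and the role names and handoff format are just one reasonable decomposition.

```python
def ask_model(role: str, task: str, context: str = "") -> str:
    """Hypothetical stand-in for a real chat-completion call.

    Each agent sees only its own role, its task, and whatever
    handoff summary the orchestrator passes in as context --
    isolated contexts, exactly as described above.
    """
    return f"[{role}] output for: {task} (given: {context or 'no prior context'})"


def run_pipeline(task: str) -> str:
    """Researcher -> drafter -> critic -> editor, with explicit handoffs."""
    # Each stage receives only the previous stage's output, never the
    # whole conversation; the orchestrator decides what gets handed off.
    research = ask_model("researcher", f"Gather key facts for: {task}")
    draft = ask_model("drafter", f"Write a first draft of: {task}",
                      context=research)
    critique = ask_model("critic", "List the draft's weaknesses",
                         context=draft)
    # The editor gets a combined handoff: draft plus critique.
    final = ask_model("editor", "Revise the draft using the critique",
                      context=f"DRAFT: {draft}\nCRITIQUE: {critique}")
    return final


if __name__ == "__main__":
    print(run_pipeline("a one-page onboarding guide"))
```

The design choice worth noticing is that the pipeline's shape lives in <code>run_pipeline</code>, not in any single prompt: adding a human checkpoint between critic and editor, or swapping the critic for two parallel critics, changes the orchestration code without touching the agents' roles.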