GW AI Fluency Wiki
The AI Governance Playbook
== Jump in (Tinkerers start here) ==

Pick a real team, project, or organization you work with. You're going to design an AI usage framework they could actually adopt.

'''Step 1: Map the AI touchpoints.''' Send this prompt:

<blockquote>
I'm designing an AI governance framework for a '''[team type/project type]''' that does '''[describe the work]'''. Map out all the places where team members might use AI in their workflow. For each touchpoint, classify the risk level:

- '''Low risk:''' AI errors are easily caught and consequences are minor (e.g., drafting internal emails, brainstorming)
- '''Medium risk:''' AI errors could waste significant time or create confusion (e.g., research summaries, data analysis, first drafts of client deliverables)
- '''High risk:''' AI errors could cause reputational, legal, or financial harm (e.g., published content, financial recommendations, legal language, customer-facing decisions)

Present this as a table with: Touchpoint | Description | Risk Level | Why
</blockquote>

'''Step 2: Design the verification tiers.''' Based on the risk map, create a tiered verification system:

<blockquote>
Based on the risk map above, design a 3-tier verification system:

'''Tier 1 (Low risk):''' What's the minimum verification needed? What can proceed without review?

'''Tier 2 (Medium risk):''' What checks are required? Who reviews? What's the turnaround expectation?

'''Tier 3 (High risk):''' What's the full review process? Who signs off? What documentation is needed?

For each tier, specify:
- Verification steps (checklist)
- Who is responsible
- What "approved" looks like
- What happens when issues are found
</blockquote>

'''Step 3: Write the team guidelines.''' Now produce the actual document:

<blockquote>
Write a one-page "AI Usage Guidelines" document for this team. It should be practical, not corporate. Include:

1. '''When to use AI''': green-light scenarios
2. '''When to be careful''': yellow-light scenarios with required verification
3. '''When NOT to use AI''': red-light scenarios, or scenarios requiring explicit approval
4. '''Verification standards''': the tier system from above, simplified
5. '''Attribution''': when and how to disclose AI usage
6. '''Escalation''': what to do when you're unsure whether AI use is appropriate

Write it in the tone of a senior colleague giving practical advice, not a legal department issuing mandates.
</blockquote>

'''Step 4: Red-team the framework.''' Test it:

<blockquote>
Now role-play as a team member who wants to use AI in a gray area. Come up with 3 realistic scenarios where the guidelines are ambiguous or where a reasonable person might interpret them differently. For each scenario, suggest how to clarify the guideline.
</blockquote>

Revise the guidelines based on the edge cases.

----
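If your team wants to make the framework machine-checkable, the risk map from Step 1 and the tiers from Step 2 can be sketched as a simple lookup. This is an illustrative Python sketch only: the touchpoint names, reviewers, and checklist items below are hypothetical placeholders, not outputs the exercise prescribes.

```python
# Illustrative sketch of a tiered verification lookup.
# All touchpoints, reviewers, and checks here are hypothetical examples;
# a real team would fill these in from its own Step 1 risk map.

RISK_TIERS = {
    "low": {
        "verification": ["author self-check"],
        "reviewer": None,  # Tier 1: proceeds without review
        "sign_off": "author",
    },
    "medium": {
        "verification": ["fact-check sources", "peer review"],
        "reviewer": "any teammate",
        "sign_off": "reviewer",
    },
    "high": {
        "verification": ["fact-check sources", "peer review", "documented approval"],
        "reviewer": "team lead",
        "sign_off": "team lead",
    },
}

# Hypothetical touchpoint -> risk level map, as produced in Step 1.
TOUCHPOINTS = {
    "internal email draft": "low",
    "research summary": "medium",
    "published client report": "high",
}

def required_process(touchpoint: str) -> dict:
    """Look up the verification process a given touchpoint must follow."""
    tier = TOUCHPOINTS.get(touchpoint)
    if tier is None:
        # Mirrors the escalation rule in Step 3: unmapped uses get
        # flagged for a human decision rather than silently allowed.
        raise KeyError(f"Unmapped touchpoint: {touchpoint!r}; escalate per guidelines")
    return {"tier": tier, **RISK_TIERS[tier]}

print(required_process("research summary")["reviewer"])  # prints: any teammate
```

The design choice worth noting is the failure mode: an AI use that appears nowhere in the risk map raises an error instead of defaulting to "allowed", which matches the escalation guideline in Step 3.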