The AI Governance Playbook
''Design a practical AI governance framework for a team or project. 40 minutes.''

<blockquote>'''One-liner:''' Design a practical AI governance framework for a team or project – covering when to use AI, how to verify outputs, and what requires human judgment.</blockquote>

----

== 🔧 Jump in (Tinkerers start here) ==

Pick a real team, project, or organization you work with. You're going to design an AI usage framework they could actually adopt.

'''Step 1 – Map the AI touchpoints.''' Send this prompt:

<blockquote>
I'm designing an AI governance framework for a '''[team type/project type]''' that does '''[describe the work]'''. Map out all the places where team members might use AI in their workflow. For each touchpoint, classify the risk level:

- '''Low risk:''' AI errors are easily caught and consequences are minor (e.g., drafting internal emails, brainstorming)
- '''Medium risk:''' AI errors could waste significant time or create confusion (e.g., research summaries, data analysis, first drafts of client deliverables)
- '''High risk:''' AI errors could cause reputational, legal, or financial harm (e.g., published content, financial recommendations, legal language, customer-facing decisions)

Present this as a table with: Touchpoint | Description | Risk Level | Why
</blockquote>

'''Step 2 – Design the verification tiers.''' Based on the risk map, create a tiered verification system:

<blockquote>
Based on the risk map above, design a 3-tier verification system:

'''Tier 1 (Low risk):''' What's the minimum verification needed? What can proceed without review?

'''Tier 2 (Medium risk):''' What checks are required? Who reviews? What's the turnaround expectation?

'''Tier 3 (High risk):''' What's the full review process? Who signs off? What documentation is needed?
For each tier, specify:

- Verification steps (checklist)
- Who is responsible
- What "approved" looks like
- What happens when issues are found
</blockquote>

'''Step 3 – Write the team guidelines.''' Now produce the actual document:

<blockquote>
Write a 1-page "AI Usage Guidelines" document for this team. It should be practical, not corporate. Include:

1. '''When to use AI''' – Green light scenarios
2. '''When to be careful''' – Yellow light scenarios with required verification
3. '''When NOT to use AI''' – Red light scenarios, or scenarios requiring explicit approval
4. '''Verification standards''' – The tier system from above, simplified
5. '''Attribution''' – When and how to disclose AI usage
6. '''Escalation''' – What to do when you're unsure whether AI use is appropriate

Write it in the tone of a senior colleague giving practical advice, not a legal department issuing mandates.
</blockquote>

'''Step 4 – Red-team the framework.''' Test it:

<blockquote>
Now role-play as a team member who wants to use AI in a gray area. Come up with 3 realistic scenarios where the guidelines are ambiguous or where a reasonable person might interpret them differently. For each scenario, suggest how to clarify the guideline.
</blockquote>

Revise the guidelines based on the edge cases.

----

== 📋 Plan first (Planners start here) ==

Here's what you're about to do:

# '''Choose your context''' – Pick a real team or project. The framework should be one you could actually share or implement.
# '''Map AI touchpoints and risk levels''' – Identify every place AI could be used in the workflow and classify each by potential harm from errors.
# '''Design tiered verification''' – Create different verification processes for different risk levels. Not everything needs the same scrutiny.
# '''Write the guidelines''' – Produce a practical 1-page document that a team member could reference in their daily work.
# '''Red-team with edge cases''' – Test the framework against ambiguous scenarios.
Fix any gaps before sharing.

'''"Done" looks like:''' A complete, practical AI governance framework (risk map + tiered verification + 1-page guidelines) that you could present to your team, plus documentation of the edge cases you tested against.

----

== 🧠 Why this matters (Strategists start here) ==

Individual verification habits (from [[The Verification Checklist|EP-Intermediate-01]]) don't scale to teams. When five people use AI differently, with different standards, the team's AI output quality is only as good as the weakest link.

A governance framework creates '''shared standards without bureaucracy''' – it tells people what's safe to do quickly and what requires care, without making every AI interaction feel like a compliance exercise. This is also the document organizations will pay for: a practical, calibrated AI usage policy that actually gets followed.

----

== Reflection ==

* Which risk classification was hardest to assign? What does that ambiguity tell you?
* Would your team actually follow these guidelines? What would make people ignore them?
* Did the red-teaming step reveal fundamental gaps, or just edge cases?
* 💬 ''Present your framework to a colleague and ask: "Would you follow this?" Their honest reaction is more useful than any AI review.'' (Social Learners)

== ⬆️ Level up ==

You've reached the advanced level for Ethical Prompting & Judgment. From here, consider:

* Presenting this framework to your actual team and iterating based on feedback
* Combining this with [[Design Your Agent Workflow|AC-Advanced-01]] to add governance to multi-agent workflows
* Building a case study of how the framework changed AI usage behavior in your team

Back to [[Pillars/Ethical Prompting|Ethical Prompting & Judgment]]

[[Category:AI Fluency Playbook]]
[[Category:Exercises]]
[[Category:Ethical Prompting Exercises]]
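The tiered verification system in Step 2 can also be kept in a machine-readable form alongside the 1-page guidelines, so tooling (or a simple script) can answer "what review does this touchpoint need?". The sketch below is illustrative only: the touchpoint names, tier rules, and the default-to-high fallback are assumptions for demonstration, not part of the exercise.

```python
# Minimal sketch of a risk-tier lookup for AI touchpoints.
# All touchpoint names and tier rules below are hypothetical examples;
# replace them with your own risk map from Step 1.

RISK_TIERS = {
    "low": {
        "verification": ["self-review before use"],
        "reviewer": None,
        "sign_off": False,
    },
    "medium": {
        "verification": ["spot-check key facts", "peer review"],
        "reviewer": "any teammate",
        "sign_off": False,
    },
    "high": {
        "verification": ["line-by-line review", "source check", "documented approval"],
        "reviewer": "designated lead",
        "sign_off": True,
    },
}

# Hypothetical touchpoint -> risk level map.
TOUCHPOINTS = {
    "internal email drafts": "low",
    "research summaries": "medium",
    "client-facing copy": "high",
}

def verification_for(touchpoint: str) -> dict:
    """Return verification requirements for a touchpoint.

    Unmapped touchpoints default to the high-risk tier, mirroring the
    escalation guideline: when unsure, treat the use as requiring care.
    """
    tier = TOUCHPOINTS.get(touchpoint, "high")
    return {"tier": tier, **RISK_TIERS[tier]}
```

The default-to-high fallback is the one design choice worth copying: a gray-area use should trigger the heaviest review, not slip through unreviewed.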