<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://mediawiki.informationgeek.org/index.php?action=history&amp;feed=atom&amp;title=Ethical_Prompting_%26_Judgment</id>
	<title>Ethical Prompting &amp; Judgment - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://mediawiki.informationgeek.org/index.php?action=history&amp;feed=atom&amp;title=Ethical_Prompting_%26_Judgment"/>
	<link rel="alternate" type="text/html" href="https://mediawiki.informationgeek.org/index.php?title=Ethical_Prompting_%26_Judgment&amp;action=history"/>
	<updated>2026-05-03T14:44:50Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>https://mediawiki.informationgeek.org/index.php?title=Ethical_Prompting_%26_Judgment&amp;diff=65&amp;oldid=prev</id>
		<title>Admin: Fix 3 internal link(s)</title>
		<link rel="alternate" type="text/html" href="https://mediawiki.informationgeek.org/index.php?title=Ethical_Prompting_%26_Judgment&amp;diff=65&amp;oldid=prev"/>
		<updated>2026-03-16T16:23:00Z</updated>

		<summary type="html">&lt;p&gt;Fix 3 internal link(s)&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 16:23, 16 March 2026&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l41&quot;&gt;Line 41:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 41:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Ethical Prompting is foundational — without it, speed and automation become liabilities. It directly supports:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Ethical Prompting is foundational — without it, speed and automation become liabilities. It directly supports:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;[[&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Pillars/&lt;/del&gt;Workflow Automation|Workflow Automation]]&#039;&#039;&#039; — every automated AI step needs a quality gate&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;[[Workflow Automation|Workflow Automation]]&#039;&#039;&#039; — every automated AI step needs a quality gate&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;[[&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Pillars/&lt;/del&gt;Insight Synthesis|Insight Synthesis]]&#039;&#039;&#039; — synthesizing AI output requires knowing which parts to trust&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;[[Insight Synthesis|Insight Synthesis]]&#039;&#039;&#039; — synthesizing AI output requires knowing which parts to trust&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;[[&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Pillars/&lt;/del&gt;Agent Collaboration|Agent Collaboration]]&#039;&#039;&#039; — more autonomous AI requires clearer accountability frameworks&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;[[Agent Collaboration|Agent Collaboration]]&#039;&#039;&#039; — more autonomous AI requires clearer accountability frameworks&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Exercises ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Exercises ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://mediawiki.informationgeek.org/index.php?title=Ethical_Prompting_%26_Judgment&amp;diff=53&amp;oldid=prev</id>
		<title>Admin: Imported from AI Fluency Playbook</title>
		<link rel="alternate" type="text/html" href="https://mediawiki.informationgeek.org/index.php?title=Ethical_Prompting_%26_Judgment&amp;diff=53&amp;oldid=prev"/>
		<updated>2026-03-16T16:10:20Z</updated>

		<summary type="html">&lt;p&gt;Imported from AI Fluency Playbook&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&amp;#039;&amp;#039;Responsible AI use, verification, and critical thinking. The pillar where confidence is highest but systematic practice matters most.&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
This pillar covers knowing when to trust AI output, how to verify it, and how to build systems for accountability.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Community average score: 75%&amp;#039;&amp;#039;&amp;#039; — the highest of all five pillars. Most people &amp;#039;&amp;#039;know&amp;#039;&amp;#039; they should verify AI output, but there&amp;#039;s a consistent gap between &amp;quot;I know I should check&amp;quot; and &amp;quot;I actually have a system for checking.&amp;quot; This pillar exists to close that gap.&lt;br /&gt;
&lt;br /&gt;
== Why this pillar matters ==&lt;br /&gt;
&lt;br /&gt;
AI produces confident-sounding text that is sometimes completely wrong. It doesn&amp;#039;t flag its own uncertainty, it doesn&amp;#039;t distinguish between well-supported facts and plausible guesses, and it will never tell you &amp;quot;I&amp;#039;m not qualified to answer this.&amp;quot; That responsibility falls on you.&lt;br /&gt;
&lt;br /&gt;
For generalists, this is especially critical. You work across domains where you&amp;#039;re not always the subject-matter expert. When AI generates output about a topic you know well, you can spot errors. When it generates output about a topic you&amp;#039;re learning, those same errors become invisible — unless you have a verification process.&lt;br /&gt;
&lt;br /&gt;
The community&amp;#039;s 75% average score is encouraging but misleading. People score well on awareness questions (&amp;quot;AI can hallucinate&amp;quot; — yes, most people know this) but lower on practice questions (&amp;quot;I have a systematic way to verify AI output before sharing it&amp;quot;). Confidence without rigor is the most dangerous pattern in AI use.&lt;br /&gt;
&lt;br /&gt;
== What this looks like at each level ==&lt;br /&gt;
&lt;br /&gt;
=== Basic — Building the verification instinct ===&lt;br /&gt;
&lt;br /&gt;
You&amp;#039;re learning not to take AI output at face value. The core skill is simple: before using AI output for anything that matters, check it. You&amp;#039;re developing the habit of asking &amp;quot;how do I know this is accurate?&amp;quot; after every AI interaction.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What it feels like:&amp;#039;&amp;#039;&amp;#039; You catch your first AI hallucination. You ask the AI to fact-check itself and discover it readily admits to uncertainty when prompted. You build a 3-question verification prompt you start using regularly.&lt;br /&gt;
&lt;br /&gt;
=== Intermediate — Systematic verification ===&lt;br /&gt;
&lt;br /&gt;
You&amp;#039;ve moved from &amp;quot;I should check this&amp;quot; to &amp;quot;I have a checklist for checking this.&amp;quot; You&amp;#039;ve built a verification process tuned to your work — covering factual accuracy, reasoning quality, completeness, and domain-specific concerns. You can evaluate AI output on unfamiliar topics using your process, not just your domain knowledge.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What it feels like:&amp;#039;&amp;#039;&amp;#039; You have a saved verification checklist you actually reach for. You notice when AI reasoning has hidden assumptions. You can explain to a colleague &amp;#039;&amp;#039;why&amp;#039;&amp;#039; you trust one AI output and not another.&lt;br /&gt;
&lt;br /&gt;
=== Advanced — Governance and accountability ===&lt;br /&gt;
&lt;br /&gt;
You&amp;#039;re designing systems for teams, not just yourself. You can map AI risk levels across a workflow, create tiered verification processes for different stakes, and write practical guidelines that people actually follow. You think about transparency, attribution, and escalation — not as compliance requirements but as trust infrastructure.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What it feels like:&amp;#039;&amp;#039;&amp;#039; You&amp;#039;ve written an AI usage guideline that your team references. You&amp;#039;ve red-teamed your own framework and found the edge cases. You can explain AI-assisted decisions to stakeholders who didn&amp;#039;t see the process.&lt;br /&gt;
&lt;br /&gt;
== Common mistakes ==&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;&amp;quot;I always double-check.&amp;quot;&amp;#039;&amp;#039;&amp;#039; Intention isn&amp;#039;t process. Without a consistent method, you check when you remember and skip when you&amp;#039;re busy. That&amp;#039;s when errors get through.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Over-verifying low-stakes output.&amp;#039;&amp;#039;&amp;#039; Not everything needs the same scrutiny. Brainstorming ideas don&amp;#039;t need fact-checking; a client-facing report does. Calibrating effort to risk is itself a judgment skill.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Trusting AI&amp;#039;s self-assessment.&amp;#039;&amp;#039;&amp;#039; When you ask AI &amp;quot;are you sure?&amp;quot; it will often say yes — or hedge unconvincingly. The Fact-Check Habit exercise teaches you to use structured self-audit prompts that produce genuinely useful uncertainty signals.&lt;br /&gt;
&lt;br /&gt;
== How this connects to other pillars ==&lt;br /&gt;
&lt;br /&gt;
Ethical Prompting is foundational — without it, speed and automation become liabilities. It directly supports:&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;[[Pillars/Workflow Automation|Workflow Automation]]&amp;#039;&amp;#039;&amp;#039; — every automated AI step needs a quality gate&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;[[Pillars/Insight Synthesis|Insight Synthesis]]&amp;#039;&amp;#039;&amp;#039; — synthesizing AI output requires knowing which parts to trust&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;[[Pillars/Agent Collaboration|Agent Collaboration]]&amp;#039;&amp;#039;&amp;#039; — more autonomous AI requires clearer accountability frameworks&lt;br /&gt;
&lt;br /&gt;
== Exercises ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Level !! Exercise !! Time !! What you&amp;#039;ll build&lt;br /&gt;
|-&lt;br /&gt;
| Basic || [[Exercises/Ethical Prompting/Ep Basic 01|The Fact-Check Habit]] || 15 min || A reusable verification prompt&lt;br /&gt;
|-&lt;br /&gt;
| Intermediate || [[Exercises/Ethical Prompting/Ep Intermediate 01|The Verification Checklist]] || 25 min || A systematic verification process&lt;br /&gt;
|-&lt;br /&gt;
| Advanced || [[Exercises/Ethical Prompting/Ep Advanced 01|The AI Governance Playbook]] || 40 min || A team-level AI usage framework&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:AI Fluency Playbook]]&lt;br /&gt;
[[Category:Five Pillars]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
</feed>