The Research Pipeline
''Build a complete research synthesis pipeline with evidence grading and contradiction analysis. 40 minutes.''

<blockquote>
'''One-liner:''' Build a complete research synthesis pipeline — from question to evidence-graded conclusions — using structured AI queries and your own critical judgment.
</blockquote>

----

== 🔧 Jump in (Tinkerers start here) ==

Pick a question you genuinely need answered for your work. Not a trivia question — something where the answer shapes a real decision.

'''Phase 1 — Define the research question.''' Send:

<blockquote>
I need to research this question: '''[your question]'''

Help me refine it into a research-ready question by:
1. Breaking it into 3-4 sub-questions that, if answered, would fully address the main question
2. For each sub-question, identifying what type of evidence would count as a strong answer (data, expert consensus, case studies, logical argument, etc.)
3. Flagging any assumptions embedded in the main question that I should test
</blockquote>

'''Phase 2 — Structured evidence gathering.''' For each sub-question, run a separate AI query:

<blockquote>
Research sub-question: '''[sub-question]'''

For this query, I want structured evidence:
- '''Strong evidence:''' Claims supported by widely documented data, peer-reviewed research, or established expert consensus
- '''Moderate evidence:''' Claims supported by credible case studies, industry reports, or respected analysis
- '''Weak evidence:''' Claims based on anecdotes, single examples, logical inference without data, or common assertions that may not hold up

Classify every claim you make. If you're not sure about the evidence quality, say so. I'd rather have honest uncertainty than false confidence.
</blockquote>

'''Phase 3 — Contradiction analysis.''' After running all sub-queries, send this to a fresh session:

<blockquote>
Here are the findings from my research on '''[main question]''':

'''Sub-question 1 findings:''' [paste summary]
'''Sub-question 2 findings:''' [paste summary]
'''Sub-question 3 findings:''' [paste summary]

Analyze the contradictions:
1. Where do the findings from different sub-questions conflict?
2. Which conflicts can be resolved by looking at the evidence quality?
3. Which conflicts are genuine unresolved tensions?
4. What additional evidence would resolve the remaining tensions?
</blockquote>

'''Phase 4 — Your synthesis.''' Write a 500-word research brief yourself (not AI-generated) that answers your original question. Structure it as:

# '''Bottom line:''' Your answer in 1-2 sentences
# '''Key evidence:''' The 3 strongest pieces of evidence supporting your answer, with evidence grades
# '''Key uncertainty:''' What you're least confident about and why
# '''What would change your mind:''' 1-2 pieces of evidence that, if found, would reverse your conclusion

----

== 📋 Plan first (Planners start here) ==

Here's what you're about to do:

# '''Formulate a research question''' — Choose something decision-relevant. Use AI to decompose it into sub-questions with defined evidence standards.
# '''Gather evidence by sub-question''' — Run separate queries for each sub-question, requiring the AI to grade its own evidence quality (strong/moderate/weak).
# '''Analyze contradictions''' — Feed all findings into a fresh session and ask for conflict analysis. Identify which conflicts are real vs. caused by weak evidence.
# '''Write your own synthesis''' — Produce a 500-word brief that answers the question, cites evidence with quality grades, and states what would change your mind.
# '''Assess the pipeline''' — Evaluate whether this process produced a meaningfully better answer than a single AI query would have.
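For readers who want to see the shape of the pipeline at a glance, the phases above can be sketched as a small script. This is purely illustrative: <code>ask()</code> is a stand-in for whatever AI interface you actually use (chat window, API client), and every function and data-structure name here is an assumption of this sketch, not a prescribed implementation. Note that Phase 4 is deliberately absent from the automation — you write the brief yourself.

```python
"""Illustrative sketch of the research pipeline (Phases 1-3).

`ask()` is a placeholder for your AI tool of choice; swap in a real
client. Names and data shapes are hypothetical.
"""
from dataclasses import dataclass
from enum import Enum


class EvidenceGrade(Enum):
    STRONG = "strong"      # documented data, peer review, expert consensus
    MODERATE = "moderate"  # credible case studies, industry reports
    WEAK = "weak"          # anecdotes, single examples, inference without data


@dataclass
class ResearchBrief:
    """Mirrors the four-part Phase 4 structure you write by hand."""
    bottom_line: str
    key_evidence: list       # 3 strongest claims, each with an EvidenceGrade
    key_uncertainty: str
    would_change_my_mind: str


def ask(prompt: str) -> str:
    """Placeholder: send `prompt` to your AI tool, return its reply."""
    return f"[AI response to: {prompt[:40]}...]"


def decompose(question: str) -> list:
    """Phase 1: refine the question into sub-questions."""
    ask(f"Break this into 3-4 sub-questions, with evidence standards: {question}")
    # A real version would parse the AI's reply; stubs stand in here.
    return [f"sub-question {i} of: {question}" for i in (1, 2, 3)]


def gather(sub_question: str) -> str:
    """Phase 2: one separate query per sub-question, evidence graded."""
    return ask(
        f"Research: {sub_question}\n"
        "Classify every claim as strong, moderate, or weak evidence. "
        "If unsure about quality, say so."
    )


def analyze_contradictions(question: str, summaries: list) -> str:
    """Phase 3: fresh session; find and triage conflicts."""
    joined = "\n".join(summaries)
    return ask(f"Findings on '{question}':\n{joined}\nWhere do these conflict?")


def run_pipeline(question: str) -> dict:
    subs = decompose(question)
    summaries = [gather(s) for s in subs]
    conflicts = analyze_contradictions(question, summaries)
    # Phase 4 is intentionally NOT automated: write the ResearchBrief yourself.
    return {"sub_questions": subs, "summaries": summaries, "conflicts": conflicts}
```

The design choice worth noticing is the seam between Phases 3 and 4: the machine gathers and stress-tests evidence, but the synthesis stays human.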
'''"Done" looks like:''' A research brief that clearly distinguishes strong from weak evidence, acknowledges uncertainty, and provides a decision-ready answer with stated confidence.

----

== 🧭 Why this matters (Strategists start here) ==

This exercise combines the skills from [[The Signal in the Noise|IS-Basic-01]] (extracting signal from noise) and [[The Multi-Source Brief|IS-Intermediate-01]] (triangulating across perspectives) into a '''complete research methodology'''.

The evidence grading system prevents the common failure mode of treating all AI output as equally reliable. The contradiction analysis surfaces genuinely open questions rather than papering over them.

This pipeline is directly applicable to due diligence, competitive intelligence, policy analysis, and any context where the cost of being wrong is high and the question is too complex for a single query.

----

== Reflection ==

* Did the evidence grading change which findings you trusted? Were you surprised by what was classified as "weak"?
* How did the contradiction analysis change your initial view?
* Was the 500-word synthesis harder or easier than expected? What was the hardest part — compression, confidence, or acknowledging uncertainty?
* 💬 ''Teach this pipeline to a colleague and have them run it on a different question. Compare how you each handle the "what would change your mind" step — that reveals different attitudes toward uncertainty.'' (Social Learners)

== ⬆️ Level up ==

You've reached the advanced level for Insight Synthesis.
From here, consider:

* Using this pipeline for a real decision and tracking whether your evidence-graded conclusion held up
* Combining this with [[Design Your Agent Workflow|AC-Advanced-01]] to delegate different research phases to different agent roles
* Teaching this method to a colleague and seeing how they adapt it

Back to [[Insight Synthesis]]

[[Category:AI Fluency Playbook]]
[[Category:Exercises]]
[[Category:Insight Synthesis Exercises]]