Core panels
Quick reads. Strong opinions. Designed to be shared across teams.
Core Episodes
Why AI Hallucinates
What hallucinations are (and aren’t), where they come from, how to spot them, and how to work safely.
View panel →
Core Episodes
Anthropomorphism Isn’t the Problem
Why AI feels human — and how “teammate” framing helps when humans still own judgment.
View panel →
Core Episodes
When AI “Forgets”
The difference between model forgetting and conversation drift — and how to design around it.
View panel →
Core Episodes
Which Model Is My AI Tool Using?
How to identify the model behind your tools — and why capability + risk differ by model.
View panel →
Team Architecture
Inside the Collaboration Loop
The “how we work” model: brief → diverge → converge → decide → ship (and loop).
View panel →
Team Architecture
Inside the Human/AI Triad
One human, two AI roles — complementary strengths for divergence + convergence.
View panel →
Want the “do this next” version?
These panels explain the concepts. The Playbooks turn them into repeatable team workflows.
Browse Playbooks →
Stress-tests & comparative analysis
Where popular AI metaphors break in practice
These aren’t “hot takes.” They’re frameworks that examine accountability, decision ownership, and second-order effects — before teams build around a metaphor that doesn’t hold up.
Comparison
Prompt vs Collaboration Engineering
Prompts → systems, roles, handoffs, ownership.
Comparison
HITL vs The Human–AI Loop ↗
Checkpoints → continuous collaboration.
Comparison
Not All AI Should Be Your Teammate
When “copilot” helps — and when it harms.
Comparison
Which Model Is My AI Tool Using?
Why model opacity changes risk + trust.
Comparison
Plane vs Cockpit at 30,000 Feet ↗
AI adoption under delivery pressure.