Human–AI Literacy Library
Short panels that teach the fundamentals you need to collaborate with AI responsibly: how it “thinks,” where it fails, and how to design workflows that keep humans in charge.
Core panels
Quick reads. Strong opinions. Designed to be shared in teams.
Why AI Hallucinates
What hallucinations are (and aren’t), where they come from, how to spot them, and how to work safely.
Anthropomorphism Isn’t the Problem
Why AI feels human — and how “teammate” framing helps when humans still own judgment.
When AI “Forgets”
The difference between model forgetting and conversation drift — and how to design around it.
Which Model Is My AI Tool Using?
How to identify the model behind your tools — and why capability and risk differ by model.
Inside the Collaboration Loop
The “how we work” model: brief → diverge → converge → decide → ship (and loop).
Inside the Human/AI Triad
One human, two AI roles — complementary strengths for divergence and convergence.
Where Popular AI Metaphors Break in Practice
These aren’t “hot takes.” They’re frameworks that examine accountability, decision ownership, and second-order effects — before teams build their workflows around a metaphor that doesn’t hold up.
The philosophy behind this work
“We’re less interested in what AI can produce — and more interested in what humans and AI can achieve together.”
That’s not a tagline. It’s the question that drives every framework, playbook, and experiment in this ecosystem — and the one the Human–AI Loop methodology exists to answer.