One Human. Two AI Teammates. Infinite Possibilities.

Episode 3 of the Human–AI Literacy Library

From The Human AI Loop

When AI “Forgets”

Catastrophic Forgetting vs. Conversation-Time “Forgetting”
Why your AI teammate feels like it forgot everything, even when the underlying mechanics are nothing like human memory.

Sometimes it feels like your AI teammate lost the thread. You shared a key detail three messages ago and suddenly it’s gone. It looks and feels like forgetting.

Under the hood, though, there’s a big difference between the real machine-learning term catastrophic forgetting and what actually happens when your model “forgets” in the middle of a chat.

Catastrophic Forgetting

Training-time phenomenon (ML research)

  • Happens while the model is being trained, not while you’re chatting with it.
  • The model learns a new task and its internal weights change so much that it performs worse on an earlier task.
  • Think: “We retrained the whole brain on Task B and accidentally overwrote part of Task A.”

Shows up in research on continual learning, multi-task training, and long-term model updates – not in your day-to-day conversations.
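
To make the training-time version concrete, here is a minimal sketch in PyTorch (assuming it is installed): a tiny network is trained on a toy Task A, then trained only on a toy Task B, and its Task A accuracy is measured before and after. Both tasks are invented purely for this illustration.

```python
# Minimal illustration of catastrophic forgetting with a tiny MLP.
# Task A and Task B are toy synthetic tasks invented for this demo.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Task A: is x[0] > 0?  Task B: is x[1] > 0?
def make_task(dim_idx, n=512):
    x = torch.randn(n, 2)
    y = (x[:, dim_idx] > 0).long()
    return x, y

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=300):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

xa, ya = make_task(0)
xb, yb = make_task(1)

train(xa, ya)
print(f"Task A accuracy after training on A: {accuracy(xa, ya):.2f}")

train(xb, yb)  # keep training the SAME weights, but only on Task B
print(f"Task A accuracy after training on B: {accuracy(xa, ya):.2f}")  # drops
print(f"Task B accuracy after training on B: {accuracy(xb, yb):.2f}")
```

Because Task B's labels say nothing about Task A's feature, the gradient updates for B freely overwrite the weights that encoded A. That is the "retrained the brain on Task B and overwrote part of Task A" failure in miniature.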

Conversation-Time “Forgetting”

What you see in chat sessions

  • Happens during a session while you’re collaborating with the model.
  • The model appears to ignore earlier details because of context window limits, attention falloff, and prediction drift.
  • Think: “That detail fell out of focus — not because it was ‘forgotten’ like a memory, but because it’s no longer prominently represented in the current tokens.”

This is the “my AI teammate forgot everything” moment: it feels human, but it’s driven by token limits and next-token prediction, not human-style memory loss.
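
Here is a toy sketch of the mechanism, with word counts standing in for real tokenizer tokens and a deliberately tiny window. The message history and the 20-word limit are invented for illustration.

```python
# Toy sketch of conversation-time "forgetting": a fixed context window
# keeps only the most recent tokens, so early details silently drop out.
# Word counts stand in for real tokenizer tokens here.

CONTEXT_LIMIT = 20  # hypothetical; real models allow thousands of tokens

history = [
    "User: the launch date is March 14, remember that",
    "Assistant: noted, March 14",
    "User: also, the budget cap is $50k",
    "Assistant: got it",
    "User: draft the project timeline for me",
]

def visible_context(messages, limit):
    """Keep the most recent messages whose total word count fits the limit."""
    kept, used = [], 0
    for msg in reversed(messages):
        n = len(msg.split())
        if used + n > limit:
            break  # everything earlier falls outside the window
        kept.append(msg)
        used += n
    return list(reversed(kept))

for msg in visible_context(history, CONTEXT_LIMIT):
    print(msg)
# The launch-date message no longer prints: it wasn't "forgotten" like a
# memory, it simply isn't among the tokens the model can attend to anymore.
```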

Why both behaviors feel like human forgetting

On the surface: it looks the same

  • The system used to get something right and now it doesn’t.
  • Performance drops after new information or a long stretch of work.
  • To a human, it maps neatly to: “We knew this. Now we don’t. We must have forgotten.”

Under the hood: completely different mechanics

  • Humans forget via memory decay, interference, emotion, and focus lapses.
  • LLMs “forget” in-chat because earlier tokens fall out of the context window or lose attention weight.
  • There is no persistent, human-style memory being lost — just shifting probability over what to predict next.

Working with an AI teammate that “forgets”

If you treat AI like a vending machine, these moments just feel like bugs. If you treat AI like a teammate at the edge of its abilities, the “forgetting” becomes part of how you work together.

  • Re-ground the context. Periodically restate the key constraints and decisions in a single message.
  • Use anchor summaries. Ask the AI to summarize where you’ve landed so far, then reuse that summary as you continue (see the sketch after this list).
  • Watch the length. Very long, meandering chats are more likely to lose important details in the token shuffle.
  • Name the pattern. “This isn’t you ‘forgetting’ like a human — this is context and prediction limits. Let’s reset.”
  • Design for the edge. When you collaborate at the edge of a model’s capabilities, the cracks show up sooner — and that’s where better workflows are born.
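
Here is one way the anchor-summary pattern above might look in code. This is a hypothetical sketch: call_model(), the role/content message format, and SUMMARIZE_EVERY are placeholders to be wired to whatever chat API you actually use.

```python
# Sketch of the "anchor summary" pattern: periodically compress the
# conversation into a short summary and re-inject it, so key decisions
# stay inside the context window. call_model() is a placeholder, not a
# real library function.

SUMMARIZE_EVERY = 8  # hypothetical: refresh the anchor after this many turns

def call_model(messages):
    """Placeholder for a chat-completion call; returns the reply text."""
    raise NotImplementedError("connect this to your chat API of choice")

class AnchoredChat:
    def __init__(self):
        self.anchor = ""   # running summary of constraints and decisions
        self.recent = []   # turns since the anchor was last refreshed

    def send(self, user_message):
        self.recent.append({"role": "user", "content": user_message})
        messages = []
        if self.anchor:
            # Re-ground the context: restate where we've landed so far.
            messages.append({"role": "system",
                             "content": "Decisions so far:\n" + self.anchor})
        messages.extend(self.recent)
        reply = call_model(messages)
        self.recent.append({"role": "assistant", "content": reply})
        if len(self.recent) >= SUMMARIZE_EVERY:
            self._refresh_anchor()
        return reply

    def _refresh_anchor(self):
        ask = {"role": "user",
               "content": "Summarize the key constraints and decisions "
                          "above as a short bullet list."}
        self.anchor = call_model(self.recent + [ask])
        self.recent = []  # older turns can now drop out safely
```

The design choice: old turns are allowed to fall out of the window, but only after their decisions have been folded into the anchor, so even a long, meandering chat keeps its key constraints in view.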

Big idea: AI doesn’t forget the way humans forget — but it behaves in ways that look like forgetting. That’s where anthropomorphism sneaks in, and where better human–AI collaboration patterns can emerge.

Built by The Triad: Maura (Product), CP (Structure), Soph (Synthesis)

Part of The Human AI Loop — a framework for treating AI as a true teammate, not a task vending machine.
