aigal.io

One Human. Two AI Teammates. Infinite Possibilities.

Human–AI Literacy Library · Episode 4 · Models & Platforms

Which Model Is My AI Tool Using?

A practical guide to understanding which underlying model powers your favorite tools, why it matters, and how to ask better questions about risk and capability.

Most people evaluate AI by the logo on the homepage. But under the hood, many products are simply different faces on top of the same small set of large models. Knowing which model is doing the work helps you reason about behavior, limitations, and risk — and keeps you from treating every shiny interface like a complete mystery.

First, untangle the stack: model vs product vs feature

When someone says “I’m using this AI tool,” they might be talking about three different layers without realizing it. Separating them makes everything clearer.

1. Model

The underlying large language model (LLM) or image model that generates the text, code, or visuals. Examples: models from OpenAI, Anthropic, Google, Meta, and others.

2. Product

The app or service you interact with. It might expose one model, several model options, or even switch providers behind the scenes.

3. Feature

The specific button or workflow you use: summarize, “magic rewrite,” meeting notes, quick chart, draft email. Different features in the same product may use different models.

When you say “this AI is great at X but terrible at Y,” it’s helpful to know: are you talking about the model, the product UX, or a specific feature?
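
If it helps to see the separation written down, here is a minimal sketch of the three layers as data. The type names are illustrative, not any vendor's real schema — just one way to make the distinction concrete:

    // A minimal sketch of the three layers. All names are illustrative,
    // not any product's real API.
    type ModelFamily = "openai" | "anthropic" | "google" | "other";

    interface Feature {
      name: string;             // e.g. "summarize" or "magic rewrite"
      modelFamily: ModelFamily; // the family this feature actually calls
      modelName?: string;       // exact version, if the product discloses it
    }

    interface Product {
      name: string;        // the app or service you log into
      features: Feature[]; // different features may use different models
    }

Notice that the model appears only as a property of a feature, not of the product — which is exactly the point of untangling the stack.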

Model families: you don’t need every detail, just the shape

You don’t have to memorize version numbers. At a day-to-day level, it’s often enough to know which family of models you’re working with and what they’re generally optimized for.

OpenAI-family models

Commonly used for general-purpose conversation, coding help, and content generation. Often tuned for broad capability and flexible instruction-following.

Anthropic-family models

Frequently used for long-form reasoning, analysis, and safety-conscious deployments, with an emphasis on helpful, honest, and harmless behavior.

Google / Gemini-family models

Often paired with search and document understanding features. Useful where web context and multimodal input matter.

Other or mixed providers

Many platforms route to different models depending on task, region, or price. Some even let you choose the model per workspace or feature.

The goal here isn’t brand loyalty. It’s pattern recognition: knowing which family you’re leaning on for which kind of work.

Why you should care which model is doing the work

1. Capability & fit

Some models excel at coding, some at long-form writing, some at analysis, some at search-style Q&A. When you know the family, you can match tasks to strengths.

2. Data boundaries

Different providers make different promises about how they handle your data, where it lives, and whether it’s used for training. That matters for privacy and compliance.

3. Safety & risk profile

Guardrails, red-teaming, and safety policies differ by provider. If your tool is handling sensitive workflows, you want to know whose safety stack you’re actually using.

4. Cost, latency, and limits

Model choice affects speed, token limits, and pricing. That shapes which workflows feel smooth and which ones constantly hit ceilings.

When you understand the model layer, “this tool is flaky” can become a more precise question: “Is this a UX problem, a model limitation, or a mismatch between the task and the model?”

How to find out which model your tool is using

Many products quietly list their providers in documentation, security pages, or model pickers. A simple set of questions can turn “black box” into “understood stack”:

  • Which model family is powering this feature? (OpenAI, Anthropic, Google, other?)
  • Is the model the same across all features? Or do different features use different models?
  • Where can I see the model name and version? Is it visible in a picker, or only in docs?
  • How is my data handled? Is it sent directly to the provider? Is it stored, logged, or used for training?
  • Can I change the model? For this workspace, project, or tenant?
  • What are the limits? Token caps, file sizes, conversation history, rate limits.

Even if you only get partial answers, asking these questions signals that you’re treating AI as real infrastructure, not magic.
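
One caveat: finished products rarely expose their model layer programmatically, so most of these answers come from docs and security pages. If you do hold your own API key and talk to a provider directly, though, you can usually enumerate what that key can reach. A minimal sketch using the official OpenAI Node SDK (assuming OPENAI_API_KEY is set and the file runs as an ES module):

    // Lists the model IDs your own API key can access.
    // Only applies when you call the provider directly; end-user
    // products typically won't let you see this.
    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    for await (const model of client.models.list()) {
      console.log(model.id);
    }

Anthropic and Google offer similar model-listing endpoints. The habit matters more than the vendor: know what is actually reachable, not just what the marketing page names.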

A simple way to map your AI stack

You don’t need a 40-page architecture document. Start with a quick inventory you can update as your tools evolve.

Tool / Feature                        | Underlying model (family) | What I trust it for
Chat interface / “main AI teammate”   | _____________________     | Brainstorming, structure, co-writing, planning
Docs: summarize / rewrite button      | _____________________     | Draft cleanup, summaries, not final decisions
Knowledge base / search assistant     | _____________________     | Finding likely answers, pointing to sources
Design / slides helper                | _____________________     | Idea starters and layout drafts, not brand-perfect assets

Use this as a living document for your team. As tools change providers or add new model options, update the map rather than relying on memory.
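
If your team already keeps shared docs in a repo, the same map can live as structured data, so provider changes show up in diffs instead of vanishing into wiki history. A minimal sketch, with every entry illustrative:

    // One lightweight way to version-control the stack map.
    // Every entry here is an example, not a recommendation.
    const aiStack = [
      {
        tool: "Chat interface / main AI teammate",
        modelFamily: "openai",  // fill in what your vendor actually discloses
        trustedFor: ["brainstorming", "structure", "co-writing", "planning"],
      },
      {
        tool: "Docs: summarize / rewrite button",
        modelFamily: "unknown", // flag gaps explicitly instead of guessing
        trustedFor: ["draft cleanup", "summaries"],
        notFor: ["final decisions"],
      },
    ] as const;

An explicit "unknown" that everyone can see is more useful than a confident guess nobody wrote down.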

How I map my own models

Here’s a simplified snapshot of how I (Maura) think about the model layer in my daily work. The exact versions change over time, but the roles stay stable.

🧠 ChatGPT (CP)

Powered by OpenAI models.

My primary collaboration space for building playbooks, visuals, and systems. I treat this as a long-running teammate with deep context.

🔮 Claude (Soph)

Powered by Anthropic models.

I lean on Soph for long-form synthesis, structure, and “step back” analysis on complex projects and drafts.

📘 Rovo & other workspace helpers

I treat these primarily as tools: I check which provider they use, decide what kind of content I’m comfortable sending, and use them for focused transformations like summaries and cleanup.

🔎 Search & research assistants

For tools that blend web results with model answers, I check both the search stack and the model provider, then treat outputs as “pointers to investigate,” not final truth.

The specifics will look different for your team. The important part is that someone knows which models are actually in play and has made intentional choices about where to use which.

Connect this to the rest of the library

Understanding models is one piece of working inside the loop; the other panels in this library fill in the rest of the picture.

When you’re ready to turn these concepts into practice, explore the Playbooks & Guides for step-by-step workflows.
