How LLMs Answer Questions They Were Never Taught
Before we trust our chatbots, we need to understand how they “know” things, so we can recognize their limitations and work around them.
The Spark
I’m finally diving into the world of Agents. I understand that LLMs are prediction engines—brilliant at answering real‑world questions, usually right, sometimes slightly off course, and occasionally drifting into the weeds.
So I asked my chatbot, set to a hippie‑nerdy personality: “How do LLMs know things?” Its answer:
Imagine all of human language as a giant cosmic tapestry woven into a higher‑dimensional dreamfield, where every idea bends the fabric like gravity bends spacetime.
The answer triggered a faint flashback to college calculus, numerical methods, and electromagnetic field theory. But the metaphor stuck. Language is a complex, chaotic system. Small changes to context can send the model into very different regions of its internal landscape: its latent space.
As an aside, humans do the same thing. You can walk up to the same person and say the same words, and the meaning can land completely differently depending on what just happened or what’s about to happen. Even the banter before, during, or after a sporting event shifts tone depending on the score, the stakes, or the emotional charge of the moment. Tell a friend, “Nice throw,” right after they nail a perfect pass, and it lands as praise; say it after they fumble, and it lands as teasing; say it before a crucial play, and it lands as encouragement. Same words, wildly different meanings—all because context tilts the emotional landscape. Context bends human interpretation just as it bends the model’s trajectory through latent space.
This realization also explains why the work chatbot feels different from the personal chatbot, why copilots behave differently at home versus in the enterprise, and why designing good Agents requires more than clever prompts. Understanding how context influences the model’s internal landscape is essential.
How the Model Answers Questions It Was Never “Taught”
LLMs were never taught zoology, geography, chemistry, or any other subject the way a student learns them. They were trained to recognize patterns in the way humans discuss these topics.
When you ask:
“Give me five animals that cannot fly and are not birds.”
The model does not consult a database. Instead, your question activates several overlapping regions of its internal geometry—its latent space:
- animals
- flight-related concepts
- negation patterns
- exclusion or constraint patterns
The model then predicts the next token by following a path through these regions. It avoids bird‑adjacent terms simply because, statistically, those tokens co‑occur with the patterns the prompt has already ruled out. It doesn’t know the rule; it behaves as if it knows.
The result: it (usually) produces a correct list, even though it never learned the concept in any traditional human sense.
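To make “behaves as if it knows” concrete, here is a minimal sketch using the Hugging Face transformers library and the small GPT‑2 model (chosen only because it is cheap to run; any causal LM would do, and a larger one would do it better). It prints the most likely next tokens for two related prompts, showing that the negation reshapes the probability distribution rather than triggering any lookup.

```python
# Minimal sketch: how a negation in the prompt reshapes the next-token
# distribution of a small causal language model.
# Assumes `pip install torch transformers`; GPT-2 stands in for a much
# larger LLM purely because it is small and freely available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt: str, k: int = 5):
    """Return the k most likely next tokens and their probabilities."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits           # shape: (1, seq_len, vocab_size)
    probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
    values, indices = torch.topk(probs, k)
    return [(tokenizer.decode(idx), round(float(p), 4))
            for idx, p in zip(indices, values)]

# Same topic, different constraints: no lookup happens, the model just
# follows a different slope through the same learned terrain.
print(top_next_tokens("An animal that can fly is the"))
print(top_next_tokens("An animal that cannot fly and is not a bird is the"))
```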
Does Context Change the Geometry?
Short Answer: The geometry stays the same, but your trajectory through it changes.
The latent space is a fixed learned terrain—billions of tiny slopes, curves, ridges, and basins shaped during training. But every new message subtly tilts that terrain by shifting the model’s internal attention, altering which regions become more or less likely.
The map remains the same; the gravity changes.
Picture the model’s current state as a marble rolling across that terrain. A few factors push the marble in different directions:
- full conversation history
- system instructions
- personality or style constraints
- temperature and sampling parameters (sketched below)
- guardrails and filtering layers
- documents or tools introduced by an Agent
This is why the “same” model feels so different across settings. Your corporate chatbot isn’t drawing from different knowledge; it’s being pulled across different regions of the same dreamfield.
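Temperature is the easiest of these tilts to isolate. The toy sketch below uses plain NumPy and invented scores rather than a real model; it shows how the same next‑token logits produce a sharply peaked distribution at low temperature and a flatter, more adventurous one at high temperature. The terrain never changes; only the gravity does.

```python
# Toy sketch: temperature re-tilts the same next-token scores.
# The logits are invented for illustration; a real model produces tens of
# thousands of them, but the scaling works identically.
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities, scaled by a sampling temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()              # subtract the max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

tokens = ["elephant", "whale", "snake", "axolotl", "pangolin"]
logits = [4.0, 3.5, 3.0, 1.5, 1.0]      # hypothetical model scores

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    summary = ", ".join(f"{tok} {p:.2f}" for tok, p in zip(tokens, probs))
    print(f"temperature={t}: {summary}")
# Low temperature piles probability onto "elephant"; high temperature
# spreads it toward the rarer, more novel candidates.
```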
Why Negation, Abstract Ideas, and Weird Concepts Still Work
Some directions in the latent space represent concrete clusters—such as dog, cow, oak tree, and diesel engine. These clusters emerge because humans consistently discuss these topics in similar ways.
But other directions are much more abstract. Negation, for example, isn’t an entity—it’s a relationship. Yet the model still captures it.
During training, phrases like:
- “not a mammal”
- “cannot fly”
- “isn’t allowed”
- “doesn’t include”
produce consistent patterns in how surrounding words behave. Over millions of examples, these patterns collapse into a direction—an axis of “absence,” “exclusion,” or “inversion.” Abstract concepts become gradients.
The model learns that moving slightly in one direction means removing what you’d otherwise expect.
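One way to glimpse such an “axis of absence,” at least roughly, is with sentence embeddings. The sketch below assumes the sentence-transformers library and its small all-MiniLM-L6-v2 model; it averages the displacement between affirmative and negated phrasings to estimate a crude negation direction. This only illustrates the idea; it is not how training actually builds the axis.

```python
# Rough sketch: estimating a "negation direction" from sentence embeddings.
# Assumes `pip install sentence-transformers`; the model name and the phrase
# pairs are illustrative choices, not anything canonical.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

pairs = [
    ("it is a mammal",        "it is not a mammal"),
    ("it can fly",            "it cannot fly"),
    ("this is allowed",       "this isn't allowed"),
    ("the list includes it",  "the list doesn't include it"),
]

affirmative = model.encode([a for a, _ in pairs])
negated = model.encode([n for _, n in pairs])

# Average displacement from affirmative to negated phrasing:
# a crude stand-in for the "axis of absence" described above.
direction = (negated - affirmative).mean(axis=0)
direction /= np.linalg.norm(direction)

# New sentences project onto that direction; negated ones should land
# further along it than their affirmative counterparts.
for sentence in ("the engine is running", "the engine is not running"):
    score = float(model.encode(sentence) @ direction)
    print(f"{sentence!r} -> {score:.3f}")
```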
How Context Creates Different Answers
Revisiting the animal question reveals how different contexts direct the model into distinct neighborhoods of meaning.
1. Plain Context (Neutral Chat)
A typical answer: elephant; giraffe; whale; snake; turtle.
Straightforward and factual.
2. Homework Context
If the earlier conversation included: “My kid needs help with fourth‑grade science homework.”
The model shifts toward simpler, textbook-style examples: dog, cat, horse, cow, frog.
Same question—different semantic neighborhood.
3. Content‑Creator Context
If you’ve been discussing fun facts or trivia, the model tilts toward novelty: platypus, axolotl, capybara, manatee, pangolin.
You didn’t ask for novelty, but the context made “interesting” more likely than “ordinary.”
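This shift is easy to reproduce with any chat-style API. The sketch below uses the OpenAI Python client purely as one example; the model name, the wording of the context messages, and the filler assistant reply are all arbitrary choices. The question never changes; only the conversation that precedes it does.

```python
# Sketch: the same question asked after three different kinds of context.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
# the model name and the context strings are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

QUESTION = "Give me five animals that cannot fly and are not birds."

contexts = {
    "plain": None,
    "homework": "My kid needs help with fourth-grade science homework.",
    "creator": "I'm collecting fun animal trivia for a short video.",
}

for name, context in contexts.items():
    messages = []
    if context:
        # Earlier turns tilt the terrain before the question is even asked.
        messages.append({"role": "user", "content": context})
        messages.append({"role": "assistant", "content": "Happy to help!"})
    messages.append({"role": "user", "content": QUESTION})

    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # any chat model works for this experiment
        messages=messages,
    )
    print(f"--- {name} ---")
    print(reply.choices[0].message.content)
```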
Why This Matters for Agent Design
Agents don’t “decide” to call tools in a logical, step-by-step engineering sense. They emit tool calls because those token sequences fit the current semantic trajectory.
Good Agent design means:
- steering the geometry (clear system prompts)
- constraining context (tool outputs, memory policies)
- reducing ambiguity (explicit rules)
- shaping the slope the marble rolls down rather than forcing outcomes
Without intentional design, Agents follow the natural curves of latent space—and those curves may prioritize creativity, caution, elaboration, or misinterpretation depending on the active context.
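In practice, “steering the geometry” mostly comes down to system prompts and tool definitions that leave as little slope to chance as possible. The sketch below shows the shape of that scaffolding using the OpenAI-style function-tool schema; the tool name, its parameters, and the rules in the system prompt are hypothetical, made up for illustration. Nothing here forces a tool call; it simply makes the tool-calling region of latent space the most natural place for the trajectory to land.

```python
# Sketch: steering an Agent with an explicit system prompt and a tool schema.
# The tool name, its parameters, and the rules are hypothetical examples of
# the kind of explicit constraints that shape, rather than force, behavior.
SYSTEM_PROMPT = """You are a research assistant.
Rules:
- If a question needs facts newer than your training data, call search_docs.
- Never invent citations; say "not found" if search_docs returns nothing.
- Answer in at most three sentences unless asked for more detail."""

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "search_docs",            # hypothetical tool
            "description": "Search the internal document store.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search terms."},
                },
                "required": ["query"],
            },
        },
    }
]
```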
Why Work Chatbots Feel So Different From Personal Ones
People often assume enterprise models have less knowledge or stricter limitations. In reality, most of the difference comes from:
- different system prompts
- tighter guardrails
- lower temperatures
- safety‑oriented patterns
- more formal examples in training
You’re using the same terrain, but the enterprise environment tilts it toward caution and clarity, while your personal chatbot tilts it toward exploration and experimentation.
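Seen as configuration, the gap is small but consequential. The settings below are hypothetical, not any vendor’s real defaults; they show the same underlying model tilted two ways by nothing more than a system prompt, a temperature, and a guardrail policy.

```python
# Hypothetical configuration sketch: the same model, tilted two ways.
# Field names and values are illustrative, not any product's real settings.
ENTERPRISE = {
    "model": "same-base-model",
    "system_prompt": "Be precise, cite internal sources, decline out-of-scope requests.",
    "temperature": 0.2,      # cautious and repeatable
    "guardrails": "strict",
}

PERSONAL = {
    "model": "same-base-model",
    "system_prompt": "Be curious and conversational; label speculation as such.",
    "temperature": 0.9,      # exploratory and varied
    "guardrails": "relaxed",
}
```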
Closing Thought
The magic of LLMs is that they don’t store facts—they store patterns. And when we interact with them, we’re not querying a database; we’re shaping a landscape.
Designing Agents means learning to sculpt that landscape deliberately, so the marble rolls where you want it to go instead of wherever chance and context happen to nudge it.