Vibe Coding Reality
The machines are a mirror. What they reflect about how we communicate—and think—is not always flattering.
Series: Inferentialism
- The Right Answer to the Wrong Question
- The Entropy Tax
- Vibe Coding Reality
I have spent considerable time in these essays discussing how human discourse operates—the underspecified questions, the entropy tax, the fight over defaults. But I confess I might never have noticed these patterns so clearly had I not spent the past year working intensively with large language models.
The machines, it turns out, are a mirror. And what they reflect is not always flattering.
Here is something that becomes obvious after your first hundred hours of prompting: LLMs do not read your mind. They respond to what you actually say, interpreted according to the probability distribution encoded in their weights. If your prompt is ambiguous, you will get an answer—but it will be an answer to the question the model found most probable given the tokens you provided, which may not be the question you intended.
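To make that concrete, here is a toy sketch, not a real model: the candidate readings and their probabilities below are invented, and in a real system the prior would be computed from your tokens rather than hard-coded. The mechanism is the point. The prior, not your intention, picks the winner.

```python
# Toy illustration (not a real model): an ambiguous prompt gets resolved to
# whichever reading the prior makes most probable. The readings and numbers
# are invented for this example.

AMBIGUOUS_PROMPT = "make this faster"

# A stand-in for what the weights encode: a prior over readings of the prompt.
# In a real model this distribution would be conditioned on the prompt's tokens.
prior_over_readings = {
    "optimize the algorithm's time complexity": 0.55,
    "reduce the page's load time": 0.30,
    "shorten the text so it reads more quickly": 0.15,
}

def interpret(prompt: str, prior: dict[str, float]) -> str:
    """Return the reading the prior makes most probable, not the one you meant."""
    return max(prior, key=prior.get)

print(interpret(AMBIGUOUS_PROMPT, prior_over_readings))
# -> "optimize the algorithm's time complexity"
# If you meant the page's load time, the failure is in the prompt, not the model.
```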
This is frustrating at first. "That's not what I meant!" But the frustration is instructive. The model is not being obtuse. It is revealing that your question was underspecified—that you were relying on context you never provided, on defaults you assumed were shared, on background knowledge that exists in your head but not in the prompt.
The LLM is a harsh teacher precisely because it has no access to your intentions. It has only your words and its priors. The gap between intention and expression, usually papered over by human interlocutors who charitably fill in the blanks, becomes starkly visible.
I argued previously that human disagreements often arise from underspecified questions—from two people answering different questions while believing themselves to be addressing the same one. The LLM interaction makes this dynamic visceral. You learn, through repeated failure, that specification is a skill. That the feeling of having communicated clearly is not the same as having actually done so. That the work of making yourself understood is harder and more important than you thought.
But here is the deeper lesson: you also learn when not to specify.
Experienced prompters develop an intuition for which details matter and which are noise. They learn that over-specification can be as harmful as under-specification—that drowning the model in context can obscure the signal, that sometimes the default interpretation is better than any alternative you could explicitly request.
This is mastery: knowing which blanks to leave blank. Knowing which defaults to trust.
Now turn the mirror around. If LLMs operate on probability distributions over tokens, distributions shaped by training data into the stable parameters we call "weights", then what are we operating on?
The parallel is uncomfortably close. Humans also have weights—cognitive defaults burned in by years of experience, cultural immersion, evolutionary pressures. We also have context—the immediate situation, recent events, the current conversation. The weights are slow and stable; the context is fast and ephemeral.
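A minimal sketch of that two-timescale architecture, with invented defaults and a hypothetical respond() helper: the weights are loaded once and persist; the context is supplied per call and evaporates with it.

```python
# Toy sketch of the two timescales: fixed defaults that stand in for weights,
# and per-call overrides that stand in for context. Illustrative only,
# not a real inference stack.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Weights:
    # Slow and stable: changed only by retraining (or, for us, years of experience).
    defaults: dict = field(default_factory=lambda: {"tone": "formal", "units": "metric"})

@dataclass
class Context:
    # Fast and ephemeral: lives only for the current exchange.
    overrides: dict = field(default_factory=dict)

def respond(weights: Weights, context: Context, key: str) -> str:
    # Context wins while it is active; the default silently fills every gap it leaves.
    return context.overrides.get(key, weights.defaults[key])

w = Weights()
print(respond(w, Context(), "tone"))                    # "formal": the default applies
print(respond(w, Context({"tone": "casual"}), "tone"))  # "casual": holds only while signaled
```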
When I described culture as having "deep structure" versus "working discourse," I was describing the same architecture. Some norms are weight-level: they persist without constant reinforcement, they feel obvious, they require no justification. Other norms are context-level: they need continual signaling to remain active, they decay quickly without maintenance.
The culture war, in this framing, is a fight over what gets written to weights versus what stays in context. The victors are not those who win the argument but those who make their position feel obvious—who get their priors installed as defaults, who make the entropy tax fall on their opponents.
There is a phrase that has gained currency lately: "vibe coding." It refers to programming with LLM assistance by describing what you want in natural language and letting the model figure out the implementation. You specify the vibe; the system handles the details.
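For the uninitiated, here is a hypothetical exchange; both the prompt and the returned function are invented for illustration. You hand the model a vibe, and it hands you back an implementation.

```python
# A made-up vibe-coding exchange. The comment is the vibe; the function is the
# sort of implementation a model might hand back.

# Prompt to the assistant: "give me something that dedupes a list but keeps the order"

def dedupe_keep_order(items):
    """Drop duplicates while preserving the first occurrence of each item."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe_keep_order([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

Every detail you never mentioned, from edge cases to naming, gets filled in by the model's defaults.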
I want to suggest that this is what we have always been doing. Human cognition is vibe coding. We navigate the world not through explicit reasoning about every decision but through trained intuitions that compress vast amounts of experience into quick, cheap heuristics. We specify the vibe—the general shape of what we want—and let our subconscious handle the implementation.
"Trust your gut" is not anti-statistical. It is differently statistical. The gut is a slow LLM trained on a lifetime of unlabeled data, running inference below the threshold of conscious awareness.
The question is whether the training data is still valid. Our intuitions about social dynamics, about status and threat and opportunity, were trained on environments that may no longer exist. We are running outdated weights on a changed world.
But—and this is crucial—we cannot simply discard the weights and run on pure context. The weights are what make real-time cognition possible. Without them, we would be paralyzed by the computational cost of reasoning from first principles about every decision.
The art is in knowing which weights to trust and which to override. When to go with the vibe and when to stop and think. When the default is a gift and when it is a trap.
I began these essays with a simple observation: if two reasonable people seem to disagree, check whether they are actually answering the same question. I end with a stranger suggestion: check whether you are answering the question you think you are.
Your defaults are not transparent to you. The questions you take as obvious, the framings you reach for automatically, the interpretations that feel natural—these are not the products of pure reason. They are the outputs of a system trained on data you did not choose, optimized for objectives you may not endorse, running inference on a world that may have shifted out from under its assumptions.
The machines are a mirror. Look into them long enough, and you may start to see yourself more clearly.
Whether that clarity is a gift or a burden, I leave to the reader to determine.