Fluency Is Not Understanding

Fluency has always been persuasive. It just isn't the same thing as understanding.

One of the more unsettling early experiences with AI systems was not that they could be wrong, but how convincing they could be when they were wrong.

You would ask a question, receive a clear and well-structured answer, and have no immediate reason to doubt it. The language was precise, the tone assured, and the explanation unfolded with the kind of coherence that usually signals understanding. It had all the right signals, which, in most settings, is most of the job.

Only later, sometimes much later, would it become apparent that the answer was incomplete, built on faulty assumptions, or simply incorrect. The delay was part of the problem. By the time the error surfaced, the answer had often already been accepted, repeated, and quietly integrated into other lines of thinking, which is an efficient way to distribute a mistake.

What made this difficult to detect was not the error itself, but the way it was presented. The fluency of the response created a strong signal of credibility. It sounded like understanding, and in most contexts, that is enough for us to accept it without further inspection, particularly if we have other things to get through.

This is not a uniquely technical problem. The same dynamic consistently shows up in human systems. In meetings, in design reviews, in strategy discussions, the most persuasive contribution is often the one delivered with the greatest clarity and confidence. A well-formed answer carries weight, even when the reasoning behind it has only been partially examined, or not examined at all, which is occasionally treated as a feature rather than a limitation.

We are, as a rule, better at evaluating how something sounds than how it was constructed, which is a useful skill in some domains and a liability in others.

This creates a subtle but important distortion. Fluency becomes a proxy for quality. Confidence becomes a proxy for correctness. The structure of the reasoning, which is the part that actually determines whether the conclusion holds, is left largely unexamined unless something goes obviously wrong, which is a high bar for intervention.

Usually, nothing goes wrong immediately. But give it time.

A confident answer that is directionally correct will often survive long enough to be reinforced. It is repeated, built upon, and integrated into other decisions, gradually acquiring the appearance of stability. By the time any weaknesses become visible, the cost of revisiting the underlying reasoning is significantly higher, which tends to reduce enthusiasm for doing so.

This is why flawed ideas can feel so stable in the early stages. Not because they are robust, but because they are presented in a way that discourages deeper inspection, often quite effectively.

AI made this pattern easier to see because it stripped away some of the usual signals we rely on. There is no track record, no reputation, and no interpersonal dynamic to anchor judgement. All you have is the output, presented without context and without the usual social cues that help us decide whether to trust it.

When that output is fluent but wrong, the gap becomes more obvious.

In human settings, the signal is noisier. Delivery style, status, experience, and prior credibility all influence how an answer is received. A senior person offering a clear recommendation will often be trusted, even if the underlying reasoning is only partially surfaced. A more tentative contribution, even if better reasoned, may struggle to gain traction, particularly if it requires the group to slow down and think more carefully, which is not always a popular suggestion.

None of this is irrational. In complex environments, we rely on heuristics to manage cognitive load, and fluency is a useful one. It allows us to move quickly without interrogating every detail, which would otherwise be impractical and, in some cases, intolerable.

The problem arises when the heuristic becomes invisible.

When we stop distinguishing between an answer that is easy to follow and one that is structurally sound, we begin to optimise for the wrong thing. The system starts to reward clarity of delivery over clarity of thinking, which produces a very specific kind of competence.

This is where many decision failures begin. Not in obvious disagreement, but in quiet acceptance.

A well-presented answer moves through the system without sufficient scrutiny. The reasoning behind it remains partially implicit. Alternative paths are not explored, not because they were considered and rejected, but because the initial answer appeared strong enough to make further exploration unnecessary, which is a surprisingly common threshold.

By the time the gaps become visible, the decision has already propagated, and undoing it requires more effort, more coordination, and often more explanation than anyone is particularly keen to provide.

The corrective move is not to distrust fluent answers, which would be impractical and mildly exhausting, but to treat them as incomplete by default.

Instead of asking only whether an answer makes sense, you ask how it was constructed. What assumptions it depends on. What constraints shaped it. What alternatives were considered and why they were set aside. What would need to be true for it to hold under pressure. These questions are not adversarial, although they can feel that way if they are unfamiliar. They are structural, which is a quieter but more reliable category.

They shift the focus from the surface of the answer to the architecture of the reasoning behind it.

In practice, this does not require turning every discussion into an interrogation, which would be effective but unpopular. A small shift is often enough. Asking someone to walk through their thinking, even briefly, can reveal whether the fluency is backed by substance or simply masking gaps that have not yet been examined, which is useful information to have before committing to a direction.

Over time, this changes what the system rewards.

Instead of valuing answers that sound good, it begins to value answers that can withstand inspection. Confidence becomes less about delivery and more about the ability to make reasoning visible and coherent under scrutiny, which is a more demanding standard and, as a result, a more useful one.

AI did not create this problem. It made it harder to ignore.

Fluency has always been persuasive.

It just isn’t the same thing as understanding.