Chapter 3: Fog and Friction in the Age of AI

[!idea] Modern Maxim: A polished answer may still be a broken answer.

The next problem is not speed alone, but what speed does to certainty itself. Once systems begin producing answers at machine pace, the fog of war does not disappear. It changes texture. It becomes cleaner-looking, more legible, more persuasive, and in some cases more false.

That is where we turn next. The old fog remains. But now it glows.

What Fog and Friction Always Meant

For as long as people have written honestly about war, they have admitted an irritating truth: much of it is uncertainty wrapped in urgency. Commanders do not see the whole field. Reports arrive late, or wrong, or both. Units misunderstand intent. Weather interferes. Equipment fails. Maps lie by omission. Men grow tired. Plans that looked precise in the briefing room go lopsided the first time they collide with terrain, fear, and the enemy's determination not to cooperate.

This is what older military thinkers meant by fog and friction. Fog is uncertainty: the haze that prevents anyone from seeing the situation in full. Friction is the stubborn resistance of reality: the thousand little things that make simple plans hard and hard plans absurd.

Neither has gone away.

What AI Changes

Modern institutions, being modern institutions, are forever tempted to believe they have finally escaped these ancient burdens. Better sensors. Better communications. Better software. Better models. Better dashboards. Surely now the haze will lift. Surely now war can be seen clearly, managed rationally, optimized by machine.

It is an attractive fantasy. It also happens to be wrong.

Artificial intelligence does not eliminate fog or friction. It changes the way they appear.

AI changes fog by painting uncertainty over with confidence scores, fluent prose, and crisp interfaces. It converts old confusion into something more dangerous than confusion: synthetic clarity.

AI changes friction by moving more of it into the seams between systems: corrupted data, stale models, conflicting labels, broken integrations, overloaded networks, missing metadata, jammed communications, degraded GPS, and software that performs beautifully until the enemy starts lying to it.

The result is not a cleaner battlefield. It is a battlefield in which uncertainty can look cleaner and breakdown can hide deeper in the machinery.
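
This is not only rhetoric; the gap between stated confidence and actual reliability can be measured. The sketch below is a minimal illustration with invented numbers, not a fielded tool: it compares a model's stated confidence with its accuracy on examples whose outcomes were later checked. When the two diverge, synthetic clarity has a number attached to it.

```python
# Minimal sketch: measuring the gap between a model's stated confidence
# and its measured accuracy. All numbers here are invented for illustration.

def calibration_gap(confidences: list[float], correct: list[bool],
                    bins: int = 5) -> float:
    """Expected calibration error: how far stated confidence strays from
    measured accuracy, weighted by how often each confidence level occurs."""
    total, gap = len(confidences), 0.0
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        gap += (len(idx) / total) * abs(avg_conf - accuracy)
    return gap

# A model that keeps saying "0.9" while being right half the time
# is polished fog: the gap below comes out near 0.4.
stated = [0.92, 0.88, 0.95, 0.91, 0.90, 0.93]
right = [True, False, True, False, False, True]
print(f"calibration gap: {calibration_gap(stated, right):.2f}")
```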

Fog Now Arrives Polished

A human analyst can be wrong in familiar ways: tired, biased, rushed, overconfident, or politically cornered. But humans often leak clues when they are uncertain. They hesitate. They hedge. Their stories wobble at the edges. They betray strain.

A model may do none of this. A model may be wrong with a straight face.

That matters because commanders are not merely collecting information. They are deciding and acting under pressure. In such an environment, a clean lie is often more dangerous than a messy truth. A hesitant report invites scrutiny. A polished report invites use.

The machine does not need to be malicious to create danger. It only needs to sound settled when the world is not.

The modern battlefield is crowded with sensors and feeds. Satellites observe from above. Drones circle. Radios pulse. Thermal imagery, intercepted signals, logistics logs, open-source feeds, cyber telemetry, and battlefield reports pour into systems faster than human staffs can absorb them. This flood of information creates a seductive belief that more inputs must mean more understanding.

But information and understanding are not twins.

Sensors do not explain themselves. A thermal signature is not intent. A cluster of vehicles is not automatically an assault. A period of silence is not necessarily concealment. An intercepted phrase is not a plan. A detected pattern may be preparation, deception, or coincidence. Every system that promises clarity depends somewhere on assumptions about what the data means and how the world works.

War is rude to assumptions.

A model trained on clean, labeled imagery may perform impressively in controlled conditions and then stumble in smoke, darkness, mud, camouflage, damaged infrastructure, bad weather, or unfamiliar equipment. A language model asked to synthesize reports may produce a coherent summary that quietly flattens ambiguity, drops crucial caveats, or invents connective tissue where none exists. A predictive system may treat a rare event as noise just before that rare event becomes decisive.
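
Part of the defense is mundane: checking whether today's inputs still resemble the data the model learned from. What follows is a minimal sketch, assuming summary statistics were recorded at training time; the feature names and thresholds are invented, and real systems would use stronger statistical tests, but the principle is the same.

```python
import statistics

# Sketch: a crude drift check. Assumes the mean and standard deviation of
# each input feature were recorded at training time (values invented here).

TRAINING_STATS = {
    "thermal_contrast": (0.62, 0.08),
    "cloud_cover":      (0.18, 0.12),
}

def drifted_features(batch: dict[str, list[float]],
                     threshold: float = 3.0) -> list[str]:
    """Return features whose current mean sits more than `threshold`
    training standard deviations from the training mean."""
    flagged = []
    for name, values in batch.items():
        train_mean, train_std = TRAINING_STATS[name]
        shift = abs(statistics.fmean(values) - train_mean) / train_std
        if shift > threshold:
            flagged.append(name)
    return flagged

# Smoke and darkness push thermal contrast far below training conditions.
today = {"thermal_contrast": [0.21, 0.19, 0.25],
         "cloud_cover":      [0.22, 0.30, 0.15]}
print(drifted_features(today))  # -> ['thermal_contrast']
```

A flag like this does not say the model is wrong. It says the model has left the world it knows, which is exactly when its polish deserves the least trust.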

The most dangerous error in war is not obvious nonsense. It is plausible nonsense that moves through the chain of command wearing the costume of competence.

Friction Now Lives in the Seams

Classical friction involved roads, boots, horses, fuel, weather, timing, morale, and the ordinary sabotage of intention by reality. Those still matter. The mud has not retired. Engines still break. Batteries still die at the worst possible moment. Junior officers still mishear orders. Convoys still get lost because reality enjoys irony.

But now friction also lives in the connection points between digital systems.

One system classifies. Another prioritizes. A third translates. A fourth routes. A fifth recommends. Each may work well enough in isolation. Put them together in a live environment, with damaged infrastructure and an intelligent adversary, and suddenly the whole orchestra begins sounding like a dishwasher full of forks.

A great deal of modern military fragility hides not in the main system but in the seams. That is where the gears grind. A broken integration, mislabeled field, missing timestamp, stale model, overloaded network, or corrupted handoff can distort action long before anyone notices the root cause.
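
Much of that seam friction is detectable with unglamorous checks at every handoff. Below is a sketch of one such guard, with invented field names and limits: a message is distrusted until it proves itself complete and fresh.

```python
from datetime import datetime, timedelta, timezone

# Sketch of a guard at one seam between systems. Field names and limits
# are invented; the point is that every handoff is checked, not trusted.

REQUIRED_FIELDS = {"source_id", "observed_at", "position", "classifier_version"}
MAX_AGE = timedelta(minutes=10)

def seam_check(message: dict) -> list[str]:
    """Return reasons to distrust this handoff (an empty list means pass)."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - message.keys()]
    if "observed_at" in message:
        age = datetime.now(timezone.utc) - message["observed_at"]
        if age > MAX_AGE:
            problems.append(f"stale observation: {age} old")
    return problems

report = {
    "source_id": "uav-07",
    "observed_at": datetime.now(timezone.utc) - timedelta(hours=2),
    "position": (48.45, 35.02),
    # classifier_version absent: which model produced this label?
}
print(seam_check(report))  # flags the missing field and the two-hour-old report
```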

This matters because decision advantage depends not merely on having tools, but on having tools that continue to function under stress, ambiguity, and deliberate interference. The enemy does not have to destroy your system outright. It may be enough to make the system doubt itself, overload itself, mistrust its own inputs, or point confidently in the wrong direction.

That is friction in the age of AI: not only resistance in the field, but resistance hidden in the architecture that claims to explain the field.

Doctrine: Command Under Synthetic Clarity

If fog can now look polished and friction can now hide in digital seams, command doctrine has to adjust.

Commanders must treat every tool that claims to reduce uncertainty as part of the uncertainty. A model output, dashboard, summary, or confidence score is not the end of analysis. It is one more object to interrogate.

Commanders must distinguish between raw observation, interpreted output, and machine-generated conclusion. Those are not the same thing, and a force that collapses them will soon confuse formatted confidence with reality.

Commanders must train staffs to ask not only whether an answer is useful, but how it was produced, what assumptions sit beneath it, what data is missing, and how an adversary might exploit the system's habits.

Commanders must rehearse degraded operations. An AI-enabled force that cannot function when its models fail, its networks degrade, or its digital nervous system is disrupted is not advanced. It is brittle.
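
Some of that rehearsal can be automated. The sketch below, in the spirit of fault injection, assumes a hypothetical recommend() pipeline; during exercises, a wrapper randomly withholds the model so that fallback procedures are exercised rather than merely briefed.

```python
import random

# Sketch of exercise-time fault injection. recommend() and
# fallback_estimate() are hypothetical stand-ins, not any real pipeline.

class ModelUnavailable(Exception):
    pass

def recommend(situation: str) -> str:
    return f"model recommendation for {situation!r}"

def fallback_estimate(situation: str) -> str:
    return f"staff manual estimate for {situation!r}"

def drill_wrapper(situation: str, outage_rate: float = 0.3) -> str:
    """During drills, fail some fraction of calls on purpose."""
    if random.random() < outage_rate:
        raise ModelUnavailable("injected outage: drill condition")
    return recommend(situation)

def decide(situation: str) -> str:
    try:
        return drill_wrapper(situation)
    except ModelUnavailable:
        # The unit must still produce a decision without the machine.
        return fallback_estimate(situation)

for _ in range(5):
    print(decide("axis of advance, sector K"))
```

A force that never sees the injected outage in training will meet the real one in contact.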

Commanders must red-team their own certainty. If the system seems unusually clean, unusually confident, or unusually aligned with what the staff already hoped to be true, that is a reason for scrutiny rather than comfort.

The standard is simple: the human role is not to do every task manually. The human role is to remain responsible for meaning.

The Human Judgment Checkpoint

That responsibility becomes concrete in questions.

Before action, commanders and staffs must be able to ask:

  • What is the source of this conclusion?
  • What assumptions sit beneath it?
  • What data is missing?
  • What would this system likely miss in degraded conditions?
  • What would an adversary do if they knew we trusted this output?
  • What signs would tell us that the model is confidently wrong?
  • What happens if we delay long enough to cross-check?
  • What happens if we do not?

Those are not ornamental questions. They are the price of retaining command.
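
One can even imagine the checkpoint made mechanical. A hypothetical sketch, not a prescription: a release gate that refuses to pass a machine recommendation forward until every question has a recorded answer, turning interrogation into an audit trail.

```python
# Hypothetical sketch: a release gate that blocks a machine
# recommendation until every checkpoint question has a recorded,
# non-empty answer. Question wording is paraphrased from the list above.

CHECKPOINT = [
    "source of this conclusion",
    "assumptions beneath it",
    "data that is missing",
    "likely misses in degraded conditions",
    "adversary response if this output is trusted",
    "signs the model is confidently wrong",
    "cost of delaying to cross-check",
    "cost of not delaying",
]

def release(recommendation: str, answers: dict[str, str]) -> str:
    """Pass the recommendation forward only if the checkpoint is complete."""
    unanswered = [q for q in CHECKPOINT if not answers.get(q, "").strip()]
    if unanswered:
        raise PermissionError(f"checkpoint incomplete: {unanswered}")
    return recommendation

answers = {q: "recorded by staff" for q in CHECKPOINT}
answers["data that is missing"] = ""  # one blank answer blocks release

try:
    release("recommended strike package", answers)
except PermissionError as err:
    print(err)  # -> checkpoint incomplete: ['data that is missing']
```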

This is why training must include degraded operations, model-failure drills, red-team deception, and explicit practice in challenging machine outputs. It is not enough to teach people how to use a tool. They must learn how the tool fails, how the enemy might exploit it, and how to continue operating when the pretty interface stops pretending it is omniscient.

The point is not to distrust every model reflexively. The point is to distrust ease.

In war, an easy answer deserves extra suspicion.

Closing Reflection

The enduring burden of command is not to eliminate uncertainty. It is to act responsibly within it.

That was true when orders moved by courier and maps were incomplete. It remains true when orders move at network speed and maps refresh in real time. The technologies have changed. The moral burden has not.

The commander who forgets the existence of fog becomes reckless. The commander who forgets the existence of friction becomes fragile. The commander who believes a machine has abolished both becomes dangerous.

That is the third doctrine of this book: every system that claims to tame uncertainty must itself be treated as a source of uncertainty until proven otherwise under stress.

Chapter Takeaway

AI does not remove fog and friction. It repackages fog as synthetic clarity and relocates friction into the digital seams of command.