AI · Part 2 of 2

The operational graph.

In Part 1, we argued that the central design question in AI-native operations is not how to make LLMs smarter about your domain, but where structural authority over operational reality should reside.

This piece describes what the structural answer looks like. What does it mean for an operational graph to be sovereign over the AI's execution frame? What does an operational graph node actually contain? What changes when constraints are constitutive rather than corrective? And what properties emerge, not as policies you configure, but as structural consequences of the architecture?

Procedural versus operational: what the distinction means

The inversion we are proposing is worth stating precisely, because it is easy to mistake for a technical preference when it is actually a structural claim about where correctness comes from.

In a procedural graph, the graph describes a sequence of actions. The AI is placed inside this sequence as a step, receives some context, and reasons about what to do within its slot. The graph is a recipe; the AI adapts when ingredients are missing. Authority over what is correct flows from the AI's reasoning toward the graph's slots.

In an operational graph, the graph describes the current state of the world: every entity, every relationship, every constraint, every instance. What exists, what is committed, what is available, what is forbidden at this moment. The graph is not a sequence. It is a continuously updated model of operational reality. The AI does not navigate it. The graph projects execution context into the AI's frame before any LLM invocation occurs. Authority flows from the graph toward the AI's reasoning.

This is not a UX improvement or a configuration choice. It changes what the AI can and cannot do, and it changes where errors can originate.

In a procedural graph, the LLM decides what to look at, constraints are validated only after the AI proposes an action, and context currency degrades between queries. It is up to the surrounding AI system to apply guardrails and to ensure that the tools and context enrichments are legitimate and coherent.

In an operational graph, by contrast, the graph decides what the LLM sees, and invalid actions are structurally inexpressible: they do not exist in the projected toolset. Context is the current state of the graph at the moment of execution, derived from live operational events. AI and human actions are recorded in the same step instances, with the same structure and the same timestamps.
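The projection-before-invocation point can be sketched in a few lines. Everything here is illustrative: Graph, project, and the lock and storage fields are assumed names, not an actual API.

```python
from dataclasses import dataclass

# Illustrative sketch, not a real API: the graph projects the execution
# frame from its live state before any LLM invocation. The AI never
# queries for context; it receives it.

@dataclass
class Graph:
    state: dict  # continuously updated from operational events

    def project(self, task_id: str) -> dict:
        # The frame is the current state plus only the actions that are
        # valid at this moment, derived from that state.
        snapshot = dict(self.state)
        tools = ["hold"] if snapshot["lock_closed"] else ["hold", "transit"]
        return {"task": task_id, "context": snapshot, "tools": tools}

graph = Graph(state={"lock_closed": True, "storage_pct": 82})
frame = graph.project("step-7")
assert frame["tools"] == ["hold"]  # the graph decided what the AI sees
```

The key property is the order of operations: the frame is computed from live state first, and only then does any reasoning happen inside it.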

What an operational graph node actually contains

The difference becomes concrete when you look at what a node represents in each model.

In a procedural workflow engine, a node is a function. Call this tool, route to this branch, check this condition. The node has no inherent substance beyond its place in a sequence.

In an operational graph, a node exists at the intersection of three dimensions: time, instance identity, and process structure. A node is not "call function X." It is "this barge, at this position, at this moment, within this delivery mission, subject to this lock schedule, consuming storage at this rate." When dispatch creates a task at that node, the execution frame inherits all three dimensions regardless of whether the executor is human, automated, or AI. The executor does not query for them. They constitute the context of the action.
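A node of this kind might be modeled as a single value carrying all three dimensions at once. The field names below (entity_id, mission_id, and so on) are hypothetical, chosen to echo the barge example:

```python
from dataclasses import dataclass

# Hypothetical node shape: a node is a typed intersection of time,
# instance identity, and process structure, not a function call.
# All field names are illustrative.

@dataclass(frozen=True)
class OperationalNode:
    # time dimension
    as_of: str                   # moment of execution, e.g. ISO timestamp
    # instance identity dimension
    entity_id: str               # a specific barge
    position: tuple[float, float]
    # process structure dimension
    mission_id: str              # the delivery mission this node belongs to
    constraints: frozenset[str]  # e.g. lock schedule, storage rate limits

node = OperationalNode(
    as_of="2025-06-01T08:30:00Z",
    entity_id="barge-117",
    position=(51.05, 3.72),
    mission_id="delivery-042",
    constraints=frozenset({"lock_window_08_10", "max_draft_2_8m"}),
)
# When dispatch creates a task at this node, the execution frame inherits
# all three dimensions; the executor never queries for them.
```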

This is why constraints travel with the projection rather than being validated against it after the fact. In a procedural graph, every new capability requires a new configuration binding that someone must get right. In an operational graph, every new capability is a typed extension of the existing structure. Scoping rules propagate through the structural layer. The complexity scales with the graph, not with the number of integration points someone has to maintain manually.

Constraints that are constitutive, not corrective

In his paper From Hallucinations to Categorical Machines, Gorelkin draws a distinction that is central to the architecture we are describing. He separates truth-preserving systems, where correctness is an invariant of the system's own composition at every intermediate step, from truth-filtered systems, which produce ungrounded structure and apply discipline after the fact, or at intervention points along the generative path. A truth-preserving system fails only when its invariants are violated, a detectable event. A truth-filtered system can fail silently whenever the filter's coverage is incomplete.

This distinction maps onto how AI integrates with operations.

When the AI assembles its own context, whether by navigating a knowledge graph or drawing on trained priors, operational constraints are advisory. The LLM might respect them. It might not. External validation can catch some structural errors. It cannot catch the ones it does not know to look for. This is the truth-filtered regime: generation first, discipline after.

When the graph projects context into the AI session, constraints are constitutive. The AI's toolset is derived from the graph's typed structure. Invalid operations are not checked and rejected. They are inexpressible: they do not exist in the projected execution frame. The AI can still reason imperfectly within its frame. But it cannot reason from structurally invalid premises, and it cannot propose actions the graph's typing rules do not admit. This is the truth-preserving regime: not because the LLM has become truth-tracking, but because the structure within which it operates guarantees structural soundness as an invariant.
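One way to make the inexpressibility point concrete: derive the toolset from current state, so that an invalid action is simply absent rather than rejected. The function and tool names below are assumptions for illustration:

```python
# Illustrative sketch: the toolset is derived from typed, current graph
# state, so an invalid operation is not checked and rejected; it never
# exists in the projected frame at all.

def project_toolset(lock_open: bool, storage_full: bool) -> dict:
    tools = {"hold_at_berth": lambda: "holding"}
    if lock_open:
        tools["transit_lock"] = lambda: "transiting"
    if not storage_full:
        tools["discharge_cargo"] = lambda: "discharging"
    return tools

frame = project_toolset(lock_open=False, storage_full=True)
assert "transit_lock" not in frame       # inexpressible, not forbidden
assert list(frame) == ["hold_at_berth"]  # only valid actions exist
```

Note the difference from a validator: there is no rejection path here, because the forbidden action was never constructed.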

The LLM provides reasoning. The graph provides structure. The operational graph must remain sovereign over which actions are possible.

The three-tier structure that makes this reliable

Having established where structural authority should reside, we turn to what makes such a graph reliable in practice. A graph is only as trustworthy as the structural principles that govern it. What we have found, through deployments in fluvial logistics, airport ground handling, and warehousing, is that operational reliability requires three distinct layers with explicit, structure-preserving maps between them.

The Ontology Layer defines the domain: the types of entities, the valid relationships between them, the constraints that make certain configurations meaningful and others impossible. This is not a data schema. It is the domain grammar from which all instances and all executions are derived. The ontology layer does not change with each operation. It changes when the business model changes.

The Instance Layer is the live state of the world: the actual barge, its actual position, its actual committed missions, the actual storage levels at the plant it is supplying. Instance data is continuously updated from operational events. It is typed against the ontology layer, meaning every instance is a valid expression of the domain grammar, not a free-form record.

The Execution Layer is where dispatch happens: step instances created from process templates, projected into the execution frame of the assigned actor (whether human or AI) with context derived from the instance layer and constraints derived from the ontology layer. The executor acts within this projected frame. It does not reconstruct the frame.

What holds this together is not just the presence of three layers but the structure-preserving maps between them. Changes in the ontology layer propagate typed constraints downward to instances and to execution frames. Events in the execution layer propagate state updates upward to instances. The graph's integrity is maintained across all three layers by rules about what projections and dependencies are valid. This is what we mean by graph governance: not access control, but the formal coherence of a system where planning and execution share the same typed structure.
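A toy rendering of the three layers and one structure-preserving map between them, with every name invented for the example:

```python
from dataclasses import dataclass, field

# Ontology layer: the domain grammar -- which types and relations are valid.
ONTOLOGY = {
    "Barge": {"relations": {"assigned_to": "Mission"}},
    "Mission": {"relations": {}},
}

# Instance layer: live state, typed against the ontology.
@dataclass
class Instance:
    type_name: str
    id: str
    relations: dict = field(default_factory=dict)

    def __post_init__(self):
        if self.type_name not in ONTOLOGY:
            raise TypeError(f"unknown type: {self.type_name}")

def relate(a: Instance, rel: str, b: Instance) -> None:
    # Structure-preserving map: only ontology-sanctioned relations propagate.
    expected = ONTOLOGY[a.type_name]["relations"].get(rel)
    if expected != b.type_name:
        raise TypeError(f"{a.type_name} cannot be {rel} {b.type_name}")
    a.relations[rel] = b.id

# Execution layer: step instances projected from instance state.
def project_step(actor: str, instance: Instance) -> dict:
    return {"actor": actor, "context": {"id": instance.id, **instance.relations}}

barge = Instance("Barge", "barge-117")
mission = Instance("Mission", "delivery-042")
relate(barge, "assigned_to", mission)
step = project_step("ai-agent", barge)
```

The sketch compresses the real difficulty, but it shows the shape: the ontology constrains the instances, and the execution frame is derived from them rather than assembled by the executor.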

The right question for the industry is not how to make LLMs reason better about operational domains. It is how to structure operational data so that AI execution contexts are grounded in typed, live, layer-coherent graphs.

When that structure exists, the AI's role becomes well-defined: reason within the projected frame, produce one outcome, return authority to the dispatch layer. When that structure does not exist, no amount of training or retrieval engineering fully compensates for it.

What follows from the inversion

When the graph is sovereign and sessions are derived, certain properties emerge. Not as policies you configure, but as structural consequences.

Intent becomes explicit. The graph resolves what the task is about before the AI session exists. The platform provides intent. The AI provides reasoning. The agent never decides its own objectives.

Every AI interaction becomes deterministic in its framing. The dispatch layer generates the step; the process template defines it. There are no ad hoc interventions. Operations teams tolerate change when they expect it. They reject it when it appears from nowhere.

Compliance becomes structural. The AI's available actions are projected from typed constraints, not assembled from a tool catalogue. An invalid operation is not a validation error; it is an action that does not exist in the first place.

Actions route to the agent closest to the event. Position in the graph determines who handles the step. The person nearest to the problem has context that no amount of data can replace. AI supports local judgment. It does not override it.

Outcomes are atomic. One interaction produces one result. After each step, the dispatch layer re-evaluates. The AI never chains autonomous decisions. A human can understand and accept a single proposal. A sequence of five automated decisions feels like loss of control, even if each individual decision was correct.

Everything is auditable by construction. AI and human actions are recorded in the same step instances, with the same structure, the same timestamps. There is no separate AI log to reconcile. One trail. One story.

And every AI invocation is owned by a human. The delegation chain is a graph relationship. Accountability flows upward to a person who can be named.

These are not features. They are consequences of building on a substrate where the operational structure is native to the execution model.
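Several of the consequences above can be shown in one sketch: each interaction produces exactly one recorded outcome before authority returns to dispatch, human and AI actions share the same record shape in one trail, and every record names an accountable owner. All identifiers are invented for the example.

```python
import itertools
from dataclasses import dataclass

@dataclass(frozen=True)
class StepRecord:
    step_id: int
    actor: str   # "human:…" or "ai:…" -- same record shape for both
    owner: str   # the named person accountable for the action
    outcome: str
    ts: str

TRAIL: list[StepRecord] = []
_ids = itertools.count(1)

def execute_step(actor: str, owner: str, decide) -> StepRecord:
    outcome = decide()  # exactly one outcome per interaction
    rec = StepRecord(next(_ids), actor, owner, outcome, "2025-06-01T08:30Z")
    TRAIL.append(rec)   # one trail, one story
    return rec          # authority returns to the dispatch layer

execute_step("ai:planner", "owner:j.smith", lambda: "propose_lock_slot")
execute_step("human:dispatcher", "owner:j.smith", lambda: "accept_proposal")
assert {r.owner for r in TRAIL} == {"owner:j.smith"}
```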

What we claim

We do not claim this architecture makes language models truth-preserving. Nothing does. The transformer's native invariants are distributional, not truth-tracking. Gorelkin is right that this is an architectural fact, not an engineering shortfall to be patched.

What we claim is narrower. Within the operational domain defined by the graph, structural violations are inexpressible. Not because we filter the AI's outputs, but because the AI's operational frame is projected from a typed graph whose constraints are constitutive. The AI reasons within a structurally sound space. The space is the graph's responsibility. The reasoning is the LLM's.

We are also not claiming this is easy to build. Structuring an operational domain into a coherent three-tier typed graph requires understanding the domain, its constraints, and its change patterns well enough to express them formally. That investment is real. But it is the right investment, because it is the only one we have found that gives AI agents a structural foundation they can be trusted to operate within.

The era of handoffs is ending

The era of handoffs, between planning and execution, between automation and AI, between the knowledge graph and the agent session, is ending. Linear is showing what replaces it for product development teams, in the vertical where AI adoption is most advanced. Forge is showing what replaces it inside the model. We are building what replaces it in operations, where the stakes are physical and the constraints are not forgiving of structural errors.

The shared insight is that context must become execution. But context does not become execution by itself. It becomes execution when it is grounded in a structure that governs what is possible, what is current, and what is owned. That structure is the operational graph. Getting it right is not an AI problem. It is a data architecture problem that makes AI trustworthy.

The graph sets the beat. The AI plays within it. The team hears something they can trust.