AI · Part 3 of 3

The system topology: linearity over a normalised graph

By Frédéric Husser

Most architectures for operational tooling force a choice. Linear workflow engines, the kind that power ticket systems and state machines, deliver auditability and determinism. Every case is a sequence; every sequence has a current state; every state transition is recorded. You can tell a regulator what happened and when. You can tell an AI where in the process it is. What you cannot easily do is express the real structure of operational work: a step that allocates one of several interchangeable resources, a constraint that applies across many processes, a dependency that bridges two workflows running on different cadences.

General orchestration frameworks, the kind that power complex workflow engines with parallel branches, conditional forks, and dynamic sub-processes, express this structure well. You can model any topology. You can handle any exception. What you lose is the audit trail and the determinism. At any given moment, a case may be in a superposition of branches, with sub-processes running out of order and exceptions handled by ad-hoc paths. Regulators find these systems hard to inspect. AI agents find them hard to reason within.

Teams pick one and work around the other. Ticket systems get extended with custom fields and external dashboards to capture the relational structure they cannot express natively. Orchestration frameworks get wrapped in audit tooling to reconstruct what happened after the fact. Neither approach scales well, and neither produces a substrate AI can operate within safely.

The topology that resolves this is not a third option along the same axis. It is a different architecture.


The topology, stated precisely

An AI-ready operational system has two distinct layers, with a set of operators that bridge them.

The first layer is linear processes. Every workflow is a sequence of steps. Each step has a defined predecessor, a defined successor, a set of preconditions, and an owner. The state of a process at any moment is fully determined by its step history. There are no parallel branches at this layer, no conditional forks, no dynamic sub-processes. Linearity is a constraint on the control flow, and it is the constraint that produces auditability and AI-readiness together.

The second layer is a normalised operational graph. Entities, relationships, constraints, and temporal state live here. A barge is a node. A mission is a node. A constraint — that this barge cannot carry this cargo on this route — is an edge with typed semantics. Resources, people, processes, and locations all live as typed nodes with typed relationships. The graph is normalised in the database sense. Relationships are explicit rather than embedded. Constraints are declared once and referenced by many processes.
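A sketch of the graph layer, under the same caveat that the names are invented for illustration: typed nodes identified once, typed edges carrying the semantics, and constraints declared as edges so that anything referencing the node inherits them.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    id: str
    type: str  # e.g. "barge", "mission", "cargo"

@dataclass(frozen=True)
class Edge:
    src: str
    type: str  # e.g. "cannot_carry", "assigned_to"
    dst: str

@dataclass
class Graph:
    nodes: dict[str, Node] = field(default_factory=dict)
    edges: list[Edge] = field(default_factory=list)

    def add(self, node: Node) -> None:
        # Normalisation: each entity is represented exactly once, by identity.
        self.nodes[node.id] = node

    def relate(self, src: str, type: str, dst: str) -> None:
        # Relationships are explicit edges, not embedded copies.
        self.edges.append(Edge(src, type, dst))

    def neighbours(self, src: str, type: str) -> list[Node]:
        return [self.nodes[e.dst] for e in self.edges
                if e.src == src and e.type == type]
```

A constraint edge such as `("barge-1", "cannot_carry", "cargo-acid")` is stated once; any mission that references `barge-1` by identity sees it automatically, with nothing restated per mission.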

The operators bridge the two layers. A linear step can expand into parallel dispatch across a set of graph-resolved resources using operators like foreach, filter, any, and all. A step that says "assign a driver to this mission" expands, through foreach, into a dispatch over the set of currently available drivers matching the mission's requirements, then collapses back into a single step outcome via any (first match) or all (all candidates notified). The linearity of the process is preserved. The parallelism lives in the expansion, not in the control flow.
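The driver-assignment expansion above can be sketched in a few lines. These operator implementations are assumptions made for illustration; the essential shape is expand, act, collapse, with the process seeing only the single collapsed outcome.

```python
def foreach(candidates, action):
    """Expand one linear step into a dispatch over each candidate."""
    return [action(c) for c in candidates]

def filter_by(candidates, predicate):
    """Narrow the graph-resolved set before dispatch."""
    return [c for c in candidates if predicate(c)]

def any_of(results):
    """Collapse: the first successful outcome becomes the step outcome."""
    return next((r for r in results if r is not None), None)

def all_of(results):
    """Collapse: the step succeeds only if every dispatch succeeded."""
    return results if all(r is not None for r in results) else None

# "Assign a driver to this mission": expand over available drivers,
# collapse via any_of. The process records one outcome, not branches.
drivers = [{"name": "a", "available": False},
           {"name": "b", "available": True}]
available = filter_by(drivers, lambda d: d["available"])
outcome = any_of(foreach(available, lambda d: d["name"]))  # → "b"
```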

The source of truth is the graph. The audit trail is the processes. The AI execution frame is the intersection: at each step, the AI inherits its context from the graph and its intent from the process specification.


Why linearity at the process level

Linearity is often treated as a limitation. Real work, the argument goes, is not linear: it has parallel tracks, conditional branches, exceptions that loop back. A workflow engine that enforces linearity forces you to flatten the real structure into a sequence that does not reflect how the work actually runs.

That argument is half right. Real work has relational structure. What it does not need is non-linear control flow at the process level, provided the relational structure is modelled somewhere else. When the graph carries the relationships, dependencies, and constraints, the process can stay linear and still express what is happening. When the graph does not carry them, the workflow has to encode them as branches, and the workflow becomes unauditable.

Linearity gives three properties that AI-ready systems cannot do without. The first is determinism of state. At any point in a process, exactly one step is current, and the successor is determinable from the specification. An AI that needs to know where it is in the process can read the answer directly. In a branching workflow, the question has many answers simultaneously, and the AI has to reason about which one applies.

The second is determinism of audit. Every transition is a record. Every record belongs to a specific case. Reconstructing what happened is reading the transitions in order, not traversing a dependency graph of concurrent sub-processes. This matters for compliance, and it matters equally for the team's own ability to understand what the system did.

The third is determinism of AI engagement. A linear step has a clearly defined entry context, output shape, and downstream effect. An AI participating in that step receives a bounded operational envelope. A parallel or branching step does not provide this. The AI has to reason about which branch it is in, which variant of the context applies, what the downstream effects will be given the other branches' states. The AI ends up doing orchestration, which is not what it is good at and not what we should ask of it.


Why normalisation at the graph level

Linearity at the process level works only if the relational complexity of operations is absorbed somewhere else. Normalisation is the design principle that puts it in the right place.

In a normalised operational graph, a resource like a barge is represented once. Its properties, current state, commitments, constraints, and history are attached to the single node. A mission that allocates the barge does not describe the barge; it references it by identity and inherits whatever is currently true of it. A constraint on what the barge can carry lives as an edge on the barge node, and it applies automatically to any mission that references the barge, without having to be restated per mission.

This is the relational algebra principle applied to operational modelling. SQL queries are linear at the statement level, but their expressive power comes from the normalised schemas they operate over. Nobody argues that SQL needs branching control flow to handle complex queries; the complexity lives in the join structure of the schema, and the statements stay declarative and linear. The same trade is available for operations: keep the processes linear, put the relational structure in the graph, and bridge the two with operators.

A worked example grounds this. A mission requires two tugboats from a pool of eight available, with a constraint that they must be stationed within a certain distance of each other. In a branching workflow, this would be a multi-step process with a nested selection loop, exception handlers for the distance constraint, and back-tracking if the first pair fails. In the linear-plus-graph topology, it is a single step with a foreach over the pool, a filter on the distance edge, and an any-matching-pair operator that returns the first valid combination. The linearity of the step is preserved. The combinatorial logic lives in the graph operators. The audit trail records one step outcome, not a tree of partial branches. The AI, if one is involved in the selection, operates on the filtered candidate set; it does not reason about the branching structure of the workflow, because there is no branching structure to reason about.
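The tugboat step can be sketched directly. The positions and the distance threshold below are invented for illustration; what matters is that the combinatorics are a pure function over the candidate set, and the step records exactly one outcome.

```python
from itertools import combinations

# Hypothetical station positions along a waterway, in kilometres.
tugboats = {"t1": 0.0, "t2": 3.0, "t3": 9.0, "t4": 4.5}
MAX_DISTANCE = 2.0

def any_matching_pair(pool: dict[str, float], max_distance: float):
    """First pair of tugboats stationed within max_distance of each other.

    The search replaces what a branching workflow would express as a
    nested selection loop with back-tracking and exception handlers."""
    for a, b in combinations(sorted(pool), 2):
        if abs(pool[a] - pool[b]) <= max_distance:
            return (a, b)  # the single step outcome
    return None            # the step fails deterministically

pair = any_matching_pair(tugboats, MAX_DISTANCE)  # → ("t2", "t4")
```

Whether a pair is found or not, the audit trail holds one record: the step, its candidate set, and its outcome.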



Why both are needed for humans and AI together

An operational system that delivered only linearity would be brittle under human reality. Operations have substitutions, partial dependencies, informal exceptions, and temporal constraints that interlock. Humans handle these through judgment and context. A linear process with no graph underneath would force humans to encode every exception as a branch, and the processes would deteriorate into unreadable variant sequences within months.

A system that delivered only graph flexibility would be unauditable, and therefore AI-hostile. An AI operating over a general graph with no process-level determinism has to reconstruct what the current operational intent is at every step. The reconstruction can be wrong. Even when it is right, the system has no single story to tell a regulator, an auditor, or a new operator joining the team. The graph alone is a powerful modelling tool; it is not by itself a process environment.

The combination produces a system both sides can work within. Humans get the expressive flexibility to model real operations in the graph. AI gets the structural determinism to operate predictably within the processes. Auditors get one record to inspect. The system scales without the trade-off that forces most teams to pick one side and tolerate the other's weakness.

An implication worth naming explicitly is that this topology does not emerge from combining a workflow engine with a knowledge graph at the application layer. Most current attempts to build AI-native operational tooling take this shape: start with a workflow engine, add a knowledge graph as a data source, use the workflow to orchestrate AI steps that query the graph. The stitching between the two is where the determinism breaks down. The workflow engine does not know what is in the graph. The graph does not know which processes depend on it. The AI has to mediate between them, and the mediation is exactly the context reconstruction that produces the failures described in the previous posts in this series.

The topology we are describing is a single substrate with two layers, not two systems stitched together. Linear processes are projections over the graph. The graph's state generates dispatch events. The operators are native to the substrate, not bolted on. The AI does not mediate between two systems; it operates within one.


Where this leaves the reader

If you are evaluating or designing an AI-ready operational system, two questions matter more than most of the feature comparisons you are probably being asked to do.

The first is about sources of truth. What is the source of truth in the system, and what operates over it? If the source of truth is a database and the processes are executed by a separate workflow engine, and the AI sits above both and queries both, you are running the stitched architecture. Integration costs compound, and determinism degrades at every seam. If the source of truth is a graph and the processes are projections over it, and the AI operates within step envelopes projected from the graph, you are running a different architecture, and the difference is not a matter of performance but of what the system can structurally guarantee.

The second is about context inheritance. What does the AI actually inherit when it takes a step? If the answer is whatever the prompt contains, the AI is reconstructing context every time, and reconstruction errors are inevitable. If the answer is a bounded envelope projected by the dispatch layer from the graph and typed by the process specification, the AI has a substrate that can carry it.
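One way to make the contrast concrete is to sketch what a projected envelope might look like; the names and the projection function here are hypothetical, not the system's API. The key property is that only the context keys the step specification declares are projected from the graph, so nothing leaks in and nothing has to be reconstructed from a prompt.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StepEnvelope:
    intent: str          # typed by the process specification
    context: dict        # projected from the graph, not reconstructed
    output_schema: dict  # the shape the step outcome must take

def project_envelope(graph_context: dict, step_spec: dict) -> StepEnvelope:
    """Project a bounded envelope: only declared keys cross the boundary."""
    needed = {k: graph_context[k] for k in step_spec["context_keys"]}
    return StepEnvelope(step_spec["intent"], needed,
                        step_spec["output_schema"])
```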

Neither question is answered by a feature list. Both are answered by the architecture underneath. For most systems on the market today, the honest answers are uncomfortable ones. That is not an indictment of those systems; it is a structural fact about what it takes to make AI participation in operations work safely.


What we are not claiming

We are not claiming we have solved all the design questions this topology raises. The ontology question — about how an operational model gets designed and evolved over time — is open, and the debate around intentional versus emergent ontologies is one we have views on but have not yet fully articulated in public. The measurement question — about how to tell whether a given operational model is well-structured before running AI over it — is open. The human-factors question — about how operators actually build trust in a system that schedules their attention rather than waiting for it — is open. Each of these deserves its own treatment.

What we claim is narrower and, we believe, defensible. Linearity over a normalised graph is a better starting point than the alternatives for systems where AI is expected to participate in operational work. It trades upfront structural clarity for ongoing reliability, and for work where the stakes are high and the exceptions are frequent, the trade is the right one. The architecture pays for itself in what it no longer has to work around.

References

  1. Mikhail Gorelkin. From Hallucinations to Categorical Machines.
  2. Linear. Issue tracking is dead.
  3. How humans and AI actually complete a task together; Declarative processes and the three planning modes.