Declarative processes and the three planning modes
By Frédéric Husser
An operations team documents its process. A delivery mission arrives by email, gets logged in a spreadsheet, dispatched to a driver, tracked through status updates, and closed on confirmation. The process is a checklist. Everyone knows the steps. New team members learn by shadowing. The process works, and has worked for years.
Then the team adds an AI assistant. The assistant helps with step four: assigning the mission to a driver, taking into account current workload and recent assignments. To make the assistant useful, someone writes a prompt. The prompt describes what step four is, what constraints apply, what output format is expected, what to do in edge cases.
Now the team maintains two things. The original checklist, which defines the human-readable process. And the prompt, which defines what the AI believes the process to be. Over time, the two drift. The checklist gets updated when the team discovers a new edge case. The prompt gets updated when the AI produces a bad output. The two edits do not always reach each other. After six months, no single document captures what the process actually is.
This is an imperative process with an AI patched into it. The checklist tells humans what to do. The prompt tells the AI what to do. No structural specification tells the system what the work is for. Intent is fragmented across artefacts, and the AI is one more artefact in the drift.
The imperative pattern
Imperative processes are the default for most operations. Someone writes down the steps, someone executes them in order, exceptions are handled in the margins by judgment. The imperative style is comfortable because it mirrors how humans naturally describe work to each other. It is efficient when the work is stable, the team is small, and the exceptions are rare.
It breaks when any of those conditions fail. When work volumes grow, imperative processes become hard to distribute across team members, because the implicit knowledge that makes the checklist work does not scale with headcount. When exception rates rise, imperative checklists grow into elaborate branching documents that no one reads end to end. When AI is introduced, imperative processes cannot be extended natively. The AI has to be given a parallel, prompt-based description of the process, and the team ends up maintaining both.
The underlying issue is that imperative processes encode two different things as one: the sequence of actions, and the intent behind them. A human reading the checklist can usually infer the intent from the actions, because humans have domain context. An AI reading the same checklist cannot. An AI given a prompt has only the intent the prompt author remembered to articulate, which is rarely the complete picture.
Declarative as a different starting point
Declarative processes begin from a different question. Not what steps do we execute, but what outcomes, recurrences, and bounds define this work. A declarative specification describes the shape of the work. The system derives the steps from the shape, rather than the shape being implicit in a pre-written sequence.
Three shapes of declaration correspond to three temporal patterns in how work arrives, and together they cover the planning space we have encountered across logistics, ground handling, warehousing, and lab operations. We have not yet found an operational planning problem that does not reduce to a composition of these three.
Order-driven declarations describe work that arrives as discrete events. A delivery request, a maintenance ticket, a client brief, a barge clearance. The declaration names the outcome the work must produce, the workflow that takes it from arrival to completion, and the constraints it must satisfy along the way. The system dispatches the steps as their preconditions are met. Nothing starts until the event arrives; once it arrives, the workflow runs to completion along a deterministic path.
Demand-driven declarations describe work that recurs on a schedule. Daily shift assignments, weekly team reviews, monthly storage replenishment, quarterly audits. The declaration names the recurrence pattern, the resources each cycle consumes, and the outcome each instance must produce. The system scaffolds steps ahead of time and triggers execution on schedule. The important property is that the work is provisioned in advance, not reactive; demand planning is what keeps the operation running smoothly, not what rescues it once it has fallen behind.
Capacity-driven declarations describe work that must fit into a finite resource envelope. A fleet's available hours, a team's weekly capacity, a plant's storage volume, a lock's daily throughput. The declaration names the bounds, the allocation policy, and the conflict resolution rule when demand exceeds capacity. The system allocates incoming work against the envelope and surfaces conflicts before they become deviations. Capacity declarations are what make the other two types safe; without them, orders and demand accumulate into overload.
These three shapes are not three types of workflow. They are three ways of specifying what a process is for. Each produces a specific pattern of dispatch events, deterministically, from the current state of the operational graph.
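To make the three shapes concrete, here is a minimal sketch of them as plain data structures. Everything here is illustrative, not a real API: the class and field names are assumptions chosen to mirror the descriptions above.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three declaration shapes.
# All names are illustrative assumptions, not an existing schema.

@dataclass
class OrderDriven:
    """Work arriving as discrete events: outcome, workflow, constraints."""
    trigger_event: str                 # e.g. "delivery_request_received"
    outcome: str                       # e.g. "mission_closed_on_confirmation"
    workflow: list[str]                # steps dispatched as preconditions are met
    constraints: list[str] = field(default_factory=list)

@dataclass
class DemandDriven:
    """Work recurring on a schedule: recurrence, resources, outcome."""
    recurrence: str                    # e.g. "every weekday 06:00"
    resources_per_cycle: dict[str, int]
    outcome: str                       # what each instance must produce

@dataclass
class CapacityDriven:
    """Work fitting a finite envelope: bounds, policy, conflict rule."""
    bounds: dict[str, int]             # e.g. {"driver_hours_per_week": 400}
    allocation_policy: str             # e.g. "earliest_deadline_first"
    conflict_rule: str                 # what happens when demand exceeds capacity
```

Note what each shape names: the order-driven one names an event and a workflow, the demand-driven one names a recurrence, the capacity-driven one names an envelope. None of them names a sequence of imperative steps for a human to follow.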
Why this matters for AI
An imperative process integrates AI by embedding a prompt at a step. The prompt carries the intent. The prompt is what the AI reads. The prompt has to be maintained as a separate artefact, and it drifts from the underlying process specification over time.
A declarative process integrates AI differently. The AI step inherits intent from the declaration. The step is an order-driven resolution step, or a demand-driven scaffolding step, or a capacity-driven allocation step. That typing is structural, not textual. The AI receives the step's type, its inputs, its constraints, and its expected output shape. It does not receive a prompt that describes the step; the step's type is the description.
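A sketch of the difference, under assumed names: instead of an authored prompt, the AI step receives a context derived mechanically from the declaration and the current state. The `build_context` function and every field name below are hypothetical, chosen to illustrate the structural inheritance described above.

```python
from dataclasses import dataclass
from typing import Any

# Illustrative sketch: the context an AI step inherits structurally
# from its declaration. Field names are assumptions for illustration.

@dataclass
class AIStepContext:
    step_type: str                # "order_resolution" | "demand_scaffolding"
                                  # | "capacity_allocation"
    inputs: dict[str, Any]        # live state projected from the operational graph
    constraints: list[str]        # inherited from the declaration, not a prompt
    output_shape: dict[str, str]  # expected output fields and their types

def build_context(declaration: dict, graph_state: dict) -> AIStepContext:
    """Derive the AI step's context from the declaration and current state.

    Nothing here is hand-authored per step: change the declaration and
    the next derived context reflects the change automatically."""
    return AIStepContext(
        step_type=declaration["step_type"],
        inputs={k: graph_state[k] for k in declaration["input_keys"]},
        constraints=declaration["constraints"],
        output_shape=declaration["output_shape"],
    )
```

The point of the sketch is the absence: there is no prompt string anywhere for a team to maintain in parallel with the process specification.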
The practical consequence is that the process specification becomes the source of truth for both humans and AI. When the process changes, the change propagates to the AI step automatically. When a new team member joins and needs to learn the process, they read one specification, not a checklist and a prompt that disagree. When an auditor asks what the AI was supposed to do at a given step, the answer is the step's type and constraints, recorded in the process model, not a prompt version buried in a configuration file.
A second consequence is that dispatch events become deterministic. If a declarative process says that every weekday morning, today's delivery missions should be scaffolded based on yesterday's confirmed orders, the dispatch layer generates the scaffolding step on schedule. No human has to initiate it. The AI step can run in delegation mode because the step's intent, preconditions, and context are all structurally defined. This is the property that makes non-imperative AI participation possible: AI that does not require a human to initiate every interaction.
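The weekday scaffolding example can be sketched as a pure function of the date and the confirmed orders. This is a minimal illustration, assuming a hypothetical `confirmed_orders` store keyed by date; determinism here means the same inputs always yield the same scaffolded missions, with no human trigger in the loop.

```python
from datetime import date, timedelta

def scaffold_missions(
    today: date, confirmed_orders: dict[date, list[str]]
) -> list[dict]:
    """Scaffold today's delivery missions from yesterday's confirmed orders.

    A deterministic sketch of the dispatch layer's behaviour: given the
    same date and the same confirmed orders, the output never varies."""
    if today.weekday() >= 5:  # the recurrence is weekdays only
        return []
    yesterday = today - timedelta(days=1)
    return [
        {"order": order, "date": today.isoformat(), "status": "scaffolded"}
        for order in confirmed_orders.get(yesterday, [])
    ]
```

Because the function is deterministic, an AI step downstream can run in delegation mode: its preconditions and context are fully defined by the scaffolded state, not by whoever happened to kick the process off.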
When the process specification is declarative, intent lives in structure rather than in prompts, and the AI step inherits what it needs instead of reconstructing it.
The honest trade
Declarative processes are harder to set up than imperative ones. Writing a checklist is something a team lead can do in an afternoon. Writing a declarative specification requires articulating what the work is for, what recurrences it runs on, what bounds it respects, what outcomes it must produce. That articulation is structural clarity the team may not have yet, and producing it takes real effort.
It also surfaces disagreements that imperative processes leave latent. Two team members may both follow the same checklist while interpreting it differently, and the work proceeds because humans accommodate the ambiguity. A declarative specification forces the disagreement into the open. Is this a capacity-gated step or an event-driven one? Is this recurrence aligned to the calendar week or to the production cycle? Who owns the allocation policy when demand exceeds capacity? These are the right questions to answer, but they are questions, and they take time to answer well.
For work that is genuinely static, low-stakes, and rare, the cost of declarative specification exceeds the benefit. A checklist is fine, and adding a prompt to a checklist may be fine too. The argument for declarative specification is specifically for work where the stakes are high, the exceptions are frequent, and AI participation is desired. For that work, the upfront structural clarity pays back many times over, because every AI interaction afterwards inherits the clarity instead of reconstructing it.
Composability
A consequence worth naming explicitly is that declarative processes are composable in ways imperative ones are not. A capacity-driven declaration for fleet allocation can be referenced by a demand-driven declaration for weekly route planning, which in turn produces order-driven workflows for each mission. Each specification names its outcomes and its constraints, and each can be reused wherever its shape matches.
Imperative checklists do not compose. You can copy a checklist into another, but you cannot reference it; you maintain two copies, and they drift. A declarative specification is a reusable unit of work that other specifications can orchestrate. The closest analogue most readers will be familiar with is declarative infrastructure, where a Kubernetes manifest or a Terraform module can be referenced from another specification and re-applied wherever needed. The analogy is imperfect, because operations depend on human judgment and physical resources in ways infrastructure does not. But the principle transfers. Specifying what you want produces a more durable artefact than scripting how to get it.
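Composition by reference can be sketched in a few lines. The registry, the `capacity_ref` field, and all the names below are assumptions for illustration; the property they demonstrate is real: the demand-driven plan points at the capacity envelope instead of copying it, so there is one copy and nothing to drift.

```python
# Illustrative sketch of composition by reference, with assumed names.

fleet_capacity = {
    "kind": "capacity",
    "name": "fleet_hours",
    "bounds": {"hours_per_week": 400},
    "allocation_policy": "earliest_deadline_first",
}

weekly_route_planning = {
    "kind": "demand",
    "name": "weekly_routes",
    "recurrence": "every monday 06:00",
    "capacity_ref": "fleet_hours",      # a reference, not a copy: no drift
    "produces": "order:delivery_mission",
}

def resolve_capacity(spec: dict, registry: dict[str, dict]) -> dict:
    """Follow a capacity_ref to the live specification it names."""
    return registry[spec["capacity_ref"]]
```

Updating `fleet_capacity` in the registry updates every plan that references it, which is exactly the property a pasted checklist cannot have.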
Where this leads
Declarative processes need a substrate that can compose them, enforce their constraints, and project their context into execution. A declarative specification is not self-executing; something underneath it must read the specification, maintain the live state of the operational world, and generate dispatch events when the specification's conditions are met.
That substrate is not a workflow engine bolted to a knowledge base, and it is not a knowledge graph with a workflow engine added on top. It is a particular combination of both, structured so that processes stay linear at the control-flow level while the relational complexity of operations lives in the graph. That combination is the subject of the next post in this series.
For now, a question worth asking about your own processes: if you had to rewrite your current process as a declarative specification rather than a checklist, what would it look like, and which of the three shapes — order-driven, demand-driven, or capacity-driven — would best describe what you are actually trying to do?