Praxial Dissonance: The Structural Gap Between Shared Intention and Coordinated Action
Giannandrea Giammanco · February 2026
There is a peculiar failure mode in human coordination that has no name. It is not conflict in the ordinary sense—the agents involved may share values, share goals, even share a vision of the future they are trying to build. And yet their plans, when enacted in parallel, undermine each other. Not because their purposes diverge, but because something goes wrong in the translation from intention to action. The purpose survives; the plan distorts. Two organisations working toward the same end produce strategies that, when they meet the world simultaneously, interfere destructively—cancelling each other’s effects, exhausting shared resources, confusing the very communities they both aim to serve.
This phenomenon deserves its own name, because unnamed things remain invisible to the systems that could address them. I propose praxial dissonance: from the Greek praxis (action, practice, the enactment of intention in the world) and dissonance (a lack of harmony between things that should, by their nature, cohere). Praxial dissonance is the structural condition whereby plans generated by agents with compatible or even identical intentions become mutually incompatible through the process of planning itself.
* * *
The key insight is where the incompatibility originates. In cases of genuine value conflict—what telodiversity describes at its sharpest edges—plans clash because the agents want fundamentally different things. One community wants industrial development; its neighbour wants wilderness preservation. No planning process, however sophisticated, can make these plans fully compatible, because the underlying purposes are in tension. This is real and important, but it is not praxial dissonance.
Praxial dissonance occurs one level removed: in the space between intention and its expression as an actionable plan. An intention—“improve governance in this region”—must pass through a series of transformations before it becomes a concrete plan with timelines, resource claims, geographic scope, and operational methods. Each transformation introduces distortion. The intention is filtered through the planner’s cognitive model of the problem, which is necessarily incomplete. It is shaped by the information available to the planner, which is necessarily partial. It is compressed into a format that can be communicated to others, which is necessarily lossy. And it is coloured by the planner’s identity—their professional training, their institutional position, their theory of change—which is necessarily particular.
When two planners with the same intention pass that intention through different distortion filters, they produce different plans. This is expected and, in many cases, valuable—different approaches to the same problem generate comparative knowledge. But when those different plans are enacted simultaneously without awareness of each other, they can interfere in ways that neither planner anticipated or intended. One team’s community engagement strategy saturates the same population that another team needs receptive and fresh for their governance pilot. One organisation’s rapid technology deployment undermines the slow trust-building that another organisation’s programme depends on. The intentions were aligned. The plans, as practised, are dissonant.
* * *
Four structural features of human cognition and communication make praxial dissonance not merely possible but probable in any system of distributed planning.
Cognitive load is the first and most fundamental. The human mind operates under severe bandwidth constraints. A planner designing a strategy for community governance in a particular region cannot simultaneously model how that strategy interacts with every other active plan in the same region, sector, or population. They can hold perhaps three to five external constraints in working memory while planning; the real ecosystem may contain hundreds or thousands. The plans they cannot model are the plans they will accidentally interfere with.
Lossy communication is the second. A plan that took months to develop, informed by tacit knowledge, contextual intuition, and lived experience, must be communicated to other actors through documents, meetings, and summaries that inevitably lose information. The receiving party decompresses this transmission through their own cognitive filters, introducing further distortion. Two teams can exchange detailed plans, conduct joint workshops, and review each other’s strategies—and still miss the interference point, because it lives in an unstated assumption that never made it into the communication channel.
Temporal myopia is the third. Human planning is overwhelmingly sequential and present-biased. Plans are designed at a point in time, checked against other plans at discrete intervals, and then executed in a world that changes continuously. A modification to Plan A at month four may create a conflict with Plan B that did not exist at month one. This new dissonance may not be detected until a coordination review at month six—by which point both plans have invested resources along incompatible trajectories. The plans were harmonious when reviewed; they became dissonant while no one was watching.
Identity entanglement is the fourth and most insidious. Planners are not disembodied analytical engines. They are social beings whose professional identities, institutional affiliations, and personal narratives are woven into their plans. Detecting praxial dissonance between your plan and another’s is not a neutral observation—it is a potential threat to your resources, your timeline, your theory of change, and your professional standing. This creates a systematic bias: planners unconsciously avoid seeing dissonance that would require them to modify their plans, and when dissonance is externally surfaced, they are more likely to defend their approach than to seek genuine harmonisation. The ego does not want coordination; it wants validation.
* * *
Praxial dissonance is distinct from, but deeply related to, telodiversity—the irreducible diversity of human intentionality, purpose, and drive. Telodiversity describes the input condition: agents in any complex social system carry different purposes shaped by genetics, culture, experience, and consciousness. Some of these purposes are compatible; some are in genuine tension; some are fundamentally opposed. Telodiversity is the landscape of human motivation in all its variety.
Praxial dissonance describes what happens to that landscape when it passes through the machinery of planning and action. Even in the narrow band of telodiversity where purposes align—where agents genuinely want the same thing—the distortions introduced by cognitive limits, communication constraints, temporal myopia, and identity entanglement can produce plans that clash. Telodiversity generates conflict at the level of purpose. Praxial dissonance generates conflict at the level of practice. Both are real. Both require different interventions. And any coordination system that addresses only one will fail to prevent the other.
This distinction matters enormously for system design. A system that assumes all plan conflicts arise from divergent purposes will try to resolve them through negotiation, compromise, or value alignment. But if the conflict is praxial—arising from distortion in the planning process rather than from incompatible intentions—then value negotiation is the wrong tool. What is needed instead is better coordination infrastructure: shared languages for expressing plans, continuous monitoring of cross-plan interactions, and mediation layers that catch interference before it manifests in the world. The agents do not need to change their values. They need better mirrors in which to see how their actions, as planned, interact with the actions of others who want the same things they do.
* * *
This is precisely where artificial intelligence enters not as an optimisation tool but as a substrate shift in the coordination problem. The four sources of praxial dissonance—cognitive load, lossy communication, temporal myopia, and identity entanglement—are all structural constraints of human cognition. They are not failures of effort or intelligence; they are features of how human minds process information and construct identity. They cannot be trained away.
But they can be mediated. An AI-powered system that ingests plans from multiple agents, translates them into a shared protocol rich enough to capture preconditions, side effects, resource claims, causal assumptions, and scope boundaries, and then continuously compares them for interference—such a system directly addresses each source of praxial dissonance. It absorbs the cognitive load that human planners cannot carry. It provides a lossless communication channel between plans that human language cannot offer. It monitors continuously rather than episodically, catching dissonance as it emerges rather than months after the fact. And it detects interference without ego, reporting conflicts with the same equanimity regardless of whose plan is affected.
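To make the idea concrete, here is a minimal sketch of the kind of check such a mediation layer might run. Everything in it is illustrative, not a description of any existing system: the `Plan` record, the resource names, and the time windows are hypothetical stand-ins for a far richer shared protocol. The essential point it demonstrates is that interference is detected from the plans’ operational footprints alone, never from the agents’ intentions.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Plan:
    """A plan expressed in a shared protocol: who claims what, and when."""
    owner: str
    resource_claims: set   # shared resources the plan draws on (e.g. a community's attention)
    window: tuple          # (start_month, end_month) during which the plan is active

def overlaps(a, b):
    """True if two active time windows intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def find_interference(plans):
    """Pairwise scan: flag plans that claim the same resource at the same time."""
    conflicts = []
    for p, q in combinations(plans, 2):
        shared = p.resource_claims & q.resource_claims
        if shared and overlaps(p.window, q.window):
            conflicts.append((p.owner, q.owner, shared))
    return conflicts

plans = [
    Plan("governance-pilot", {"community-attention"}, (1, 6)),
    Plan("tech-deployment", {"community-attention", "field-staff"}, (4, 9)),
    Plan("wilderness-survey", {"field-staff"}, (10, 12)),
]
print(find_interference(plans))
# -> [('governance-pilot', 'tech-deployment', {'community-attention'})]
```

Notice that the two conflicting plans may have identical purposes; the scan reports the collision with the same equanimity either way, which is exactly the ego-free detection the paragraph above describes. A real system would replace the pairwise scan with continuous monitoring over preconditions, side effects, and causal assumptions, not just resource claims and dates.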
The human planner’s role shifts from attempting comprehensive cross-plan awareness—a task that exceeds cognitive capacity—to exercising judgment on the dissonances the system surfaces. Values remain human. Direction remains human. The experience of collective agency and democratic deliberation remains human. What changes is that the translation layer between intention and coordinated action becomes transparent, comprehensive, and continuous, rather than opaque, partial, and episodic.
* * *
Praxial dissonance will never be fully eliminated, any more than noise can be eliminated from a communication channel. There will always be some gap between what agents intend and what their plans, in practice, produce when they meet the world alongside other plans. But the current state of human coordination is not a world of irreducible noise—it is a world where we have not yet built the basic infrastructure to detect the most obvious interferences. We are, in effect, transmitting plans on open frequencies with no protocol for avoiding crosstalk, and then marvelling that our signals degrade.
Naming and factoring praxial dissonance is a first step toward building that infrastructure. What is named can be measured. What is measured can be managed. And what is managed—even imperfectly, even partially—represents an enormous advance over a world where agents with shared purposes routinely undermine each other simply because no one built the system that would have shown them, before they acted, that their plans, despite their aligned intentions, were about to collide.

