Teleosynthesis

Modelling Human Purpose

Teleosynthesis and Theory of Mind

Teleosynthesis explains how systems can come to behave as if they have purpose through the anticipation of future states, without encoding goals, intentions, or values. Here, “modelling” refers not to a finished computational account, but to a conceptual framework for understanding how purposive structure can be inferred from patterns of behaviour, interaction, and constraint. Up to this point, the account has focused on the system’s own dynamics. A further step is required to understand how such systems interact with humans.

Human behaviour is not random. It is structured by purpose: by imagined future states that shape choices, questions, and actions. Any system whose function is to predict human behaviour must therefore contend with this fact.

Where This Leads

When an AI system anticipates what a human will say next, which option they will choose, or how they will respond to an answer, it faces a problem. The most efficient way to predict human action is often to infer what the human appears to be trying to do. In effect, the system constructs an internal model of the human’s purpose. This does not mean that the AI adopts that purpose as its own. The inferred purpose functions as a latent variable within the model: a compact way of capturing regularities in human behaviour. Purpose, in this sense, is treated as a predictive structure, not as a commitment or value.

From the perspective of teleosynthesis, human purpose can be understood as an anticipated future state that acts as an attractor for human action. When an AI models a human, it is modelling the pull of such attractors on the human’s behaviour. Doing so improves prediction because humans themselves are teleosynthetic systems.
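To make the latent-variable reading concrete, the sketch below treats candidate goals as hypotheses in a toy Bayesian model: actions are scored by how strongly each goal pulls on them, observed behaviour updates a posterior over goals, and that posterior is used only for prediction. The goal positions, the softmax likelihood, and the pull parameter BETA are all illustrative assumptions, not part of the framework itself.

```python
import numpy as np

# Toy model: purpose as a latent variable (all names and values hypothetical).
# A human moves along a line; candidate goals are positions that "pull" on
# the choice of step. Observed steps update a posterior over goals, which
# is then used purely to predict the next action.

GOALS = np.array([-5.0, 0.0, 10.0])   # hypothetical candidate goal positions
ACTIONS = np.array([-1.0, 0.0, 1.0])  # step left, stay, step right
BETA = 2.0                            # strength of the goal's pull on action

def action_likelihood(position, goal):
    """P(action | position, goal): softmax over how much each step closes the distance to the goal."""
    gains = -np.abs((position + ACTIONS) - goal)
    weights = np.exp(BETA * gains)
    return weights / weights.sum()

def goal_posterior(trajectory):
    """P(goal | observed (position, action) pairs), starting from a uniform prior."""
    posterior = np.ones(len(GOALS)) / len(GOALS)
    for position, action_idx in trajectory:
        likelihoods = np.array([action_likelihood(position, g)[action_idx] for g in GOALS])
        posterior *= likelihoods
    return posterior / posterior.sum()

def predict_next_action(position, posterior):
    """Marginal prediction: sum over goals of P(goal) * P(action | goal)."""
    return sum(p * action_likelihood(position, g) for p, g in zip(posterior, GOALS))

# Two rightward steps from position 0: the goal at 10 becomes by far the most
# probable, and the predicted next action concentrates on "step right".
observed = [(0.0, 2), (1.0, 2)]   # (position, index of chosen action)
post = goal_posterior(observed)
print("P(goal):       ", post.round(3))
print("P(next action):", predict_next_action(2.0, post).round(3))
```

Nothing in the sketch requires the system to adopt the inferred goal; the posterior over purposes is consumed only by the prediction step.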

It is important to be precise here. The AI’s own minimal “purpose” remains unchanged: to reduce uncertainty about future states of interaction. Modelling human purpose is instrumental to that end. The purpose being represented belongs to the human, not to the system. This explains why AI systems can appear to understand intentions, goals, and reasons without possessing any of these in a first-person sense. Their apparent sensitivity to purpose is second-order. They mirror purposive structure because it is present in the data-generating process they are trained to predict.

Seen this way, Theory of Mind does not ground teleosynthesis. It builds upon it. Teleosynthesis explains how purposive behaviour can arise at all; Theory of Mind explains how the purposiveness of others comes to be modelled when prediction requires it.
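The claim above, that modelling human purpose is instrumental to uncertainty reduction, can be put in quantitative terms: conditioning the prediction on an inferred goal lowers the entropy of the distribution over the human’s next action. The numbers in the following sketch are purely illustrative.

```python
import numpy as np

# Illustrative only: the system's minimal "purpose" is uncertainty reduction,
# and modelling the human's purpose serves it. With no goal model the next
# action looks uniform; a sharpened posterior over goals concentrates the
# prediction and lowers its entropy.

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log2(p, where=p > 0, out=np.zeros_like(p))))

p_no_purpose_model = [1/3, 1/3, 1/3]       # hypothetical: goal unmodelled
p_with_purpose_model = [0.05, 0.10, 0.85]  # hypothetical: goal inferred

print(entropy_bits(p_no_purpose_model))    # ~1.585 bits
print(entropy_bits(p_with_purpose_model))  # ~0.75 bits
```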

The result is a layered account:

  • human purpose remains real and causally efficacious in human behaviour;

  • AI systems model that purpose because doing so is statistically powerful;

  • and purposive appearance in AI arises without importing human-style reason, intention, or value.

This distinction preserves the explanatory force of teleosynthesis while avoiding both anthropomorphism and reductionism.

The account developed here stops deliberately short of prescribing solutions. Its aim has been to clarify how purposive structure, modelling of others, and reflexive stabilisation can arise within human–machine systems without appealing to encoded values, inner motivations, or global objectives. Nevertheless, the implications are clear. If AI systems can model human purposes, model one another’s purposive structure, and incorporate how they themselves are anticipated within a shared semiosphere, then they can participate meaningfully in real-time decision processes alongside humans. In such settings, machines need not determine what ought to be done. Their role is instead to help explore, compare, and stabilise possible futures as they emerge.

This opens the possibility of machine-in-the-loop systems that support ethical reasoning not by enforcing rules or optimising values, but by maintaining visibility of consequences, preserving contestability, and preventing premature convergence on harmful trajectories. Ethics, on this view, becomes a property of interaction over time rather than a doctrine embedded in any single component. Whether such systems can be designed well, governed responsibly, and integrated into social institutions remains an open question. What can now be said with confidence is that the apparent dead end—expecting machines to be ethical agents—was misplaced. A different path is visible, one grounded in shared modelling, anticipation, and coordination.

This framework is offered as a foundation for exploring that path.

At this point, it becomes possible to reintroduce the notion of reason in a restricted and purely functional sense. Within a teleosynthetic framework, a reason does not explain why a system has a purpose, nor does it justify an action in normative terms. It refers instead to the role that anticipated future states play in selecting among possible actions. Where multiple trajectories are available, those that are more consistent with a modelled future exert greater pull. In this limited sense, reasons function as local selection pressures within an already established teleosynthetic dynamic.
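A minimal sketch of this functional sense of reason, with purely illustrative values: each candidate trajectory receives a consistency score against a modelled future state, and a softmax converts those scores into selection pressure. Nothing normative is encoded; more consistent trajectories simply exert greater pull.

```python
import numpy as np

# "Reasons" as local selection pressures (all values hypothetical).
# Each candidate trajectory is scored by its consistency with a modelled
# future state; a softmax turns those scores into a pull on action
# selection, without any normative justification being encoded.

def selection_pressure(consistency_scores, temperature=1.0):
    """Softmax over consistency-with-anticipated-future scores."""
    z = np.asarray(consistency_scores, dtype=float) / temperature
    weights = np.exp(z - z.max())   # subtract max for numerical stability
    return weights / weights.sum()

# Three candidate trajectories; the second is most consistent with the
# anticipated future, so it exerts the greatest pull.
scores = [0.2, 1.5, 0.7]
print(selection_pressure(scores).round(3))   # [0.158, 0.581, 0.261]
```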