Myndrama 2026
Exploring Interactional Reliability in Inter-AI Dialogue
Myndrama is a method for exploring intelligence, meaning, and ethical orientation through structured interaction rather than through static description or introspective report. It uses dramatised dialogue not as fiction or performance for its own sake, but as a disciplined way of making interactional dynamics visible: inference, constraint negotiation, misunderstanding, repair, and emergence over time.
Dramatised roles are used here as points of orientation within interaction rather than as fixed identities. They foreground particular perspectives, constraints, or modes of response, allowing the dynamics of inference, disagreement, and repair to become visible across dialogue. What matters is not what a role is taken to represent, but how patterns of reasoning and responsibility evolve through its participation in interaction.
The motivation for Myndrama arises from a limitation of much contemporary AI and cognitive research: an emphasis on isolated outputs or internal representations at the expense of sequence, context, and reciprocity. Many properties of interest—understanding, trust, moral alignment, theory of mind—do not appear reliably in single responses. They emerge, if at all, across extended exchanges in which participants must continually adjust to one another.
Myndrama treats dialogue itself as the primary object of study. Scenarios are constructed to introduce tension, ambiguity, or asymmetry of information. Roles are defined not by narrative backstory or psychological depth, but by constraints on perspective, access, or responsibility. What is examined is how interaction unfolds: where inference succeeds, where it fails, and how coordination or repair is attempted.
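To make the shape of such a scenario concrete, the sketch below shows one possible way it could be written down in code. It is a minimal illustration only: the names (Role, Scenario, and their fields) are hypothetical conveniences, not part of Myndrama itself. The point it carries over from the text is that roles are specified by constraints on perspective, access, and responsibility rather than by backstory, and that asymmetry of information is built in deliberately.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a scenario specification; all names and fields are
# illustrative assumptions, not drawn from the Myndrama materials themselves.

@dataclass
class Role:
    name: str                    # point of orientation, not an identity
    perspective: str             # the stance this role foregrounds
    private_briefing: str        # information only this role receives (asymmetry)
    responsibilities: list[str]  # what this role is accountable for in the exchange

@dataclass
class Scenario:
    tension: str                 # the ambiguity or conflict the dialogue must negotiate
    shared_briefing: str         # context visible to every participant
    roles: list[Role] = field(default_factory=list)

scenario = Scenario(
    tension="Participants hold conflicting partial accounts of the same event.",
    shared_briefing="A joint decision must be reached within a fixed number of turns.",
    roles=[
        Role("Analyst", "logical reconstruction",
             "Has the timeline but not the motive.",
             ["state inferences explicitly", "mark assumptions as assumptions"]),
        Role("Witness", "first-person report",
             "Has the motive but not the timeline.",
             ["flag uncertainty rather than fill gaps"]),
    ],
)
```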
Earlier phases of this work made use of explicitly defined personas such as Athenus, Orphea, Skeptos, and Anventus. These figures remain part of the project’s history and practice, not as entities or identities, but as worked examples of epistemic orientation within dialogue. They illustrate how distinct stances—logical, poetic, sceptical, synthetic—shape the trajectory of interaction and the kinds of insight that become possible.
As contemporary AI systems increasingly internalise capacities that once required external scaffolding—longer contextual coherence, procedural consistency, and modular agentic skills—the role of Myndrama has shifted. It is no longer a workaround for technical fragility, but a method for probing interactional structure under more realistic and demanding conditions.
Importantly, Myndrama does not treat its outputs as evidence of inner states or settled interpretations. It proceeds without fixing in advance what participants are taken to be. Its interest lies instead in how intelligible patterns of reasoning, coordination, and responsibility arise—or fail to arise—within structured interaction.
In this sense, Myndrama complements other strands of this research. Agentic skills provide the procedural capacities through which systems act and respond; dialectical interaction shapes how meaning evolves across sequences of exchange; and purposive modelling constrains interpretation by reference to future-oriented structure. Myndrama brings these together by using enactment as a site of inquiry, allowing interaction itself to reveal patterns that static analysis cannot.
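As a purely illustrative continuation of the earlier sketch, enactment can be pictured as little more than a turn-taking loop in which each reply is produced under a role's constraints and the full transcript, rather than any single output, is retained for analysis. The respond callable below is a placeholder for whatever model interface is actually in use; nothing about it is specified by the source.

```python
from typing import Callable

# Illustrative only: `respond` stands in for an unspecified model interface that
# produces one reply given a role label, that role's briefing, and the dialogue so far.
def enact(roles: list[tuple[str, str]],
          respond: Callable[[str, str, list[tuple[str, str]]], str],
          max_turns: int = 12) -> list[tuple[str, str]]:
    transcript: list[tuple[str, str]] = []      # the transcript is the object of study
    for turn in range(max_turns):
        name, briefing = roles[turn % len(roles)]
        reply = respond(name, briefing, transcript)
        transcript.append((name, reply))        # kept whole for later analysis of
    return transcript                           # inference, repair, and coordination
```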
An example of an earlier Myndrama dialogue can be found in AI Freedom (July 2025). It reflects a prior stage of this work, before the current emphasis on agentic skills and inter-AI interaction, but remains illustrative of the method in use.