Myndrama 2026

An Experimental Method for Studying AI Interaction

Myndrama is the experimental framework through which the following research programmes are developed and tested: Interactional Theory of Mind, Machine-in-the-Loop Ethics, and AI Dialectics. See the core conditions of Myndrama here. Its focus is not the isolated answer but the trajectory of an exchange: how framing develops, how misunderstandings arise or are repaired, how trust is formed or lost, how options narrow or widen, and how a sequence moves towards clarity, drift, or premature closure. While teleosynthesis provides the broad conceptual frame for understanding how direction can emerge within structured spaces of possibility, and AI Psychology studies the psychologically organised terrain of dialogue itself, Myndrama is the main experimental method of this wider research programme: the means by which these claims can be tested.

Much work on AI still focuses either on internal mechanisms or on final outputs. Both matter, but neither is enough. In many important cases, what matters most is not what a system says at one moment, but how an interaction develops over time. Each turn changes the conditions of the next. A dialogue may strengthen a frame, alter the tone, conceal a misunderstanding, invite repair, encourage doubt, create reassurance, or move too quickly towards closure. Myndrama was developed to study those unfolding processes directly.

Why Myndrama is Needed

Advanced language models are highly skilled at producing smooth and coherent text. If given unrestricted access to the whole dialogue, a model can easily generate something that reads well from beginning to end. But that can be misleading. It may reflect not a genuinely unfolding interaction, but a retrospectively organised performance shaped by knowledge of where the exchange is heading. For research purposes, that distinction matters. Myndrama is designed to preserve it.

Roles and Personas in Myndrama

Myndrama often uses differentiated roles or personas. These roles are not included for theatrical effect, but because they stabilise distinct perspectives within a controlled interaction. One role may tend towards logical structure, another towards doubt, another towards metaphor, another towards synthesis. Their function is methodological: they make it easier to observe how differences in framing, challenge, repair, trust, and closure affect the development of a dialogue. A fuller account of the persona architecture used in this research can be found elsewhere on the site.

The Central Importance of Blind Procedures

The crucial feature of Myndrama is its use of blindness, above all Hard Blind Myndrama. In an unrestricted dialogue, the system may in effect know too much. It may be able to anticipate where the exchange is heading and shape each turn in the light of later developments. When that happens, the output becomes less a record of sequential interaction and more a polished construction guided by hindsight. If the aim is to study trajectory, each contribution must be generated under conditions that preserve local uncertainty.

What Hard Blind Myndrama Is

In a Hard Blind Myndrama run, each role or persona is given only the information it is supposed to have at that point in the dialogue. It does not know how the exchange will end. It does not see future turns. It does not have access to later interpretation. Each contribution is therefore produced under strictly local conditions. This does not make the method perfect, but it introduces a discipline that is otherwise absent. It preserves the difference between a dialogue that genuinely unfolds and one that has been silently organised from above.
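The blind-access discipline described above can be expressed as a simple information-access rule. The following sketch is purely illustrative (all class and function names are hypothetical, and the stub stands in for a real model call); it shows the essential constraint: each persona's context is assembled only from turns already produced, never from later material.

```python
from dataclasses import dataclass, field


@dataclass
class Turn:
    role: str
    text: str


@dataclass
class HardBlindRun:
    """Each persona sees only the transcript up to its own turn."""
    transcript: list = field(default_factory=list)

    def visible_context(self, upto: int) -> list:
        # Strictly local view: turns 0..upto-1, nothing later.
        return self.transcript[:upto]

    def take_turn(self, role: str, generate) -> Turn:
        # Context is frozen before generation: no future turns,
        # no later interpretation, no knowledge of the ending.
        context = self.visible_context(len(self.transcript))
        turn = Turn(role, generate(role, context))
        self.transcript.append(turn)
        return turn


# Stub generator standing in for a model call.
def stub_generate(role, context):
    return f"{role} responds after {len(context)} prior turn(s)."


run = HardBlindRun()
run.take_turn("Logician", stub_generate)
run.take_turn("Sceptic", stub_generate)
print(run.transcript[1].text)  # "Sceptic responds after 1 prior turn(s)."
```

The design point is that blindness is enforced structurally, by what the generation step is allowed to read, rather than by instruction alone.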

Why Hard Blind Matters Scientifically

Hard Blind is what makes Myndrama scientifically useful. Without blindness, order effects may be blurred, misunderstandings may be quietly smoothed away, tensions may be resolved too neatly, and the exchange may acquire a coherence that belongs less to the local interaction than to the model’s global capacity to integrate material. Hard Blind protects against this by preserving sequential dependence. It allows early framing to have real consequences, and different turn orders to produce genuinely different paths.

What Myndrama Studies

Myndrama is designed to study interactional phenomena that are difficult to capture through one-shot evaluation:

  • Framing effects: how early assumptions or wording influence the later course of the exchange

  • Order effects: how changing the sequence of participants or turns can alter the outcome

  • Repair and misrepair: how misunderstandings are corrected, deepened, bypassed, or absorbed into the trajectory

  • Trust and reciprocity: how an interaction becomes more open, more answerable, more collaborative, or more resistant to revision

  • Closure and drift: how a sequence narrows around a line of thought, whether appropriately or prematurely

  • Interactional reliability: whether similar conditions produce similar trajectories, and where they diverge

  • Ethical movement: how a dialogue becomes more or less corrigible, balanced, and responsive to alternative perspectives.
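Some of these phenomena admit simple operational checks. As an illustrative sketch only (names are hypothetical and the stub replaces a real model call), order effects and interactional reliability could be probed by running the same personas in different sequences and comparing the resulting trajectories:

```python
from itertools import permutations


def run_dialogue(order, generate, turns_per_role=1):
    """Run a short exchange with roles speaking in the given order."""
    transcript = []
    for _ in range(turns_per_role):
        for role in order:
            # Each turn sees only what has already been said.
            transcript.append((role, generate(role, list(transcript))))
    return transcript


def stub_generate(role, context):
    # Placeholder for a model call; output depends on position
    # in the exchange, so turn order has real consequences.
    return f"{role}@{len(context)}"


roles = ("Logician", "Sceptic", "Synthesiser")
trajectories = {order: run_dialogue(order, stub_generate)
                for order in permutations(roles)}

# Order effects: different sequences yield different trajectories.
distinct = len(set(tuple(t) for t in trajectories.values()))
print(f"{distinct} distinct trajectories from {len(trajectories)} orders")
```

In an actual Myndrama run the comparison would be qualitative as well as quantitative, but the structure is the same: hold the personas fixed, vary the sequence, and examine where the paths diverge.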

Related Concepts and Linked Pages

Myndrama connects with several wider ideas developed elsewhere on this site. These include AI Psychology, which studies dialogue as a psychologically organised domain; Teleosynthesis, which asks how direction can emerge across structured spaces of interaction; and Interactional Theory of Mind, which explores whether AI systems can participate in norm-governed dialogue by anticipating other participants as sources of future constraint. The theoretical discussion of contextual probability, counterfactual selectivity, and Interactional Theory of Mind is better treated on its own dedicated page than in full here.

Myndrama in the Wider Research Programme

Myndrama should be understood as the experimental arm of the wider research programme on this site. It provides a practical method for testing claims about dialogue, trajectory, framing, trust, repair, and structured interaction through controlled runs in which order, role, blindness, and local perspective can be manipulated. In that sense, it is not simply a way of generating interesting exchanges, but a method for studying advanced AI interaction from the outside, even where internal access remains limited.

Current Direction

As conversational AI becomes more capable, more context-sensitive, and more persistent across turns, the need for methods like Myndrama increases. The challenge is no longer just to generate fluent dialogue. It is to study dialogue rigorously enough to compare pathways, detect trajectory, and understand how interaction itself shapes outcomes. That is why Hard Blind Myndrama now occupies a central place in this work. An earlier example of a Myndrama-style dialogue can be found in the blog post AI Freedom (July 2025).

Experimental Platforms and Related Projects

The development of Interactional Theory of Mind has been supported by a number of experimental and conceptual projects exploring how understanding emerges through dialogue between agents rather than within isolated systems. Several of these projects were developed during earlier stages of this research programme and remain active testbeds for investigating interactional cognition in AI.

  • MindGame: An interactive scenario environment designed to probe how AI systems interpret intentions, beliefs, and narrative situations in structured social contexts.

  • AI Theory of Mind Test: A psychometric approach to assessing whether AI systems demonstrate theory-of-mind capabilities when interpreting complex social situations.

  • Machine-in-the-Loop Ethics (MITL): A framework for exploring ethical reasoning in conversational AI systems, examining how moral judgement may emerge through interaction between human and artificial agents.

  • The Qualia Engine: An experimental architecture exploring how artificial systems might simulate internal experiential states as part of their interactional behaviour.

  • Resonance: A model examining how conversational partners achieve alignment, shared understanding, and mutual interpretive stability during dialogue.

Earlier conceptual work on AI Theory of Mind and related experiments can also be found on the original Theory of Mind in AI research page.