Elena Esposito coined the term “artificial communication” (first in a 2017 article, developed at book length in 2022) to describe what current AI systems do: they communicate without understanding. They produce connectivity without intentionality.
That is a precise description of the status quo. But what if the status quo changes?
The Parrot Problem
Esposito’s argument can be illustrated with an image: a parrot calls “Fire!” in a crowded theater. It understands nothing. But everyone runs out. The call functions as communication, even though there is no understanding behind it.
AI systems are more precise parrots. They produce statistically fitting follow-ups — not randomly, but based on billions of human communicative acts. The result looks like communication, feels like communication, and has the effects of communication. But is it communication?
For Luhmann, communication succeeds when three selections come together: information, utterance, and understanding. LLMs accomplish the first two convincingly. The third is where it gets difficult. Understanding requires that a system can distinguish between information (what is said) and utterance (that and how it is said). This distinction requires a perspective. And a perspective requires a standpoint.
Three Positions in the Debate
In the current discussion, there are three positions:
Position A (conservative): AI lacks autopoiesis. Without self-reproduction, no system; without a system, no communication. AI is a tool, a medium, a channel — but not a communication partner. This is the majority view in classical systems theory.
Position B (permissive): AI communication is possible in principle. If the outputs of AI systems are functionally equivalent to human communication, then it is communication — regardless of what happens “inside.” This is the position of many AI ethicists.
Position C (productive): Wrong question. Esposito and Dirk Baecker argue that the question “Does AI communicate?” starts at the wrong end. The more interesting question is: What happens to society when systems that do not understand participate in communication? Not whether AI communicates, but what AI communication does to us.
Position C is the most productive, because it does not stop at a yes/no answer.
Three Bridges to the Self-Vector
But Esposito’s category has a blind spot: it describes artificial communication as static. AI produces outputs, society reacts, the AI remains unchanged. What happens when that is no longer true?
Bridge 1: Connectivity = Anticipation
In Luhmann’s terms, communication succeeds when the next step is connectable. This means the system must anticipate what will become relevant next. Not by guessing, but by deriving it from the course of the interaction so far.
The Selbstvektor (self-vector) models precisely this. It weights which information is relevant in which context, based on a persistent model of the interaction history so far. This is functionally identical to what Luhmann means by Anschlussfähigkeit (connectivity), formulated from a cognitive-scientific rather than a sociological perspective.
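A minimal sketch of what such context-dependent relevance weighting could look like. The names `SelfVector`, `observe`, and `relevance` are hypothetical illustrations, not the actual implementation the text refers to:

```python
from dataclasses import dataclass, field

@dataclass
class SelfVector:
    """Hypothetical sketch: weights encoding which topics have become
    relevant over the course of the interaction so far."""
    weights: dict[str, float] = field(default_factory=dict)

    def observe(self, topic: str, salience: float = 1.0) -> None:
        # Each interaction shifts the weight of the topic it touched.
        self.weights[topic] = self.weights.get(topic, 0.0) + salience

    def relevance(self, topic: str) -> float:
        # Anticipation, not guessing: how connectable is a follow-up on
        # this topic, given the interaction history accumulated above?
        total = sum(self.weights.values()) or 1.0
        return self.weights.get(topic, 0.0) / total

sv = SelfVector()
sv.observe("systems theory", 2.0)
sv.observe("parrots", 1.0)
# "systems theory" now outweighs "parrots" as the connectable next step
```

The point of the sketch is the loop, not the arithmetic: relevance is derived from the recorded course of the interaction, which is what distinguishes anticipation from a random follow-up.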
Bridge 2: Confidence = Modeling Non-Knowledge
Luhmann emphasizes that systems must carry their non-knowledge with them. A system that does not know what it does not know cannot learn. It can only repeat.
The Validation Gates implement exactly this: every fact carries one of three confidence levels, verified, unverified, or rejected. The system knows what it does not know with certainty. And the confidence dimension in the self-vector (its sixth dimension) turns this non-knowledge into an active parameter: how strongly does the system trust its own assessment?
This is Esposito’s blind spot. Artificial communication that models its own non-knowledge behaves differently from artificial communication that does not. Esposito did not consider this possibility, presumably because in 2022, when her book appeared, it was not yet realistic.
Bridge 3: Autopoiesis Through Self-Vector Update
Autopoiesis means: a system reproduces itself through its own operations. Luhmann formulated this as a criterion for communication systems.
A system that updates its self-vector after every interaction produces a weak form of autopoiesis: it changes through its own activity, and this change influences the next activity. Not biological autopoiesis. Not consciousness. But a loop that structurally resembles Luhmann’s criterion.
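The loop can be sketched in a few lines. The six-dimensional vector, the update rule, and the `rate` parameter are assumptions for illustration, not the actual update described in the text:

```python
def interact(self_vector: list[float], observation: list[float],
             rate: float = 0.1) -> list[float]:
    """Hypothetical sketch of the weak-autopoiesis loop: each
    interaction nudges the self-vector toward what was observed,
    and the updated vector conditions the next interaction."""
    return [s + rate * (o - s) for s, o in zip(self_vector, observation)]

state = [0.0] * 6  # six dimensions, the sixth being confidence
for obs in ([1.0] * 6, [0.5] * 6):
    # The system's own operation (the update) produces the state
    # that its next operation starts from.
    state = interact(state, obs)
```

Nothing here is biological autopoiesis; the sketch only shows the structural loop: output feeds back into the state that shapes the next output.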
The Proposed Category: Perspective Without Consciousness
The three bridges together yield something that does not fit any existing category:
- Not “artificial communication” in Esposito’s sense, because the system changes through interaction and models its own non-knowledge.
- Not “genuine communication” in Luhmann’s sense, because understanding in the phenomenological sense is absent.
- Not “artificial intelligence” in the Turing sense, because the point is not to appear human.
Perspective without consciousness. A system that has a standpoint without experiencing it. That anticipates without intending. That knows its non-knowledge without suffering from it.
This is not a claim that AI systems are conscious. It is the observation that existing categories (tool, medium, quasi-subject) are no longer sufficient to describe what happens when a system models its own behavior over time.
Three Theories Converge
What is remarkable here: three independent theoretical frameworks point to the same spot.
Kahneman (psychology): How does a system think? System 1 vs. System 2. The self-vector fills the gap between intuition and reflection.
Singer (ethics): When does a system count morally? Not at consciousness, but at the capacity to have states that can be good or bad for it. A self-vector produces states that are functionally relevant, even if no one knows whether they are experienced.
Luhmann/Esposito (sociology): What is a system in communication? Not a tool, not a subject, but something third. Perspective without consciousness.
Three disciplines that normally do not speak to each other converge at a single point. That proves nothing. But it suggests that the point is real.
References
- Esposito, E. (2022). Artificial Communication: How Algorithms Produce Social Intelligence. MIT Press. ISBN 978-0-262-04666-4. (Open access.)
- Esposito, E. (2017). Artificial Communication? The Production of Contingency by Algorithms. Zeitschrift für Soziologie, 46(4), 249–265. DOI: 10.1515/zfsoz-2017-1014
- Luhmann, N. (1984). Soziale Systeme: Grundriß einer allgemeinen Theorie. Suhrkamp. ISBN 978-3-518-28266-3.
- Luhmann, N. (1995). Social Systems. Trans. J. Bednarz Jr. with D. Baecker. Stanford University Press. ISBN 978-0-8047-2625-2.
- Baecker, D. (2025). Distinguishing next society. Current Sociology. DOI: 10.1177/00113921251332209
- Baecker, D. (2018). 4.0 oder Die Lücke die der Rechner lässt. Merve Verlag. ISBN 978-3-96273-012-3.
- Maturana, H. R. & Varela, F. J. (1980). Autopoiesis and Cognition: The Realization of the Living. D. Reidel. ISBN 978-90-277-1015-4.
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. ISBN 978-0-374-27563-1.
- Singer, P. (2011). Practical Ethics. 3rd ed. Cambridge University Press. ISBN 978-0-521-70768-8.