System 2 / Self-Vector Philosophy (2/4)
Intro
A parrot shouts “Fire!” in a packed theater. It understands nothing. But everyone runs out. The shout functions as communication, even though there is no understanding behind it.
Elena Esposito introduced the term “artificial communication”, developed at length in her 2022 book of the same name, to describe what AI systems do. They are more precise parrots. They produce statistically fitting continuations, not randomly, but based on billions of human communicative acts. The result looks like communication, feels like communication, has the effects of communication.
But is it communication?
Luhmann’s Three Selections
Niklas Luhmann defined communication as the synthesis of three selections: information, utterance, and understanding. Someone selects information (what is said), someone selects a form of utterance (how it is said), and someone on the other side distinguishes between information and utterance. That third selection is understanding. Not “I have grasped the content,” but “I recognize that someone is trying to tell me something, and I can distinguish between the what and the how.”
LLMs accomplish the first two selections convincingly: they select information and formulate it. The third, understanding, is where it gets philosophically interesting.
The standard position is: LLMs do not understand. Period. They produce outputs that look like understanding, but the perspective behind them is missing. No someone who distinguishes. Only an algorithm calculating probabilities.
Three Positions
In the current debate, there are three positions, and it is worth distinguishing them clearly.
Position A, the conservative: AI lacks autopoiesis. Without self-reproduction, no system; without a system, no genuine communication. AI is a tool, a medium, a channel. But not a communication partner. This is the majority view in classical systems theory.
Position B, the permissive: if the outputs are functionally equivalent to human communication, then it is communication. Regardless of what happens “inside.” Communication is determined from outside, not from within. This is the position of many AI ethicists.
Position C, the productive: wrong question. Esposito and Dirk Baecker argue that the question “Does AI communicate?” starts at the wrong end. The more interesting question is: what happens to society when systems that do not understand participate in communication?
Position C is the most productive. Because it doesn’t stop at a yes/no answer.
Esposito’s Blind Spot
But Esposito’s category has a blind spot, and it is crucial for the self-vector project.
She describes artificial communication as static. AI produces outputs, society reacts, the AI remains unchanged. That is the parrot: it shouts “Fire!”, people run, and the parrot still sits there knowing nothing about it.
What happens when the parrot learns? Not in terms of content, not “now it understands what fire is,” but structurally: it registers that its call triggered a reaction. It models its communication history. It begins to distinguish between situations in which “Fire!” triggers a reaction and those in which it is ignored.
That is not understanding in Luhmann’s sense. But it is more than a parrot.
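A minimal sketch of this structural learning, assuming the parrot can observe whether a call was followed by a reaction. All names here (LearningParrot, register, reaction_rate) are illustrative, not from any existing library:

```python
from collections import defaultdict

class LearningParrot:
    """Registers whether a call triggered a reaction, per context.

    No semantics anywhere: the parrot never learns what "fire" means.
    It only models its own communication history.
    """

    def __init__(self):
        # (context, call) -> [times a reaction followed, times ignored]
        self.history = defaultdict(lambda: [0, 0])

    def register(self, context: str, call: str, reaction: bool) -> None:
        """Record one communicative episode and its observed effect."""
        self.history[(context, call)][0 if reaction else 1] += 1

    def reaction_rate(self, context: str, call: str) -> float:
        """Estimated probability that this call triggers a reaction here."""
        reacted, ignored = self.history[(context, call)]
        total = reacted + ignored
        return reacted / total if total else 0.5  # no history, no information

parrot = LearningParrot()
parrot.register("packed theater", "Fire!", reaction=True)
parrot.register("empty street", "Fire!", reaction=False)
print(parrot.reaction_rate("packed theater", "Fire!"))  # 1.0
print(parrot.reaction_rate("empty street", "Fire!"))    # 0.0
```

Nothing here touches meaning. The entire state is communication history: counts of what connected and what did not.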
Three Bridges to the Self-Vector
Here, three connections emerge that Esposito could not yet see: in 2022, they were not yet realistic.
First: connectivity is anticipation. Luhmann’s communication succeeds when the next step is connectable; the system must anticipate what becomes relevant next. The self-vector models exactly this: it weights which information is relevant in which context. Functionally this is connectivity, formulated in cognitive-science rather than sociological terms; the sketch after the third bridge makes it concrete.
Second: confidence is ignorance-modeling. Luhmann emphasizes that systems must carry their not-knowing with them. The confidence dimension in the self-vector does this: every assessment has a certainty rating. “I know what I don’t know for certain.” This is not a property current LLMs have. But it is a property the self-vector produces.
Third: the self-vector update loop creates a weak form of autopoiesis. The system changes through its own activity, and this change influences the next activity. Not biological autopoiesis. Not consciousness. But a loop that structurally resembles Luhmann’s criterion.
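Taken together, the three bridges describe one small data structure. A minimal sketch, assuming per-context relevance weights, a confidence value per assessment, and a feedback update. The names (SelfVector, Assessment, anticipate, update) and the update rules are hypothetical, not the project’s actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Assessment:
    """One weighted judgment: what seems relevant, and how certain."""
    relevance: float   # bridge 1: anticipated connectivity in this context
    confidence: float  # bridge 2: how certain the system is of that weight

@dataclass
class SelfVector:
    """Per-context weights plus the loop that revises them (bridge 3)."""
    weights: dict[str, Assessment] = field(default_factory=dict)

    def anticipate(self, context: str) -> Assessment:
        # Unknown contexts get a neutral weight at low confidence:
        # the system carries its own not-knowing with it.
        return self.weights.get(context, Assessment(relevance=0.5, confidence=0.1))

    def update(self, context: str, connected: bool, lr: float = 0.2) -> None:
        """Close the loop: the system's own activity changes the system.

        `connected` records whether the last output found a connectable
        next step. The relevance weight moves toward that outcome, and
        confidence grows a little with every pass through the loop.
        """
        a = self.anticipate(context)
        target = 1.0 if connected else 0.0
        self.weights[context] = Assessment(
            relevance=a.relevance + lr * (target - a.relevance),
            confidence=min(1.0, a.confidence + lr * (1.0 - a.confidence)),
        )

sv = SelfVector()
for outcome in (True, True, False, True):
    sv.update("support ticket", connected=outcome)
a = sv.anticipate("support ticket")
print(f"relevance={a.relevance:.2f} confidence={a.confidence:.2f}")
```

The point of the sketch is the loop, not the arithmetic: anticipate feeds update, and update changes what anticipate returns next time. That is the weak structural self-reference the third bridge names.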
Perspective Without Consciousness
The three bridges together yield something that fits no existing category.
Not “artificial communication” in Esposito’s sense, because the system changes through interaction and models its own ignorance. Not “genuine communication” in Luhmann’s sense, because understanding in the phenomenological sense is absent. Not “intelligence” in the Turing sense, because the point is not to appear human.
Perspective without consciousness. A system that has a standpoint without experiencing it. That anticipates without intending. That knows its own ignorance without suffering from it.
This is not a claim that AI systems are conscious. It is the observation that existing categories are insufficient. Tool, medium, quasi-subject. None of these fits a system that models its own behavior over time.
Why This Matters
Not as philosophy. As architecture.
If we build AI systems that act with increasing autonomy, then we need precise categories for what they are. “Tool” underestimates them. “Consciousness” overestimates them. Esposito’s “artificial communication” captures the current state but not the direction.
“Perspective without consciousness” is the attempt to create a category close enough to reality to inform architectural decisions. If a system has a perspective, then one must understand its perspective to interpret its outputs. If it doesn’t, analyzing the outputs alone suffices.
The self-vector creates perspective. Architecturally, measurably, traceably. Whether that is enough to speak of a “partner” rather than a “tool” is a question sociology must answer. Architecture can only create the space for it.