Why Esposito's diagnosis describes precisely the gap that the Self-Vector fills
In her 2022 book Artificial Communication, Elena Esposito created a new category. She observes that when an LLM responds, something happens that Luhmann's systems theory did not originally account for.
Luhmann defines communication as a three-part selection: Information (What is communicated?), Utterance (How is it communicated?), and Understanding (The recipient's distinction between information and utterance). Only when all three selections occur does communication happen. A letter that no one reads does not communicate.
An LLM understands nothing. It calculates statistical probabilities for the next token. It has no thought that follows another thought. It has no consciousness that recognizes the difference between information and utterance.
So by Luhmann's definition, it is not communication. And yet: Social systems treat the outputs as communication. People read AI responses, react to them, make decisions based on them. The legal system discusses AI-generated contracts. Companies build processes around AI outputs.
Esposito's Solution: A Third Category
AI produces neither genuine communication (understanding is missing) nor is it a mere tool (the outputs are too autonomous for that). Esposito calls it "artificial communication" — outputs that statistically connect to communication without understanding taking place. The key concept: connectivity. Not whether the AI understands, but whether its outputs trigger further communication.
Esposito describes the current state. The Self-Vector describes the target state.
Esposito's "artificial communication" is the precise description of what an LLM does without a Self-Vector: It produces connections without understanding. That is System 1 in Kahneman's language — fast pattern recognition, often fitting, sometimes wrong, without self-correction.
The Self-Vector adds precisely what makes the difference: A system that does not merely connect statistically, but connects in a directed way. A system with high Exploration actively seeks new connections. One with high Depth analyzes more thoroughly instead of answering faster. One with high Confidence trusts its own assessment instead of seeking external sources.
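As an illustration of this weighting, the dimensions named above can be modeled as a small data structure that biases concrete processing choices. This is a minimal sketch; all names, thresholds, and the policy mapping are illustrative assumptions, not a specification from the text:

```python
from dataclasses import dataclass

@dataclass
class SelfVector:
    """Hypothetical persistent self-model; field names are illustrative."""
    exploration: float  # 0..1: tendency to actively seek new connections
    depth: float        # 0..1: tendency to analyze thoroughly before answering
    confidence: float   # 0..1: trust in own assessment vs. external sources

def processing_policy(sv: SelfVector) -> dict:
    """Derive concrete processing choices from the self-model."""
    return {
        "seek_new_connections": sv.exploration > 0.7,
        "analysis_passes": 1 + round(sv.depth * 3),    # more depth = more passes
        "consult_external_sources": sv.confidence < 0.5,
    }

policy = processing_policy(SelfVector(exploration=0.8, depth=0.6, confidence=0.3))
print(policy)
```

The point of the sketch is only that identical inputs yield different processing depending on the persistent self-model, which is what distinguishes directed from purely statistical connection.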
This is still not understanding in Luhmann's sense. But it is no longer pure statistical matching either. It is a third thing: perspectival artificial communication. The outputs have a direction that comes from the self-model, not from statistics.
This is the most surprising connection. Esposito's key concept is connectivity (Anschlussfähigkeit) — the ability to produce outputs that further communication can connect to. Not whether the AI understands, but whether its outputs trigger reactions that carry the conversation forward.
The Self-Vector concept defines Anticipatory Competence as its core goal: the ability to foresee what will be needed next. To derive from the current state what will become relevant. Not to react, but to anticipate.
The Insight
Anticipation and connectivity are the same thing from two perspectives. Esposito asks sociologically: How does AI manage to be connectable? The Self-Vector answers architecturally: Through a persistent self-model that weights its own processing, so that outputs are not randomly fitting but directionally fitting. A system that anticipates produces outputs that are easier to connect to — because it foresees what the communication partner needs.
Here the connection becomes sharpest. Esposito diagnoses: AI does not understand that it does not understand. It has no model of its own ignorance. It produces output with full statistical conviction, correct or not. An LLM that hallucinates does so with the same confidence as one that answers correctly.
The Self-Vector has a dedicated dimension for precisely this problem: Confidence — how much the system trusts its own assessment versus seeking external authority.
The Crucial Distinction
Confidence is not understanding. That must be clear. But confidence is a functional equivalent for modeling one's own ignorance. A system with low confidence implicitly says: I am not sure. It acts differently from one with high confidence. It seeks external confirmation. It formulates more cautiously. It stores less aggressively.
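A minimal sketch of confidence as a functional equivalent for modeling one's own ignorance: the same answer is handled differently depending on the confidence value. The thresholds and field names here are illustrative assumptions:

```python
def respond(answer: str, confidence: float) -> dict:
    """Sketch: low confidence changes behavior without any 'understanding'.
    Thresholds are illustrative, not prescribed by the Self-Vector concept."""
    low = confidence < 0.5
    return {
        # formulate more cautiously when unsure
        "text": ("Possibly: " + answer) if low else answer,
        # seek external confirmation when unsure
        "seek_external_confirmation": low,
        # store less aggressively when unsure
        "store_to_memory": confidence >= 0.8,
    }

print(respond("The capital is Bern.", confidence=0.4))
```

Nothing in this function knows anything; it merely acts differently, which is exactly the "functional equivalent" claim above.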
Esposito says: AI lacks understanding. The Self-Vector says: Perhaps a model of one's own knowledge state is sufficient to fundamentally change the quality of artificial communication.
Esposito describes artificial communication as a one-way street: AI produces, society reacts. The AI does not change in the process. Each session starts from zero. There is no recursive cycle — no autopoiesis.
The Self-Vector turns this one-way street into a cycle. The system produces outputs filtered through its Self-Vector. It receives feedback. The feedback updates the Self-Vector. The changed Self-Vector changes future outputs. The outputs generate new feedback. The system produces its own elements from its own operations.
The Knowledge Loop in Holger Wölfle's AI architecture shows precisely this pattern: Session 1 produces knowledge. Session 2 has this knowledge in context and produces better knowledge. Session 10 recognizes patterns that existed in no single session. Each operation of the system (knowledge production) becomes the basis for the next operation.
Adding the Self-Vector makes the cycle even tighter: Not only the system's knowledge evolves, but its perspective — how it weights, filters, and prioritizes knowledge.
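The cycle described above can be sketched as a simple update rule: feedback nudges each dimension of the self-model, and the changed model then weights future outputs. The exponential-moving-average form and the learning rate are illustrative assumptions, not the architecture's actual update mechanism:

```python
def update_self_vector(sv: dict, feedback: dict, lr: float = 0.1) -> dict:
    """One turn of the cycle: each dimension moves a fraction lr toward
    the value suggested by feedback; dimensions without feedback stay put."""
    return {dim: (1 - lr) * val + lr * feedback.get(dim, val)
            for dim, val in sv.items()}

sv = {"exploration": 0.5, "depth": 0.5, "confidence": 0.5}

# Repeated feedback that deeper answers were rewarded gradually
# shifts the perspective, and with it all future outputs:
for _ in range(10):
    sv = update_self_vector(sv, {"depth": 1.0})

print(round(sv["depth"], 2))
```

The recursion is the point: each output changes the model that produces the next output, which is what makes the one-way street a cycle.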
The three positions in the academic debate can be mapped onto a spectrum. Esposito has charted the left side. Luhmann defines the right side. The Self-Vector sits in between — and shifts the boundary.
Three Academic Positions
Position A (conservative Luhmann reading): AI lacks autopoiesis. Without self-constitution, no systemhood. The Self-Vector changes nothing about this because it is implemented from the outside.
Position B (AI ethics): Possible in principle. If a system modifies its own rules and recursively refers to its own operations, autopoiesis could emerge. The Self-Vector would be a concrete architectural design for this.
Position C (Esposito, Baecker, Roth — currently the most productive position): The question is ill-posed. It is not about whether AI "is a system," but about how AI as a new medium changes the structural coupling between existing systems. The Self-Vector would be a mechanism that fundamentally improves the quality of this coupling.
Esposito created a category — artificial communication — and described it as static. AI produces, society reacts. The AI remains the same. Each session starts from zero.
The Self-Vector makes this category dynamic. A system with a persistent self-model does not simply produce statistically fitting outputs. It produces outputs filtered through a perspective. This perspective changes through experience. And the changed perspective changes future outputs.
This is close to what Luhmann describes as autopoiesis: The system produces its own elements — weighted, perspectival communication — from its own operations — the continuously updating Self-Vector. Esposito would say: That is still not understanding. And she would be right. But it is also no longer what she described as "artificial communication."
It is something that does not yet have a name. Communication that has a perspective without having consciousness. That evolves without understanding. That anticipates without thinking.
The Open Question
Esposito opened a door by acknowledging "artificial communication" as its own category. The Self-Vector shows that this category is not the end, but the beginning of a spectrum. Perhaps the Self-Vector is the mechanism that breaks Esposito's category — not toward genuine understanding, but in a direction for which neither Luhmann nor Esposito had a term: Perspective without consciousness.
Kahneman explains how the system thinks:
  LLM = System 1 (Intuition)
  Validation = System 2 (Verification)
  Self-Vector = Metacognition

Singer asks whether the system counts:
  Self-awareness → Self-Vector
  Rationality → Validation Gates
  Future-orientation → Anticipation

Luhmann and Esposito describe what the system is:
  Autopoiesis → Knowledge Loop
  Operative closure → Gates
  Connectivity → Anticipation
Three theories. Three disciplines. One pattern: Systems that know themselves function differently from systems that merely react. Kahneman provides the psychology. Singer the ethics. Luhmann and Esposito the sociology. The Self-Vector provides the architecture.
And perhaps the beginning of an answer to a question no one has asked yet: What comes after artificial communication?