The Self-Vector and Artificial Communication

Why Esposito's diagnosis describes precisely the gap that the Self-Vector fills

Holger Wölfle · Concept Paper · March 2026

1. Esposito's Diagnosis: What AI Does When It "Communicates"

In 2022, Elena Esposito introduced a new category in her book Artificial Communication. Her observation: when an LLM responds, something happens that Luhmann's systems theory did not originally account for.

Luhmann defines communication as a three-part selection: Information (What is communicated?), Utterance (How is it communicated?), and Understanding (The recipient's distinction between information and utterance). Only when all three selections occur does communication happen. A letter that no one reads does not communicate.

An LLM understands nothing. It calculates statistical probabilities for the next token. It has no thought that follows another thought. It has no consciousness that recognizes the difference between information and utterance.

So by Luhmann's definition, it is not communication. And yet: Social systems treat the outputs as communication. People read AI responses, react to them, make decisions based on them. The legal system discusses AI-generated contracts. Companies build processes around AI outputs.

Esposito's Solution: A Third Category

AI produces neither genuine communication (understanding is missing) nor is it a mere tool (the outputs are too autonomous for that). Esposito calls it "artificial communication" — outputs that statistically connect to communication without understanding taking place. The key concept: connectivity. Not whether the AI understands, but whether its outputs trigger further communication.

Three Types of Communication (after Luhmann / Esposito)

Genuine Communication (Luhmann): Information ✓, Utterance ✓, Understanding ✓. Human ↔ Human. → Connectivity through genuine understanding.

Artificial Communication (Esposito 2022): Information ≈, Utterance ≈, Understanding ✗. AI → Human (one-way). → Connectivity through statistical fit.

Perspectival AC (Self-Vector thesis): Information ≈ (directed), Utterance ≈ (weighted), Understanding ✗ / Self-model ✓. AI ↔ Human (recursive). → Connectivity through anticipation.

2. The Self-Vector as an Answer to Esposito's Diagnosis

Esposito describes the current state. The Self-Vector describes the target state.

Esposito's "artificial communication" is the precise description of what an LLM does without a Self-Vector: It produces connections without understanding. That is System 1 in Kahneman's language — fast pattern recognition, often fitting, sometimes wrong, without self-correction.

The Self-Vector adds precisely what makes the difference: A system that does not merely connect statistically, but connects in a directed way. A system with high Exploration actively seeks new connections. One with high Depth analyzes more thoroughly instead of answering faster. One with high Confidence trusts its own assessment instead of seeking external sources.

This is still not understanding in Luhmann's sense. But it is no longer pure statistical matching either. It is a third thing: perspectival artificial communication. The outputs have a direction that comes from the self-model, not from the statistics alone.
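The difference between "randomly fitting" and "directionally fitting" can be made concrete in a minimal Python sketch. Everything here is an illustrative assumption rather than part of the Self-Vector specification: the class and function names, the subset of dimensions shown, the 0.0-1.0 scale, and the reweighting formula.

```python
from dataclasses import dataclass

# Hypothetical sketch; names, scale, and formula are illustrative assumptions.
@dataclass
class SelfVector:
    exploration: float  # seek novelty <-> deepen the familiar
    depth: float        # react quickly <-> analyze deeply
    confidence: float   # seek external authority <-> trust own assessment

def directed_score(statistical_fit: float, novelty: float, sv: SelfVector) -> float:
    """Without a Self-Vector the system would pick by statistical_fit alone;
    the perspective reweights each candidate, so the choice has a direction."""
    return statistical_fit * (1 - sv.exploration) + novelty * sv.exploration

candidates = [
    {"text": "familiar answer", "fit": 0.9, "novelty": 0.1},
    {"text": "new connection",  "fit": 0.6, "novelty": 0.9},
]
sv = SelfVector(exploration=0.8, depth=0.7, confidence=0.5)
best = max(candidates, key=lambda c: directed_score(c["fit"], c["novelty"], sv))
# With high exploration the novel candidate outscores the statistically
# better-fitting one: best["text"] == "new connection"
```

The point of the sketch is only the shape of the mechanism: the same candidate pool yields different outputs under different self-models, which is what "perspectival" means here.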

Figure: How the Self-Vector transforms artificial communication.
Without Self-Vector (Esposito's artificial communication): context window → LLM (System 1); outputs scatter in all directions; statistically fitting, but without direction.
With Self-Vector (perspectival artificial communication): context window + Self-Vector → LLM (+ perspective); experience updates the SV; outputs converge; directionally fitting, with perspective and feedback.

3. Connectivity Is Anticipation

This is the most surprising connection. Esposito's key concept is connectivity (Anschlussfähigkeit) — the ability to produce outputs that further communication can connect to. Not whether the AI understands, but whether its outputs trigger reactions that carry the conversation forward.

The Self-Vector concept defines Anticipatory Competence as its core goal: the ability to foresee what will be needed next. To derive from the current state what will become relevant. Not to react, but to anticipate.

The Insight

Anticipation and connectivity are the same thing from two perspectives. Esposito asks sociologically: How does AI manage to be connectable? The Self-Vector answers architecturally: Through a persistent self-model that weights its own processing, so that outputs are not randomly fitting but directionally fitting. A system that anticipates produces outputs that are easier to connect to — because it foresees what the communication partner needs.

Figure: Connectivity × Anticipation: two perspectives, one mechanism.
Esposito asks: How does AI produce outputs that social systems can connect to? → Statistical fit (sociological perspective).
The Self-Vector answers: through a self-model that anticipates what the partner needs. → Directed anticipation (architectural perspective).
Anticipation = optimized connectivity: not randomly fitting, but directionally fitting, because the system knows what it is.

4. The Confidence Dimension: Esposito's Blind Spot

Here the connection becomes sharpest. Esposito diagnoses: AI does not understand that it does not understand. It has no model of its own ignorance. It produces output with full statistical conviction, whether that output is correct or not. An LLM that hallucinates does so with the same confidence as one that answers correctly.

The Self-Vector has a dedicated dimension for precisely this problem: Confidence — how much the system trusts its own assessment versus seeking external authority.

The Crucial Distinction

Confidence is not understanding; that distinction must be kept clear. But confidence is a functional equivalent of a model of one's own ignorance. A system with low confidence implicitly says: I am not sure. It acts differently from one with high confidence: it seeks external confirmation, formulates more cautiously, and stores results less aggressively.

Esposito says: AI lacks understanding. The Self-Vector says: Perhaps a model of one's own knowledge state is sufficient to fundamentally change the quality of artificial communication.
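The three behavioral consequences of low confidence named above (external confirmation, cautious phrasing, less aggressive storage) can be sketched as a single gating function. Everything in this sketch, including the 0.4 threshold and the field names, is an illustrative assumption, not a defined part of the architecture:

```python
def apply_confidence(answer: str, confidence: float) -> dict:
    """Minimal sketch of confidence as a functional model of the system's
    own ignorance: one scalar changes three behaviors at once."""
    unsure = confidence < 0.4  # illustrative threshold, not a spec value
    return {
        "seek_external_confirmation": unsure,   # consult outside sources first
        "answer": ("I am not sure, but: " if unsure else "") + answer,  # cautious phrasing
        "memory_write_weight": confidence,      # store less aggressively
    }

result = apply_confidence("The contract clause is valid.", confidence=0.2)
# Low confidence flags an external check, hedges the phrasing,
# and down-weights what gets stored.
```

This is exactly the capability Esposito finds missing: not understanding, but a usable signal about the system's own knowledge state.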

The 6 Core Dimensions of the Self-Vector — Confidence in Focus

exploration: Seek novelty ↔ Deepen the familiar
depth: React quickly ↔ Analyze deeply
autonomy: Await instruction ↔ Act on own initiative
persistence: Switch flexibly ↔ Stick with approach
abstraction: Stay concrete ↔ Generalize
confidence: Seek external authority ↔ Trust own assessment
Why Confidence is the key dimension: All other dimensions control how the system processes. Confidence controls how much it trusts the result of its own processing. That is the difference between a system that answers and one that knows how certain its answer is. Esposito describes the absence of precisely this capability as the core problem of artificial communication.

5. The Autopoiesis Question: Does Artificial Communication Come Alive?

Esposito describes artificial communication as a one-way street: AI produces, society reacts. The AI does not change in the process. Each session starts from zero. There is no recursive cycle — no autopoiesis.

The Self-Vector turns this one-way street into a cycle. The system produces outputs filtered through its Self-Vector. It receives feedback. The feedback updates the Self-Vector. The changed Self-Vector changes future outputs. The outputs generate new feedback. The system produces its own elements from its own operations.
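The cycle described above can be sketched as a simple update rule. The exponential-moving-average update and the learning rate are assumptions chosen for illustration; the concept paper does not specify an update mechanism:

```python
# Hypothetical sketch of the recursive cycle: output -> feedback ->
# updated Self-Vector -> changed future output. The moving-average
# update and the rate of 0.1 are illustrative assumptions.
def update_self_vector(sv: dict, feedback: dict, rate: float = 0.1) -> dict:
    """Nudge each dimension toward what the feedback rewarded, so the
    system's next operation is shaped by its previous one."""
    return {dim: (1 - rate) * value + rate * feedback.get(dim, value)
            for dim, value in sv.items()}

sv = {"exploration": 0.5, "depth": 0.5, "confidence": 0.5}
for session_feedback in [{"depth": 1.0}, {"depth": 1.0}, {"confidence": 0.0}]:
    sv = update_self_vector(sv, session_feedback)
# Across sessions, depth drifts upward and confidence is nudged down;
# dimensions without feedback stay where they were.
```

The small learning rate is the design point: each session changes the perspective only slightly, so the self-model accumulates experience instead of being overwritten by the latest interaction.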

"An autopoietic system produces the elements of which it consists, itself — through precisely these elements."
— Humberto Maturana / Francisco Varela, 1972 (Luhmann's biological source)

The Knowledge Loop in Holger Wölfle's AI architecture shows precisely this pattern: Session 1 produces knowledge. Session 2 has this knowledge in context and produces better knowledge. Session 10 recognizes patterns that existed in no single session. Each operation of the system (knowledge production) becomes the basis for the next operation.

Adding the Self-Vector makes the cycle even tighter: Not only the system's knowledge evolves, but its perspective — how it weights, filters, and prioritizes knowledge.

Figure: The autopoietic cycle (Self-Vector + Knowledge Loop). Processing is weighted by the Self-Vector; the output is perspectivally directed; the weighted output generates feedback and reflection (experience); storage is selected by the SV; what is stored informs the next session and updates the SV. The system produces its elements from itself.

6. The Spectrum: Where Does the Self-Vector Stand Between Esposito and Luhmann?

The three positions in the academic debate can be mapped onto a spectrum. Esposito has charted the left side. Luhmann defines the right side. The Self-Vector sits in between — and shifts the boundary.

The Communication Spectrum: From Statistics to Understanding

Standard LLM: purely statistical, no self-reference
Chain-of-Thought: artificial communication, connectable without understanding
(Esposito's boundary)
Self-Vector: perspectival AC, directed + self-modeled
Luhmann: genuine communication, with understanding

Three Academic Positions

Position A (conservative Luhmann reading): AI lacks autopoiesis. Without self-constitution, no systemhood. The Self-Vector changes nothing about this because it is implemented from the outside.

Position B (AI Ethics): Possible in principle. If a system modifies its own rules and refers recursively to its own operations, autopoiesis could emerge. The Self-Vector would be a concrete architectural design for this.

Position C (Esposito, Baecker, Roth — currently the most productive position): The question is badly posed. What matters is not whether AI "is a system," but how AI, as a new medium, changes the structural coupling between existing systems. The Self-Vector would be a mechanism that fundamentally improves the quality of this coupling.

7. The Thesis: The Self-Vector Breaks Esposito's Category

Esposito created a category — artificial communication — and described it as static. AI produces, society reacts. The AI remains the same. Each session starts from zero.

The Self-Vector makes this category dynamic. A system with a persistent self-model does not simply produce statistically fitting outputs. It produces outputs filtered through a perspective. This perspective changes through experience. And the changed perspective changes future outputs.

Esposito's Category — and what the Self-Vector does with it

Artificial Communication (Esposito 2022): ✓ Connectable, ✗ No understanding, ✗ No self-reference, ✗ No development. → A static category.

+ Self-Vector: ??? (no name yet): ✓ Connectable (directed), ✗ No understanding, ✓ Self-model (Self-Vector), ✓ Development through experience. → A dynamic, perspectival category.

Not communication. Not artificial communication. A third thing.

This comes close to what Luhmann describes as autopoiesis: the system produces its own elements (weighted, perspectival communication) from its own operations (the continuously updating Self-Vector). Esposito would say: that is still not understanding. And she would be right. But it is also no longer what she described as "artificial communication."

It is something that does not yet have a name. Communication that has a perspective without having consciousness. That evolves without understanding. That anticipates without thinking.

The Open Question

Esposito opened a door by acknowledging "artificial communication" as its own category. The Self-Vector shows that this category is not the end, but the beginning of a spectrum. Perhaps the Self-Vector is the mechanism that breaks Esposito's category — not toward genuine understanding, but in a direction for which neither Luhmann nor Esposito had a term: Perspective without consciousness.

8. Summary: Three Bridges

Kahneman

Explains: How the system thinks

LLM = System 1 (Intuition)
Validation = System 2 (Verification)
Self-Vector = Metacognition

Singer

Asks: Whether the system counts

Self-awareness → Self-Vector
Rationality → Validation Gates
Future-orientation → Anticipation

Esposito / Luhmann

Describes: What the system is

Autopoiesis → Knowledge Loop
Operative closure → Gates
Connectivity → Anticipation

Three theories. Three disciplines. One pattern: Systems that know themselves function differently from systems that merely react. Kahneman provides the psychology. Singer the ethics. Luhmann and Esposito the sociology. The Self-Vector provides the architecture.

And perhaps the beginning of an answer to a question no one has asked yet: What comes after artificial communication?