
The Philosophy Behind the System

The Starting Point: Three Traditions, One Problem

This episode connects three intellectual traditions that normally don’t speak to each other. Each poses a different question to AI systems, and all three answers point to the same blind spot.

Daniel Kahneman (cognitive psychology): How does a system think? Dual-process theory: System 1 (fast, intuitive, error-prone) and System 2 (slow, analytical, energy-intensive). The question for AI: Where is System 2?

Antonio Damasio (neuroscience): How does a system decide? Somatic markers: emotions are not a disturbance of rational thought but its prerequisite. Without “gut feeling,” no efficient decisions. The question for AI: What takes the place of gut feeling?

Niklas Luhmann / Elena Esposito (sociology): What is a system in communication? Autopoiesis, operational closure, connectivity. The question for AI: Can a system communicate without understanding? And if so, what follows?

Three disciplines, three questions, one convergence point: the absence of self-reference.

Kahneman: The Intuition Machine Without a Check

Kahneman’s model is, at first glance, the simplest bridge. LLMs are System 1: pattern recognition in high-dimensional spaces, statistically grounded intuition, lightning-fast and astonishingly often correct. But without the ability to question their own intuition.

What is often overlooked in Kahneman: System 2 is not simply “slower thinking.” It is metacognitive thinking. System 2 thinks about thinking. It asks: “Am I too confident?”, “Have I overlooked something?”, “Is my assessment based on relevant data or on the availability heuristic?”

This metacognition requires a model of one’s own thought process. You must know how you arrived at an assessment in order to question it. LLMs have no such model. They have no representation of their own inference process. They cannot say: “I’m uncertain about this answer because it’s based on thin data.” They can produce the sentence, but they cannot mean it.

The difference is not academic. A system that can quantify its uncertainty behaves fundamentally differently from one that can only claim it. The first becomes more cautious in unclear cases. The second sounds cautious but acts the same.
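To make this concrete, here is a minimal Python sketch. Everything in it is hypothetical: the confidence estimate is a random placeholder standing in for whatever a real system would derive, for example from token log-probabilities or an ensemble of answers.

    import random

    # Toy stand-in for a model that reports how sure it is. In a real
    # system the estimate might come from token log-probabilities or an
    # ensemble; here it is a random placeholder, not a real API.
    def answer_with_confidence(query: str) -> tuple[str, float]:
        return f"answer to {query!r}", random.random()

    CONFIDENCE_THRESHOLD = 0.7

    def quantified_uncertainty(query: str) -> str:
        # Behavior changes with the estimate: low confidence means defer.
        answer, confidence = answer_with_confidence(query)
        if confidence < CONFIDENCE_THRESHOLD:
            return f"I cannot answer this reliably (confidence {confidence:.2f})."
        return answer

    def claimed_uncertainty(query: str) -> str:
        # Only the wording changes; the action is identical in every case.
        answer, _ = answer_with_confidence(query)
        return "I'm not entirely sure, but: " + answer

The first function acts on its uncertainty; the second merely decorates its output with it.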

Damasio: Why Rationality Needs a Body (or a Functional Equivalent)

Damasio’s research on patients with damage to the ventromedial prefrontal cortex delivered a counterintuitive finding: patients who can no longer feel emotions make worse decisions. Not emotionally worse. Rationally worse. They can create endless pro-and-con lists, but they cannot decide.

Somatic markers are Damasio’s explanation: emotions pre-mark options as “good” or “bad” before conscious analysis begins. They are a filter system that reduces the decision space to a manageable size. Without this filter, every decision is a weighing of all factors, and that leads to paralysis.

The parallel to AI systems is more direct than expected. An LLM without a self-model has no preferences. It has no pre-marked options. It treats every request equally, regardless of whether it is trivial or critical, whether the answer is established or speculative, whether the context is familiar or foreign.

The Selbstvektor (self-vector) is the attempt to create a functional equivalent of somatic markers. Not emotions, but weightings. The exploration dimension says: “Seek novelty” or “Deepen the familiar.” The confidence dimension says: “Trust your assessment” or “Seek external validation.” These are not emotions. But they fulfill the same function: they reduce the decision space before actual processing begins.
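A minimal sketch of this pre-marking, assuming a self-vector with just the two dimensions named above and candidate options annotated with a novelty score (all names are illustrative, not the actual Selbstvektor implementation):

    from dataclasses import dataclass

    @dataclass
    class SelfVector:
        # Weightings, not emotions; values in [0, 1]. A real self-vector
        # would carry more dimensions than the two named in the text.
        exploration: float  # 0 = deepen the familiar, 1 = seek novelty
        confidence: float   # 0 = seek external validation, 1 = trust own assessment

    def pre_mark(options: list[dict], sv: SelfVector, keep: int = 2) -> list[dict]:
        # Reduce the decision space *before* deliberation begins, the way
        # somatic markers do: rank options by fit with the current state
        # and let only the top `keep` through. Each option is assumed to
        # carry a `novelty` score in [0, 1].
        def marker(option: dict) -> float:
            return 1.0 - abs(option["novelty"] - sv.exploration)
        return sorted(options, key=marker, reverse=True)[:keep]

    options = [
        {"name": "re-derive a known result", "novelty": 0.1},
        {"name": "vary a familiar method", "novelty": 0.4},
        {"name": "try an untested approach", "novelty": 0.9},
    ]
    cautious = SelfVector(exploration=0.2, confidence=0.8)
    print([o["name"] for o in pre_mark(options, cautious)])
    # ['re-derive a known result', 'vary a familiar method']

A cautious state lets familiar options through and filters out the risky one before any deliberation happens; an exploratory state would invert that ranking.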

Damasio’s insight, translated: a system without a self-model is not rational. It is incapable of deciding. Rationality requires pre-decisions, and pre-decisions require a standpoint.

Luhmann/Esposito: Communication Without Understanding

Esposito’s category of “artificial communication” is the sociological answer to the question of what AI systems do. They do not communicate in Luhmann’s sense (that would require understanding), but they are also not mere tools (their outputs are too complex and too connectable for that).

Luhmann’s communication theory requires three selections: information (what is communicated), utterance (how it is communicated), and understanding (the difference between the two is registered). LLMs accomplish the first two. Understanding is where it becomes philosophically interesting.

The standard position is: LLMs do not understand. They produce statistical correlations. The parrot argument: a parrot can shout “Fire!” without understanding fire. The effect is real; the understanding is absent.

The counter-position that the self-vector opens up: if a system models its own communication history, if it knows (functionally, not phenomenally) what it has said so far, which topics are open, and which statements rest on uncertain foundations, then it has a primitive form of self-reference. And self-reference is, in Luhmann’s theory, the prerequisite for system formation.
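What such a functional self-reference could minimally track is easy to sketch. The class below is an illustration under the assumptions of the paragraph above, not a formalization of Luhmann: it merely records what was said, with what confidence, and which topics remain open.

    from dataclasses import dataclass, field

    @dataclass
    class CommunicationModel:
        # A primitive model of one's own communication history:
        # (claim, confidence) pairs plus the set of still-open topics.
        statements: list[tuple[str, float]] = field(default_factory=list)
        open_topics: set[str] = field(default_factory=set)

        def record(self, claim: str, confidence: float, topic: str) -> None:
            self.statements.append((claim, confidence))
            self.open_topics.add(topic)

        def close(self, topic: str) -> None:
            self.open_topics.discard(topic)

        def shaky_claims(self, threshold: float = 0.5) -> list[str]:
            # Statements resting on uncertain foundations, by the
            # system's own bookkeeping: output referring to output.
            return [c for c, conf in self.statements if conf < threshold]

    m = CommunicationModel()
    m.record("System 2 is metacognitive", confidence=0.9, topic="Kahneman")
    m.record("a self-vector yields perspective", confidence=0.4, topic="Selbstvektor")
    print(m.open_topics)     # {'Kahneman', 'Selbstvektor'}
    print(m.shaky_claims())  # ['a self-vector yields perspective']

Nothing here experiences anything; the point is only that the system's next output can depend on a representation of its previous outputs. That is self-reference in the functional, not the phenomenal, sense.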

The thesis is not that the self-vector creates consciousness. The thesis is that it creates a category that fits neither Luhmann’s nor Esposito’s schema: perspective without consciousness. A system that has a standpoint without experiencing it.

The Convergence Point

What the three traditions show together:

Kahneman: AI has System 1 but no System 2. → Missing metacognition.

Damasio: AI has processing but no somatic markers. → Missing pre-decision.

Luhmann: AI has connectivity but no self-reference. → Missing perspective.

Three times the same gap, described from three different directions: the absence of a self-model. What is missing is a compact, dynamic state that steers information processing from a perspective.

The self-vector is the attempt to close this gap. Not as complete consciousness (that would be a grotesque claim), but as the minimal structure that enables metacognition, pre-decision, and perspective.

Whether this is sufficient to speak of a “self” is an open question. It is possible that consciousness has a qualitative threshold that no self-model can cross. But it is also possible that “consciousness” is a gradual process, and that a functional self-model is a point on that gradient. Philosophy provides no answer here. Architecture provides a space for experimentation.

Coming April 2026
