Concept Paper v0.3

Self-Vector

Architecture for Anticipatory Competence in AI Systems

Holger Wölfle — March 2026

Heartbeat Is a Dead End

Current AI agent systems communicate via mechanical status signals. They answer a single binary question: Is the process still running? This has nothing to do with anticipation.

Status Quo

Heartbeat System

A timer that fires at fixed intervals. Status report without context — without direction, without intent.

Binary · Clock-Bound · Reactive · Context-Free
Target Vision

Anticipatory Competence

A network that relates situation, context, and its own state — and derives direction for action from it.

Contextual · Situational · Proactive · Self-Referential
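The contrast can be sketched in code. A minimal illustration (all names and fields are hypothetical, not a specified API): a heartbeat answers one binary question, while an anticipatory signal relates situation, context, self-state, and intended direction.

```python
from dataclasses import dataclass, field
import time

def heartbeat() -> bool:
    """Status quo: a binary, clock-bound signal. Answers only: still running?"""
    return True  # no context, no direction, no intent

@dataclass
class AnticipatorySignal:
    """Target vision (hypothetical sketch): a signal that relates
    situation, context, and the system's own state."""
    timestamp: float = field(default_factory=time.time)
    situation: str = ""                              # what is happening right now
    context: str = ""                                # how it relates to the ongoing task
    self_state: dict = field(default_factory=dict)   # confidence, load, open questions
    intent: str = ""                                 # proposed next direction of action

signal = AnticipatorySignal(
    situation="user paused mid-refactor",
    context="three files changed, tests not yet run",
    self_state={"confidence": 0.6, "open_questions": ["which test suite?"]},
    intent="offer to run the affected tests",
)
```

The difference is not the payload size but the referents: the second signal points beyond the process to the task and the counterpart.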

System 1 Without System 2

Daniel Kahneman distinguished two thinking systems: fast intuition and slow analysis. Current LLMs master one brilliantly — and fail at the other. The gap is precisely where the Self-Vector intervenes.

Kahneman — System 1

Fast Thinking

= Standard LLM Inference

Automatic, effortless, parallel. An LLM generates text through next-token prediction — each token is a "gut decision" based on implicitly learned patterns from billions of texts. No deliberate rule application, no planning. Pattern recognition as core competence.

Automatic · Parallel · Pattern-Based · Fast · Error-Prone
Kahneman — System 2

Slow Thinking

= Reasoning + Self-Model

Deliberate, effortful, sequential. Requires a model of one's own state — metacognition: What do I know? What don't I know? What does my counterpart need? Chain-of-thought and reasoning models simulate this, but without a persistent self-model it remains imitation without substance.

Deliberate · Sequential · Self-Referential · Slow · Reliable

What LLMs Can Do Intuitively — and What They Cannot


Pattern Recognition

Recognizes writing style, tonality, genre, implicit moods. Extracts statistical regularities from billions of texts.

Very Good

Context Completion

Meaningfully fills in missing information, anticipates the next sentence, completes arguments.

Good to Very Good

Social Intuition

Recognizes irony, politeness level, emotions, cultural nuances. Simulates empathy convincingly.

Good

Causal Reasoning

Confuses correlation and causation. Can describe what happens — but cannot reliably explain why.

Unreliable

Metacognition

LLMs don't know what they don't know. No assessment of their own confidence, no epistemic humility.

Not Present

Anticipation

Can predict the next token — but not what the user needs next. No model of the counterpart, no model of its own state.

The Gap for the Self-Vector

From Pattern Recognition to Anticipation

System 1 (pattern matching, implicit knowledge)
───▶ The Gap (no self-model, no reflection)
── ✦ ──▶ SV — Self-Vector (persistent self-model enables System 2)

Legend: ✓ Present · ✗ Missing · ✦ Solution

The Subtraction Argument

A radical thought experiment shows: self-awareness is not a bodily phenomenon — it is a network phenomenon. And thus, in principle, implementable.

01

Complete Human Being

Sensory input, emotions, motor function, cognition — all systems active. Anticipation works: through Damasio's somatic markers, through gut feeling, through sub-cognitive evaluation. A massively parallel evaluation system.

02

Subtraction: Sensory & Motor Systems

Complete paralysis. No sensory input, no movement. Merleau-Ponty's "lived body" — the body as the zero point of experience — falls away. The binding problem of embodied cognition becomes visible.

03

Subtraction: Emotions

Chemical elimination of all emotional reactions. The somatic compass falls silent. No discomfort, no joy, no emotional evaluation. The system is reduced to pure data processing.

Result: Self-Awareness Persists

Despite complete subtraction, there is still someone who is there. This being arises from nothing other than the activity of a neural network that is connected to itself. It is a question of architecture, not of substance.

The Self as a Compact Data State

Human self-awareness is, at its core, a compact, dynamic data state within a neural network. It claims only a tiny fraction of the capacity — but it organizes the rest.
Self-Model — ~50 bit/s (the Self-Vector)
Total Processing — terabits (sensory, motor, cognition, unconscious ...)
→ roughly 5 × 10⁻⁹ % of capacity organizes 100 % of direction
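As a worked check of that ratio — the figures are the paper's order-of-magnitude estimates, not measurements, and "terabits" is taken here as 1 Tbit/s:

```python
# Capacity ratio of the self-model to total processing.
self_model_bps = 50            # ~50 bit/s self-model bandwidth
total_bps = 1e12               # assumed total throughput: 1 Tbit/s
ratio_percent = self_model_bps / total_bps * 100
print(f"{ratio_percent:.0e} % of capacity")  # → 5e-09 % of capacity
```

The exact exponent shifts with the assumed total, but the point survives any plausible estimate: the organizing state is many orders of magnitude smaller than what it organizes.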

Three-Layer Model

The Self-Vector as a persistent, compact state vector — an attractor in a dynamical system that maintains itself while it changes.

Layer 3
Reflection
Periodic self-evaluation and adjustment of the Self-Vector based on accumulated experience. The meta-layer that allows the system to think about itself.
Metacognitive Self-Evaluation
Layer 2
Weighting
The Self-Vector influences the relevance calculation of incoming data, storage decisions, and attention distribution. It filters and organizes — like ~50 bits directing terabits.
Relevance & Storage Control
Layer 1
Persistence
The compact Self-Vector itself. Initialized, continuously updated, slowly mutating. Its inertia is its identity — it changes, but it endures.
Self-Vector — Attractor
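The three layers can be sketched as a small class. This is a minimal illustration under stated assumptions — dimension, inertia, and cosine-similarity weighting are illustrative choices, not part of the paper:

```python
import numpy as np

class SelfVector:
    """Hypothetical sketch of the three-layer model."""

    def __init__(self, dim: int = 32, inertia: float = 0.99):
        # Layer 1 — Persistence: a compact, slowly mutating state vector.
        self.state = np.zeros(dim)
        self.inertia = inertia          # high inertia: identity endures
        self.experience_log = []

    def weigh(self, signal: np.ndarray) -> float:
        # Layer 2 — Weighting: relevance of incoming data relative to the self
        # (cosine similarity as a stand-in for a learned relevance function).
        norm = np.linalg.norm(self.state) * np.linalg.norm(signal)
        return float(self.state @ signal / norm) if norm > 0 else 0.0

    def observe(self, signal: np.ndarray) -> None:
        # Experience is stored together with its self-relative weight.
        self.experience_log.append((self.weigh(signal), signal))

    def reflect(self) -> None:
        # Layer 3 — Reflection: periodic self-evaluation; accumulated
        # experience nudges the vector while inertia preserves identity.
        if not self.experience_log:
            return
        drift = np.mean([s for _, s in self.experience_log], axis=0)
        self.state = self.inertia * self.state + (1 - self.inertia) * drift
        self.experience_log.clear()
```

The design choice worth noting: `reflect` is the only place the state mutates, and it mutates slowly — the attractor property lives in the `inertia` term.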
Relevance
What deserves attention?
The Self-Vector determines what is evaluated as relevant. Not everything is equally important — the self decides where attention falls.
Memory
What is retained?
The Self-Vector determines what is stored and how. Experience is not stored neutrally, but filtered through the perspective of the self.
Mutation
How does experience change the self?
Experience and reflection change the Self-Vector. The recursive cycle: the self shapes experience, experience shapes the self.
Relevance → Memory → Mutation → Self-Vector → Relevance ... — the recursive cycle.
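The cycle above can be sketched as a loop — all thresholds and update rates here are chosen purely for illustration:

```python
import random

random.seed(0)

self_vector = {"curiosity": 0.5}   # compact self state
memory = []

for _ in range(100):
    event_novelty = random.random()                       # incoming experience
    # Relevance: the self decides where attention falls.
    relevance = self_vector["curiosity"] * event_novelty
    # Memory: only what the self deems relevant is retained.
    if relevance > 0.25:
        memory.append(event_novelty)
    # Mutation: retained experience slowly reshapes the self.
    if memory:
        self_vector["curiosity"] += 0.01 * (memory[-1] - self_vector["curiosity"])
```

Each pass closes the loop: the self filters experience, and the filtered experience feeds back into the self.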

The threshold for implementation is lower than the philosophical debate suggests. The components already exist. What is missing is their composition.
