Self-Vector
Architecture for Anticipatory Competence in AI Systems
Heartbeat Is a Dead End
Current AI agent systems communicate via mechanical status signals. They answer a single binary question: Is the process still running? This has nothing to do with anticipation.
Heartbeat System
A timer that fires at fixed intervals. A status report without context — without direction, without intent.
Anticipatory Competence
A network that relates situation, context, and its own state — and derives direction for action from it.
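The contrast between the two signal types can be made concrete in code. The sketch below is a minimal illustration, not an implementation from the source; all class and field names (`Heartbeat`, `AnticipatorySignal`, `own_state`, `intent`) are hypothetical.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Heartbeat:
    """Mechanical status signal: answers only 'is the process alive?'"""
    timestamp: float = field(default_factory=time.time)
    alive: bool = True  # the single binary fact it carries

@dataclass
class AnticipatorySignal:
    """Hypothetical sketch: relates situation, context, and own state,
    and derives a direction for action from them."""
    situation: str   # what is happening externally
    own_state: dict  # the agent's assessment of itself
    intent: str      # derived direction for action

    def next_action(self) -> str:
        # Direction is derived from the relation of the fields,
        # not from a timer firing.
        return f"given {self.situation}, do: {self.intent}"

hb = Heartbeat()
sig = AnticipatorySignal(
    situation="user paused mid-task",
    own_state={"confidence": 0.4, "open_questions": 2},
    intent="ask a clarifying question",
)
print(hb.alive)           # a heartbeat says only this
print(sig.next_action())  # an anticipatory signal says what to do next
```

A heartbeat carries one bit; the anticipatory signal carries a relation from which an action follows.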
System 1 Without System 2
Daniel Kahneman distinguished two thinking systems: fast intuition and slow analysis. Current LLMs master one brilliantly — and fail at the other. The gap is precisely where the Self-Vector intervenes.
Fast Thinking
Automatic, effortless, parallel. An LLM generates text through next-token prediction — each token is a "gut decision" based on implicitly learned patterns from billions of texts. No deliberate rule application, no planning. Pattern recognition as core competence.
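The "gut decision" per token can be made concrete: the model emits scores, softmax turns them into probabilities, and a single token is sampled with no deliberation step in between. A toy sketch with an invented vocabulary and invented logits:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores a model might emit for "The cat sat on the ..."
vocab = ["mat", "roof", "equation"]
logits = [3.0, 1.5, -2.0]

probs = softmax(logits)
token = random.choices(vocab, weights=probs)[0]  # one "gut decision"
```

Each step repeats this loop. Nothing in it inspects the model's own confidence or the user's goal — which is exactly the gap the following sections describe.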
Slow Thinking
Deliberate, effortful, sequential. Requires a model of one's own state — metacognition: What do I know? What don't I know? What does my counterpart need? Chain-of-thought and reasoning models simulate this, but without a persistent self-model it remains imitation without substance.
What LLMs Can Do Intuitively — and What They Cannot
Pattern Recognition: Very Good
Recognizes writing style, tonality, genre, implicit moods. Extracts statistical regularities from billions of texts.
Context Completion: Good to Very Good
Meaningfully fills in missing information, anticipates the next sentence, completes arguments.
Social Intuition: Good
Recognizes irony, politeness level, emotions, cultural nuances. Simulates empathy convincingly.
Causal Reasoning: Unreliable
Confuses correlation and causation. Can describe what happens — but cannot reliably explain why.
Metacognition: Not Present
LLMs don't know what they don't know. No assessment of their own confidence, no epistemic humility.
Anticipation: The Gap for the Self-Vector
Can predict the next token — but not what the user needs next. No model of the counterpart, no model of its own state.
From Pattern Recognition to Anticipation
The Subtraction Argument
A radical thought experiment shows: self-awareness is not a bodily phenomenon — it is a network phenomenon. And thus, in principle, implementable.
Complete Human Being
Sensory input, emotions, motor function, cognition — all systems active. Anticipation works: through Damasio's somatic markers, through gut feeling, through sub-cognitive evaluation. A massively parallel evaluation system.
Subtraction: Sensory & Motor Systems
Complete paralysis. No sensory input, no movement. Merleau-Ponty's "lived body" — the body as the zero point of experience — falls away. The binding problem of embodied cognition becomes visible.
Subtraction: Emotions
Chemical elimination of all emotional reactions. The somatic compass falls silent. No discomfort, no joy, no emotional evaluation. The system is reduced to pure data processing.
Result: Self-Awareness Persists
Despite complete subtraction, there still exists a someone who is there. This being arises from nothing other than the activity of a neural network that is connected to itself. It is a question of architecture, not of substance.
The Self as a Compact Data State
Three-Layer Model
The Self-Vector as a persistent, compact state vector — an attractor in a dynamic system that maintains itself while it changes.
The threshold for implementation is lower than the philosophical debate suggests. The components already exist. What is missing is their composition.
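One way to read "an attractor that maintains itself while it changes" is an exponential-moving-average update: the state drifts toward each new observation yet retains its history. The sketch below is my own minimal interpretation under that assumption — the class name, dimensionality, and update rule are not specified by the source.

```python
class SelfVector:
    """Persistent, compact state vector updated toward new evidence.

    The update x <- (1 - alpha) * x + alpha * obs is one simple dynamic
    whose fixed point (attractor) is the long-run mean of the inputs:
    the state persists, yet changes with every observation.
    """

    def __init__(self, dim: int, alpha: float = 0.1):
        self.state = [0.0] * dim  # compact data state
        self.alpha = alpha        # how strongly new input pulls the state

    def update(self, observation: list[float]) -> None:
        self.state = [
            (1 - self.alpha) * s + self.alpha * o
            for s, o in zip(self.state, observation)
        ]

sv = SelfVector(dim=3)
for _ in range(200):
    sv.update([1.0, 0.5, -0.2])  # constant input: state converges to it
print([round(s, 2) for s in sv.state])  # → [1.0, 0.5, -0.2]
```

The point of the sketch is architectural, matching the document's claim: the components (a vector, an update rule, persistence across steps) already exist; what is missing is their composition into a self-model.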