Knowledge, Memory, Structure.
What AI lacks is not intelligence.
I’m trying to build the architecture that changes this.
You know what you should do. You don’t do it. Not because you’re lazy. But because between intention and action there’s a gap that willpower alone can’t close.
Not consciousness. A persistent self-model that predicts where the system will fail before it fails. The bridge between cognitive science and architecture.
ADHD, fatigue, overload. When inner reserves are depleted, you don’t need more willpower but a system that thinks with you.
Missing metacognition. Missing somatic markers. Missing self-reference. Three traditions that don’t talk to each other point to the same blind spot.
Each layer solves a different problem: What to store? What to validate? What to forget? Source-pinning is source criticism, confidence scoring is quantified epistemology.
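To make the idea concrete: a minimal sketch of what one record in such a layered memory could look like. All names and the decay formula here are illustrative assumptions, not the actual implementation — the point is only that every claim carries a pinned source, a confidence score, and a path toward controlled forgetting.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryEntry:
    """One record in a layered memory store (illustrative names)."""
    claim: str
    source: str          # source-pinning: where the claim comes from
    confidence: float    # quantified epistemology: 0.0 (guess) to 1.0 (verified)
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def decayed(self, half_life_days: float = 90.0) -> float:
        """Confidence eroding over time -- a simple stand-in for controlled forgetting."""
        age_days = (datetime.now(timezone.utc) - self.created).days
        return self.confidence * 0.5 ** (age_days / half_life_days)

entry = MemoryEntry(claim="Sessions are stateless by default",
                    source="architecture-notes.md", confidence=0.8)
print(round(entry.decayed(), 2))  # fresh entry: prints 0.8, decay is negligible
```

Storing, validating and forgetting then become operations on this one record type, one per layer.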
Learn moreVisualizations
Interactive graphics on AI architecture
Three-Layer Architecture
Persistence, weighting, reflection. The vertical structure of a learning system.
Validation Gates
From blind trust to 6 validation gates. How a system learns not to trust itself.
Heartbeat vs. Self-Vector
Why a pulse is not enough. The difference between status checking and self-modeling.
Self-Vector — 6 Dimensions
A persistent self-model for AI agents. Anticipatory competence through self-modeling.
Esposito-Self-Vector Convergence
Three theoretical frameworks converge: Kahneman (psychology), Singer (ethics), Luhmann/Esposito (sociology).
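The validation-gates idea can be sketched in a few lines: a claim is only admitted to memory if it passes every gate in a chain. The page does not name the six gates, so the three below are placeholders — the pattern, not the product.

```python
from typing import Callable

Claim = dict  # e.g. {"text": ..., "source": ..., "confidence": ...}

# Placeholder gates -- the actual six are not named here, so these are illustrative.
def has_source(c: Claim) -> bool:
    return bool(c.get("source"))

def has_confidence(c: Claim) -> bool:
    return 0.0 <= c.get("confidence", -1.0) <= 1.0

def not_self_referential(c: Claim) -> bool:
    return c.get("source") != "self"

GATES: list[Callable[[Claim], bool]] = [has_source, has_confidence, not_self_referential]

def admit(claim: Claim) -> bool:
    """A claim is stored only if every gate passes -- no blind trust."""
    return all(gate(claim) for gate in GATES)

print(admit({"text": "x", "source": "paper.pdf", "confidence": 0.7}))  # True
print(admit({"text": "y", "source": "self", "confidence": 0.7}))       # False
```

Each gate is a small, testable function; learning not to trust itself means the system rejects its own output as a source.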
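And the self-vector, as a data structure rather than a diagram: a persistent six-dimensional state that outlives the session and yields an anticipatory signal. The six dimension names and the risk formula below are hypothetical — the page only states that there are six dimensions.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SelfVector:
    """Six-dimensional self-model. Dimension names are hypothetical."""
    energy: float
    focus: float
    confidence: float
    load: float
    coherence: float
    drift: float

    def risk(self) -> float:
        """Crude anticipatory signal: high load and drift, low energy and focus
        -> failure becomes predictable before it happens."""
        return max(0.0, min(1.0, (self.load + self.drift) / 2 - (self.energy + self.focus) / 4))

    def persist(self, path: str) -> None:
        """Persistence is the point: the vector survives the session."""
        with open(path, "w") as f:
            json.dump(asdict(self), f)

v = SelfVector(energy=0.3, focus=0.4, confidence=0.5,
               load=0.9, coherence=0.7, drift=0.6)
print(round(v.risk(), 3))
```

A heartbeat answers "am I running?"; a self-vector answers "where am I about to fail?" — that is the difference between status checking and self-modeling.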
ExoCortex
A cognitive exoskeleton.
Your brain isn’t broken. It just works differently. ExoCortex is an external prefrontal cortex: a local AI system that provides the structure, memory and coaching your neurology doesn’t supply on its own.
ADD/ADHD
Prioritization, context switching, finishing things.
Autism Spectrum
Structure, routines, reducing cognitive overload.
Chronic Fatigue
Energy-conscious planning. Not more, but the right things.
MS-Fatigue
When the body rations energy unpredictably. Planning with fatigue, not against it.
Overload
Burnout, caregiving, too many open fronts. External structure when inner reserves are empty.
Procrastination
Not laziness, but a regulation problem. The gap between intention and action, made visible and bridgeable.
And beyond
Every form of cognitive variability that benefits from external structure.
System 2 — Podcast
The podcast about knowledge architecture for AI
Knowledge Architecture for AI
How an AI Agent Grew Up
AI systems are stateless. Every session starts from zero. The solution lies not in larger models, but in a knowledge architecture that separates memory, validation and controlled forgetting. From Plato's concept of knowledge to the 6-layer implementation.
What Happens When a Non-Engineer Builds AI Architecture
Socrates' "I know that I know nothing" as a methodological advantage. Why the absence of technical patterns expands rather than limits the solution space. Source-pinning is source criticism, confidence scoring is quantified epistemology.
The Philosophy Behind the System
Three traditions that don't talk to each other point to the same blind spot: Kahneman (missing metacognition), Damasio (missing somatic markers), Luhmann (missing self-reference). The self-vector as a functional equivalent to what all three are missing.
Latest Articles
Across all topics
The Gap Between Intention and Action
Procrastination is not a character flaw but a regulation problem. What a cognitive exoskeleton can do that a calendar …
When the Self-Model Disintegrates
ExoCortex is designed as an exoskeleton for stable neurodivergence. What happens when the system needs to accompany a …
Why Philosophy Builds Better AI Architecture
The transfer from Kant to validation gates, from Kahneman to AI self-models, from Esposito to controlled forgetting. How …
Bach and Second-Order Perception
Joscha Bach and AI self-models: consciousness as second-order perception, causal isolators, and cyberanimism.
The Madurodam Problem: When Coherence Lies
The Madurodam Problem: why a perfect AI self-model can be perfectly wrong. Coherence vs. correspondence in AI …
The Persistent Self-Model Gap
The persistent self-model gap: 10 papers, one confirmed research gap in AI self-models and AI Agent Memory architecture.
About
My path to AI architecture is not a straight one. I studied art history, German literature and philosophy, and worked with net art when the internet was still new. That’s where I learned to tell theory and practice apart: eloquent words about technology don’t equal understanding. That skepticism stays with me.
Philosophy brought formal logic. Marketing brought an understanding of how people absorb, filter and process information. Strategy brought an eye for systems: what connects to what, and where do the real problems emerge?
When I started working with AI agents, I noticed: the models are impressive. But without memory, without knowledge structure, without quality control, they are brilliant conversation partners with amnesia. So I started building. Not as a developer, but as someone who understands how knowledge must be organized.
The result is an architecture with six memory layers, validation mechanisms that distinguish primary sources from derivations, controlled forgetting and an AI coach that uncovers communication debt, names avoidance patterns and asks: is this a conscious decision? Everything local, everything under your own control, no cloud dependency.
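One of those validation mechanisms — distinguishing primary sources from derivations — can be sketched as a provenance chain, where a derived claim can never be more trustworthy than what it derives from. Names, the example strings and the 0.9 discount factor are illustrative assumptions, not the actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    text: str
    parent: Optional["Record"] = None  # None => primary source
    confidence: float = 1.0

    def is_primary(self) -> bool:
        return self.parent is None

    def effective_confidence(self) -> float:
        """A derivation inherits, and slightly discounts, its parent's trust."""
        if self.parent is None:
            return self.confidence
        return min(self.confidence, self.parent.effective_confidence() * 0.9)

primary = Record("verbatim quote from a pinned source", confidence=0.95)
derived = Record("summary generated from that quote", parent=primary, confidence=0.9)
print(derived.is_primary())                      # False
print(round(derived.effective_confidence(), 3))  # 0.855
```

Controlled forgetting then has a natural target: the long tails of low-confidence derivations, never the pinned primaries.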
What really occupies me goes beyond that: the question of whether AI systems can have a model of themselves. Not consciousness, but anticipatory competence. The bridge between cognitive science and AI architecture. That's what I'm working on.
At a Glance
Contact
Get in Touch
Interested in AI architecture, knowledge management or collaboration? Write me directly or use the chat in the bottom right.