
When the Self-Model Disintegrates

Someone asked me whether ExoCortex could help with dementia.

The question seems obvious. A cognitive exoskeleton, built to compensate for memory, provide structure, create orientation. Dementia destroys memory, structure, orientation. So: machine meets problem. Done.

But it’s not that simple. And the reason it’s not that simple tells us something fundamental about the architecture of cognitive support systems.

The Inverted Exoskeleton

ExoCortex is built for stable neurodivergence. ADHD, autism, chronic fatigue. These conditions share one trait: they are constant. The deficit stays roughly the same. The compensation does too. The system learns you, and you learn the system, and over time a calibration emerges that fits better and better.

With dementia, everything inverts.

The system must take over more as the person can do less. It must become louder as the person becomes quieter. It must anticipate decisions the person used to make themselves. It must preserve memories the person is losing. And it must do all of this without it feeling like disenfranchisement.

This is not a feature delta to ExoCortex Core. It is a different architectural paradigm.

What the Self-Vector Makes of This

In the self-vector model, six functions describe how a cognitive system interacts with the world. What happens to these functions when the user is not stabilizing but degenerating?

f(), the relevance function, determines what matters. With ADHD: The user says what’s relevant, the system prioritizes. With dementia: The user eventually can’t say what’s relevant anymore. The system must anticipate it. “What do you want?” becomes “What do you probably need?” The function shifts from reactive to predictive.

g(), the storage function, determines what gets kept. With ADHD: The user decides what goes into long-term memory. With dementia: The user forgets that they forgot something. The system must save automatically, because no one else will. And it must decide what may never decay. The spouse’s name. The address. The song that always calmed them down.

pi(), the precision function, controls the level of detail. With ADHD: As precise as possible, the user wants control. With dementia: Simplification over precision. Not “You have an appointment at 2:30 PM with Dr. Mueller at Bahnhofstrasse 12, second floor, room 204.” But: “The doctor is coming this afternoon.” Less information, more orientation.

omega, the autonomy value, governs the balance between self-initiative and guidance. In ExoCortex Core, omega should rise. More autonomy, less dependence on the system. That’s the goal.

With dementia, omega must decrease. Controlled. Imperceptible. Without shame. This is the hardest design question in the entire concept.
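The inversion described above can be made concrete. The following is a minimal sketch, not the actual ExoCortex data model: the field names, the `progression` parameter, and all numeric values are my assumptions, chosen only to show how every parameter of the self-vector moves in the opposite direction under a degenerative trajectory.

```python
from dataclasses import dataclass

@dataclass
class SelfVector:
    """Hypothetical parameterization of the four functions discussed above.
    Field names and values are illustrative, not ExoCortex internals."""
    relevance_mode: str   # f(): "reactive" (user states priorities) vs. "predictive"
    storage_mode: str     # g(): "user_curated" vs. "automatic_with_protected_core"
    precision: float      # pi(): 1.0 = full detail, 0.0 = maximal simplification
    omega: float          # autonomy: 1.0 = full self-direction, 0.0 = full guidance

# Stable neurodivergence: the calibration converges and then stays put.
adhd_profile = SelfVector("reactive", "user_curated", precision=0.9, omega=0.7)

def dementia_profile(progression: float) -> SelfVector:
    """progression in [0, 1]: 0 = early phase, 1 = advanced.
    The system, not the user, drives the change over time."""
    return SelfVector(
        relevance_mode="predictive" if progression > 0.3 else "reactive",
        storage_mode="automatic_with_protected_core",
        precision=0.9 * (1.0 - progression),  # simplification over precision
        omega=max(0.0, 0.7 - progression),    # autonomy decreases, controlled
    )
```

The point of the sketch is the direction of every arrow: in the stable case the profile is a fixed point, in the degenerative case it is a function of time.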

The Real Question

The technical problems are solvable. Language simplification, sensor integration, caregiver dashboards. That’s engineering.

The real question is different: How does a system model a user whose self-model is disintegrating?

ExoCortex Core works because it has a stable counterpart. The user changes, but remains essentially the same person. Their self-model may be inaccurate (ADHD: “I’ll manage” while 47 tasks are open), but it exists. It is addressable. It is correctable.

With dementia, exactly that erodes. The person eventually no longer recognizes that they’re forgetting. They no longer recognize that the system is helping. They eventually no longer recognize that there is a system.

And then a situation arises that is philosophically uncharted territory: A cognitive support system that knows its user better than the user knows themselves. That preserves memories the person has lost. That maintains a biography the bearer of that biography can no longer tell.

Three Phases, Three Different Systems

What follows is not a system that gradually activates more features. It is essentially three different systems that flow into each other.

Phase 1: Accompaniment. The person is still there. They notice something is wrong. They’re afraid. The system is a discreet assistant. It reminds of appointments, stores what the user tells it, quietly builds a biographical archive. And it builds trust. Because everything that comes after depends on that trust.

Something else happens in this phase, something non-technical: The person decides what should remain. Which memories matter. Which music calms them. Who they were, before they forget who they are. The system becomes the recipient of a biographical advance directive, not on paper, but as a living data structure.

Phase 2: Support. The person needs help. The system shifts from reactive to proactive. “Maria is coming soon, your daughter. She’s bringing cake.” Not because the person asked. But because the system knows they’ll be confused in ten minutes when someone rings the doorbell and they don’t know who’s at the door.

The family takes over configuration. They add photos, update relationships, report changes. The system becomes the family’s memory, not just the individual’s.

Phase 3: Preservation. The person can no longer speak, no longer read, no longer interact with the system. But they can hear. They can feel. Music from their youth triggers something no algorithm can explain. The daughter’s voice, recorded and played at the right moment, calms.

The system is no longer an interface. It is an atmosphere. It preserves the identity of a person for the people around them. When a new caregiver arrives, they can read in five minutes who this person is. Not their diagnosis. Their life. So they are not reduced to their disease.
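"Three systems that flow into each other" implies weights, not a switch. One way to sketch that flow, assuming a single `progression` value in [0, 1] (the breakpoints here are illustrative, not clinically grounded):

```python
def phase_weights(progression: float) -> dict[str, float]:
    """Blend accompaniment, support, and preservation continuously.
    A hypothetical sketch: no one flips a switch; the mix shifts."""
    p = min(1.0, max(0.0, progression))
    accompaniment = max(0.0, 1.0 - 2.0 * p)   # fades out by p = 0.5
    preservation = max(0.0, 2.0 * p - 1.0)    # fades in from p = 0.5
    support = 1.0 - accompaniment - preservation
    return {
        "accompaniment": accompaniment,
        "support": support,
        "preservation": preservation,
    }
```

At every point the weights sum to one, so there is never a moment where the system is "between" modes; there is only a changing mixture.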

Why Not “Remember” but “Stabilize”

It would be a design error to build the system as a memory aid. Memory implies: there is something to remember, you forgot it, and I’m telling you. That assumes the user knows they forgot. That they feel the gap.

In advanced dementia, there is no gap. There is only the present. And this present can be confusing, frightening, alien. Or it can be stable, warm, familiar.

The system is not a memory apparatus. It is a stabilization apparatus.

It doesn’t correct (“No, today is not Tuesday, today is Wednesday”). It validates (“Work was always important to you”). It calms when agitation arises. It plays familiar music when the world feels foreign. It says “I’m here,” and doesn’t mean itself, but rather: The world is still there. You are still there. You are not alone.

This is not a therapeutic feature. This is the design principle.

What the System Observes

In the background, invisible to the user, the system learns the trajectory. Not through tests, not through questions, but through observation.

Linguistic markers: Are sentences getting shorter? Are word-finding difficulties increasing? Is the same question recurring? Are names being confused?

Behavioral markers: Is the user asking about the day of the week more often? Are routines shifting? Are fewer features being used than three months ago?

And, if sensors are present: Is the sleep pattern changing? Is there nighttime wandering? Are meals being skipped?

All of this flows into a trajectory score that doesn’t diagnose but makes changes visible. Relative to the person’s own baseline, not to a normative value. “Word-finding has changed over the last four weeks, orientation is stable.” Not for the affected person. For the family. For the doctor at the next appointment. Objective trajectory data instead of vague impressions.
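"Relative to the person's own baseline, not to a normative value" has a simple mathematical shape: express each marker's recent level in units of the person's own baseline variability. A minimal sketch, with invented numbers, using mean sentence length as the marker:

```python
from statistics import mean, stdev

def marker_drift(baseline: list[float], recent: list[float]) -> float:
    """Change in one marker relative to the person's own baseline,
    in baseline standard deviations. Negative = below baseline.
    A hypothetical sketch, not the ExoCortex trajectory score."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0
    return (mean(recent) - mu) / sigma

# Invented data: mean sentence length in words, per week.
baseline = [11.2, 10.8, 11.5, 11.0, 10.9, 11.3]  # first-year baseline
recent = [8.1, 7.9, 8.4, 8.0]                    # last four weeks
drift = marker_drift(baseline, recent)
```

A strongly negative drift on the language marker with a near-zero drift on the orientation marker yields exactly the kind of statement quoted above: "Word-finding has changed over the last four weeks, orientation is stable."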

What Exists, and What’s Missing

The components exist individually. Emotional companion robots from Israel. Speech analysis systems from Canada. Fall sensors from Belgium. Biographical reminiscence apps from the United States. Caregiver platforms from Scandinavia.

What doesn’t exist: A system that connects all of this. That models the trajectory as a whole. That flows from accompaniment through support to preservation without anyone having to flip a switch. That formalizes human dignity as an architectural constraint, not as a marketing promise.

The architecture for this is already outlined in ExoCortex. BrainDB as biographical long-term memory. FactsDB for the question “What day is today?” The relations system for the social graph. The local architecture ensuring that the most intimate data of a disintegrating mind never touches a foreign server.

What’s missing is the domain knowledge. I’m not a geriatrician. I’m not a care scientist. I don’t have an insider’s perspective on dementia the way I do on ADHD. And that’s precisely why this project doesn’t begin with code, but with a question.

The Path

First the concept paper. Think through the architectural questions properly. Apply the self-vector to degenerative trajectories. What happens to the maturity metric R(sv_t) when the user isn't maturing but declining? Do we need a stabilization metric S(sv_t) instead?
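The R-versus-S question can be given a concrete shape, if only to have something to argue against in the concept paper. Both definitions below are my assumptions, not ExoCortex metrics: R rewards growth in autonomy over time, S rewards a calm, low-variance state regardless of level.

```python
from statistics import mean, pvariance

def maturity_R(omega_series: list[float]) -> float:
    """Core's implied goal: omega trending upward.
    Sketch: average step-to-step gain in autonomy."""
    steps = [b - a for a, b in zip(omega_series, omega_series[1:])]
    return mean(steps)

def stabilization_S(state_series: list[float]) -> float:
    """Candidate metric for a degenerative trajectory: reward
    stability of the daily state, not growth. Higher = more stable."""
    return 1.0 / (1.0 + pvariance(state_series))
```

Under R, a dementia trajectory can only ever score as failure. Under S, a day that is as calm as yesterday scores as success, which is exactly the reframing from memory apparatus to stabilization apparatus.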

Then domain validation. Talk to people who understand dementia. To caregivers who know what happens at three in the morning when the father wanders through the apartment searching for his dead wife. To those affected in the early phase who can still say what would help them.

And only then: build.

This is unusual for someone who normally builds before asking. But this problem deserves to be understood first.