6-Layer Knowledge Architecture

Most AI agents suffer from chronic amnesia. They answer brilliantly, but tomorrow they won’t remember any of it. The problem isn’t the model — it’s the missing knowledge infrastructure.

Imagine working with a consultant who forgets every morning who you are, what you discussed yesterday, and which decisions were made. You’d fire them. But that’s exactly how most AI systems work: every session starts from zero.

The solution isn’t a better AI. The solution is an architecture that stores, structures, and validates knowledge independently of the model.

The 6 Layers

Layer 1: FactsDB (Hot Memory)

What is true RIGHT NOW?

Structured key-value facts with instant access. IP addresses, current roles, project status, configurations. The equivalent of short-term memory that gets loaded automatically at the start of every conversation.

Why a separate layer? Because not all information is equal. “The server has IP 192.168.0.166” is a fact that needs to be available immediately — no search, no context, no interpretation. If a system has to search through thousands of documents to find the current IP, it’s too slow for daily work.

FactsDB is deliberately simple: entity, key, value. No prose, no interpretation, no nuance. That’s the strength: what’s stored here is either current or wrong. There are no shades of gray.
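The entity/key/value model can be sketched in a few lines. This is a minimal illustration, not the actual implementation; the class and method names are assumptions:

```python
from __future__ import annotations

from dataclasses import dataclass


class FactsDB:
    """Hot memory: entity + key -> value. No prose, no interpretation."""

    def __init__(self) -> None:
        self._facts: dict[tuple[str, str], str] = {}

    def set(self, entity: str, key: str, value: str) -> None:
        # Overwrite unconditionally: a fact here is either current or wrong.
        self._facts[(entity, key)] = value

    def get(self, entity: str, key: str) -> str | None:
        # Instant lookup, no search and no ranking involved.
        return self._facts.get((entity, key))


db = FactsDB()
db.set("server", "ip", "192.168.0.166")
db.get("server", "ip")  # available immediately at session start
```

The deliberate absence of free text in the value model is what makes the "current or wrong" property checkable at all.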

Layer 2: BrainDB (Deep Storage)

What have we LEARNED?

Distilled knowledge from hundreds of interactions. Research results, decisions and their rationale, debug solutions, architecture decisions. Full-text search, relations between entries, a complete changelog.

The crucial difference from a note-taking app: BrainDB doesn’t store raw data — it stores insights. Not “I spent 30 minutes thinking about validation gates,” but “Gate 3 (Contradiction Check) must not be a blocker because contradictions are sometimes intentional.” That’s distilled knowledge, extracted from hours of work and compressed into a single sentence.

Every entry has metadata: source (primary or derived?), tags, timestamps, relations to other entries. The system knows not only WHAT it learned, but WHEN and FROM WHERE. And since the validation gates incident, also: HOW CERTAIN.
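A possible shape for such an entry store, sketched with SQLite. The schema is an assumption derived from the description above (insight text, source type, confidence, timestamps, relations); the `LIKE` query is only a stand-in for a real full-text index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entries (
    id         INTEGER PRIMARY KEY,
    insight    TEXT NOT NULL,              -- distilled knowledge, not raw data
    source     TEXT,                       -- 'primary' or 'derived'
    confidence REAL,                       -- HOW CERTAIN, added after the gates incident
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE tags      (entry_id INTEGER, tag TEXT);
CREATE TABLE relations (from_id INTEGER, to_id INTEGER);
""")

conn.execute(
    "INSERT INTO entries (insight, source, confidence) VALUES (?, ?, ?)",
    ("Gate 3 (Contradiction Check) must not be a blocker because "
     "contradictions are sometimes intentional.", "derived", 0.9),
)

# Stand-in for full-text search; a real system would use an FTS index.
rows = conn.execute(
    "SELECT insight FROM entries WHERE insight LIKE ?", ("%contradiction%",)
).fetchall()
```

The point of the schema is the metadata: every insight carries its provenance and certainty alongside the text itself.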

Layer 3: Qualia (Hybrid Search)

What’s in the FILES?

Hybrid search across all project documents. Not just keyword search (“find all files containing the word validation”), but semantic search (“find everything related to knowledge quality assurance”). Cross-lingual, meaning a German query finds English documents and vice versa.

Why isn’t BrainDB enough? Because not everything can be distilled. Sometimes you need the original text, the full context, the document from which an insight originated. Qualia searches all files in the project directory without requiring manual indexing.

The combination of BrainDB (distilled, structured, fast) and Qualia (complete, context-rich, semantic) is critical. BrainDB says “Validation Gates were introduced on March 26.” Qualia finds the 20-page concept paper that explains why.
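The "hybrid" in hybrid search can be made concrete as a weighted blend of a lexical score and a semantic one. A sketch under assumptions: the scoring functions and the weighting are illustrative, and the placeholder embedding stands in for a real multilingual sentence-embedding model (which is what would make German queries land near English documents):

```python
import math


def keyword_score(query: str, doc: str) -> float:
    # Lexical component: fraction of query terms found verbatim in the document.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)


def cosine(a: list[float], b: list[float]) -> float:
    # Semantic component: cosine similarity between embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def hybrid_score(query: str, doc: str, embed, alpha: float = 0.5) -> float:
    # alpha weights keyword match against semantic similarity.
    return alpha * keyword_score(query, doc) + (1 - alpha) * cosine(embed(query), embed(doc))


# Placeholder embedding for illustration only; a real system would call a
# multilingual embedding model here.
embed = lambda text: [float(len(text)), float(text.count("a"))]

docs = ["validation gates concept paper", "server configuration notes"]
ranked = sorted(docs, key=lambda d: hybrid_score("validation gates", d, embed), reverse=True)
```

The keyword component keeps exact matches reliable ("find all files containing the word validation"); the embedding component covers the semantic queries that no keyword match can answer.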

Layer 4: Coaching Layer

Who’s actually working here?

An AI coach that recognizes work patterns, understands energy cycles, and honestly reports back what it sees. ADHD-aware, energy-aware, direct without being hurtful.

Why is this its own layer? Because coaching requires a different perspective than knowledge retrieval. The first three layers answer questions. The coach asks questions: “You’ve been putting this off for three weeks. Is that a conscious decision?” It aggregates across all other layers and recognizes patterns that aren’t visible in any single layer.

Layer 5: Validation Gates

Is that actually correct?

6 gates that every fact must pass before it’s considered reliable. Source-Pinning, Contradiction Check, Temporal Validation, Confidence Scoring, Scope Check, Provenance Tracking. Born from a concrete error where the system interpreted its own prepared figures as raw data.

This layer is the implemented System 2: the slow, analytical verification authority that System 1 (the LLM) lacks. Not through more thinking, but through deterministic rules that check against external facts.
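Such a gate pipeline can be sketched as a chain of deterministic checks, where each gate is either blocking or merely confidence-reducing. The gate implementations below are illustrative placeholders; the non-blocking treatment of the Contradiction Check follows the insight quoted in the BrainDB section:

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Candidate:
    claim: str
    source: str | None = None
    confidence: float = 1.0
    warnings: list[str] = field(default_factory=list)


KNOWN_CONTRADICTIONS: set[str] = set()


def source_pinning(c: Candidate) -> bool:
    # Source-Pinning: a fact without a pinned source is rejected outright.
    return c.source is not None


def contradiction_check(c: Candidate) -> bool:
    # Contradiction Check: placeholder lookup against known claims.
    return c.claim not in KNOWN_CONTRADICTIONS


# (name, check, blocking) -- the Contradiction Check warns instead of blocking,
# because contradictions are sometimes intentional.
GATES = [
    ("source_pinning", source_pinning, True),
    ("contradiction_check", contradiction_check, False),
]


def validate(c: Candidate) -> bool:
    for name, check, blocking in GATES:
        if not check(c):
            if blocking:
                return False
            c.warnings.append(name)  # flag the finding...
            c.confidence *= 0.5      # ...and lower confidence instead of rejecting
    return True
```

Because the checks are plain rules over external state rather than model output, the result is reproducible: the same candidate fact passes or fails the same way every time.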

Layer 6: Temporal Decay

Is this still current?

Controlled forgetting. Not everything needs to last forever. An IP address from six months ago is probably outdated. An architecture decision from six months ago is probably still valid. The system distinguishes between these cases.

Temporal Decay is the counterpart to the Validation Gates: where gates verify whether new information is reliable, Temporal Decay checks whether existing information is still relevant. Together, they form a lifecycle for knowledge: intake, validation, use, decay.
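One way to make the distinction between an old IP address and an old architecture decision computable is a per-type half-life. The half-life values below are assumptions chosen for illustration, not the system's actual parameters:

```python
from datetime import datetime, timedelta

# Assumed half-lives per fact type: volatile facts decay fast,
# architecture decisions slowly.
HALF_LIFE_DAYS = {
    "ip_address": 30,
    "project_status": 14,
    "architecture_decision": 365,
}


def relevance(fact_type: str, recorded: datetime, now: datetime) -> float:
    """Exponential decay: 1.0 when fresh, 0.5 after one half-life."""
    age_days = (now - recorded).total_seconds() / 86400
    return 0.5 ** (age_days / HALF_LIFE_DAYS[fact_type])


now = datetime(2025, 9, 1)
six_months_ago = now - timedelta(days=182)

ip_score = relevance("ip_address", six_months_ago, now)                    # probably stale
decision_score = relevance("architecture_decision", six_months_ago, now)   # probably still valid
```

The same six-month-old entry thus scores very differently depending on its type, which is exactly the distinction the layer has to make.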

Human memory works similarly. We forget most details but retain the essence. A system that forgets nothing will eventually drown in outdated data. A system that forgets everything is the memoryless AI agent we started with.

Why 6 Layers and Not 3?

The obvious question: isn’t this overengineered? Aren’t facts + documents + search enough?

The answer came from practice, not theory. The system started with 3 layers. Then the metrics error happened, and Validation Gates became necessary. Then it became clear that stale facts were a problem, and Temporal Decay was added. Then the coach emerged as a distinct perspective that needed its own layer.

Each layer solves a problem that the others can’t. No layer is redundant. And each layer is independent: the model can be swapped out, the knowledge layers remain.

Design Principle

Everything local. No cloud lock-in. The model is replaceable; the knowledge layer stays. If a better LLM appears tomorrow, the prompts get adjusted. The facts, the knowledge, the patterns, the validation — all of that remains intact.

That’s the real value: not the model, but the knowledge infrastructure around the model. Models come and go. Knowledge accumulates.
