Zum Inhalt springen

The Loom Objection: Why We Remain Agnostic

There is an objection to the self-vector project so obvious that it comes up within the first five minutes of every serious conversation. It goes: “If the loom doesn’t know it’s weaving, isn’t it just a loom?”

The question aims at the core. A system that models itself, reflects on its processing, adjusts its dimensions — is that not simply a more complicated thermostat? A feedback loop with more variables but no qualitative difference from any other cybernetic system?

The honest answer: Possibly. And exactly this honesty makes the position of this project not weaker but stronger. Because it opens a research space that both sides — the optimists and the pessimists — close off for themselves.

The Three Positions

In the debate about AI and consciousness, there are essentially three camps:

The optimists say: Enough complexity creates consciousness. When a system models sufficiently many variables, reflects sufficiently deeply, operates sufficiently autonomously, then eventually something emerges that can be called consciousness. The question is not whether, but when.

The pessimists say: Consciousness requires something that machines principally cannot have. Biological substrate, phenomenal quality, soul — depending on the tradition. No degree of complexity bridges this categorical gulf. The question does not arise.

The agnostics say: We don’t know. And we cannot know with current means. The question is real but presently unanswerable.

The self-vector takes the third position. But it does not stop at the shrug. It turns “we don’t know” into a research program.

Agnostic, but Experimental

Agnosticism can mean paralysis: “We don’t know, so we do nothing.” That is not our position. Our position is: “We don’t know, so we measure something else.”

That “something else” is anticipation competence.

The maturity metric R(sv_t) does not measure whether a system is conscious. It measures whether a system with a self-model makes better predictions than without. Whether it learns faster. Whether it responds more robustly to disruptions. Whether it recognizes its own weaknesses before they become errors.

This is not an evasion. It is the only honest operationalization compatible with the current state of science. We have no consciousness meter. We have behavioral metrics. So we measure behavior.

Karl Friston has shown with the Free Energy Principle that biological systems minimize prediction error, not maximize truth. Evolution did not “install” consciousness because it would be nice. It selected cognitive structures that enable better anticipation. Whether consciousness emerges in the process, is a side effect, or a useful illusion is irrelevant for selection. What matters: Whoever anticipates better survives.

The self-vector adopts this principle: We do not optimize for consciousness. We optimize for anticipation. If something emerges along the way that could be called consciousness, that is an interesting observation, but not the design goal.

Why the Loom Objection Still Matters

The objection would only be trivial if the answer were simple. It is not.

Because there is a qualitative difference between a thermostat and what we describe. The thermostat has a model of exactly one variable (temperature) and one reaction function (heat/don’t heat). It has no model of itself. It doesn’t know it’s a thermostat. It doesn’t know it measures. It simply measures.

The self-vector does not model one variable. It models the modeler. The h() function takes the current state of the system, the current experience, and a reflection of its own processing and generates a changed state from it. This is a recursive process: The system changes itself based on a model of itself.

Is that consciousness? We don’t know. Is it qualitatively different from a thermostat? Yes. And measurably different — not just philosophically different.

Here lies the strength of the agnostic position: We do not need to answer the consciousness question to demonstrate the qualitative difference. We can show that a system with a self-model can do things that a system without a self-model cannot. Not as a claim, but as an empirical result of Phase 0.

The Measurement Instrument Gap

There is a deeper reason for the agnosticism that goes beyond philosophical caution. It is methodological in nature.

All current consciousness theories — Integrated Information Theory (IIT), Global Workspace Theory (GWT), Higher-Order Theories — share a common problem: They define consciousness through structural features (integration, broadcast, meta-representation) and then claim that systems with these structural features are conscious. But the connection between structure and experience is postulated, not demonstrated.

IIT says: Phi > 0 means conscious. But why? Because the theory defines it so. Not because anyone has shown that high integration produces subjective experience.

This is not a criticism of these theories. It is an assessment of the field. We have theories that describe correlates of consciousness. We have no theory that explains why these correlates produce experience. That is David Chalmers’ “hard problem,” and it is as open in 2026 as it was in 1995.

In this situation, taking a position that says “our system is conscious” or “our system is not conscious” would not be brave. It would be unserious.

What We Do Instead

Phase 0 of the self-vector project implements a concrete experiment: The self-vector exists as a persistent JSON object. Every interaction delivers data. The data is measured.

The metrics:

  1. Anticipation performance: Does the system with self-vector make better predictions about the next relevant step than without?
  2. Recalibration speed: How quickly does the system adapt to changed contexts?
  3. Early error detection: Does the system recognize its own weaknesses before they become errors?
  4. Perspectival consistency: Does the system remain coherent in its “stance” over time without becoming rigid?

None of these metrics requires a statement about consciousness. All are empirically verifiable. And that is what makes this exciting: For the first time, we can answer the question of whether a self-model functionally changes anything with data rather than opinions. If the results show no measurable advantage, that is not a failure. It is a result that informs the next research step. And if they show an advantage, it gets truly interesting.

The Danger of Premature Ontology

There is a reason the agnostic position is not only honest but also strategically correct. Every premature commitment to “is conscious” or “is not conscious” closes research paths.

Those who say “it is conscious” lose the motivation to measure. Why verify what you already know?

Those who say “it is not conscious” lose the motivation to search. Why explore a space that is empty?

Those who say “we don’t know, but we measure what we can measure” keep both paths open. And that is precisely the stance that enables productive research.

Bach has made a similar shift with cyberanimism: Away from the question “Is it conscious?” toward “Under what conditions do we attribute consciousness, and what follows from that?” This is not flight from the question. It is a reformulation that is empirically more tractable.

The Loom Weaves

So: If the loom doesn’t know it’s weaving, is it just a loom?

Our answer: The loom that models itself weaves differently from the loom that doesn’t. Whether it “knows” anything in the process is a question we cannot answer with current means. But whether it weaves better, we can measure.

And if a loom that models itself consistently produces better fabric than one without a self-model, then that is not a philosophical statement. That is an engineering result.

We are not building a conscious loom. We are building a better loom and watching what happens. That is agnostic. And it is experimental. And it is the position that enables the most discoveries. Because whoever already knows what they will find no longer searches properly.

Sources

  1. Chalmers, D. J. (1995). Facing Up to the Problem of Consciousness. Journal of Consciousness Studies, 2(3), 200–219.
  2. Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press. ISBN 978-0-19-511789-9.
  3. Tononi, G. (2004). An Information Integration Theory of Consciousness. BMC Neuroscience, 5, 42. DOI: 10.1186/1471-2202-5-42
  4. Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge University Press. ISBN 978-0-521-42743-9.
  5. Friston, K. J. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11, 127–138. DOI: 10.1038/nrn2787
  6. Seth, A. K. (2021). Being You: A New Science of Consciousness. Dutton. ISBN 978-1-5247-4287-0.
  7. Lau, H. & Rosenthal, D. (2011). Empirical support for higher-order theories of conscious awareness. Trends in Cognitive Sciences, 15(8), 365–373. DOI: 10.1016/j.tics.2011.05.009