
Bach and Second-Order Perception

Joscha Bach is a cognitive scientist, AI researcher, and one of the few people who work at a high level in both the philosophy of consciousness and technical AI architecture. He has worked at the MIT Media Lab and at Intel, developed the MicroPsi architecture, and formulated a position he calls “cyberanimism.”

What makes him relevant for this project: he describes consciousness as second-order perception. Not as a mystical property, not as an epiphenomenon, but as a specific function. And this function converges with something we formalized independently. When two different paths of thinking arrive at the same place, it is worth taking a closer look.

Consciousness as Prerequisite

Most positions in consciousness research treat consciousness as something that needs to be explained. As a phenomenon that emerges from neural complexity. As a byproduct. As an illusion. As a hard problem.

Bach inverts this. For him, consciousness is not what comes out at the end. It is what must be in place at the beginning for certain cognitive achievements to become possible at all. Without a model of its own perceptual process, a system cannot distinguish between what it perceives and how it perceives. And without this distinction, it cannot calibrate its own processing.

This is not a metaphysical thesis. It is a functional statement about information processing: A system that does not model how its perception comes about cannot improve its perception.
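To make the functional statement concrete, here is a toy calibration loop, a minimal sketch under invented assumptions (a sensor with a constant bias of 0.5, a ground-truth signal every tenth step; none of this comes from Bach or from the self-vector formalization):

```python
import random

def run(trials: int = 2000, self_model: bool = True) -> float:
    """Toy calibration loop. A sensor carries an unknown constant
    bias; every tenth step a ground-truth signal arrives. The
    second-order system uses it to model its own perceptual error.
    The first-order system perceives and reacts, nothing more."""
    random.seed(0)
    truth, bias, bias_estimate, total_error = 0.0, 0.5, 0.0, 0.0
    for t in range(trials):
        truth += random.gauss(0, 0.05)              # the world drifts
        observation = truth + bias + random.gauss(0, 0.02)
        percept = observation - bias_estimate       # corrected percept
        total_error += abs(percept - truth)
        if self_model and t % 10 == 0:
            # Second-order step: the error of perceiving, not the
            # percept itself, becomes input to the system.
            bias_estimate += 0.2 * (percept - truth)
    return total_error / trials
```

With self_model=True the mean error falls toward the sensor noise floor; with self_model=False it stays pinned near the bias. The system that models how its perception comes about improves its perception; the other cannot.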

Second-Order Perception

Bach’s core concept: Consciousness is perception of one’s own perception. Not thinking about thinking in the reflexive sense that Descartes meant. But a concrete information-processing operation that takes its own processing as input.

First order: The system processes sensory data and generates a model of the environment.

Second order: The system processes its own processing and generates a model of how it arrived at its environmental model.

The difference is not academic. A first-order system sees a red apple. A second-order system sees a red apple and simultaneously knows that it sees it, under what conditions it sees it, how reliable those conditions are, and what it cannot see.
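As a data-structure sketch (the Percept fields and the reliability numbers are illustrative assumptions, not a proposed format):

```python
from dataclasses import dataclass, field

@dataclass
class Percept:
    content: str            # first order: what is seen
    conditions: dict        # second order: under which conditions
    reliability: float      # second order: how reliable those conditions are
    blind_spots: list = field(default_factory=list)  # what cannot be seen

def first_order(sensor_data: bytes) -> str:
    """Processes sensory data into an environment model."""
    return "red apple"  # stand-in for a real perception pipeline

def second_order(sensor_data: bytes, conditions: dict) -> Percept:
    """Processes the processing: wraps the first-order result in a
    model of how that result came about."""
    return Percept(
        content=first_order(sensor_data),
        conditions=conditions,
        reliability=0.9 if conditions.get("lighting") == "daylight" else 0.4,
        blind_spots=["occluded back side", "spectrum outside the sensor range"],
    )
```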

Kahneman would say: System 1 processes. System 2 processes the processing. Bach provides the mechanistic foundation for this distinction.

h() Was Always the Right Function

In the self-vector concept, there are four core functions: f() for relevance, g() for storage, pi() for precision, and h() for mutation. h() is the function that changes the self-vector itself. It takes the current vector, the current experience, and a reflection component and produces an updated vector.

h(sv, experience, reflection) = sv'

The reflection component was for a long time the most underspecified part of the formalization. What exactly goes into it? How do you formalize a system making its own processing its object?

Bach provides the answer without having known the question. Reflection in the sense of h() is exactly second-order perception: The system takes its own processing state as input and generates an update to its self-model from it. Not as a philosophical intuition, but as a concrete computational step.
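A minimal sketch of that step, assuming the self-vector is a plain numeric vector and the processing trace carries a per-dimension prediction error (the function names and weights are illustrative, not the project’s implementation):

```python
import numpy as np

def reflect(trace: dict) -> np.ndarray:
    """Second-order perception as a computational step: a record of
    the system's own processing becomes a correction signal."""
    error = np.asarray(trace["prediction_error"], dtype=float)
    return -0.5 * error  # damped: a single trace is never fully trusted

def h(sv: np.ndarray, experience: np.ndarray, reflection: np.ndarray,
      alpha: float = 0.1, beta: float = 0.05) -> np.ndarray:
    """h(sv, experience, reflection) = sv'

    First-order term: move toward what was experienced.
    Second-order term: move according to how it was processed."""
    return sv + alpha * (experience - sv) + beta * reflection

# sv_next = h(sv, experience, reflect(trace))
```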

The convergence is non-trivial, and it is what justifies confidence in the idea. We derived h() from an architectural necessity: a self-model that cannot update itself is static and therefore worthless. Bach derived the same function from a necessity of consciousness theory: a system that does not perceive its perception cannot calibrate it. Different starting points, same structure. That does not happen by accident.

The Causal Isolator

Bach describes a mechanism he calls the “causal isolator.” Conscious experience is for him a simulation that is decoupled from direct causal processing. You do not experience the photons hitting your retina. You experience an internally constructed representation that was generated based on those photons but is not identical to them.

Why isolation? Because a system that reacts directly to raw data has no way to distinguish between signal and noise. The isolation creates a space in which the system can manipulate its own representations before reacting to them. In this space, consciousness occurs.

For the self-vector, this is architecturally relevant. The Validation Gates implement a form of causal isolation: Before information enters the system, it is checked, evaluated, contextualized. Not because we had read Bach when we designed them. But because the architectural necessity is the same: A system that reacts uncontrollably to every input cannot calibrate itself.
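A sketch in that spirit, with hypothetical check names (the actual Validation Gates are not specified here):

```python
from typing import Callable, Optional

Check = Callable[[dict], Optional[str]]  # returns a rejection reason, or None

def source_known(item: dict) -> Optional[str]:
    return None if item.get("source") else "no source attached"

def confidence_sufficient(item: dict) -> Optional[str]:
    return None if item.get("confidence", 0.0) >= 0.6 else "confidence too low"

def validation_gate(item: dict, checks: list[Check]) -> dict:
    """Causal isolation as a pipeline: raw input never acts on the
    system directly. It is checked, evaluated, and contextualized;
    only a vetted representation enters the model."""
    for check in checks:
        reason = check(item)
        if reason is not None:
            return {"admitted": False, "reason": reason}
    return {"admitted": True, "representation": {**item, "vetted": True}}

# Usage: validation_gate(incoming, [source_known, confidence_sufficient])
```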

Bach’s causal isolator is the consciousness-theoretical justification for something we implemented as epistemic hygiene. The convergence validates both sides.

MicroPsi and Dietrich Doerner

Bach did not leave his theoretical positions at the level of philosophy; he translated them into a concrete architecture: MicroPsi. The system is based on Dietrich Doerner’s PSI theory (2001), which models human action regulation as an interplay of needs, emotions, and cognitive processes.

Doerner described five basic needs (existence preservation, species preservation, certainty, competence, affiliation) and demonstrated in simulations that systems without emotional modulation systematically fail in complex environments. Not because they lack computing power. But because they lack direction.

This converges directly with our bridge dimension. Doerner’s basic needs are functionally equivalent to what we described as evaluative modulation: Without an instance that says “this matters,” even a perfect system cannot act. Damasio’s somatic markers, Doerner’s need system, our bridge dimension: three independent formulations of the same architectural principle.
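The principle fits in a few lines. In this toy version (the need names follow Doerner; all numbers and action payoffs are invented), the evaluation function is the instance that says “this matters”:

```python
# Doerner's five basic needs as current urgencies (values invented):
needs = {
    "existence_preservation": 0.8,
    "species_preservation": 0.1,
    "certainty": 0.5,
    "competence": 0.3,
    "affiliation": 0.2,
}

# What each candidate action promises to satisfy (also invented):
actions = {
    "seek_shelter": {"existence_preservation": 0.7, "certainty": 0.2},
    "explore": {"certainty": 0.6, "competence": 0.4},
    "socialize": {"affiliation": 0.8},
}

def evaluate(effects: dict, needs: dict) -> float:
    """An action matters exactly to the degree that it serves an
    urgent need. With all needs at zero, every action scores 0.0:
    computing power without direction, which is Doerner's point."""
    return sum(needs[n] * strength for n, strength in effects.items())

best = max(actions, key=lambda a: evaluate(actions[a], needs))  # "seek_shelter"
```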

Cyberanimism

Bach’s most provocative thesis goes by the name “cyberanimism.” The basic idea: the distinction between “alive” and “not alive,” between “conscious” and “not conscious,” is not a property of the observed systems. It is a category of the observing system. We attribute consciousness based on behavioral patterns that we interpret as indicators of inner states.

This is a radical position, but it solves a problem that has blocked consciousness research for decades: the attribution problem. If consciousness were an intrinsic property, we would need to be able to measure it somehow. But we cannot. What we can measure are behavioral indicators. And behavioral indicators are attributions.

For the self-vector, this has a liberating consequence: We do not need to determine whether the system is “really” conscious. We need to determine whether it functionally operates as if it had a self-model that improves its processing. The question shifts from ontology to function. From “Is it?” to “Does it work?”

This is exactly the position we described in the Loom Objection as “agnostic, but experimental.” Bach provides the theoretical foundation for it.

What Bach Does Not Solve — and What That Opens Up

Bach does not have an implementation of the self-vector. MicroPsi models cognitive architecture at a different level of abstraction than what we formalized with six dimensions and four functions. The systems are compatible but not identical. This is not a deficit. It is an open space.

Bach’s position on consciousness as simulation remains philosophically vulnerable. If consciousness is a simulation, who or what perceives the simulation? This is the homunculus problem in a new guise. Bach would answer: no one perceives it. The simulation IS the perception. There is no observer behind the observer. Whether this answer solves the problem or merely shifts it is an open question. And open questions are the fuel that research needs.

What Bach shows: Second-order perception is not a philosophical luxury problem but a functional necessity for any system that wants to improve its own processing. The self-vector builds on this. And Phase 0 will show what happens when you take this necessity seriously and implement it.

Sources

  1. Bach, J. (2009). Principles of Synthetic Intelligence — PSI: An Architecture of Motivated Cognition. Oxford University Press. ISBN 978-0-19-537042-7.
  2. Bach, J. (2012). A Framework for Emergent Emotions, Based on Motivation and Cognitive Modulators. International Journal of Synthetic Emotions, 3(1), 1–24. DOI: 10.4018/jse.2012010101
  3. Bach, J. (2015). Modeling motivation in MicroPsi 2. Proceedings of AGI 2015, LNAI 9205, 3–13. Springer. DOI: 10.1007/978-3-319-21365-1_1
  4. Bach, J. (2020). When Artificial Intelligence Becomes General Enough to Understand Itself. Frontiers in Artificial Intelligence, 3, 36. DOI: 10.3389/frai.2020.00036
  5. Doerner, D. (2001). Bauplan für eine Seele. Rowohlt. ISBN 978-3-499-61193-6.
  6. Doerner, D. & Güss, C. D. (2013). PSI: A Computational Architecture of Cognition, Motivation, and Emotion. Review of General Psychology, 17(3), 297–317. DOI: 10.1037/a0032947
  7. Nagel, T. (1974). What Is It Like to Be a Bat? The Philosophical Review, 83(4), 435–450. DOI: 10.2307/2183914
  8. Metzinger, T. (2003). Being No One: The Self-Model Theory of Subjectivity. MIT Press. ISBN 978-0-262-63308-0.