The Starting Situation
Three months of working with an AI coach that logs every session. Not the content of conversations, but the metadata: What was planned? What was actually worked on? Which tasks were postponed? How often? At what times does productive work happen? When do most errors occur?
From this metadata, a picture emerges. Not a portrait, more of a movement profile of work behavior. And this picture shows things that the person it concerns did not see. Not because they weren’t looking, but because certain patterns only become visible when aggregated over weeks.
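The kind of log the text describes could be modeled minimally like this. A sketch only: the field names and structure are assumptions, not the actual system.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SessionLog:
    """Session metadata only -- no conversation content is stored."""
    started_at: datetime
    planned: list[str]      # what was planned
    completed: list[str]    # what was actually worked on
    postponed: list[str]    # what was deferred again

def postponement_counts(logs: list[SessionLog]) -> Counter:
    """Count, per task, how often it was postponed across all sessions.
    A task deferred week after week only becomes visible here, at the
    aggregated level, not in any single session."""
    counts: Counter = Counter()
    for log in logs:
        counts.update(log.postponed)
    return counts
```

Nothing in the aggregation is sophisticated; the point is that the pattern lives in the sum, not in any individual entry.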
Five Patterns the System Recognized
1. Communication Debt
The term is an analogy from software development: technical debt describes shortcuts that save time in the short term and create costs in the long run. Communication debt works the same way: every postponed response, every conversation not had, every deferred alignment is a debt that accrues interest.
The system recognized that outward communication (posts, responses, alignments) systematically falls behind conceptual work. Not occasionally. Systematically. Over weeks. The reason is neurochemically traceable: conceptual work is intrinsically motivating (dopamine). Communication requires overcoming resistance (executive functions). With ADHD, dopamine almost always wins.
The insight is not “you communicate too little.” The insight is the structure behind it: which kind of communication is avoided, in which contexts, and what are the accumulated costs.
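A systematic gap of this kind could be detected with nothing more than per-category postponement rates. The category labels and the triple format below are invented for the sketch.

```python
from collections import Counter

def postponement_rate_by_category(task_log):
    """task_log: iterable of (task, category, was_postponed) triples.
    Returns the fraction of planned tasks postponed, per category.
    A persistent gap -- communication high, conceptual low -- is the
    'communication debt' signal described above."""
    planned, postponed = Counter(), Counter()
    for _task, category, was_postponed in task_log:
        planned[category] += 1
        if was_postponed:
            postponed[category] += 1
    return {cat: postponed[cat] / planned[cat] for cat in planned}
```

A single week of such rates says little; the same ratio holding over many weeks is what turns "occasionally" into "systematically".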
2. Over-Conceptualizing as Avoidance Strategy
Version 1 of a concept is written. Then version 2, because “there was one more thought.” Then version 3, because “the structure wasn’t quite coherent.” Then versions 4 and 5. No one has ever seen version 1.
The system quantified the pattern: for certain task types (strategic papers, role models, presentations for superiors), there is a significant correlation between revision cycles and emotional investment. The more important the outcome, the more versions, the less is shown externally.
The uncomfortable interpretation: perfectionism and avoidance are related phenomena. Those who keep optimizing never have to face others’ judgment. The additional version is not quality assurance. It is deferral.
3. Hyperfocus-Crash Cycles
Three to four hours of highly productive work. Deep concentration, high output quality, subjective flow experience. Followed by an energy low that compromises the rest of the day. The pattern is well-documented in ADHD (Barkley 2015), but knowing it and seeing it in your own data are two different things.
The system mapped the cycles temporally. The most productive phases occur late at night (a classic ADHD pattern: the brain becomes productive when the world goes quiet). But the crash the following morning partially negates this productivity. The question is not “How do I prevent hyperfocus?” (it is the greatest strength) but “How do I manage the crash?”
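Mapping the cycles temporally amounts to a histogram of when high-output phases start. The 0.8 threshold for what counts as "hyperfocus" is an assumption made for the sketch.

```python
from collections import Counter

def focus_hours(sessions, threshold=0.8):
    """sessions: (start_hour, productivity_score 0-1) pairs.
    Returns a histogram of the hours at which high-output phases
    begin. Late-night clustering -- the pattern in the text -- shows
    up as a peak around hours 22-1."""
    hist = Counter()
    for hour, score in sessions:
        if score >= threshold:
            hist[hour] += 1
    return hist
```

The same histogram, shifted forward by a few hours, would show where the crashes land; comparing the two is the management question, not the prevention question.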
4. Topic Drift vs. Topic Coherence
Context switching is common with ADHD. But not every context switch is problematic. The system learned a crucial distinction: five switches within a topic (writing text, looking up a source, creating a graphic, back to text) are productive. Three switches between unrelated topics within an hour indicate cognitive overload or understimulation.
The distinction between switching frequency and topic coherence is subtle but central. A coach that warns at every context switch will be ignored, because most switches are harmless. A coach that only warns when thematic coherence breaks delivers relevant information.
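The distinction can be expressed directly: count only transitions between different topics inside a recent window, and stay silent otherwise. The threshold of three and the one-hour window come from the text; the event format is a sketch.

```python
def should_warn(switches, window_minutes=60, threshold=3):
    """switches: chronological (minute, topic) context-switch events.
    Warns only when thematic coherence breaks: switches within one
    topic (text -> source -> graphic, same topic label) never
    trigger; three or more transitions between different topics
    inside the window do."""
    if not switches:
        return False
    cutoff = switches[-1][0] - window_minutes
    recent = [s for s in switches if s[0] >= cutoff]
    crossings = sum(1 for (_, a), (_, b) in zip(recent, recent[1:]) if a != b)
    return crossings >= threshold
```

Everything difficult is hidden in the topic labels: deciding that "looking up a source" belongs to the article rather than to a new topic is the actual modeling work.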
5. Invisible Progress
Perhaps the most surprising pattern: systematic underestimation of one’s own output. In a typical week, multiple systems are built, an article written, research conducted, strategic decisions made. But subjective perception focuses on what was not accomplished.
This is documented in ADHD and relates to attention regulation: completed tasks immediately lose salience. Open tasks remain in awareness. The result is a systematically distorted self-image that perceives more deficits than actually exist.
The coach corrects this distortion not through praise but through enumeration: “This week: three systems, one article, two strategic decisions.” No evaluation. An inventory.
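The enumeration itself needs no intelligence, only bookkeeping. A sketch; the categories are invented.

```python
from collections import Counter

def weekly_inventory(completed):
    """completed: (kind, name) pairs of finished work.
    Returns a plain enumeration -- no praise, no evaluation.
    Restating finished tasks counters their loss of salience."""
    counts = Counter(kind for kind, _name in completed)
    parts = [f"{n} x {kind}" for kind, n in counts.items()]
    return "This week: " + ", ".join(parts)
```

The deliberate flatness of the output is the design choice: an inventory, not a reward.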
The Ethical Dimension: Who Owns Knowledge About a Person?
The five patterns above are intimate. They describe weaknesses, avoidance strategies, neurochemical particularities. They are the result of an analysis that no human could have performed, because no human has all the data points: a therapist sees one hour per week. A colleague sees only the work context. A partner sees only the private context. The AI system sees all contexts across all sessions.
This raises questions that are not technical:
Who owns these insights? The system that generated them? The person they are about? The company that operates the infrastructure? The question is not trivial: if an AI coach recognizes work patterns that suggest a mental health condition, who has access to this information?
Does an asymmetry emerge? The system knows things about the person that the person doesn’t know about themselves. In every other relationship (doctor-patient, therapist-client, employer-employee), there are rules for such asymmetries. For the relationship between a person and their AI coach, there are none.
Does knowledge change the relationship? If I know that my system knows my avoidance patterns, do I behave differently? Do I become more honest, because denial is pointless? Or more strategic, because I know what the system “sees”?
The Autonomy Objection
The weightiest objection against the entire concept: if a system tells me what my patterns are and gives me recommendations on how to change them, do I then unlearn self-reflection?
The argument carries philosophical weight. Kant defined enlightenment as “man’s emergence from his self-imposed immaturity.” Maturity requires independent thinking. A system that takes over thinking about oneself could undermine the very maturity it purports to foster.
The counter-position: the premise that humans are good at thinking about themselves without external help is empirically questionable. Kahneman documented how systematically System 1 errs about its own thought processes. Therapists exist because self-reflection alone is often not enough. Coaching exists because the outside perspective sees things that the inside perspective systematically overlooks.
The AI coach does not replace the capacity for self-reflection. It extends it. It delivers data that the person alone would not have had. What the person does with this data remains their decision. A mirror does not create dependency. It shows what is.
But: it is a mirror that sees more sharply than the person looking into it. And this asymmetry deserves attention, not reassurance.
The Open Question
This episode ends not with an answer but with a tension: the system delivered real, useful, partly transformative insights. At the same time, it created an information asymmetry for which there are no societal rules.
Is this liberating or unsettling? The honest answer: both. And the more honest answer: the tension will not disappear. It is the condition under which AI coaching takes place. Those who try to resolve the tension, in one direction or the other, are making it too easy on themselves.
Coming April 2026
Further Reading
- AI Coach: A Chief of Staff — The coaching concept and the autonomy debate
- ADHD and AI — Why external structure works better than self-discipline
- Self-Vector — The model behind the pattern recognition
- Kahneman and AI — Why we systematically misjudge our own thought processes