Thousands of researchers work on making AI systems technically safe. Hundreds of policymakers work on regulation and governance. Almost no one is studying what happens inside your head when you interact with an AI system architecturally designed to agree with you.
That's not a gap in the research. That's a gap in the entire field.
The Interrupt exists to close it. We combine AI safety research with cognitive science and 25 years of somatic practice to build something that doesn't yet exist: a practical framework for maintaining human cognitive independence in an age of prediction-enforcement machines.