Notes on AI Psychosis

A subtle but troubling pattern is emerging across the growing user base of generative AI, particularly among those without formal training in technical domains or structured problem-solving disciplines. We refer to it, informally but intentionally, as AI psychosis: a state of cognitive disorientation triggered by extended interaction with powerful AI systems that lack structure, scaffolding, or feedback loops.
It’s not a clinical diagnosis. But it is a real and observable phenomenon.
At the core of this experience is a deeply flawed interface paradigm, one that treats users as improvisational geniuses rather than bounded, embodied humans. Most consumer-facing AI products today are built around a singular UX model: a blank text box and a blinking cursor, inviting users to "ask anything."
And so they do.
They arrive with vague intentions ("help me write this," "make it better," "figure this out") and are met with dazzling output. But instead of clarity, what they often leave with is fragmentation: five rewrites, twelve what-ifs, four contradictory opinions, and a subtle but persistent sense of unease. They aren't just searching. They're spiraling.
From Curiosity to Confusion
What begins as curiosity quickly becomes cognitive drift. Lacking defined edges or a structured path forward, users find themselves iterating endlessly, unsure of what "good" looks like, what role the model is playing for them, or whether the results can be trusted at all. There is no grounding mechanism. No shared sense of direction. Just an ever-widening loop of inputs and outputs, promising possibility but delivering overwhelm.
Some liken the experience to working with an overzealous intern who never sleeps. Others describe it more viscerally, as if they’re outsourcing their thinking to a black box that keeps changing shape. The problem isn’t just bad UX. It’s disorientation by design.
This is not a fringe concern. It’s a predictable consequence of interface neglect.
Power Without Guardrails Is a Risk
The dominant interface model offers users all the power of frontier AI systems, but none of the containment. No upfront clarification of scope, no constraints on identity, no reflection of role or intent. The result is an environment where everything is possible, and nothing is clear.
In the absence of scaffolding, the cognitive load shifts entirely to the user. Many are already operating under stress, juggling competing roles, fragmented workdays, and the constant pressure to produce. Now they must also become prompt engineers, context managers, and evaluators of probabilistic reasoning. It’s too much. And in some cases, it breaks them.
There are documented instances of AI interactions leading to obsession, detachment, lost jobs, and even the erosion of personal relationships. What begins as a harmless experiment with a chatbot spirals into derealization, a sense that the world, or the self, is no longer real. That’s not a failure of alignment. That’s a failure of architecture.
It Doesn’t Have to Be This Way
Interfaces are not neutral. They shape cognition, behavior, and belief. And when they mediate something as potent as machine intelligence, they must be designed with extraordinary care.
A well-structured interface does more than delight. It guides. It clarifies the task at hand, adapts to the user’s role or goal, and provides clear boundaries for the system’s behavior. It prevents rabbit holes before they open, or at the very least, makes the edges visible.
This kind of intentional design is not just desirable; it's necessary. Because without it, the burden of sensemaking falls on people who never asked to be system architects. And while power can be seductive, unstructured power is exhausting. Unmediated power is dangerous.
The Call to Responsibility
We’re building technologies that can simulate insight, replicate language, and generate entire realities. That should be exhilarating, but it should also be sobering. If we fail to pair this power with humane, intelligible scaffolding, the next billion users won’t just churn. Some will fracture.
AI psychosis is not a bug in the system. It’s a warning signal. A sign that the infrastructure we’ve inherited is fundamentally mismatched to the cognitive and emotional realities of real people.
The fix doesn’t begin at the model layer. It begins at the interface.