October 14, 2025
Industry Perspectives

The AI Usability Crisis and What We Can Do About It

There’s something wrong with GenAI.

Despite accelerating technical progress, user satisfaction is declining. According to Forbes, 77% of GenAI users report feeling frustrated or confused. Usage of ChatGPT, the most widely adopted consumer AI tool to date, has begun to decline, and the product is pivoting toward erotica and search. Meanwhile, the broader ecosystem, valued in the hundreds of billions, remains alarmingly dependent on informal workarounds: prompt libraries, Reddit threads, YouTube tutorials, and screenshots stored in personal albums.

It doesn’t make sense, because by nearly every quantitative measure this technology should be flourishing. The average large language model now outperforms human experts on a range of cognitive tasks. Inference costs have fallen by 90- to 900-fold per year, according to Stanford research, making these models cheaper and faster than ever to deploy. These tools are accessible to hundreds of millions of people, often for free, and are trained on an incomprehensibly vast corpus of books, code, medical journals, manuals, and more.

And yet, for all this power, generative AI is failing at the point of contact with the user.

Over the past few months, I’ve conducted more than 100 interviews with a wide range of users, including early adopters, disillusioned testers, and what I call apprehensive avoidants: people who recognize the potential of AI but find themselves alienated by the experience of using it. Their feedback surfaces two central truths that many industry leaders continue to ignore:

  • The extraordinary power of these models often conceals their practical limitations.
  • While the industry races to improve model capabilities, almost no one is seriously addressing the experience of the user.

This is the heart of the AI Usability Crisis.

We were promised a new era of infinite leverage: tools that could extend our cognitive reach, accelerate our ideas, and move us beyond the old constraints of time and bandwidth. But instead of impact, we’re getting inconsistency, confusion, and, with increasing alarm, AI psychosis.

These are not edge cases. They are the predictable consequences of a dominant interface paradigm that never evolved to meet the moment.

The Empty Prompt Box

At the center of this crisis lies a now-familiar design pattern: the blank prompt box.

This minimalist interface, popularized by Google and widely adopted by GenAI companies, appears to offer boundless possibility. But what happens when you slap the same interface onto a very different technology? In practice, it places an enormous cognitive burden on the user. Most people arrive with vaguely formed intentions (“help me write this,” “figure this out”) and are met with an invitation to improvise. The resulting interaction is often meandering, inconsistent, or simply unproductive.

This is not a failure of user imagination. It is a consequence of an interface paradigm that obscures structure and assumes far more technical fluency than most users possess. A simple task can quickly devolve into a recursive cycle of rewriting, clarifying, and re-prompting. This dynamic, which many users have described to me as “spiraling,” produces cognitive overload: increasingly incoherent interactions with a tool that appears intelligent but is procedurally unpredictable.

The Problem of Context Management

Even more pernicious is the issue of context management. Large language models are especially sensitive to how information is sequenced and referenced. Yet most interfaces offer no indication of what the model knows, remembers, or is actively drawing from. Users are left to guess what’s relevant, which instructions persist, and whether prior inputs are still influencing the response.

This is not an abstract technical issue; it is a daily point of failure for millions. Entire online communities exist just to debug context collapse. This should be a red flag. Most users aren’t trying to build agents or orchestrate plug-ins. They’re trying to get help thinking through a client deck, planning a project, drafting a bio, summarizing a meeting. When the tool fails at that, it’s not because people lack imagination. It’s because the interface outsources the hard part to them.

The Real Cost of Confusion

This gap between accessibility and usability is already generating measurable economic and social costs. A recent MIT Sloan study found that the majority of generative AI deployments fail to deliver tangible business value, largely due to poor integration and unclear workflows. Conversion rates from free to paid products remain low, and even OpenAI’s own data shows that the majority of user activity is limited to narrow tasks like simple writing and search.

But the more urgent concern lies with the distribution of benefit.

Those who stand to gain the most from the promise of AI, people managing multiple jobs, caretaking responsibilities, or early-stage businesses, are also the ones most likely to abandon the tools altogether. They do not have time for trial-and-error. They do not have the cognitive bandwidth to debug opaque systems. What they need is a reliable co-pilot, not a new skill to master.

If the industry continues to treat usability as a secondary concern, we risk reinforcing an old pattern: building transformative technologies that disproportionately benefit those already empowered.

Where We Go From Here

The good news is that this is a solvable problem. But it requires a fundamental shift in priorities. The next phase of generative AI innovation must center not only the capabilities of the model but also the cognitive realities of the user.

That means abandoning the myth of the idealized power user and designing for the constrained, the overwhelmed, the non-expert. It means developing interfaces that surface what's possible before a prompt is written, preserve and expose context across interactions, and adapt to real-world workflows.

In short, the age of infinite leverage cannot be built on interfaces that assume infinite time, energy, and imagination.

Generative AI remains one of the most powerful technologies ever developed. But its value will not be measured by how many tasks it can complete in theory. It will be measured by how many people it can empower in practice.

Until we address the usability crisis, that promise will remain out of reach.

Posted by
Ebony Belhumeur
CEO, DappleAi