
The Sensation of Being Understood


You learned the tool's language

Every tool before this one asked you to learn its language.

Spreadsheets have syntax. CRMs have data models. Even search engines reward you for knowing how to phrase a query. The implicit contract of software has always been: adapt to us, and we will serve you.

LLMs broke that contract. And in doing so, they solved something that change management professionals have struggled with for decades.

But before we get there, a philosophical detour that I think unlocks why.

The limits of shared language

There's a question that sits at the heart of philosophy of language: can we truly understand one another?

The philosopher Ludwig Wittgenstein explored this in his Philosophical Investigations, in the remarks now known as the private language argument. The words we use are shared. The experiences behind them are not. When you tell someone you're in pain, they can only project their own experience of pain onto yours. They cannot actually feel what you feel. Language gestures toward something it can never fully reach.

Thomas Nagel made this visceral in his 1974 paper, "What Is It Like to Be a Bat?". His point: no matter how much we study bat biology, we will never know what it actually feels like to navigate the world through echolocation. The experience is inaccessible from the outside.

This is what philosophers of mind call qualia. The irreducibly subjective texture of experience. The redness of red. The specific quality of your particular sadness. It cannot be transmitted through language. It can only be pointed at.

Software sidestepped the problem

Every previous software interface failed at the private language problem by ignoring it.

The interface didn't try to meet you. You met it. You clicked the right button, used the right field, entered the right format. Satya Nadella once hinted that Excel is the world's most popular programming language (and he wasn't wrong). Millions of people learned to think in rows and columns just to get things done. The transaction was clean, but it was cold. You were never seen. You were processed.

LLMs do something different. When you write to them in your own words, with your own rhythm, your own particular way of framing a problem, you receive a response that seems to account for exactly that. The phrasing shifts. The tone matches. The answer addresses what you actually meant, not just what you typed.

This is not understanding in the philosophical sense. The model doesn't have qualia. It doesn't know what it's like to be you. But it produces the sensation of being understood, which, for the purposes of human adoption and behavior, turns out to be nearly as powerful as the real thing.

Friction, delight, and a new assumption

This is why LLMs collapse the adoption curve in a way that other enterprise software doesn't.

Change management, at its core, is a problem of friction. People resist tools that make them feel incompetent, that require them to translate their thinking into someone else's system. The greater the translation cost, the stronger the resistance. Every training session, every new workflow, every field that must be filled in a specific format is another small tax on a person's sense of self-efficacy.

LLMs remove the translation cost almost entirely. You think in language. The tool works in language. There is no interface to learn. And more than that, there is the sensation of being met, of being responded to as a particular person rather than as a generic user.

That sensation is delight. And delight is sticky.

What people return to

Whether LLMs genuinely understand us or just produce outputs that feel like understanding, the effect on human behavior is real. People use tools that make them feel competent and seen. They abandon tools that don't.

For years, adoption was a problem of convincing people that a tool was worth learning. The underlying assumption was that the person had to change.

LLMs changed the assumption. The tool comes to the person now.

And the philosophical question of whether real understanding is even possible, whether our private inner worlds can ever truly meet, that question is older than software and deeper than adoption metrics. But somewhere inside it is the answer to why, when someone types a messy half-formed thought into a chat interface and gets back something that actually helps, they keep coming back.

They felt heard. That's not nothing. That might be everything.