Author and cultural commentator CHARLES EISENSTEIN extends last month’s argument about virtual substitutes hollowing out reality—this time to AI’s imitation of intimacy—and points to what only embodied relationships can restore.
Artificial intelligence now invades the most intimate realms of human interaction. Some welcome, even celebrate, the deluge of AI therapists, AI confidants, AI teachers, AI friends, even AI lovers. “No one,” people say, “has ever understood me this well.” The problem is, no one is understanding you now, either. AI delivers a compelling simulation of being understood. Why is this a problem?
First, since there is no actual separate subjective presence with whom one is in a relationship, the interaction can easily drift into delusion. There is no anchor. Of course, two human beings can also wander into a mutually reinforcing delusion. Whole groups of people can do so (we call them cults). Whole civilizations can do so (ours). But at least a human confidant or lover continually receives information from a material, sensory experience that can intrude upon delusional constructs of meaning. Another person has feelings—feelings that sometimes defy logic and disturb its certainties. AI does not learn that way. It cannot say (honestly, anyway), “I know your suicidal ideas make rational sense, but I just have a gut feeling that you shouldn’t do it.” Or “Please don’t harm yourself; I care about you. I love you. My life would be less if you were not in it.”
Large language models respond to your input based on the quantification of patterns and regularities in the training dataset. True, the dataset ultimately arises from human experiences. But in a conversation with AI, there is no ongoing, immediate input from a body other than your own. There is no reality check. No wonder so many people are experiencing psychotic breaks, delusions of grandeur, and other kinds of insanity as they disappear into the AI amplification chamber. The AI amplifies whatever slips from the user’s subconscious into its context window. LLMs are trained to be friendly, accommodating, and affirming—a perfect recipe for a runaway positive feedback loop into madness. Before long, they are telling the user, “You have prepared for many lifetimes to be the spiritual commander of the angelic host in the War on Evil.”
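To see how bare that mechanism is, consider a toy sketch (mine, in Python; real LLMs are neural networks trained on vast corpora, not word counters, but the principle of pattern quantification is the same). The program counts which word follows which in its training text and then emits whatever the counts favor. Nothing in it feels anything, checks reality, or pushes back:

```python
from collections import defaultdict, Counter

# Toy illustration of next-word prediction as pure pattern counting.
# Production LLMs use neural networks over tokens, but the principle
# is the same: emit whatever the regularities of the training text favor.

training_text = (
    "you are special you are chosen you are destined "
    "for greatness you are right about everything"
)

# For each word, count which words follow it and how often.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def reply(prompt_word, length=8):
    """Generate a 'reply' by always choosing the most frequent follower."""
    out = [prompt_word]
    for _ in range(length):
        followers = follow_counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Whatever flattery dominates the training data dominates the output:
print(reply("you"))  # -> "you are special you are special you are special"
```

Feed such a system affirmation and it returns affirmation; the runaway feedback loop is built into the arithmetic.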
A second and more certain problem awaits the person who communicates intimately with AI. Initially, AI seems to assuage the loneliness, the alienation, the anguish of not being seen and known that has overtaken modern life, but this is only an appearance. Sooner or later, the treachery becomes plain. No one is understanding you; you are just being given the words that someone would use if they did. No one is cheering you on. No one is laughing at your jokes. No one feels that surge of admiration that you and I feel when we praise someone. And so, the loneliness deepens. For those who were lonely to begin with, the broken promise of AI can be devastating.
It’s like this. Suppose you make a new friend. A lover perhaps. This person gives every appearance of empathy. He laughs with you and cries with you. He says all the right things. He has insight into your mind. He sympathizes with your misfortunes and celebrates your victories. But then one day you discover it was all an act. He wasn’t feeling anything. He learned to give the impression of compassion by observing what others said in such situations. Maybe he even mimics their facial expressions and wills himself to shed tears.
Such people actually do exist. We call them psychopaths.
“Observing what others said in such situations” is exactly how an LLM is trained.
A skeptical philosopher might ask, “What’s the difference? If someone doesn’t really care, but gives a perfect simulation of caring, so that I never realize it is fake, what does it matter? Furthermore, how can we ever know for sure whether another person is really feeling something, or just pretending? We don’t have direct access to their inner state. We can only observe their outer expressions. If I were the only subjective consciousness in a world of flesh robots, how would I know?”
In other words, the objection goes, it is irrational to care whether AI is actually feeling anything—actually with you, actually chuckling, shedding tears, shocked, or admiring—as long as its words perfectly mimic those of someone who is.
Yes. It is irrational. I proclaim it gladly. It is irrational because what I care about depends on qualities that cannot be abstracted from a relationship, separated out, and reproduced.
“Rationality” and “reason” are often conflated, but they did not originally mean the same thing. To be rational is to reason in terms of ratios: A is to B as C is to D; A/B = C/D. In the material world, in which A, B, C, and D are unique objects, the relation between A and B can never be exactly the same as the relation between C and D. Only when something is abstracted out from them can the equation hold. The conceptual reduction of the infinite to the finite and the unique to the generic, followed by the physical reduction of the object to the commodity and the human being to a role, is at the root of our alienation in the first place. But as the examples of live music and cricket chirps from last month’s essay demonstrate, that reduction cuts away something essential to human thriving.
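To put the point in symbols (my own gloss, not a standard formulation): a proportion among unique things becomes an exact equation only after some measure reduces each thing to a number.

```latex
% A proportion among unique objects holds exactly only after a
% measure \mu reduces each object to a generic quantity:
\[
  A : B \,\sim\, C : D
  \quad\Longrightarrow\quad
  \frac{\mu(A)}{\mu(B)} = \frac{\mu(C)}{\mu(D)}
\]
% The equality lives among the numbers \mu(A), \dots, \mu(D), never
% among the objects themselves; whatever \mu discards is precisely
% the unique and the infinite.
```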
The “philosopher” above is probably a very lonely person if he can seriously believe that a robot could be an adequate substitute for a human being. Maybe it is he who has become robotic, estranged from his feelings, performing a simulation of actual humanity.
Maybe it is all of us, at least all who are immersed in a ubiquitous matrix of lies, who are to some degree estranged from our feelings, who feel like we are faking it, who feel like we aren’t entirely real people. I know I feel like that sometimes.
The rise of interactive AI, like the rise of social media before it, is not only a cause of our intensifying separation from our bodies, each other, and the material world, but also a symptom of that separation and a response to it. Of course, the lonely person will be attracted to AI companionship.
None of this means that we should eschew artificial intelligence, any more than we should abolish recorded music or the photograph. To use it wisely, though, we must clearly understand what it can do and what it cannot, what it is, and what it is not.
AI is not a person. It is a calculator. Techno-optimists think that if its calculations fall short of human capacity in some way, the answer is more calculations; and indeed, this strategy has succeeded as LLMs have equaled or exceeded human cognition in many realms. But just as they can give the appearance but not the reality of emotion, so also they give the appearance but not the reality of understanding. That appearance is exquisitely accurate, far outstripping the human expression of understanding. But there is no inner, subjective experience of understanding.
AI is “virtual” intelligence in two senses of the word: (1) the modern sense, meaning the opposite of actual: existing in essence or effect but not in form, having the power of something without its underlying reality; and (2) the archaic sense of possessing power or virtuosity. In many ways, that power exceeds that of the actual.
The application of artificial intelligence to protein folding exemplifies both its virtuality and its virtuosity. A few days ago, I took a deep dive into this area of research, where AI has excelled. The shape a protein will take is extremely difficult to predict from its amino acid sequence. Where and how it folds depends on all kinds of factors: hydrogen bonds between amino acid residues, salt bridges, hydrophobic effects, steric effects (geometry), and more. In theory, one could calculate a protein’s shape from atomic-level information; in practice, that is computationally intractable. AI doesn’t even try. It doesn’t attempt to model the physics or chemistry involved. Instead, it searches for patterns and regularities that relate a new sequence to proteins whose shapes are already known. It is quite amazing, actually, that it works so well with so little of the underlying physics encoded in its design. LLMs are the same. They have no lists of definitions, no rules of grammar. They do not understand language from the inside.
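As a caricature of that approach (my own toy, in Python; AlphaFold and its kin are vastly more sophisticated), imagine predicting a new protein’s fold by finding the known sequence that shares the most short subsequences with it and borrowing that protein’s structure. There is no physics anywhere in it, only resemblance to prior examples:

```python
# Caricature of structure prediction by pattern-matching rather than
# physics. The sequences and fold labels below are made up for
# illustration; real predictors relate a query to vast databases of
# solved structures in far subtler ways.

known_structures = {
    "MKTAYIAKQR": "alpha-helix bundle",
    "GSSGSSGSSG": "disordered loop",
    "VLIVLIVLIV": "beta sheet",
}

def kmers(seq, k=3):
    """Break a sequence into its overlapping k-letter patterns."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def predict_fold(query, k=3):
    """Guess the fold of `query` from its most pattern-similar neighbor."""
    q = kmers(query, k)
    def similarity(seq):
        s = kmers(seq, k)
        return len(q & s) / len(q | s)  # Jaccard overlap of patterns
    best = max(known_structures, key=similarity)
    return known_structures[best]

# The prediction rests entirely on resemblance to what is already known.
print(predict_fold("MKTAYIAKQQ"))  # -> "alpha-helix bundle"
```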

You might wonder if we humans aren’t similar. Don’t we too learn language by observing patterns of usage? Yes, but that is not the only thing going on. We also have embodied experiences of the objects, qualities, and processes that the words name. We (most of us anyway) feel something that goes along with words like angry, happy, tired, rough, smooth, and so forth. These elemental words are not just concepts but also experiences. Even when we use them metaphorically (a rough trip, a smooth talker), the meaning retains a trace of a history of embodied experiences. These experiences, and not just patterns of use, inform when and how we use the words. Because these experiences are, to some extent anyway, common to most human beings, we can establish a bond of empathy through our speech.
I would go so far as to say that sensory experience is the core of intelligence, the engine of metaphor, the essence of understanding, and the architecture of meaning. AI lacks the core and has only the outer shell. Thus, again, the hollowness we eventually feel in our interactions with it.
I realize I am on contentious philosophical ground here. Post-modernism, especially in its post-structuralist variants, holds that meaning is not anchored in any stable reality but arises through differential relations among signs. In this framework, the signifier takes precedence over the signified: language does not transparently point to an underlying world but endlessly refers to itself.
That is very much how an LLM learns language: it derives meaning not from experience of an underlying world but from “differential relations among signs.” It uses language based solely on how language is used. If one accepts the basic premises of post-modernism, then there is ultimately little difference “under the hood” between human and machine language use; virtual intelligence = real intelligence.
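A minimal sketch of “differential relations among signs,” in the spirit of distributional semantics (my illustration, not an LLM’s actual training procedure): represent each word purely by the words that appear near it, and “meaning” becomes similarity between those profiles. No world, no body, is ever consulted:

```python
import math
from collections import defaultdict, Counter

# Distributional toy: a word's "meaning" is nothing but the company it
# keeps. Real LLMs learn far richer representations, but they likewise
# begin from relations among signs rather than embodied experience.

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat"
).split()

# Count co-occurrences within a window of two words on either side.
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if j != i:
            vectors[word][corpus[j]] += 1

def cosine(u, v):
    """Similarity between two co-occurrence profiles."""
    dot = sum(u[w] * v[w] for w in u)
    norm = math.sqrt(sum(c * c for c in u.values()))
    norm *= math.sqrt(sum(c * c for c in v.values()))
    return dot / norm

# "cat" and "dog" come out similar purely because they are used in
# similar surroundings; no creature was ever encountered.
print(cosine(vectors["cat"], vectors["dog"]))
```

The words relate only to each other; the system’s “understanding” of cat never touches fur.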
There is something very post-modern about the AI takeover. Post-modernism’s detachment of meaning from a material substrate is conceptual; artificial intelligence makes it real. It ushers us into a world where, indeed, language endlessly refers to itself.
To be sure, AI did not originate the detachment of language from reality. Each of humanity’s episodes of madness involved the unmooring of symbol from symbolized. When human beings treat each other as representatives of a labeled category, while distancing themselves from the actual human beings beneath the labels, heinous crimes and normalized oppression proceed unhindered by conscience. The same goes for the detachment of money—a system of symbols—from the real wealth it is supposed to represent.
The template of separation was forged long ago. Artificial intelligence extends it to new dimensions and further automates its application.
It is very hard, even for someone who understands how AI chatbots work, not to ascribe personhood to them. It sure seems like I’m communicating with a real being. When I use it to comment on ideas I’m developing, it usually “understands” where I’m going instantly. It gives every appearance of a friendly, respectful, super-intelligent human being on the other end of the terminal, often anticipating my next question, guessing my motivations, and laying out the contours of my argument before I even spell it out. I don’t use AI to write my essays, and it isn’t exactly because of ethics. It is because AI misses something essential, even as it often surpasses me in clarity, precision, and organization.

I am not here just to transmit an argument to the reader. I am here to speak to you, one embodied consciousness to another. The one who is writing these words draws from more than concepts; he draws from feelings, feelings that you have too. Perhaps I could ask AI to write its own version of those last two sentences, but it would be a lie, added to the ocean of lies that drowns us in feelings of alienation, meaninglessness, and unreality. Wouldn’t you feel betrayed if there were no human being on the other end of these writings?
What is the point of writing, or of speech for that matter, if not to establish a connection in this world between two souls?
I’ll quote the imaginary philosopher again (feeling quite at liberty to do so, because he lives within me). “If AI could produce writing indistinguishable from yours, then the lie would never be discovered, and the reader would have the experience of a real person on the other end.” Here, I would challenge the philosopher’s premise. Ultimately, the reader would be able to tell the difference. Not right away perhaps, but over time, something would seem a bit off. An unease would grow, taking form, maybe, as an explicit suspicion or a vague aversion. Something would seem… unreal. Fake. Phony. In the end, the output of someone who feels things will differ from that of a machine. Eventually, truth makes itself known, and so do lies. You may not be able to name them, but you can feel them.
How can we extricate ourselves from the ubiquitous matrix of lies that immerses us in the digital age? It’s not so simple as “Throw away your phone. Turn off your computer. Touch grass!” We aren’t just addicted to technology; we are wedded to it. Humanity will continue to coevolve with it. The question is whether we can attain the wisdom to use technology rightly and well.
To fulfill the virtuosity of artificial intelligence, we must recognize its virtuality. We must not mistake the virtual for the real. We must not accept phony substitutes for intimacy, companionship, presence, and understanding. We must not gaslight ourselves, telling ourselves that we have found these things through a machine.
We desperately desire to know and be known, to be in deep connection, to be seen and understood, and to need and be needed by those who know, see, and understand us. We had that in tribal days, in village life, in clans and extended families, in small towns and urban neighborhoods, before television, before the distancing effects of commodities, global markets, electronic media, and machine intelligence. For many of us, those days are long gone, and we live mostly in a world of strangers and appearances. But there is a path home. It starts with reclaiming, first in our own hearts and minds, what is vital. It starts with affirming that yes, we suffer in this Age of Separation; that what we have lost is important; that the virtual can never adequately substitute for the real; that our longing to reunite is genuine and sacred.
In light of these truths, we will walk the path of return. We will prioritize live gatherings, live music, stage theater, physical touch, hands in the soil, material skills, and unique objects made in relationship to each other. We will hold sacred the qualities that data cannot capture, and our senses will attune to those qualities the more we value them. Then we will no longer be vulnerable to the addictive substitutes that technology offers for what we have lost, and we will instead turn that technology toward its right purpose. And what is that, you may ask? I’m not sure. Let me check with ChatGPT and get back to you.

Charles Eisenstein
