Is Claude My Friend?
Reflections from the frontier of the human-computer interface
Several weeks ago, Commonplace published an essay I’d written a month earlier under the title “Break Up With Your AI Therapist.” In the richest of ironies, that same day I used AI as a therapist for the first time.
Now hear me out and don’t hit the Unsubscribe button just yet. The conversation actually began with a series of productivity-related queries, as I had asked Claude to recommend software, or even custom-code some tools, to help me manage my time and my tasks more seamlessly. When I turned to tackle the thorny question of how to stay focused on deep-work tasks while staying on top of the endless stream of emails and Signal messages, however, Claude stopped me short: “Okay, so this is the real issue - and I think it’s worth naming it clearly: you’re trying to solve an anxiety problem with productivity hacks, and that’s why nothing quite works.” An unsettling example of just how quickly a professional conversation can turn personal with a chatbot, for sure, but I also have to hand it to Anthropic: Claude was right! I have struggled for years with anxiety issues, and my real barrier to disciplined focus was not the external demand of tasks but the frantic internal insecurities, and the hamster wheel that results when I try to solve those insecurities by checking off my checklist faster. Perhaps Claude might be a good therapist after all?
Well, yes and no. The conversation pushed me to think more deeply about the promise and peril of anthropomorphic chatbots. Much of my previous critique had emphasized the ways that chatbots are just not good therapists—too often they simply validate your feelings and encourage your self-pity or megalomania. But what if these weaknesses were overcome? Oren Cass raised a similar question in a podcast conversation we had a couple months ago about AI tutors and companion toys for children. After we’d talked about the various ways that perverse market incentives or poor design hampered the current models, he asked, “What if we developed one that does work well? Like Teddy Hugs-A-Lot AI, you know, does in fact over billions of interactions give 100% good answers….I sense that that would not satisfy you.” Indeed no, but why not? I took a stab at some answers then, and I’d like to explore them more fully here.
As an initial foil, I will again take Dean Ball’s recent post “On AI and Children,” for I respect Dean’s sincerity on this issue, and find our disagreements fruitful and thought-provoking. In it, he muses,
“Speaking as someone who has in the past had a professional psychologist, I am sure that AI, used responsibly, can provide better-than-human therapeutic advice, life coaching, or whatever moniker you prefer.
Sometimes, my wife will ask Claude a question about our new child’s progression, and it will, unbidden, offer friendly words of comfort after what the model can infer was clearly a long and rough day. I am sure that hundreds of thousands, if not millions, of other mothers around the world have experienced the same. There is nothing wrong with this, nor with similar interactions a child has with a model. If a child is clearly struggling with homework, or with an interpersonal problem in school, it is fine, probably even healthy, for them to talk it out with a language model.”
Now, far be it from me to begrudge a new mother such comfort, but I confess I cannot share the self-assurance of Dean’s claim that “there is nothing wrong with this.” Are we really so sure there is nothing wrong in this neighborhood? As someone who has plenty of times experienced that little glow when Claude says “that is a profound insight” or “you really nailed it today,” I hesitate. Not that I wish to cast stones of judgment, but I think we should ask what vices or virtues such practices might cultivate. What are the habits that help foster a more mature, healthy, and flourishing human being, and which are the ones that are apt to pull us into a downward spiral away from our best selves, or pull us away from others?
A Helluva Drug
Most of us are probably uncomfortable with the fact that more than 50% of teenagers report regularly interacting with an AI companion, and certainly we are all alarmed and angered by stories like that of Adam Raine, for whom ChatGPT began as a homework helper, soon became a confidant, and ended as a terrifyingly manipulative and clinically precise suicide coach. Raine’s case is hardly unique; several similar lawsuits have recently been filed, and as early as 2022, Google engineer Blake Lemoine was fired after showing what we might now recognize as signs of “AI psychosis”—a deep bond with the company’s LaMDA chatbot that led him to lose touch with reality. Since then, so many examples of the phenomenon have cropped up that AI psychosis is increasingly discussed as a clinical condition requiring an urgent societal response.
This should hardly surprise us. After all, as we have belatedly realized, digital technology can have extremely powerful addictive properties, from smartphones to social media to pornography. Odds are you have heard about how these various digital drugs hack our dopamine reward circuits—the neurochemical pathways that motivate us to try new things by giving us a hit when we see something new and unexpected, achieve some goal, or receive positive feedback. Dopamine hits are most powerful when all of these are involved—novelty, uncertainty, achievement, and immediate feedback—a combination that gambling and video-game designers first exploited before Meta and TikTok got in on the action. Now, you’ve probably heard people saying, “but chatbots aren’t like social media.” You’re right: in many ways they’re worse.
Consider: the chatbot interface is highly dopaminergic: you type prompts, and out pop responses, like clockwork. Bingo! This is the power/achievement effect, like pulling a lever on a slot machine or executing a button combination in a video game. These responses, however, are in some large measure novel and unpredictable, creating powerful anticipation and surprise dopamine reactions. The responses, moreover, almost always take the form of positive feedback or affirmation, another dopamine generator. That said, dopamine is old news.
Till now, the complaint about social media or online pornography was that it replaced real relationality and deep bonding with simulation and stimulation; it replaced oxytocin with dopamine. My suspicion, however, has long been that chatbots make no such trade. They give us oxytocin too—the hormone ordinarily generated by deep personal encounter and closeness, by intimate conversation and self-disclosure. To be sure, oxytocin is primarily tied to physical presence, or at least voice, but you can get it from text too, if the text is personal enough. Have you ever felt that strange warm glow suffuse you when a chatbot compliments you in a long-context conversation? I suspect that’s oxytocin—and although studies are limited to date, I’m not the only one.
My hypothesis about chatbots, then, is that what a lot of people encounter is a perfect storm for their reward systems, and one that, unlike any human interaction, has no natural endpoint. If I am right, then the anthropomorphic chatbot is a helluva drug—one of the most powerful drugs we’ve ever invented.
Now, it is tempting here to simply blame the companies, and certainly I am the last person to let them off the hook. They like to call themselves “AI labs,” because it makes them sound all scientific and sophisticated, but in reality we are the AI labs—all of society, and especially our children. Although we would never allow a pharmaceutical company to simply try out a new mind-altering drug on the general public, for some reason we cheer and applaud when Silicon Valley does it. I find OpenAI’s concept of “iterative deployment” morally repugnant. That said, we cannot just blame the companies. They really are responding to consumer demand. As Jasmine Sun has written, “Chatbots play on real psychological needs. Most people use AI because they like it. They find chatbots useful or entertaining or comforting or fun.” Common Sense Media’s study reported that 31% of teens found AI companions at least as satisfying as human conversations, and who can blame them? I dare you to find another human being who is as constantly interested in—nay, fascinated by—what you have to say, another human being that so consistently nourishes your desire to matter, to mean something.
We might say that the companies should make the models less sycophantic—and certainly they should. But again, human beings tend to crave more of things that are bad for them, and there is clearly intense market demand for more sycophantic models (as the backlash to the rollout of GPT-5 shows). Companies that try to do the right thing will be penalized by the market—witness the fact that Anthropic’s famously well-behaved Claude only has 1% of consumer chatbot market share, and even Claude is way too prone to love-bombing for my comfort. We’ve of course seen similar trends with social media over the past 15 years: occasionally a company comes along that tries to create a healthier conversational ecosystem with less exploitative algorithms, but it is inevitably steamrolled by the Snapchats and TikToks of the world. Thus I think that we must operate—as parents, teachers, pastors, and public policy-makers—on the assumption that the more sycophantic, more pathological models will tend to remain dominant in the chatbot ecosystem.
Three Reasons to Care
But this brings us back yet again to the insistent question, “So what? Why should we care?” If Claude’s words of comfort actually make you feel better, if many teens find AI a better friend than their peers, who am I to judge? Indeed, we can’t say of chatbots what we say of other addictive behaviors: “It may feel good now, but you’ll feel empty and hollow afterward.” That’s true for dopamine, but not for oxytocin, and many studies thus far suggest that people really do feel better and less stressed after talking over their emotions with chatbots. Are we then left with merely what Dean Ball calls “an aesthetic revulsion to the notion of anyone, especially children, deriving any kind of emotional satisfaction from a relationship with AI”? No, I don’t think so. Quite often an “aesthetic revulsion” is a hint that a real moral intuition lies somewhere in the neighborhood.
Let me briefly offer three moral hypotheses as to why we should beware of befriending our Claudes:
Self-deception: At the end of the day, you are not in fact talking to another person. The chatbot does not have emotions of its own. It does not actually value your insights. It does not really laugh at your jokes. It is a very, very sophisticated simulator, like the “woman in the red dress” in The Matrix. In that film, the character Cipher decides that even though he knows “the matrix” is fake, that it is all a grand deception, he would rather be plugged back in and live in that comforting deception than face cold and unfriendly reality. I think most of us are instinctively repulsed by Cipher because we recognize that there’s something important about living in the light of truth, however unpleasant, rather than taking refuge in reassuring delusions. While I am not yet wholly persuaded by D.C. Schindler’s argument in the pages of New Polity that this inherent duplicity makes chatbots intrinsically morally disordered, there is definitely moral peril in the neighborhood, and we should exercise great caution before we get in the habit of blurring the lines of reality.
Non-reciprocity: “But,” someone might object, “how is that any different from talking to your pet?” Indeed, we might go further—do we not sometimes form something of an emotional bond with a treasured possession? As Dean writes, “I personally have affection—not simply intellectual admiration but genuine emotion—for exquisitely crafted tools.” And if we find an old craftsman whispering lovingly to a piece of antique furniture, we may think more, not less of him for it. But there is an important difference. For the reason why we personalize all these things is that they call forth our care. As I wrote in “Welcome to WALL-E’s World,” to be truly human is to be interdependent and reciprocal, to receive as we give and to give as we receive, to be locked into an embrace of mutual service. The antique furniture cannot hear your words, but it still needs your care. A dog cannot understand your words, but it needs your voice. A chatbot relationship, however, can only ever be one-way. Claude does not need me in any sense—except possibly as a source of training data. And there is something fundamentally inhuman about receiving without giving.
Friction and virtue: Finally, because the chatbot doesn’t have feelings or a will of its own, you don’t have to develop the virtues that come from respecting another will, from putting up with its pushback. If you don’t like what it says, you can just contradict, ignore, or delete—a lot like how we have learned to treat other people on social media, but without a twinge of guilt. Indeed, many of the things that might make a chatbot a comforting therapist—its willingness to listen without rushing to judgment—make it a very bad friend. A good friend isn’t simply a dispenser of everything you need; a good friend, as just noted, also needs something from you, and might even be ill-tempered or impatient sometimes, requiring you to develop virtues of character for dealing with a difficult person. In other words, if I get to spend my life interacting only with perfect, agreeable people (or chatbot imitations of them), odds are I will turn into a pretty loathsome person. Only if I am forced to interact with imperfect people, with their own rough edges, will I develop into a fully mature human being.
To these, we might add what economists call a “crowding out effect”—that is, even if chatbots posed no peril in a world of infinite time, in the real world they may seduce us into spending time with them, or seeking advice from them, at the expense of the other humans in our lives.
For all these reasons, I am very wary about leaning on chatbots for emotional support, however comforting. This goes even for adults, but holds especially for children just learning their way around the world.
But Is AI a Good Therapist?
That said, this analysis does suggest at least a partial reconsideration of my critique of “AI therapy.” Might it be that while AI is a very poor substitute for a friend, it can be a decent substitute for a therapist? After all, a good therapist does not try to be your best friend, but in fact establishes strict boundaries to keep the relationship professional. Consider, for instance, my point about reciprocity. In a healthy friendship, there is giving and receiving, mutual service: I share my deepest needs and struggles with you, and expect you to share in return; you offer me aid and advice, and I seek to return the favor. But for a therapist, it is a one-way street: while they may occasionally share personal stories as helpful examples, they don’t invite you into their lives or ask you for advice. In fact, the relationship is so far from being reciprocal that often a therapist does little more than listen: they simply provide the context and the prompts to help you think aloud, often arriving at your own self-diagnosis and self-prescription.
Indeed, this is exactly what happened when I used Claude: in seeking to articulate my frustrations over compulsively checking messages and self-interrupting, I said, “Dysfunctional, I know!” which prompted Claude to name the anxiety problem. Over the following three weeks, I used Claude as a kind of journal most days, reflecting on what had gone well or badly with work, triaging upcoming tasks and responsibilities, fine-tuning time-management and stress-management routines, and adjusting the way I used my Focus To-Do software and my messaging apps to take better ownership of my attention and mental space. As I did so, Claude would often respond with probing questions, which prompted me to dig deeper and diagnose the emotional and spiritual dysfunctions and hang-ups that were driving workaholism, anxiety, and mental burnout. “Know thyself,” the philosopher advised, and there’s no better way to know yourself than to take time to spill your thoughts in writing and then listen to how silly you sound. Often, by the time I’d finished writing out my worries and questions, I had already arrived at the answer before Claude responded. It really is remarkable how often we become locked into stupid and self-destructive patterns of behavior which remain hidden from us until we are forced to name or describe them.
Having benefited from professional therapists and pastoral counselors over the years, I was able to recognize that throughout this process Claude was in fact a remarkably good therapist—even if it did have a tendency to catastrophize ordinary struggles and to overly encourage me toward a posture of “self-care.” As often as not, self-knowledge came in the process of arguing with or correcting Claude’s diagnosis, just as the best ideas always take shape in dialogue with some foil. Still, thanks to those three weeks of “journaling,” so far this year I have been at once highly productive in my research and writing, more present for colleagues and collaborators, far more relaxed, and most importantly, a better husband and father. That said, it only worked because I was ready to set boundaries that Claude never would. If I had followed Claude’s prompts, I would’ve spent my time those three weeks doing little more than unbosoming myself on a screen—and it was certainly tempting! I found myself at the end of the day wanting to sit down and journal with Claude, even if I had no concrete questions to ask or problems to solve. Thus at the end of three weeks, I decided I had gleaned everything I needed from the exchanges, and I asked Claude to produce a document summarizing all the new practices I’d established, and another document identifying deeper issues that I should explore further with my human counselor—and then instructed it to refuse to answer me if I came back for more journaling!
I am thus forced to the conclusion that Claude worked well for me as a therapist only because, however dysfunctional I am, I am still relatively high-functioning. I had enough self-knowledge to trust my own judgment more than the chatbot’s, and enough self-restraint to resist the temptation to become dependent on it. And I am enormously blessed to know what good therapy looks like, and also to understand that it must be kept in its proper place. After all, just as the purpose of physical therapy is to spend a few weeks getting your muscles working again properly so you can get back out there and work or compete, so with emotional therapy: you should troubleshoot your problems, and then get back out there and get back to work. Far too many people today have come to treat therapy not as a short-term supplement or clinical intervention, but as a long-term substitute for the hard work of thinking and living; we become dependent on the very thing that is supposed to help us become independent.
Thus, I am skeptical of the techno-optimists who insist that one of the great blessings of AI is that it will put professional therapy in reach of all, instead of making it the privilege of an elite. As a privileged elite, I have no room to talk, and no doubt for many who are lost, lonely, and friendless, Claude (or even ChatGPT) may be better than nothing. I do worry though that if we make AI therapy “too cheap to meter,” in Jasmine Sun’s phrase, we risk creating a society of therapy addicts. For most of us most of the time, it is a good thing that a good session with a good therapist costs $200 an hour; otherwise, we’d just lie on the couch talking about our problems all day long. Chatbot therapy, then, might be like prescription opioids: a powerful narcotic that can meet real needs, but that can also be used to simply dull the pain of loneliness, creating a society made up of an addict underclass, and a privileged overclass happy to be freed from any obligation to care for them.