Let Me Google That for You
The AI threat to our humanity no one's talking about
Over the weekend, I called up my dad, a financial advisor, with a tax planning question. Halfway through asking it, I chuckled and observed apologetically that I probably could’ve just asked ChatGPT—which, in fact, he went on to do on the other end of the line, since he didn’t know the answer. We spent the rest of the phone call talking about the human need for human help.
In the constantly swirling maelstrom of debate around AI between doomers and boomers, the optimists often concede that the current state of generative AI usage is appalling (people spending hours seeking romance or therapy from bots, and occasionally being driven to suicide). But the future, they say, is bright: a world of “AI agents” programmed not just to answer questions, but to tackle mundane tasks for you at home, work, or school—often without even needing to be told. Imagine a smart fridge that noticed you were running low on eggs and notified your AI agent, which calculated from your past usage rates how many to buy and went ahead and placed an Instacart order. Or imagine personal finance software that interpreted all your income and expenses and optimized your tax planning and filing without your ever having to consult a professional. Such a world, we are told, is one that will free us from the drudgery of labor in order to do more “meaningful” things with our lives. I’m not so sure.
Some years ago, in the relatively early days of social media, I recall my first rude encounter with LMGTFY, “Let Me Google That for You,” which allows someone to create a URL that loads an animation of someone condescendingly typing a given query into Google for you—the idea being that you should’ve just done so yourself. The webpage’s tagline is “For all those people that find it more convenient to bother you with their question than to google it for themselves.” It’s hard to think of a tagline that better expresses the anti-humanism of Silicon Valley. Several times, in the course of back-and-forth debates on Facebook (yes, it was the early 2010s), I would ask my interlocutor to explain some term he’d just used, and get a LMGTFY response. At first, I felt foolish and technologically inept—of course I should’ve just Googled it. But wait—if it was just the facts I wanted, why was I having a conversation with another human being to begin with? I was having a conversation because I wanted to pursue truth with another human being, and I respected him enough to trust that he had certain knowledge I did not. Within a conversation, one shows that respect by inviting the other person to explain their ideas and terms; indeed, even if you already basically know what they mean, you’ll often invite them to say it in their own words to deepen the rapport and make them feel valued.
We are wired as human beings to need one another, and, crucially, to need to be needed by one another. In fact, it is precisely in such moments of feeling needed that we gain most of the sense of meaning in our work. Of course, we can often be short-sighted, and grumble at being interrupted by a colleague or pestered with an email query that could, we deem, be easily enough answered by Google or ChatGPT. But I suspect that if you’re like me, after you’ve answered the query or helped the colleague, you often settle back into your desk with a subconscious warm glow and a sigh of satisfaction. Your work has just been affirmed as meaningful, and you have just been affirmed as an expert worth consulting, even if only in a small and seemingly trivial domain. Having become aware of this response in myself, I have increasingly made it a point as I’ve gotten older to look for opportunities to do the same for friends and colleagues. I find myself wondering where to find some piece of information, and someone quickly comes to mind—“ah, I bet they would know!” My next thought is often, “Oh…hm…I could probably figure it out myself, so maybe I shouldn’t bother them.” But I have learned to override this impulse, at least on occasion (there is a fine balance to be observed here!), and the result is ever-deepening bonds of mutual trust, respect, and appreciation with friends and colleagues.
The same goes for everyday life. There is a certain type of person, to be sure, who is maddeningly needy and dependent, ready to ask everyone in sight for favors. But we are often tempted to overreact to the experience of such people in our lives by trying to cultivate a stern self-reliance, repeating the mantra, “I don’t want to be a burden on anyone.” But God created us to bear one another’s burdens, and it turns out that a church or school community is immeasurably enriched by a buzzing weekly trade in carpools, baby-sitting, meal trains, borrowed power tools, and more. The person who never wants to ask anyone else in their community to do them a favor thinks they are being selfless; in reality, they can be depriving others of the blessing of blessing them.
Now what does all this have to do with AI?
So much of the debate over the goods and ills of new digital technologies tends to circle around the question of whether they are functioning as productivity tools or mere attention extractors. Almost everyone seems to agree by now that most of social media represented a technological wrong turn, a diversion of potentially transformative technologies into the dead end of political food fights, infinite scroll, cat videos, and OnlyFans stars. That, everyone agrees, is what is behind our disastrous decline in mental health over the past fifteen years. The debate over AI, then, is whether it is likely to pull us ever deeper into this cul-de-sac, or whether it represents a liberation from it into a new, hyper-productive economy in which robots can take care of all the tedious tasks and free all of us up for meaningful work.
But what if the problem runs deeper than that? What if at least some of the rise in anxiety and depression over the past fifteen years stems from a growing sense that we are no longer needed by one another? When was the last time you stopped and asked a local for directions, after all? And how often do you get your restaurant recommendations from Yelp these days versus from other people? For most of our shopping, we no longer rely on sales associates, but browse webpages, and although we still rely on other humans to deliver the products to our door (not for much longer, if the robot-boosters have their way!) we rarely see or acknowledge them. Casual conversations that might once have blossomed around the water-cooler or the soccer sideline do not anymore because we no longer have anything to ask one another—we can just look it up on our phones. If all this is the case, then is not our current crisis a result of the fact that we still have work to do, but it no longer has meaning, because it feels like we are simply performing it for a computer system, not as an act of service to another human being? And if this is so, then how are AI agents, by depriving us of any occasion to ask another human being to do us a favor, supposed to bring back “meaningful work”?
To be sure, I do not want to romanticize, as if the less technological past was a glorious era in which the sheer scale of suffering and need made everyone’s lives so wonderfully meaningful as they toiled to keep themselves and one another alive. Or as if we should aspire to bring back the world of Downton Abbey, so that we too might find a rich sense of purpose in serving our betters. The advent of labor-saving technologies really can and does enrich human existence and free us up to do more “meaningful” things—up to a point. But here, as in so many places, we encounter the “Irishman’s two stoves” principle: that continued travel in the same direction may prove self-defeating beyond that point.
Today we are increasingly living in WALL-E’s world—not yet, to be sure, the dystopian planetary landfill overwhelmed with consumer surplus, but increasingly the life of the barely-humans on the spaceship Axiom: obese smoothie-slurpers wobbling grotesquely on their hover-pods, so glued to the screens in front of them that they’re not even aware of the humans next to them (even if they’re video-chatting with them!). We recognize ourselves at a glance in this satire of our digital attention economy, but as we look more closely, we realize that it is far more than the attention economy being satirized. What is it that has deprived these future humans of so much of their humanity? It is the loss of their labor, the fact that everything of consequence is now done for them by the army of efficient, solicitous, obsequious robots that buzz about the Axiom. They no longer need to do anything for themselves, or more importantly for one another (when one of them falls off his pod, he is quickly surrounded by bots who advise, “Please remain stationary; a service bot will be here to assist you momentarily”). Most crucially, what is it that changes everything in the film, that causes these humans to come alive and rediscover their humanity? It is the arrival of the tiny, fragile plant—significant not merely or even primarily as a sign of life from a seemingly dead Planet Earth, but as the first thing any of them have encountered that is vulnerable, that demands their care.
Much as we may seek to hide it, deny it, or technologically compensate for it, our fragility and vulnerability are essential to our humanity, and so is our reflexive response to our perception of such vulnerability (e.g., our instinctual care and concern for infants). The latter half of the film WALL-E shows a spaceship full of formerly almost inert blobs rediscovering their legs, their muscles, their wits, and their initiative in the attempt to protect a single tiny seedling that seems always only inches from destruction.
Among the many other messages of this film, perhaps the most emphatic is that to be human is to exercise care, to be needed by other living things, and that without this, however much comfort and convenience we may gain, it will be at the cost of our humanity. While it is clear, then, that AI can be designed and used for good, I do worry that even its best, most productivity-boosting, most labor-saving uses may, if we are not careful, come at a very steep price. Can we design AI in a way that serves us without forgetting, in the process, how to serve one another?



I am fascinated with your observation that our lack of labor drives away our humanity; it proves to be true when we look at how God gave man the responsibility to subdue the earth (and He called this, along with the rest of creation, very good!) (Genesis 1:28). Growing up in the digital world, I have always had so much convenience and didn't realize how much damage it was doing to my heart and mind until I was forced to labor over certain mundane tasks, and saw how valuable self-discipline truly is.
I find so much joy in the little things, now that I've provided myself the opportunity to "be bored" and "waste time" laboring over something an app or robot could do for me. I feel like I've been given my mind back, and I have so many more ideas and thoughts that I would not have, had I pressed a button and scrolled through my phone instead.
Yes! I’ve been thinking about this re: the work towards developing artificial wombs (and as I am myself four weeks postpartum!). So often babies and mothers have reciprocal needs that knit them together in love. For example, a baby’s initial feeding helps the mother’s uterus contract back to its original size. Once we outsource fulfillment of those needs, we are robbed of the strength of one of the most basic of human connections.