On January 23, 2020, I sat at the bedside of my dying grandfather in a hospital room in Bryn Mawr, PA, as he slowly succumbed to a respiratory infection. While passing the long hours, I scrolled through the news and came upon something unsettling: another respiratory infection, far off in China, was about to force the lockdown of an entire city called Wuhan. I still recall the sense of dread that crept over me as I read on and began researching. This, it seemed clear to me, was going to leave none of our lives untouched.
Not, mind you, because Covid-19 was so dreadfully deadly. Sure, at that early date, official case fatality rates were running at 3-6%, enough to take a sizable chunk out of the world’s population if it became a full-blown pandemic. But even then, sober analysts were suggesting a much lower true infection fatality rate, below 1%, significantly less than the Spanish flu a century earlier. Was that really enough to turn the world upside down? A world as fragile as ours? You bet.
This past Monday night, driving home from DC, I felt the same sense of dread—and not because I was listening to Ross Douthat’s podcast interview with “the herald of the Apocalypse,” Daniel Kokotajlo. Although respected in the AI community, Kokotajlo is more bullish than most in his estimates of just how rapidly artificial intelligence will transform our world, and his discussion of his recent report, “AI 2027,” strikes me as quite implausible in its estimates of just how readily bundles of computer code can surmount the frictions of the physical world and the even greater frictions of the social and political order. To pick on one example that Kokotajlo gives in the interview: Even if we grant his hypothesis that within two or three years, superintelligences will have designed robot plumbers that can fix your toilet faster and cheaper than human plumbers, and even if we grant that political leaders will cut the red tape and allow immediate mass production of such robots (plausible given the Trump administration’s recent push to completely deregulate all AI systems), do we really suppose that most homeowners will open the door when the robot knocks and usher it to their leaking toilet? I’m skeptical. Human beings are stubborn creatures of habit, and we will probably continue to prefer employing fellow humans for many things long after it ceases to make strict technological and economic sense.
From this vantage point, I take Kokotajlo’s AI doom scenario (he thinks it not implausible that the superintelligences will take over and eradicate humanity) as something analogous to the worst-case scenarios for Covid-19 in those murky early days: maybe it could wipe out hundreds of millions and cause mass hysteria and even civil war (like in the movie Contagion)…maybe, but probably not.
No, my dread came from the fact that I was returning from the second AI discussion dinner event in a week in which the designated optimist, the tech enthusiast convinced that we must simply keep on innovating, conceded the likelihood of scenarios that, to my mind, spell civilizational disruption on a scale most of us have little concept of. In both cases, the optimist was rather unimpressed with the so-called “x-risk”—existential risk—that dominates many AI safety conversations: even when AI becomes smarter than us, we should be able to build constraints that keep it from going rogue and destroying us. But both seemed to accept that AI systems would put a remarkable number of people out of work in fairly short order, especially in the demographics most likely to breed political instability (e.g., recent college graduates aspiring to elite knowledge-economy work). Both also seemed to accept that consumer-facing AI applications were likely to accelerate the already far-advanced brain melt of many of our fellow citizens, especially children, erasing the lines between simulation and reality as they deepened the grooves of dopamine addiction and sapped experience of much of its meaning and purpose. And they seemed to accept, perhaps most consequentially in my view, that AI was very likely to lead to a world of radical income and power divergence, as a handful of high-skill, high-agency elites learn how to use AI tools extremely effectively, while others are used by them.
The silver linings felt forced and hollow. AI, we were told, might help us bring back the American chestnut tree or cure rare forms of childhood cancer. I’ve no doubt that’s true. I’m also not sure how much it matters. So what if the 0.1% of the population with incurable childhood cancers now survives, if teen suicides skyrocket and overall life expectancy drops? And what good is the American chestnut if few of us can be bothered to go outside and look at it? After all, Google Veo 3 can now produce a perfectly lifelike simulated video of any number of chestnut trees, or anything else I might want to watch, for that matter.
What does all this have to do with Covid-19, you might ask? Well, during those early months, when most people were brushing that strange Wuhan virus off as an exotic nothingburger, and a handful were freaking out about mass mortality, I tried to explain that the real threat of something like Covid-19 was to the body politic, not to individual bodies. Indeed, just as Covid was only a severe health risk to individuals with pre-existing conditions, so, I quipped, it would only be a severe risk to societies with pre-existing conditions. Unfortunately, ours fit that bill. A media context in which few authorities still command trust and in which few citizens still know how to distinguish truth, rumor, and nonsense? A political context of deep polarization and mutual recrimination? A social context of fragmentation and loneliness? And a lifestyle context in which we seek easy pleasure and avoid anything painful—most of all even the thought of death? Check, check, check, and check. Within such contexts, you don’t need the Black Plague anymore to cause a toxic brew of hysteria, conspiracy theory, and deep political dysfunction that deals lasting trauma to the body politic. A superflu will suffice.
Something similar, I think, could be said about artificial intelligence. At the event Monday night, the moderator Sam Kimbriel kept trying to inject some optimism into the discussion by stressing how plastic, adaptable, and resilient human beings are: technology may profoundly disrupt culture, but humans always find a way of reconstructing new forms of culture around the new technology. But the human race in 2025 is not particularly resilient. We are fragile, entitled, and complacent, easily triggered by micro-aggressions and stock market corrections, and without real experience of a serious war in most of our lifetimes. We take democratic capitalism for granted, and have no idea how we would function if it broke down. Would the human race survive such a breakdown? Sure. But might it be a wrenching civilizational crisis like that of the 5th or 14th centuries—something no parent wants their kids to live through? Quite possibly.
It doesn’t seem to me that you have to be anything like a Kokotajlo-style doomsday prophet to anticipate such a crisis. You don’t even have to speculate on the basis of as-yet-hypothetical AI developments. Let’s just start with what we know is happening right now. A recent feature in NYMag laid bare the scope of the AI apocalypse in higher education—however bad you think it is, it’s worse. And have we begun to grapple with what happens to a society that has effectively ceased to educate? To be sure, education has been going downhill for quite a while, and the results have already been pretty ugly, whether in the impact on the business world or in the state of political discourse. Let’s accelerate those trends 4x and see what happens.
But it’s more than that, because one of the worst impacts of AI in education—and soon in the workplace as well—is the breakdown of trust. Teachers no longer trust that their students aren’t using AI to sound smarter than they are; students no longer trust that their teachers aren’t using AI to grade their assignments. Bosses wonder whether an employee’s stellar performance in an interview or presentation says anything about his actual capabilities or just his LLM savvy. Employees fear that their performance is now being assessed by algorithms rather than humans. We can go further—soon, when you’re on the subway, you may well wonder whether the person staring at you weirdly from behind their Ray-Bans isn’t using AI glasses to pull up your whole personal history—or perhaps filming you in order to produce deepfake porn. Needless to say, societies don’t do very well without trust.
The point of all this is not doom-and-gloom fatalism. We do in fact still have agency, if we care to use it. AI acceleration is coming, one way or another, but we have a great deal of say as a society about how we will and will not use these technologies, and how we will and will not be used by them. The point of flagging extreme dangers in advance is precisely so that they will not come to pass in nearly so extreme a form. If, during January and February 2020, governments and media had begun responsibly investigating Covid-19 and educating citizens on what its risks were and weren’t, we might have had a lot more rational decision-making when it broke upon most of the Western world. If scientists had researched the most effective mitigations, churches and schools had begun crafting contingency plans, and friends and neighbors had begun calmly discussing the likely questions and challenges they would face, we might have found a way to ride out the virus without too much mortality, insanity, or societal breakdown. Instead, though, most of our institutions and leaders went merrily on as if all was well until…it wasn’t.
Of course, we only had two months then. We have had longer this time around, although we’ve mostly squandered three years already, sleepwalking in dumb fascination with the cool new toys we’ve made. After all, no one wants to be the weirdo banging on about AI risk. But as Douthat wrote recently:
“In this environment, survival will depend on intentionality and intensity. Any aspect of human culture that people assume gets transmitted automatically, without too much conscious deliberation, is what online slang calls NGMI — not going to make it…. Mere eccentricity doesn’t guarantee survival: There will be forms of resistance and radicalism that turn out to be destructive and others that are just dead ends. But normalcy and complacency will be fatal.
And while this description may sound like pessimism, it’s intended as an exhortation, a call to recognize what’s happening and resist it, to fight for a future where human things and human beings survive and flourish. It’s an appeal for intentionality against drift, for purpose against passivity — and ultimately for life itself against extinction.”