An apology to my paid subscribers is in order—no sooner had I announced a couple of weeks ago that I planned to start recording and posting audio of my longer Substack essays for paid subscribers, than I dropped the ball. But my excuse is a good one—bronchitis, and I still won’t be fit to record audio for a little while yet. Amidst all the coughing and hacking, I also did not have time to write up a regular post last week. However, I thought this might be a good opportunity to revive an older format for this Substack, with a brief opening reflection, followed by notes on stuff I’m reading and stuff I’m writing.
Readers of my past two posts will have detected a decided darkening of my outlook toward the AI acceleration that seems to lie before us. At the very least, we seem doomed to some deep spiritual and social dislocations from what AI has already achieved and is now achieving in the realm of education especially, and from what it is doing to our concept of human relationships. Even more troubling are the predictions of its impending impact on jobs, with Axios last week reporting the prediction of Anthropic CEO Dario Amodei that “AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years.” If you get a sort of frisson from doom and gloom, the Axios piece is a chilling read indeed, echoed in slightly more subdued tones by the New York Times on Friday: “For Some Recent Graduates, the AI Apocalypse May Already Be Here.”
On the other hand, I would be remiss not to highlight dissenting voices, foremost among them my boss Oren Cass, who blisteringly summarized the Axios article in his Understanding America Substack over the weekend: “People with AI models to sell making outlandish claims about their own products should be the classic ‘dog bites man’ story at this point, but the media keeps covering it with the breathless excitement of ‘talking dog takes man’s job.’” When it comes to economics, you usually don’t want to be on the other side of an argument from Oren Cass, so his AI skepticism is worth taking seriously. It’s echoed by Cal Newport in a recent essay, “AI and Work: Some Predictions.” In it, he acknowledges some of the real capabilities of current AI applications, but expresses significant doubts that we are on the cusp of the world-rending breakthroughs that many AI prophets like Daniel Kokotajlo have warned of.
I of course have no relevant expertise by which to assess these competing claims, so I will just offer a couple observations about how to think through this epistemic landscape.
In my last post, I highlighted the parallel to Covid-19, and the sense, in January and February of 2020, that something profoundly disruptive to our society was coming down the pike, and that most of us, our institutions, and our governments, were sleepwalking toward it. There was a good reason for that, of course: the nature of exponential growth curves is that a problem will look vanishingly small, and then still really small, and then still really small, and then still just not that big of a deal…until all of a sudden it’s a very big deal, and threatens to become a very very big deal. It feels like it hits some sudden inflection point, but really it’s been exponentially growing all the time. Until it crosses that threshold of visibility, though, anyone seriously worrying about it—not just in the sense of publishing scary articles in places like Axios or the Times (after all, we had those in the early days of Covid too), but in the sense of actually doing something serious about it—will look rather foolish. No one wants to stick their neck out and be Chicken Little.
On the other hand, precisely because we are dealing with an exponential growth curve, a lot depends on exactly what the multiplier is—and on whether that multiplier really is constant. So, for instance, if AI capabilities are steadily doubling every six months, then that implies a thousand-fold increase in five years (and indeed, according to folks like Kokotajlo, considerably more than that, because he believes a threshold will be crossed at which AIs take over AI research and accelerate it). But if the rate of progress slows to just, say, a doubling every ten months, that’s only a 64x increase in five years—94% less! And of course, there is also great uncertainty about where the ceilings are on some of these capabilities—exponential growth will not continue to infinity but will hit some hard limits, at least on certain fronts.
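The arithmetic here can be made concrete with a short sketch. The doubling periods below are illustrative assumptions for the sake of the comparison, not forecasts:

```python
# Why the doubling period matters so much over a fixed horizon:
# total growth is 2 raised to the number of doublings that fit in the window.

def growth_over(months: int, doubling_period: int) -> float:
    """Total capability multiplier after `months` of steady doubling."""
    return 2 ** (months / doubling_period)

FIVE_YEARS = 60  # months

fast = growth_over(FIVE_YEARS, 6)    # doubling every 6 months: 10 doublings
slow = growth_over(FIVE_YEARS, 10)   # doubling every 10 months: 6 doublings

print(f"6-month doubling:  {fast:.0f}x")   # 1024x, i.e. roughly a thousand-fold
print(f"10-month doubling: {slow:.0f}x")   # 64x
print(f"the slower path lands {1 - slow / fast:.0%} lower")  # about 94%
```

A modest stretch of the doubling period from six to ten months wipes out roughly 94% of the five-year gain, which is why seemingly small disagreements about the growth rate translate into wildly different pictures of 2030.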
That said, the interesting thing about this debate, to my mind, is that those who are most pessimistic about AI capabilities give us the greatest reasons to be optimistic about AI outcomes: the bears are the bulls and the bulls are the bears. A world in which AI capabilities increase a thousandfold in the next five years is a terrifying world for all but the wildest techno-optimists; even if the endpoint is an objectively good one, society cannot adapt that fast. But a world in which the progress is more analogous to earlier stages in the digital revolution offers us at least a fighting chance of adapting alongside our tools and putting them to productive use.
Recently Published
“A Band-Aid at Best” (WORLD Opinions, 5/29/25): In this column for WORLD, I explore the current debate over AI in education, contrasting the vague effusions of enthusiasts with the very concrete alarm bells being sounded from the front lines by teachers. I note that while there are ways in which it may seem like AI could improve on current educational practice in many schools, this is largely an indictment of how much our education system has already lost sight of the true meaning and purpose of education.
“The Dangers and Possibilities of AI in Schools” (Commonplace, 5/27/25): In this much longer essay on the same theme, co-authored with Jared Hayden, we focus attention squarely on the recent Executive Order from the administration calling for the rollout of AI in K-12 education. We raise a number of concerns and call upon the administration to set clear guardrails and standards, and conclude with a call to recover the humanistic heart of education.
“Tech Legislation 2.0” (WORLD Opinions, 5/12/25): In this column, I take another look at Section 230, the biggest barrier right now to a sane and safe internet. Thanks to this obsolete and radically misinterpreted 1996 law, most tech companies have little or no financial incentive to enact the kinds of reasonable protections that other industries build into their products as a matter of course. Section 230 reform should be high on Congress’s agenda this year.
Coming down the Pike
“The Purpose-Driven Tech Life”: I’ve written a short essay under this title for WORLD Magazine (that is, the print magazine; not to be confused with my regular web opinion columns), and I’ve also developed a longer talk on this topic for the Academy of Philosophy and Letters conference, which meets in College Park, MD this Friday. I’ll be giving a version of this talk again at Chelsea Academy’s summer conference in Front Royal, VA, July 19. The basic gist of it is to highlight that the greatest deformity in our relation to digital technology is our lack of teleology—both at the individual and the societal level. How often have you picked up your phone and scrolled around on it for a few minutes with no discernible purpose whatsoever? And how often has Silicon Valley rolled out some flashy new invention with no concept of what it may actually be useful for? This reality belies our constant cliché: “the internet/the smartphone/AI is just a tool”—because of course, the whole point of tools is that they are ordered toward particular purposes, and you take them out for those purposes only. What would it look like if we actually started recovering such a purpose-driven approach to the use of our technologies—beginning in our own homes?
AI and the First Amendment: Last week, I submitted a column for WORLD Opinions on whether AI chatbots have First Amendment rights. Yep, that’s right—that was the issue at stake in a recent Florida case, Garcia v. Character Technologies, et al. Attentive readers of this Substack may recall that I raised this issue a few weeks ago, and quickly dismissed it—of course bots don’t have free speech protections. But one religious liberty lawyer took me to task on X, insisting that a bot is just a medium for human expression, the same as a newspaper. This strikes me as crazy, but then a lot of our First Amendment jurisprudence has been more than a little crazy for the last few decades, so we can’t take at all for granted that the Courts will side with humanity on this one. To that end, I’m working on a longer piece for National Affairs that will explore this at a bit more length.
On the Bookshelf
Clare Morell, The Tech Exit: A Practical Guide to Freeing Teens and Kids from Smartphones (2025): This releases tomorrow, and I’m working my way through it now. Of course, having worked closely with Clare, I know much of the substance of this book well by now, but I’ve been very impressed by the clarity and conversationality of the prose. This is a fantastic book to give the person in your life who doesn’t read tech criticism Substacks all the time, but needs to wrap their head around these issues.
Sherry Turkle, Alone Together (2011): I read parts of this years ago, and finally came back and read the whole thing last week. It’s a bit shocking that this was written all the way back in 2011 (especially the huge section about robots and AI—way ahead of the curve), and you gotta figure that Turkle is deeply annoyed that almost no one listened until Haidt went and made similar points 13 years later in The Anxious Generation. That said, this book is perhaps a bit tediously anecdotal, which detracts from its value and readability. We don’t need to hear fifty different stories of lonely teens describing their relationships with their phones. We get it.
Matthew Crawford, The World Beyond Your Head (2016): Another one I read parts of years ago, and then finally read in full last month. Absolutely fantastic though—and dovetails startlingly with so many things I’ve been thinking and writing about the past few years. So many thoughts . . . look for quotes and insights from this gem to show up in many Substacks and essays to come over the next year.
William Shirer, The Rise and Fall of the Third Reich (1961): I’m finally wrapping up this mammoth audiobook this week, after several months. I confess that I slowed way down at the section on the Holocaust and the Nazis’ other war crimes, which was very hard listening. But important listening for our day and age, when there’s been a bizarre revival of Holocaust revisionism on the online Right. What’s truly strange about this trend is that Holocaust deniers often talk as if the Holocaust was some freakish anomaly that can be explained away in isolation as the product of deceptive post-war historiography. But even if we were to grant this for the sake of argument, we’d be left with the millions of other civilians and prisoners of war that the Nazis massacred, or allowed to die in horrible conditions. Are we supposed to believe that all of this is a figment of our historical imagination too?
Ivan Illich, Tools for Conviviality (1972): One of those things that’s been recommended to me many times, but which I finally made time to read a few weeks ago. A really profound, thoughtful meditation on the shape of modern technological life. The most useful concept I came away from it with was Illich’s idea of “radical monopoly,” on which I may be doing some writing in the weeks/months to come:
“By radical monopoly I mean a kind of dominance by one product that goes far beyond what the concept of monopoly usually implies….[Traditional monopolies] might even compel him to buy one product on the market, but they seldom abridge his liberties in other domains. A thirsty man might desire a cold, gaseous, and sweet drink and find himself restricted to the choice of just one brand. He still remains free to quench his thirst with beer or water. Only if and when his thirst is translated without meaningful alternatives into the need for a Coke would the monopoly become radical. By ‘radical monopoly’ I mean the dominance of one type of product rather than the dominance of one brand….Cars can thus monopolize traffic. They can shape a city into their image—practically ruling out locomotion on foot or by bicycle in Los Angeles.”