Society is Not Software
The delusion of Burkean accelerationism
Over the past three weeks, there has been a sudden mood swing here in Washington. Policy-makers, till now prone to dismiss all the AI talk as Silicon Valley salesmanship and hype, or else to embrace it uncritically as the ticket to a more prosperous future, have turned brooding and somber. Even the White House, in a U-turn jarring even by the whiplash-inducing standards of the Trump administration, has apparently shifted from a stance of radical deregulation to proposing a heavy-handed federal AI licensing regime. The culprit, of course, is Claude Mythos, Anthropic’s new AI system capable of finding software vulnerabilities in seemingly any and every computer system, even those thought safe for decades.
Cybersecurity isn’t something most of us lose much sleep over, but it’s one of those perils that, once seen, you can’t unsee. Over the past fifty years, we have transformed much of society into one giant interconnected computer system, with everything from your car’s braking system to the respirator machine you’ll be on if it fails all controlled by internet-connected software. Same goes for the global financial system and America’s missile defenses. Of course, experts in these domains are generally well aware of the stakes and invest considerable resources into turning their systems into fortresses guarded against cyber-attacks that could steal sensitive data, shut down the grid, or take control of their machines. But what if AI super-hackers were now available at low cost to every disgruntled teenager, not to mention crime rings and foreign adversaries? So concerned was Anthropic about the threats posed by Mythos that they restricted its release to a handful of key companies and government agencies so they could prepare their defenses, an initiative they dubbed Project Glasswing.
Last week, I took a field trip to the Future (aka San Francisco) with three dozen DC policy professionals, and Mythos loomed over almost every conversation. The consensus seemed to be that we had crossed a threshold into a murky and unstable future in which any wrong move could have uncomfortably high stakes—and judging from this week’s headlines, the White House seems to agree. That said, a mood of cautious optimism still prevailed, driven by the knowledge that the same AI that can exploit software vulnerabilities can also find and fix them—hopefully before the bad guys do. That is the whole point of Project Glasswing after all, and there seems to be at least some reason to think that, if other companies follow Anthropic’s lead of giving the most advanced models to cyberdefenders first, we may be able to use AI to detect and repair any critical infrastructure vulnerabilities over the next couple of years, much as programmers systematically patched the “Y2K bug” during the ‘90s.
Patching bad software, after all, is what Silicon Valley does best. One of the beauties of software, and the reason that it has “eaten the world,” in Marc Andreessen’s famous phrase, is its extraordinarily low repair cost. If you build a house with a crack in the foundation, you could find yourself having to build a whole new house to fix the problem. If you ship a car with defective steering, well, not only are you likely to have some hefty lawsuits on your hands, but even if you catch it quickly enough, it’s going to be an expensive process of recalling the vehicles and physically swapping out the defective parts. Not so with software: if something’s not quite right, and you find a vulnerability or a bug, you can generally just tweak the code and issue a patch. And whenever you want to try out some cool new features, you invite users to download an update—or just push it out automatically to their devices while they sleep, whether they asked for it or not. Mark Zuckerberg’s “move fast and break things” mantra has aged very poorly now that it has become clear that it was the hearts and minds of America’s youth being broken, but from the Silicon Valley standpoint, it made sense, because the stakes of broken code were always relatively low and easily repaired. And the best way to find out if something needed fixing was to crowd-source it: send it out to a million users and collect data on what was going well and what wasn’t.
Till now, this has been the modus operandi of the AI companies, summed up in OpenAI’s commitment to “iterative deployment.” “We don’t know what all this AI is capable of, what people will want to use it for, or what its negative side effects might be,” they reasoned, “and there’s only one way to find out…deploy our current model at scale, see what happens, and start tweaking as necessary.” To those outside of Silicon Valley, this sounds insane, especially when the stakes are so high. But if the last few decades are anything to go by, if rapid progress is what you want, iterative deployment is the way to go. It offers a much faster feedback loop for improvement than any in-house testing or government licensing regime could: If you want to make the world’s most powerful drug, why not conscript the whole world as lab rats?
Last month, I had the privilege of listening to a senior figure in government AI policy describe their philosophy of tech governance. Laws would be necessary, for sure, but we shouldn’t be too hasty about them, because how would we know what needed regulating? We should first allow the technology to unfold and diffuse at its own pace, and then, as problems became apparent (like chatbots becoming suicide coaches, for instance), we should target legislation to address those specific harms. This, he contended, was the conservative way: experimenting, learning, adapting. As I listened with growing incredulity, it suddenly struck me: this guy thinks of society as software. Coming from a tech background, the advisor instinctively thought of the world as a giant blank coding terminal waiting to be programmed, and of any problems that might arise as bugs that could be patched by suitable legislation. Just as a good software engineer doesn’t assume too much about what the customer wants or needs in advance, but stands ready to retire old features or deploy new ones, so the statesman should adopt a fundamentally tentative and reactive stance, one that he described as “epistemic humility,” and as a kind of Burkean conservatism.
He is far from alone. Once the lightbulb went off, it illuminated many other conversations I’ve had over the past year: this is simply how tech people think. And why shouldn’t they? If all you have is a hammer, everything is a nail. If all you know is coding, every social malady is a bug waiting to be patched. The medium, as Neil Postman observed, is the metaphor.
That tech people should think this way is hardly strange. But that they should regard this way of thinking as conservative? With Inigo Montoya, I must interject, “you keep using that word; I do not think it means what you think it means.” If anything, Burke was the fiercest critic of the idea of society as a machine that could be fine-tuned by turning the right knobs or tweaking the code, and it was the turning over of society to the “sophisters, economists, and calculators” that most horrified him. No, like Blackstone and Hooker before him, he understood that political society was a complex system easily disrupted, a delicately-balanced work of architecture that could come crashing down if carelessly tinkered with.
When a new technology begins to achieve mass adoption, it doesn’t just add some new layer on top of society that can be peeled back off if it proves unhelpful. Rather, it quickly remakes society around itself, creating all kinds of path-dependencies that can be almost impossible to pull out of and reverse—and all the more so if it is a media technology. These path-dependencies tend to lock users into positions of weakness and providers into positions of power. Let’s look at each in turn.
For users, technologies can create powerful dependencies, traveling rapidly along the pathway from supplements to substitutes as their users lose their native capacities to do things without the technology. And as in the movie Gattaca, what begins as a totally optional enhancement technology soon becomes a baseline for continued social participation. To say this is not to mount a romantic rejection of technology, but just to name a basic reality of how most technologies work. Socrates rightly warned that the technology of text would weaken our memories, leaving us ever more dependent on written records. The creation of the keyboard quickly dealt a death-blow to penmanship, so that few of us would even want to try and decipher each other’s handwritten scrawls. Today, artificial intelligence is rapidly weakening the cognitive capacities of many users, as they increasingly rely upon an external brain to do their most basic problem-solving and decision-making. Technologies, argued McLuhan, each involve an “auto-amputation” of some part of our body or nervous system; once amputated, it cannot be easily regrown.
New technologies create new habits and expectations—and given human nature, these can often be very bad habits. One of the main policy areas I work in is child online safety. It’s an exciting time to be working in this space, because so many sane instincts are finally re-asserting themselves after two decades of insanity, but it is also sobering. On the one hand, we are finally seeing the law respond and begin patching the myriad of problems created by giving digital markets unrestrained access to our kids; but we’re seeing in the process just why this approach doesn’t work. Whenever we contemplate any policy like age verification or mandatory parental controls, advocates face the objection, “kids will just find a way around that” or “parents will just give their kids access anyway.” And it’s true. It’s a lot more work corralling the cattle after you’ve let them bolt than it is just holding the gate shut.
Imagine a family that allowed their young children easy access to alcohol, so that by the time they were 15, these kids were raging alcoholics. The parents take increasingly desperate measures: locking the liquor cabinet, only consuming alcohol themselves out of their children’s sight, and finally going dry altogether. The kids keep finding ways to get ahold of booze. The moral of such a story would not be to shrug the shoulders and say “whatcha gonna do? Kids these days…” but “don’t give your five-year-old tequila!” A child who has been exposed to pornography is liable to keep going back for more, and may make something of a hobby of finding ways around age-gates and parental controls. A child barred from easy access from the get-go may be kept on the straight and narrow by comparatively modest controls.
But of course, this is not something that each family can simply do for themselves. In a world where their friends are watching porn on the playground and every classmate has TikTok, parents can feel like they’re fighting an already lost battle. Nor are parents immune to these path dependencies. Millions of parents have fiercely resisted school phone bans on the outlandish grounds that “I need to be able to keep in touch with my kid during the school day”; as parents become technologically and socially conditioned to think their kids need perpetual connectivity, even laws mandating strict default parental controls will have little effect: the parents will just check all the boxes for maximum access.
We thus find ourselves in a world today where we have identified an urgent cluster of bugs with the technologies we unleashed two decades ago, and are desperately trying to patch them with laws, but finding that only draconian laws may be able to make much difference. A stitch in time, as the saying goes, saves nine.
Just as new technologies tend to lock in losers (in this case, children and families), so they tend to lock in winners. Media technologies in particular can create powerful horizontal interdependencies, or network effects. Consider the telephone. Its value depended almost entirely on everyone else using it; at the outset, this meant the Bell Company had a steep hill to climb, but once telephone usage became widespread, more and more people felt the need to have one, to stay connected. More and more businesses adopted them as their primary means of communicating with customers or suppliers, and re-organized their business models accordingly. The advent of the internet a century later up-ended these business models, as customers and firms expected to be able to find each other online. The businesses that best monetized this new technology were those who understood the power of the platform, the power of establishing themselves as the place where buyers and sellers, clients and providers, friends and friend-seekers found one another. The more people were on the platform, the more valuable it was, leading to immense concentrations of power in key network hubs like Facebook or Google.
And it turns out that if you give people power, they don’t like to give it up. So we find that, for all their pious platitudes about “empowering parents” and “age-appropriate experiences,” most of the tech companies have fought tooth and nail to obstruct even the most modest legal reforms—and have won almost every time. Today, we find ourselves in a world where the CEOs of multi-trillion-dollar companies need only call up the sitting US president in order to effect radical changes in US policies, even if these changes go against the clear national interest, the unanimous advice of the cabinet, and the president’s own campaign promises. Critics on the left will say that this reflects some unique cronyism or moral weakness of the Trump administration, but there is little evidence that the Biden or Obama White Houses were any more immune to the Big Tech puppet-masters. You simply cannot allow that much power to concentrate in one place and not expect it to profoundly tilt the political playing field. “If you see a problem with the tech, just pass a law,” says the techno-optimist. Have you tried lately? Congress is supine before the all-pervasive tech industry dollars, and well-heeled lobbyists effortlessly run circles around well-intentioned state legislators.
Our country used to understand this curse of bigness, enacting and enforcing rigorous antitrust legislation that sought to prevent companies from ever attaining the size that might allow them to distort not merely the market for goods and services, but the marketplace of ideas and legislative solutions as well. But somewhere along the way, we abandoned the fight, re-assuring ourselves that as long as the monopolies provided cheap goods, we didn’t mind handing them the keys to Congress.
There is one more problem with the “just pass a law to patch the problem” mentality. This isn’t what our legislature was designed for. Right now, both tech advocates and tech opponents tend to lament with equal fervor that Congress is absolutely glacial when it comes to passing meaningful legislation. Now in part, to be sure, this is the result of the singular dysfunctionality of Congress in our day and age, which is, as already noted, partly due to the distorting influence of monopolist money in our politics. But it is also partly a design feature, not a bug. The Founders intentionally designed our legislative process to be slow and clunky. Representatives had to assemble from the far-flung corners of the republic for relatively short legislative sessions, hash out complex issues each within their separate chambers, try to arrive at compromise solutions that majorities of each chamber could support, and finally secure the presidential imprimatur. This wasn’t supposed to be easy, because society, it was assumed, would be healthier without having to adapt to a rapid stream of poorly-considered new legislation. The few arenas of political life that did require quick, energetic decision-making were assigned to the executive, who was given quasi-monarchical powers to act decisively in crises.
The problem is that while this made a great deal of sense in a world that moved at the speed of a post-horse, such a political system is ill-adapted to the strains of a world where new world-transforming technologies are rolled out every few years, and artificial intelligence models double in capabilities every four months. Almost everyone complains today about the concentration of power in the executive over the last few decades, but that is exactly what you should expect in periods of rapid technological change: only the executive has any chance of keeping pace.
I do not have a solution to this larger structural mismatch of our constitutional order and our technological order. Hopefully some of our best minds are on the problem. But what I can say is that there is no need to hamstring the legislative process even further by a pseudo-conservatism that says, “Let’s hold our horses till we know what the problems are.”
There is nothing authentically “conservative” about the idea of society as a testbed for software. I have heard more than one advocate of this position describe it as a kind of Burkean empiricism, a willingness to learn from experience. But this gets things nearly backward. For Burke, the past provided a wealth of experience upon which to base our actions; no need to slice open the body politic and begin experimenting upon her in order to find out how she responds to novel treatments. For Burke, the laws and customs already inscribed in the social order reflected the time-tested wisdom of the society about what was already known to work fairly well. That certainly didn’t mean there was no room for improvement; but it meant that the burden of proof was always on the innovator. Silicon Valley has upended this, demanding the freedom to ship new products just to see how society responds, and placing the burden on parents, pastors, and policymakers to figure out the resulting problems and propose solutions. As I’ve written before, OpenAI, Anthropic, and Google like to call themselves “AI labs,” but in reality, we are the AI labs.
They want the freedom to experiment with legal immunity, demanding vast carve-outs from existing laws and norms, and then ask that society have the “epistemic humility” not to jump to any conclusions about their products. True epistemic humility does not mean throwing out everything we know about human nature and saying, “I wonder how the patient will respond to this electric shock.” Rather, it means deferring to the accumulated wisdom of ages. At one point, seeking to justify a “wait and see” attitude toward tech regulation, the senior policymaker said, “For instance, who could’ve known that giving kids social media access at school would turn out to be such a problem?” Who indeed?
Not only has Silicon Valley assumed that the coding paradigm applies neatly to society writ large; it has assumed that no one outside this paradigm could possibly have relevant knowledge of human nature with which to push back.
None of this necessarily means that the solution lies in some kind of comprehensive government pre-deployment licensing system for all new technologies; federal bureaucrats, after all, are hardly more reliable repositories of intergenerational wisdom than AI researchers are. But it is telling that such a licensing system was suddenly proposed only when we woke up to the fact that software itself might be at risk from machines that could exploit vulnerabilities faster than we could patch them. The technologists, having casually experimented on the rest of society for more than three decades now without compunction, are suddenly having second thoughts now that their own technologies are at risk. Perhaps now at last, belatedly, they will be willing to heed the warning of Richard Hooker during a time of similar social upheaval: “Since you are trying to destroy something already in force and imposing on us something new…you must take the role of plaintiff and accept that the burden of proof is on you to show both that we must abolish our currently existing order and also that we must adopt yours instead.”