When Education Becomes Information
How "machine learning" became the norm in American schools
Today, in debates over the future and likely impact of artificial intelligence, one often hears the optimistic refrain: “we’re in charge; we can use AI however we want” (as long as we accept that we have no choice not to use it, that is). AI, on this account, is the ultimate general-purpose technology, capable of being built to any specification, adapted to any use. The world is our oyster, and AI will take us wherever we want to go. Even those otherwise worried about AI, and its effect on youth in particular, will often enthuse about its capacity to “unleash creativity.”
Indeed, we are told that it is this fundamentally creative, boundless character of AI that makes it different from the big bad ugly digital technology that we have soured on over the past generation. Writing last week at Hyperdimensional, Dean Ball argued, “The most important distinction is that AI use is fundamentally creative, whereas social media in its contemporary form is fundamentally consumptive for the vast majority of users….Generative AI, on the other hand, presents users with a blank box and a blinking cursor. ‘What do you want to do?,’ it asks.” Whereas the algorithmic feedback loops of social media put you on an endless hamster wheel, AI puts you in the driver’s seat of a flying car.
Ironically, though, we need only think back two decades to remember that the internet, and close on its heels social media, came to us wrapped in the same shiny promise of autonomy and self-expression. Into the boring, bureaucratic, monochrome world of Microsoft Office and America Online came the bright rainbow-colored playground of MySpace and Facebook, inviting everyone to stake out their own little piece of digital real estate and start creating their own persona.
The personal computer itself—by 1999 perceived as so dull and dehumanizing that it was cast as one of the main villains of the classic comedy Office Space—had burst onto the scene two decades earlier as a vehicle of self-realization, the first truly “general-purpose technology” that could be put to almost any use that you might imagine. Thus, in his classic 1986 essay, “Thinking About Technology,” Canadian philosopher George Grant interrogated the phrase, “The computer does not impose upon us the ways it should be used,” and dissected the idea that the computer was simply a “neutral tool,” capable of being put to whatever uses humanity might dream up. It seems we have been chasing this particular pot of gold at the end of the rainbow for quite a while: each new phase of digital technology presents itself as the long-awaited liberation from industrial monotony and oppression, but each soon becomes a new cage to escape. Will AI prove otherwise?
Getting Human Beings to Act Like Computers
I had the privilege of reading Grant’s essay in 2009 with Oliver O’Donovan, who invited our seminar to look back over the past two decades and ask ourselves whether the computer had not, in fact, imposed upon us the ways it should be used. As a university professor over that whole period, O’Donovan had tale after weary tale of the increasing bureaucratization, quantification, and standardization of higher education. Precisely because the computer was so good at crunching numbers, so good at quantitative analysis, education reconfigured itself around these quantitative demands. Standardized tests became almost the only tests, and multiple-choice questions by far the preferred format. Computers loved reading forms, so administrators rushed to fill out enough forms to keep their new masters happy, and required teachers to do likewise. Students soon learned to assess themselves, their professors, and their preferred universities by numbers alone: they had to boost their GPA so that they could go to a top-ranked school with a minuscule acceptance rate, and then spend their university years rating each of their professors on a 1-to-5 scale. The technology that promised to radically accelerate education instead eviscerated it.
Since 2009, these trends have in many ways continued and even accelerated. Education today, it seems, is almost as much a numbers game as every professional sport has become, chasing performance with sophisticated metrics. The result is that instead of freeing students to make their mark on the world (the original vision of liberal arts education), so much of modern education seems to trap them in narrow paths of self-optimization to boost their chances of getting into one of an increasingly small list of coveted schools. (In Hannah Arendt’s terms, it has transformed action into labor.) And of course, when everything is a matter of numbers, it becomes a game of who can manipulate the numbers to their advantage—hence the race-to-the-bottom grade inflation that we’ve seen at every level.
Confronted with this spectacle, I am reminded of Anton Barba-Kay’s observation, “What is it that digital computers are for, after all? What is it that they are getting better at? Unlike clocks, which are only ever getting better at keeping time, computers are only ever getting better at acting as human beings. In doing so, they are also getting better at getting human beings to act like computers, thereby becoming inseparable from and, at the limit, interchangeable with us” (A Web of Our Own Making, 236). Even as we were developing techniques of so-called “machine learning” to train AI, the application of digital technology to education over the past generation fostered a culture of “machine learning” in our schools: optimizing for narrowly-defined outputs and performance assessments and abandoning any pretense of genuinely educating.
And yet, before we pursue this train of thought further, we may be brought up short by another train that seems to be running in the opposite direction. While from one perspective the sorry scene of modern education, K-Ph.D, has come to resemble a factory of mindless automata in the style of high modernism, from another perspective it seems precisely the opposite: a post-modern “choose your own adventure” story in which even the youngest students are told “there’s no wrong answer” and encouraged to explore their own narratives and question everything, from the multiplication tables to their own sexuality. The fad for “student-centered learning” promised to empower learners to explore the world on their own terms, and even resulted in a significant shift in vocabulary. Where once our educational institutions had distinguished between “students” (a term generally reserved for the already well-formed, self-directed learners at university) and “pupils” (those in lower grades who sat under the instruction of teachers and were expected to submit to their expertise), we began using the term “student” for all grades, right down to kindergarten. Indeed, my friend who pointed this out to me recently remarked that in his native California, they had gone a step further, and begun referring to elementary school children as “scholars”!
It might thus seem that in the domain of education, the computer age really had helped unleash a kind of radical relativism, in which the ease of exploring new information, questioning narratives, and adopting personalized learning pathways had profoundly democratized education, abolishing hierarchies and liberating each of us to use technology to pursue our own personal quests. Since our own personal quests generally have more to do with what entertains than what edifies, the predictable result has been a pronounced decline in learning outcomes, especially in reading, so that many are beginning to warn of the “dawn of a post-literate society.” From some vantage points, though, this is a small price to pay for “liberation” and “empowerment,” which is why AI’s boosters are so undeterred by the terrible track record of educational technology thus far.
Curiosity Killed the Student
In a recent conversation with a senior policy lead at a major AI lab, I expressed my reservations about AI’s potential downsides in the educational sphere, where it could harm child development and undermine the formation of cognitive skills. She immediately interjected, “I totally disagree. I think AI is great for education. My sixteen-year-old son uses it all the time as a way of finding information and exploring whatever he’s curious about.” Now, let’s set aside for a second the point that a 16-year-old is perhaps already a well-formed student, and may be able to make more effective use of AI than a younger pupil might. What struck me most of all about her remarks was how she shifted the conversation instantly from one of formation to information; I worried that AI would undermine the cultivation of skills, and she responded that it would facilitate the acquisition of facts. Our EdTech problem, it hit me, is not one of inability to design good technology for education—indeed, if we knew what we were aiming at, I have no doubt that AI could be used to strengthen and augment the work of educators. Our problem is that we do not know what education is anymore.
The shift may be captured in the shifting connotation of the word “curiosity.” In medieval usage, the word actually named a vice, the opposite of the virtue of “studiousness.” The latter is driven by humility and by love of the thing to-be-known, the former by pride and by fear of being left out or left behind. The two name fundamentally different postures of soul. Whereas the studious recognize that the world has an intrinsic order and telos that they must first submit themselves to in order to comprehend, the curious lay claim to a premature mastery, seeing reality as laid out passively before them, waiting to be manipulated. As theologian Paul Griffiths writes, this paradigmatically modern way of knowing “imagine[s] a world of discrete objects arrayed spatially on a grid, each related to others causally in various ways, but each definable and knowable exhaustively in itself, each, that is, fully transparent to the appropriately catechized gaze and passive before that gaze, there to be gazed upon and addressed without itself returning or exceeding the gaze” (Intellectual Appetite, 33). Paul Kingsnorth describes the same mentality as one of “datafication: the quantification of everything.” “The pattern of reality,” he writes, “will be transformed into bits and bytes, comparisons and yields, numbers and statistics, until even novels and friendships and meadows and family meals on winter nights can be measured and compared and judged for their relative contributions to efficiency and sustainability” (Against the Machine, 214).
Here, then, is the key to unlock the paradox we observed above: how is it that our educational system—and indeed our society as a whole—can be simultaneously one of narrow machine-like optimization for quantitative performance, *and* one of open-ended creative exploration? If everything is only data, then nothing has any intrinsic relation to anything else; there is no telos, no ordered hierarchy of goods, no pathway into true mastery of the world to pursue. Everything is relative, and anything may be pursued as an object of knowledge, from astrophysics to pornography, Shakespeare to TikTok; what matters is only whether it tickles the curiosity of the learner. Students are thus encouraged to choose their own adventure, but whatever it is, to try to “be the best” at it. Yet, having prematurely “liberated” students to become their own masters, we pressure them to optimize their performance within their chosen domain, quantifying it with ever more sophisticated metrics. Byung-Chul Han observes, “The freedom of Can generates even more coercion than the disciplinarian Should,” calling this “perpetual self-optimization” “a highly efficient mode of domination and exploitation.” He goes on, “As an ‘entrepreneur of himself,’ the neoliberal achievement-subject engages in auto-exploitation willingly—and even passionately” (Psychopolitics, 2, 28). No wonder that so many students turn to AI to maximize their homework outputs!
Lacking the stomach and drive for such focused self-optimization, many students will merely rove restlessly along the surface of knowledge, driven by curiosity’s love of novelty, which digital technology offers to satisfy more than any traditional field of study ever can. A handful of nerdy autodidacts may use the new information ecosystem to serially delve deep into different rabbit holes of study, developing a formidable but highly uneven body of knowledge. I did plenty of this as a teenager, going through phases where I was obsessed with Civil War military history, meteorology, astronomy, advanced aviation, and much more. And I still cherish all the facts I picked up along the way.
Formation, not Information
But we should not mistake such knowledge dilettantism for education, which is a matter of formation, in contrast to information. When we speak of formation, the metaphor that comes vividly to mind is that of the potter, who patiently but firmly shapes the clay from a useless lump into a form that will render it fit for purpose. The goal of this process is not to make every vessel utterly alike, just as a humane education should not aim to produce an army of clones, “spitting out human beings who can serve as information processing nodes” to serve as instruments for corporations to make profit, as my new friend Matt Prewitt put it in a recent conversation. (You’ll definitely want to read his fuller thoughts on this subject.) But nor is it to allow every lump of clay to simply “be itself.” Rather, a humane and liberal education seeks both to harness and restrain our native qualities, identifying each pupil’s distinctive way of being human and smoothing off the rough edges so that he or she can flourish alongside other humans. This involves the cultivation of the virtues, which are the habits of character and modes of acting that render us not only effective in steadfastly pursuing our goals, but effective at working, living, and deliberating together with others. Authentic education, then, does encourage each pupil to dig deep into whatever he is best at, but does not allow him to become trapped there; it forces him to persevere in other subjects that are not his favorite. In the process, he becomes a student, developing patience and self-discipline, but also a common language and a shared universe of goods to inhabit alongside his fellow students and future fellow citizens.
The craze for “personalized learning pathways,” which we are promised a world of on-demand AI tutors will supercharge, thus threatens a world of high productivity but mutual incomprehension. The problem won’t just be that students are all talking each to their own screens, rather than to the teacher and to one another. The problem is that when they do come back together to discuss what they’ve learned, they will find themselves like the builders of the Tower of Babel, speaking each his own tongue. If you’re a sports fan, the tutor will give you math problems using NBA stats; if you’re into politics, it will give you problems based on polling data; if you like literature, it will use a Shakespearean dialogue to introduce the concepts. In the name of “engaged learning,” we will be shunted ever further into algorithmic echo chambers. But the test scores, one suspects, will go up. And so will corporate profits, which love to see humans replaced by (or turned into) machines.
From the standpoint of the techno-optimists on both Left and Right, this will look like a success story, facilitating ever greater individual freedom and “empowerment,” and ever greater economic productivity through the widgetification of homo sapiens. This dreary dual dream is captured perfectly in the White House’s effusive executive order from last April, “Advancing Artificial Intelligence Education for American Youth”:
“Early learning and exposure to AI concepts not only demystifies this powerful technology but also sparks curiosity and creativity, preparing students to become active and responsible participants in the workforce of the future and nurturing the next generation of American AI innovators to propel our Nation to new heights of scientific and economic achievement.”
Starkly missing from this sentence, and indeed from the entire Executive Order, is any mention of virtue, self-government, or even citizenship—anything, in short, which used to be understood as the purpose of public education. This machine learning serves only to prepare pupils for the workforce, albeit, we are told, as “active and responsible participants.” But it is hard to see how an educational system optimized for curiosity and productivity, rather than formation in shared virtues and common goods, will foster such responsibility. Will it not instead accelerate a society of “entrepreneurs of the self,” each optimizing in his own algorithmic echo-chamber? This, I suspect, is not a recipe for innovation, but for Babel.


