Artificial Intelligence as Culture
Large language models could be a new mode for preserving human learning across generations
I’m Tim Gorichanaz, and this is Ports, a newsletter about design and ethics. You’ll find this week’s article below, followed by Ports of Call, links to things I’ve been reading and pondering this week.
“For everything that was written in the past was written to teach us, so that… we might have hope.”
—Romans 15:4
When an organism dies, whatever it managed to learn during its life disappears. That goes for most organisms, anyway. But some animals, humans among them, can preserve learning across generations.
That’s what we call culture. Stories and recipes, treatises and artifacts. It’s why we have grandmothers—that is, why we live so far past our reproductive years. And, I’d venture, why human civilization really took off once writing was invented. Human culture is about shepherding the best things from each generation to the next, and writing was a mammoth leap forward in that capacity.
Paideia, Canon and Culture
The ancient Greeks encapsulated these ideas in the concept of paideia: looking at the heritage you got from your parents and ancestors, selecting the most excellent things (“the beautiful and good”), and bringing those to your children. Paideia is about actively participating in a tradition, helping all humans move toward excellence.
For the ancient Greeks, paideia was an aristocratic, intellectual pursuit; it was the concern of only the upper crust of society. Everyone else, more or less, simply had to accept what the elites considered beautiful and good.
Paideia somewhat recalls the idea of canon, a set of works considered to be the best in some domain (for instance, the Star Wars canon or the canon of Western literature). Nowadays, the notion of canon is widely frowned upon as too one-sided, too elitist. And we could level the same complaints against paideia.
But we must remember that this is essentially how humanity works. If civilization is going to go on, we have to pass something on to the next generations, and we should do our best to pass down the best we have. So arguments about canon and paideia shouldn’t be read as arguments “against” canon or paideia per se, but rather as negotiations over what should be passed down. And I think that’s good; paideia is about active participation, not blind recitation.
AI as a Technology of Cultural Transmission
As I alluded to above, we can look at the history of technology for landmarks in how culture is transmitted. Three major moments: the invention of writing, the invention of the printing press, and the invention of the internet.
But just as the full effects of writing and the press weren’t evident for centuries after their invention, we can expect the same of the internet. We’re in a tumultuous time as this new culture-making technology is more fully realized and integrated into human life. And the latest wrinkle in the unfolding of the internet is what’s widely being called “artificial intelligence,” though for the most part the discussion focuses on one specific approach: large language models (LLMs).
The internet is a vast assemblage of human texts and artifacts from all sorts of eras and for all sorts of purposes. The scale is truly immense: even in Antiquity no single person had the time to read all the books in circulation, and today you’d need millions of lifetimes just to scratch the surface. LLMs may be an answer (assuming there is a problem): they offer a way to condense all those texts and artifacts, to surface patterns and insights. That’s the promise, anyway. The reality is going to be more complicated.
For one, we might wonder whether preserving everything is really better than preserving only the best. The notion of paideia suggests that not everything should be handed down: passing on everything is a waste of time at best, and perhaps even counterproductive. Nowadays many assume that if we can preserve it, we should; but that’s an assumption, not an argument.
Next, LLMs mean we no longer know what’s in the canon. When we talked about canons, we could at least argue about whether this or that work should be included, and eventually work something out. With an LLM, we’re not sure what its “canon” includes, so there’s nothing concrete to argue about. And sidestepping an argument is not better than having it, however unpleasant arguments may be. (To be sure, we can make some guesses about what training data was used through reverse engineering, and perhaps in the future the companies that build LLMs will be more open than OpenAI is today.)
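To make that parenthetical concrete, here is a minimal sketch of one crude form of reverse engineering: prompt a model with the opening of a known passage and check how closely its continuation matches the original. This is a sketch under assumptions, not an established methodology; it uses the OpenAI Python client, and the model name, prompt length, and similarity measure are all illustrative choices.

```python
# Minimal sketch: probe whether a known text may have been in an LLM's
# training data by asking the model to continue its opening and scoring
# how closely the continuation matches the real text.
from difflib import SequenceMatcher

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def memorization_probe(passage: str, prompt_chars: int = 200) -> float:
    """Return a 0-1 similarity between the model's continuation of a
    passage's opening and the passage's actual continuation."""
    prompt, expected = passage[:prompt_chars], passage[prompt_chars:]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": f"Continue this text verbatim:\n\n{prompt}",
        }],
        max_tokens=200,
        temperature=0,  # reduce sampling noise for the comparison
    )
    continuation = response.choices[0].message.content or ""
    # Compare against the same number of characters from the original.
    return SequenceMatcher(
        None, expected[: len(continuation)], continuation
    ).ratio()
```

A consistently high score across many passages from a work hints, though it hardly proves, that the work was in the training data; researchers pursue more rigorous versions of this idea under names like memorization extraction and membership inference.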
A corollary of this training-data opacity is that there’s no way for anyone to give consent or receive payment for their own work informing an LLM. Everyone from highfalutin writers and famous artists to hobbyist Wikipedia editors and random social media users is implicated here. Is it scary? Is it beautiful?
With all this in mind, we can envision a future of LLMs as a shared human project, carrying forward the best humanity has to offer in the service of our descendants. There are many implementation issues to work out here. But above all, to me this suggests that LLMs are certainly not best developed by private, profit-driven companies such as OpenAI.
Perhaps a commons ownership model would be more appropriate. Yes, OpenAI risked time and capital to develop the technology, but the technology wouldn’t work if it weren’t for the combined input of untold human generations. To devalue all that as “data” is to devalue the very thing being created.
Ports of Call
Not much this week! I’ve been on break before the start of spring quarter, and tomorrow I’m running a 100km race in Oregon. (My idea of fun.) But a few things to share:
A Blog Post: Economist and broad thinker Tyler Cowen has a new blog post reflecting on the question of inevitability and AI. We’ve been living in an ahistorical bubble, and now that’s changing.
Theory of Mind: There’s an ongoing debate about whether GPT-4 has attained theory of mind, the capacity to attribute mental states to oneself and to others. Is it true? How could we ever tell? (Philosophers call this “the problem of other minds.”)
The Limits of Design Ethics: This article reflects on the recent trend of ethical discussions in design (irony alert: this newsletter included) and the need for integrating ethics at the level of infrastructure, not just the sensibilities of individual technologists.