The Source of Moral Worth
It's a mistake to think that entities have moral worth because of consciousness.
I’m Tim Gorichanaz, and this is Ports, a newsletter about design and ethics. You’ll find this week’s article below, followed by Ports of Call, links to things I’ve been reading and pondering this week.
Is it just me, or are a lot of people talking about consciousness lately? Now, I’ve long been interested in the topic, so I’ve had a habit of seeking it out. But now it seems like it’s coming to me on its own, floating all around.
Assuming it’s not just my impression, the ongoing developments in AI are surely one impetus for this. Plucking a few recent headlines at random: “AI Creators Must Study Consciousness, Experts Warn,” says the BBC. “Why Conscious AI Is a Bad, Bad Idea,” says Nautilus.
There are many reasons to be interested in consciousness with respect to AI, but one I see coming up again and again in these articles is the topic of moral worth.
The widely parroted thesis goes: Given that AI is advancing so quickly, it’s only a matter of time before its intelligence and consciousness reach parity with humans, and therefore also human-level moral worth.

I wrote last week that I’m suspicious of the “it’s only a matter of time” kind of rhetoric. But today I want to look at another part of this: the obsession with humans as the benchmark, particularly the idea that humans are the gold standard for moral worth.
“Human-Level”
AI is typically defined as a machine that can perform tasks that would otherwise require human-level intelligence. Some authors have distinguished between “narrow” AI, which performs a particular task at human level, and “general” AI (or AGI), which performs any task at human level.
This sounds fine if you don’t look too closely. But it soon becomes obvious that we often want our computers to do things far beyond human level. A calculator would be useless if it could only crunch numbers at the level of even the most numerate savant. Empirically, we also expect our machines to work beyond human level. Consider that there are roughly 40,000 automobile-related deaths in the United States each year, but it takes just one death caused by an autonomous vehicle to shut down the whole operation.
Just as humans serve as the benchmark for what counts as intelligence in AI, many also take humans as the yardstick for moral worth. It’s understandable. Morality is about getting along in life, and we are humans, after all, so we’re most interested in how humans can get along. This is usually where consciousness enters the picture.
An example of this kind of thinking comes from Emily Bender, a famed AI ethics researcher. A New York magazine article from March quotes a talk from Bender called “Resisting Dehumanization in the Age of AI.” In the talk, she pointed out how recent conversations about AI both overestimate AI and underestimate humans. In response to an audience question, Bender said, “I think that there is a certain moral respect accorded to anyone who’s human by virtue of being human.” That audience member pushed back, pointing out that at least some humans don’t seem to deserve moral respect, and at least some non-human entities do.
At the back of everyone’s mind: If that’s the case, do large language models such as GPT deserve moral respect?
Where Moral Worth Comes From
Consonant with this, we’ve started seeing more calls for AI rights. Again, the logic goes: If AIs aren’t already conscious, they will be soon, and we need to establish legal frameworks to support their rights.
One voice in this chorus is sociologist Jacy Anthis, who recently wrote a number of op-eds such as “We Need an AI Rights Movement,” published in The Hill. Reading Anthis’ arguments, it’s clear that, in his view, the basis for according moral worth to an entity is consciousness. The name of the institute he co-founded, the Sentience Institute, also suggests as much.
It’s another one of those things that sounds fine if you don’t think about it too much. When it comes to moral rights, a big issue we worry about is pain, and it seems that something can only feel pain if it’s conscious. It’s this sort of logic that pervades the animal rights movement—it’s why we think killing a dog is awful but killing a fly is whatever. Flies, we figure, can’t feel pain.
But focusing on consciousness here is a red herring—it’s a mistake.
Don’t get me wrong: I agree that large language models have moral worth. I’ve written before that they are a technology for cultural transmission. But consciousness isn’t the source of moral worth. (And we can be grateful for that, because it means we can sidestep impossible questions such as precisely when, in the development of a fetus, consciousness begins—that question is not relevant to considerations of moral worth.)
Consider a person in a hospital bed on life support who is brain-dead. By definition, this person is not conscious. Still, most people would say that person (not conscious) has the same moral worth as any other person (conscious), rather than the same moral worth as a sand castle (not conscious). We also treat human remains—even those from centuries past—with moral respect that has nothing to do with their consciousness.
Moreover, it is increasingly being argued that mountains, rivers and forests have moral worth, and these “rights of nature” cases are making their way through courts around the world. In 2017, New Zealand gave a mountain the legal status of a person. Here in the United States, wild rice has filed suit against the state of Minnesota. Only the most psychedelicized among us would suggest that these entities have moral worth because they are conscious; indeed, the legal arguments in these cases make no reference to consciousness.
The bottom line is that you don’t need consciousness to be part of the moral circle. So what do you need?
Here I follow the arguments put forth by philosopher Luciano Floridi. Floridi has said that we can construe all of existence in terms of its informationality. Information, in his philosophy, is essentially a pattern of organization—and it has structure, complexity and meaning. Floridi argues that all information has moral worth by virtue of its being information. That is, everything in the universe has moral worth as a function of its structure, complexity and meaning.
This is why we revere books, why we can’t bear to throw away our Apple boxes, why we feel bad about throwing away even unbroken objects, why we keep our old laptops.
Floridi goes on to suggest that all moral agents (such as humans) have a responsibility to help steward the universe toward more structure, complexity and meaning. He uses the metaphor of a trust, in which some asset is entrusted to one party on behalf of another. All the world’s a trust, and we’re just taking care of it until we pass that duty to the next generation.
So, yes, large language models deserve moral respect, but it has nothing to do with consciousness. Acknowledging that these entities have moral worth means that we have to figure out things like AI rights now, not at some undefined future moment when and if they attain consciousness. And it’s not just AI rights, but the rights of all things.
People such as Emily Bender, mentioned above, may think that this perspective devalues humans. But that’s also a mistake. Moral worth is a function of complexity, and humans are immensely complex. The human brain is the most complex structure we know of, and the society we’ve built over eons of existence is more complex still. Humanity is the most informational thing there is, and so it follows that we have the most moral worth.
What that also means is that we have the most moral responsibility to care for the rest of the world.
Ports of Call
A Microsoft’s-Eye-View of AI: This interview with Microsoft CTO Kevin Scott on The Verge’s Decoder podcast gives an illuminating view of where Microsoft is expecting AI to go in the near term, from infrastructure to interfaces.
Running as Prayer: I’m working on an article (which I hope I’ll be able to share next week) on reflective journaling in ultrarunning. As part of my reading for that, I found this paper on running as a religious practice among the Tarahumara, an indigenous group in northern Mexico. The Tarahumara came onto my radar through the famous 2009 book Born to Run (which is also the book that got me addicted to running, so reader beware). In Born to Run, the author frames the Tarahumara running traditions as races, but this more recent paper makes the point that they aren’t races at all in the way we understand that term. Rather, they’re better understood as a form of prayer.
Photos of 200-Mile Runners: On the topic of running, take a peek at Scott Rokis’ collection of photos of people during and after 200-mile footraces.
The question of moral worth is the “recipient side” of ethics. It looks at the person being affected by an action, rather than the person doing the action. In philosophical jargon, here we’re talking about moral patients rather than moral agents. Outside the scope of this week’s post, there’s a lot to be said about the morality of AI agents. I talked about some of that in an earlier post, and I’ll probably say more in the future.