I’m Tim Gorichanaz, and this is Ports, a newsletter about design and ethics. You’ll find this week’s article below, followed by Ports of Call, links to things I’ve been reading and pondering this week.
“She said I spend too much time standing in the mirror, watching.”
—Humilitarian, “She Said”
I sprained my toe a few weeks ago, which has had me using the elliptical at the gym rather than running outside. A cruel sentence for springtime.
On the elliptical, I spend most of my time staring out the window. I watch the people on the sidewalk across the street. Sometimes I glance around the gym.
It’s incredibly boring, but I notice things. Running by a glass building, a woman turns to look at herself in stride. Resting between sets at the rack, a guy steps toward the ceiling-height mirror and flexes, turning his arm this way and that.
We fall in love with mirrors. Even if it’s not about seeing our muscles or bodies in motion, there’s something beguiling about seeing ourselves reflected back at us.
Recognizing your own reflection in a mirror, knowing it is you being reflected, is the classic test for self-awareness. Self-awareness and self-reflection—these are some of the especial qualities of humanity. We humans, along with chimpanzees and orangutans, are the only species conclusively shown to be self-aware.
But consider what your reflection really is: a play of the light, a flat representation of a three-dimensional you. It has no blood or flesh, no inner life. It is just an image from which you can infer those things.

Mirrors also distort, and sometimes that’s part of their virtue. Mirrors show things in reverse. They can magnify and stretch. There’s a meme among bodybuilders that your muscles look best reflected in car windows.
But even when a reflection doesn’t look distorted per se, it still is a distortion. It is flat, as I mentioned, and the image is reversed. Yet somehow we get a feeling of accuracy and precision; we don’t notice these distortions.
AI as a Mirror
Artificial intelligence is much the same.
The AI systems used in countless applications today take inputs from our flesh-and-soul lives and output flat, distorted images that we see ourselves in, that we fall in love with.
The phenomenon goes back to one of the very first AI applications, a chatbot called Eliza from 1966. The chatbot mostly just echoed the user’s words back to them, often in the form of a question. Eliza was pretty simple, yet users couldn’t help but project human traits onto it, a tendency since termed the Eliza effect.
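To make the echo mechanism concrete, here is a minimal Python sketch of an Eliza-style exchange. The patterns and word swaps are my own illustrative stand-ins, not Weizenbaum’s actual DOCTOR script, but the spirit is the same: the program says nothing of its own; it just hands your words back to you.

```python
import re

# A few reflection rules in the spirit of Eliza (1966). These are
# illustrative stand-ins, not Weizenbaum's actual script.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(phrase):
    # Swap first- and second-person words so the echo reads naturally.
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def respond(user_input):
    # Echo the user's own words back as a question.
    match = re.match(r"i am (.*)", user_input, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please go on."

print(respond("I am worried about my reflection"))
# -> How long have you been worried about your reflection?
```

A mirror in a few dozen lines: everything meaningful in the conversation comes from the user, yet the user sees a mind looking back.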
The evidence that we are falling in love with AI is in the way we’re tripping over ourselves to plug it into everything, from culture recommendations to criminal sentencing to credit checking to grammar checking to companionship, and on and on.
The philosopher Shannon Vallor develops this metaphor further in her new book The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. In the book, Vallor warns that we are following in the footsteps of Narcissus in the Greek myth, which I mentioned in last week’s post.
Today’s AI systems replicate social biases, and thus relying on these systems creates runaway feedback loops that further exacerbate those biases. For instance, if in the past the best engineering jobs went to men because women were not encouraged to pursue such careers, then an AI system trained on that history may conclude that one must be a man to make a good engineer. That is exactly what happened with an Amazon hiring AI system, which came to light in 2018 and was scrapped. Examples of such effects unfortunately abound. Moreover, the root issue has not been addressed since then, which means similar systems are likely causing harms that simply haven’t been exposed yet.
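To see how such a loop gets started, here is a toy Python sketch with made-up numbers; the data and the naive “model” are hypothetical, not Amazon’s actual system. The point is only that a system fit to a biased history will faithfully mirror that history back as a prediction.

```python
# Hypothetical hiring history: who got hired reflects who was encouraged
# to apply, not who was capable. Labels: 1 = hired, 0 = rejected.
past_hires = [("man", 1)] * 90 + [("woman", 1)] * 10
past_rejections = [("man", 0)] * 50 + [("woman", 0)] * 150
history = past_hires + past_rejections

def predicted_hire_rate(gender):
    # A naive "model" that just learns base rates from the past.
    outcomes = [label for g, label in history if g == gender]
    return sum(outcomes) / len(outcomes)

print(predicted_hire_rate("man"))    # ~0.64
print(predicted_hire_rate("woman"))  # ~0.06
```

If these predictions then drive who gets interviewed, the next generation of training data is even more skewed, and the loop tightens: the mirror doesn’t just reflect the bias, it amplifies it.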
Vallor writes, “These are the tools we are now being urged to use to study and teach our history, create art, get life advice, make policy, and plan for the future. It is these tools that are increasingly being used to tell us who we are, what we can do, and who we will become.”
As that quote suggests, relying on these AI systems poses a deeper danger—one perhaps even more damaging than “just” reifying and magnifying existing social biases. Relying on AI promotes the deskilling of our cognitive and moral capacities, robbing us of opportunities to build character and learn. Further, the latest glut of generative AI products encourages us to see human creativity itself as a matter of mere output and pastiche.
The fact that many people do not see this as a loss speaks volumes. “AI can devalue humanity only because we have already devalued it ourselves,” Vallor writes.
Toward a Better Future with AI
The majority of Vallor’s book is dedicated to spelling out the problem and its stakes. In the final two chapters, she turns toward the future.
First, Vallor calls for sensible regulation. Regulation is possible, and it is not antithetical to innovation, as we have seen in the aerospace, aviation and automotive industries. But Vallor, like Pope Francis, says that regulation is not enough. Whereas Francis emphasizes our spiritual needs, Vallor points to moral growth.
Vallor points to a few examples of recent imaginative sci-fi that might catalyze our moral imagination. Most sci-fi, you’ll surely know, envisions AI as an existential threat. And reflecting on that fact, Vallor wonders:
What does it say that when we imagine beings built in our image, the only purpose we can think of for their intelligence, the only task in which we imagine them seeking pleasure, is violent domination and control? Is that really what intelligence is? Why do so few depictions of AGI show us a superhuman intelligence that laughs more than we do? Where are the intelligent machines not on a mission, but mastering being silly, goofing off, exploring, playing?
Similarly, Vallor wonders what sorts of virtues might serve us in the future. Today in the West we seem to hold most highly qualities such as “productivity, confidence, resilience, independent thinking, perseverance, passion, and single-minded dedication.” These qualities may not serve us very well in a future that demands coordination and flexibility as we tackle the most dynamic wicked problems on a global stage.
So what sorts of virtues might help us? That Vallor poses this question is a bit odd, given that she already answered it in her 2016 book Technology and the Virtues. Above all, we’ll need practical wisdom; and beyond that, we might consider rest, repair and restoration. The kinds of virtues that will support our “being silly, goofing off, exploring, playing.”
From Mirrors to Windows
Vallor’s book offers wide-ranging reflections and a call to action for tech companies, governments, and other institutional leaders.
But what can each of us do as individuals?
For starters, we must try our best to not become enthralled by the AI mirror. Why settle for the mirror image when you have the real thing right at hand?
Remember that a mirror gives only a reduction, an image, an abstraction. Looking into a mirror is not the only way to see yourself, and it may not even be the best way. After all, mirrors flatten, reverse, magnify and distort.
Turn to yourself rather than your reflection. And look out a window instead of into a mirror.
Ports of Call
I’m in Hawaii for break (writing this in snatches of time between platters of poke). A few things to share:
Candy-covered fruit: Discovered tanghulu yesterday in Honolulu’s Chinatown. Somehow I had never seen or heard of it before. Simple and delicious! And easy to make at home.
Music: Last weekend in my neighborhood was West Philly Porchfest, an annual event where live music can be heard from people’s porches and local businesses all over West Philly. It was a perfect day! One new local band I learned about was Humilitarian, with warm indie vibes.