I’m Tim Gorichanaz, and this is Ports, a newsletter about design and ethics. You’ll find this week’s article below, followed by Ports of Call, links to things I’ve been reading and pondering this week.
A key moment in the current wave of AI hype was when Google engineer Blake Lemoine went public in whistleblower fashion with his belief that the company’s LaMDA chatbot was sentient.
That was in June 2022, shortly after DALL-E 2 was released, astounding everyone with its image-generation capabilities, and months before ChatGPT’s release, which really kicked things off.
The notion that AI could be sentient, either currently or in the near future, still creates anxieties for a certain crowd. To be sure, worries about possible sentience are really just a distraction from the actual harms currently being perpetrated by major AI companies: namely global-scale theft, slavery and environmental degradation. Rhetorically, the bosses of these AI companies must know that if they can keep the public anxious about sentience, job replacement or perhaps forthcoming utopias, then there won’t be the will to address these other—real—problems.
Still, we might wonder about the stickiness of “sentience” in these conversations and in the popular imagination. What is going on here?

Americans’ Beliefs About Sentient AI
At this year’s CHI Conference on Human Factors in Computing Systems, Jacy Anthis and colleagues presented findings from large-scale survey studies conducted in 2021 and 2023 of Americans’ beliefs about sentience in AI tools.
The high-level takeaways:
20% of U.S. adults in 2023 agreed that some current AI systems were sentient.
79% of U.S. adults in 2023 supported a ban on the development of sentient AI.
38% of U.S. adults in 2023 supported legal rights for sentient AI.
The average person in 2023 believed that the first sentient AI was 5 years away (same with “superintelligence” and “human-level AI”), and that artificial general intelligence was just 2 years away.
One interesting thing about this study is that it gives us a picture from both before and after the launch of ChatGPT. Comparing the 2021 and 2023 results shows a slight increase in people’s sense of certainty. In 2021, 35% of U.S. adults believed it was possible for an AI to ever be sentient, while 24% believed it was impossible. In 2023, those figures were 38% and 26%. Fewer people were unsure.
What results like these tell us is not obvious.
Many people believe today’s AI tools are sentient, but more do not. People’s opinions are getting stronger.
Yet we don’t know if these beliefs are rooted in experiences with AI, such as that of Blake Lemoine, or if they are rooted in beliefs about beliefs—in other words, culture. Consider “the Entity,” the antagonist in the recent Mission: Impossible movies, a networked computer system that went rogue after developing sentience.
It may be that people’s beliefs are getting stronger because other people’s claims are getting stronger.
Unlike in the economy, where enough people believing a recession is coming can actually bring one on, belief works differently for AI: no matter how many people believe an AI is sentient, that won’t make it so.
What’s at Stake?
But why the obsession with sentience, anyway?
Oftentimes the underlying question is moral worth. (The study discussed above is no exception.) As I have written before, it’s a mistake to think that moral worth comes from sentience. Moral worth, rather, is a function of complexity. It may be that sentience and complexity are in some way correlated, but we ascribe moral worth to many complex yet non-sentient things, such as people in comas, landforms and electronics.
But sentience does do something to people’s beliefs about AI. In the survey discussed above, the researchers tested people’s agreement with a variety of statements about AI with and without the word “sentient,” with the two versions randomly assigned to different participants. Across the board, people expressed stronger opinions when the word “sentient” was used to describe the AI. For example, 61% agreed that “torturing robots/AIs is wrong,” while 76% agreed that “torturing sentient robots/AIs is wrong.” The table below shows more such examples.

Some of these items may be begging the question—that is, engaging in circular reasoning. The very concepts of torture, exploitation, consent, respect, etc., seem to imply sentience. So it would make sense that emphasizing sentience brings out stronger agreement about those issues—basically by definition.
To me, one thing that all this highlights is the risks of having overinflated expectations about AI capabilities. Considering a product “sentient,” or even thinking that a soon-to-come future version of a product could be sentient, is a recipe for thinking something can do more than it really can.
This means people will be more likely to let their guard down, putting themselves at risk of bad advice from ChatGPT or inaccurate overviews in Google’s search results.
As usual, unfortunately, caveat emptor.
Ports of Call
Are appliances dying faster?: The common wisdom suggests that appliances are subject to planned obsolescence: whereas decades ago a fridge might last for decades, today one barely lasts ten years. It turns out that common wisdom is overblown. While appliances did last longer in the past, it’s only by a bit. But why? Mostly because today’s appliances tend to be more complex (computers, etc.), creating more points of failure, and they’re also cheaper to begin with, meaning they are more likely to be replaced than repaired. Wirecutter has a great long read covering all this.
My favorite running shorts: Okay, one of them. I’m a big fan of Janji’s 3-inch split shorts, but they didn’t sell them last year. Fortunately they are back this summer—I’ll have to stock up a bit in case they aren’t back in 2026. Breezy and the only split shorts I’ve found that hold a phone well in the back pocket.
Summer song: The upbeat “Yougotmefeeling” by Parcels is a great song to put on repeat this summer.