I’m Tim Gorichanaz, and this is Ports, a newsletter about design and ethics. You’ll find this week’s article below, followed by Ports of Call, links to things I’ve been reading and pondering this week.
I was recently subjected to a PowerPoint presentation where every slide had an AI-generated image. (It wasn’t a student.)
AI imagery is in a weird place. On one hand, the very fact that you can type in some text and instantly create an image of whatever you described—that feels like magic.
And maybe “magic” is just the right word, because on the other hand I’m not sure AI imagery is useful for anything beyond a parlor trick.
As I was sitting through the presentation, I couldn’t help but think the slides would have been better off without any images. The images didn’t add anything; they distracted. They were sleek but vapid and incoherent.
I may be in the minority here. I saw a March 1 tweet from Justine Moore, a partner at venture capital firm a16z and consistent AI pumper, gushing about the quality of AI imagery. “Accurate fingers AND text,” she writes. Apparently she doesn’t realize that hands have more than three fingers. And sure, the text is spelled right, but it lacks the perspective it should have on a slanted surface.

Images like this may constitute “huge improvements,” but the result is still worse than a sixth grader who just found out how to pirate Photoshop. Just like you wouldn’t want to fill an art museum with a kindergartner’s art projects, we should not strive to fill our infosphere with all of this trash. (Not to mention the extraordinarily high cost that we are paying for the trash. Just because you don’t pay doesn’t mean it’s free.)
Unfortunately, in that presentation it wasn’t just the images that were AI-generated. Much of the content had ChatGPT’s signature formatting for bulleted lists, with the descriptor in bold title caps followed by a colon and a snippet of text. No sources, no depth, no follow-up.
Now, maybe there is a place in the world for AI-generated content, and maybe that place is indeed bullet points on slides.
But what struck me most about this presentation was that the presenter had so much content on so many slides that they ran out of time and had to scrub through many of them in a frazzled blur.
From Writing to Editing
The Economist recently had an article on how businesses are using generative AI, including such tools as ChatGPT. For the most part, the results have not lived up to the hype, amounting to what the article calls “window dressing.”
There’s also been an unintended consequence of the spread of AI:
Although AI code-writing tools are helping software engineers do their jobs, a report by GitClear, a software firm, found that in the past year or so the quality of such work has declined. Programmers may be using AI to produce a first draft only to discover that it is full of bugs or lacking concision. As a result, they could be spending less time writing code, but more time reviewing and editing it. If other companies experience something similar, the quantity of output in the modern workplace may go up—as AI churns out more emails and memos—even as that output becomes less useful for getting stuff done.
The quantity of output going up is exactly what I observed in those PowerPoint slides. More, more, more.
Similarly, I’ve recently had students send me long emails with literary flourishes asking me for this or that. (Now, these emails may or may not have been AI-generated, and it doesn’t really matter. I wrote a research paper on student experiences of being falsely accused by professors of using ChatGPT on assignments, so I’m particularly sensitive to that possibility.)
The point is that these emails lacked editing. You do not need to write me an essay to tell me you’ll miss class or need an extension. Two sentences will do! And we don’t need quite so many adverbs.
Many have said that in the AI age, people will shift from being writers to being editors. The problem is that most of us have not been trained for that.
But even in some idealistic future where we are all trained as editors and work with generative AI tools as creative partners, I’m not sure we can keep up with the flood. Editing takes so much longer than writing. I typically spend about four months writing a novel and then a year editing it. The proportions are similar in other genres.
Flooding the Zone as a Political Strategy
“The Democrats don’t matter. The real opposition is the media. And the way to deal with them is to flood the zone with shit.” Now famously, this was the political strategy of Steve Bannon, who was CEO of Donald Trump’s 2016 presidential campaign and then White House chief strategist for the first seven months of his administration.
The idea is not to persuade people about why your policy idea is better. The idea is not even to strongarm people into going along with you. The idea is to create so much chaos and confusion that people throw up their hands and say whatever.
Bannon did not invent this strategy. It’s been used for decades by Vladimir Putin as part of propaganda within Russia. And Putin is still following this strategy; see for instance his propaganda regarding the war in Ukraine.
According to Peter Pomerantsev, a Soviet-born media executive and now academic, Putin’s goal is to convey that “the truth is unknowable” such that the best course of action is “to follow a strong leader.” And by that logic, we can see why flooding the zone is the favored strategy among populist jostlers.
We’re All Bannon Now
In the past year, flooding the zone became a zillion times easier.
Back in 2020, as GPT-3 was just becoming known but wasn’t yet available to the public, Renée DiResta, a research manager at the Stanford Internet Observatory, foresaw that the supply of disinformation would soon be infinite. “Someday soon,” she wrote, “the reading public will miss the days when a bit of detective work could identify completely fictitious authors.”
That day has come, and it’s not just the reading public but also the viewing public.
And it’s not just political propaganda, but media generally.
Even Justine Moore, the AI pumper I mentioned earlier, observed in a thread on Twitter:
Facebook has turned into an endless scroll of AI photos and the boomers don’t appear to have noticed. / The photos don’t even have to be that realistic, they’re just happy to see them. / Even when told an image is AI, it just…doesn’t register.
Yikes.
And now Amazon is seeing an influx of AI-generated knockoff books attempting to siphon off sales from popular titles.
Stay tuned for more.
We can only imagine what this summer will hold for us, as the U.S. presidential election season gets into full swing. With multiple wars at the top of the news, not to mention a historic rematch between two polarizing and controversial old geezers, it’s enough to make you want to throw up your hands and say whatever.
The Coming Collapse
OpenAI is blithely doing Bannon’s work for him. The hundreds of millions of users of the latest AI technologies are flooding the web so relentlessly that it is starting to become unusable.
And soon, perhaps, the generative AI tools that made it all happen will also become unusable. As AI-generated content proliferates online, it will become training data for future iterations of these AI tools. This is known as “model collapse,” and AI experts say that it is already a real threat to the viability of our current AI technology.
Once we break the internet and then break the thing that broke the internet, I wonder what we’ll do. Maybe we’ll throw up our hands and say whatever and then go outside.
Ports of Call
Sorry my piece is so crabby today. Let’s see if we can find some good cheer elsewhere:
Drexel and Mac History: My university, Drexel, was the first to require all students to have a personal computer, back in the 1980s. This requirement was facilitated by a deal with then-fledgling Apple, giving our students access to the Macintosh before the general public. How cool!
How Running Builds Character: We all know that sport builds character—it’s why many parents insist their kids play a sport. But we don’t know much about how that happens, and as we get older we tend to forget it (focusing instead on sheer performance or body image). A new book by philosopher and ultrarunner Sabrina Little examines the mechanisms for how sport and character interact. She focuses on running, but the ideas are applicable elsewhere. There’s a saying that “ultrarunning is 90 percent mental and the other half is physical,” and this book shows a path for engaging the mental side.
A new Márquez book: Gabriel García Márquez, the Colombian Nobel laureate most famous for his book One Hundred Years of Solitude, died in 2014 but has a new book out this week. Apparently its publication goes against his wishes, though his children argue those wishes were contaminated by the author’s dementia.