I’m Tim Gorichanaz, and this is Ports, a newsletter about design and ethics. You’ll find this week’s article below, followed by Ports of Call, links to things I’ve been reading and pondering this week.
Until fairly recently, I was against using the dishwasher. Things never seemed to come out clean. But in the past few months we've started buying enzyme-based detergent, and I've figured out the spot on the top rack where I shouldn't put any dishes, because all the debris somehow gathers there. Now I use the dishwasher all the time, and it saves me a lot of time.
Think about how dishwashers work. They aren’t humanoid robots that stand at the sink and do the dishes just like a person would. That would require a level of fine motor control that is still (and maybe always will be) far out of reach for robotics.
Rather, a dishwasher works by creating a special environment within which a very low-skilled robot can work well. And because my particular dishwasher may be a little defective, I've learned how to modify that environment further: by keeping a certain area inside it clear and by using a certain kind of detergent.
This simple example can give us some surprisingly deep insights into our digital future.
An Exciting New Book
I'm currently reading Luciano Floridi's new book, The Ethics of Artificial Intelligence. This is the latest installment in Floridi's Principia, a multi-volume work that outlines a comprehensive philosophy for the digital age. I eagerly await each new volume and lap them up like Belle & Sebastian albums; we only get one every few years. The Logic of Information came out in 2019, The Ethics of Information in 2013, and The Philosophy of Information (the first volume) in 2011.
The fourth and final volume was planned as The Politics of Information, but Floridi decided to split it into two books, the first of which is the one that just came out, The Ethics of Artificial Intelligence. Presumably the next and final book will be The Politics of Information or perhaps The Politics of Artificial Intelligence. But I hold out hope that Floridi’s project might turn into a Zeno’s Principia, in which each remaining book gets split into two books, the second of which is always yet to come.
Anyway, I'll be reflecting on the contents of this book in this week's post and in some upcoming posts here at Ports.
In the first part of the book (the first three chapters), Floridi develops some of the basic conceptual ideas for understanding AI, as preparatory work for the discussions to come later on. This first part is the “what?” and the second (much longer) part is the “so what?” and “now what?”
How We Help Technology Work
A key idea from the first part of The Ethics of Artificial Intelligence is enveloping.
In the past, we assumed that performing many tasks required bona fide intelligence. But with AI we have learned that many tasks do not—even writing. Despite the name, AI is not intelligent. The reason AI seems intelligent is that we are gradually reshaping the world to allow it to work more smoothly. This reshaping is called enveloping.
All technologies have an envelope in which they work. Again, think of the dishwasher. The box-shaped appliance is essentially an envelope for the little water-spraying robot inside.

There are many other examples of how we have created such envelopes: special environments for our machines to work in, given that they are lower-skilled (but more powerful in other ways) than we are. Think of how a horse can gallop over pretty much any natural terrain, but once we invented automobiles, we needed to create paved roads everywhere. Think of how our buildings and furniture are mostly rectangular; this is because it's easier to produce these shapes with the construction tools we have.
Today, we are reshaping our world almost wholesale to make it friendly to smart digital technologies. In other words, "the world is becoming an infosphere increasingly well adapted to AI's bounded capacities," as Floridi puts it. Just think:
We have cell coverage and wi-fi pretty much everywhere now.
There are barcodes on every box.
People change their furniture, gardens, etc., to be more navigable by robot vacuums and lawnmowers.
We carry sensors (i.e., smartphones) everywhere we go, constantly producing data as input for AI systems.
We may think we are doing these things for our own convenience and autonomy (and maybe in a very roundabout way we are), but the more proximate cause is that they create a better envelope for AI.
The world is becoming an AI-friendly place. And it couldn’t be otherwise. Floridi observes that whether we like it or not, we will adapt to AI. This is because we are lazy, and AI is inflexible but hardworking. Our laziness wins.
Reshaping the World
Looking toward the future of digital technology, it’s probable that we will continue to reshape the world, creating a better and better envelope for AI.
AI is successful at tasks that are easy in terms of the skill needed to complete them but complex in terms of the computing power needed. Dishwashing, for example, takes very little skill but is highly complex. Tying your shoes, on the other hand, is computationally very simple but takes high skill (fine-grained motor control).
The task for AI progress, then, is to convert high-skill activities into low-skill but computationally complex ones. Think of ironing shirts, which is both high-skill and high-complexity; computers won't be able to do it until we find a way to make it doable with less skill (and, of course, in a way that is commercially viable).
Essentially, Floridi writes, this means modeling more and more things as games. The game of chess, you probably know, was long the paragon domain for AI research. It worked so well because it is a type of game in which the rules constitute what is physically possible in the game. (As opposed to something like soccer, in which the rules only constrain what is legal to do in the game, not what is physically possible.) Chess takes little motor skill to play, and the rules are easy to learn, but it is immensely computationally intensive to play expertly.
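To make that distinction a little more concrete, here is a toy illustration of my own (it's not from Floridi's book): when a game is rendered as software, the rules don't merely forbid certain moves, they define the entire space of moves that can exist at all. A minimal sketch in Python, using tic-tac-toe rather than chess to keep it short:

```python
# A fully enveloped task: tic-tac-toe on a nine-cell board.
# The rules below don't just outlaw bad moves; they constitute
# everything that can happen inside this little world.

def legal_moves(board):
    """Every move that exists in this environment: the indices of empty cells."""
    return [i for i, cell in enumerate(board) if cell is None]

def apply_move(board, index, player):
    """Produce the next state. A move outside the envelope simply can't be made."""
    if index not in legal_moves(board):
        raise ValueError("that move does not exist in this environment")
    new_board = list(board)
    new_board[index] = player
    return new_board

# An agent playing here needs no motor skill at all: its whole world
# is a list of nine cells and a finite menu of options.
board = [None] * 9
board = apply_move(board, 4, "X")   # X takes the center
print(legal_moves(board))           # the eight remaining possibilities
```

Soccer can't be written down this way: its rulebook only says what is allowed, not what is physically possible with a ball on a field. That, in miniature, is the difference enveloping makes.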
As AI progresses, we’ll see more and more things modeled as games in this way, to make them analyzable as complex problems rather than difficult ones. “In short, we will seek… to deal with complex problems by purifying tasks and interactions in enveloped environments. The more this is possible, the more successful AI will be,” Floridi writes.
This work is a matter of design, and it brings up many ethical, legal, social, and political issues, which we're just starting to learn about (and which Floridi discusses in the rest of the book).
Who’s Making the World
"Who made the world?" the poet Mary Oliver asks in the opening line of one of her most famous poems. "Who made the swan, and the black bear? / Who made the grasshopper?" It is a deep question, perhaps deeper than it first seems.
Most people's answer, I guess, would be either nobody or God. But neither answer is satisfactory because, really, innumerable entities over vast expanses of time have played a role in making the world.
And today, the unintelligent but agential entity we call AI is another one of those forces making the world. But to what end?
Ports of Call
Not much this week—I’ve been catching up on reading books instead of articles!
Tracing the Recent History of AI Research: Reading Floridi’s new book, I learned about this piece in MIT Technology Review tracing the big-picture trends in AI research over 25 years. This article came out in 2019 and basically showed that every decade in AI research sees a new underlying vogue technology. The 2010s were dominated by deep learning, whose reign still seems strong in today’s age of the transformer. But will it wind down soon?
A Conversation on AI and Law: A recent episode of the Decoder podcast features a conversation with Lawrence Lessig, the illustrious legal scholar of our digital infrastructure. Lots of insights into the how and why of AI regulation.