I’m Tim Gorichanaz, and this is Ports, a newsletter about design and ethics. You’ll find this week’s article below, followed by Ports of Call, links to things I’ve been reading and pondering.
I attended a conference last week on the theme of college teaching in the age of generative artificial intelligence. The tenor of the conference was: Tools such as ChatGPT and DALL-E are here, so how can we incorporate them into teaching and learning?
There were, along the way, several discussions of AI ethics—at least when that topic is held under a certain light. Voices were preoccupied with, for instance, dealing with plagiarism in the age of AI, how to address AI bias (perhaps through clever prompt engineering?), fact-checking the output of AI systems, addressing equity in terms of who has access to the $20 a month for better tools, and so on.
Again, these are all valid questions of AI ethics, but in my view they are all missing the point. I, for one, don’t think these tools should be used at all.
Let’s imagine the topic was the ethics of sweatshop labor rather than AI. The kinds of conversations we had at this conference are like worrying over the details of a one-in-one-out policy for buying clothes when you shop at H&M (“Does each sock count separately?”), or arguing that everyone should shop local to cut down on carbon emissions. Such proposals are fine as far as they go, but they do nothing to improve the unjust sweatshop conditions at the heart of the operation.
Now, if you didn’t know anything about sweatshops, you could be forgiven for missing the point.
But at this particular conference, there was, in the middle of the second day’s keynote, a fleeting mention of the real harms of generative AI systems, such as the underpaid and sometimes traumatizing human labor required to make these systems work and their massive carbon footprint. But the conversation was swiftly whisked on to questions of how to engage students using generative AI.
Is Resistance Futile?
Continuing with the sweatshop analogy, we might think back to the 1990s, when it became widely known that Nike was manufacturing its goods with what amounts to slave labor—unsafe and abusive conditions, low pay and so on. As a kid I remember having Weekly Reader discussions on this, and one of the images that sticks in my mind was of a Pakistani child who worked in a soccer ball factory.
Following the scandal, Nike seems to have reformed much of its manufacturing. But more broadly, the “fast fashion” industry is larger than ever, and it’s growing faster than other parts of the apparel sector.
And the issue is not limited to clothing. Nowadays, we know that the people in China who build our iPhones live in regimented work camps far from home. They face mandatory overtime, workplace bullying and unsafe conditions, and are sometimes driven to commit suicide. (Coming to an India near you.)
But I still have an iPhone, and right now I’m wearing a Champion (rated “not good enough” by Good on You) sweatshirt. So is resistance futile?
Some Promising Trends
It may be that the road to a better future is a bumpy one, and that we all have to make compromises along the way. (In the meantime I could comfort myself by remembering my iPhone is now more than four years old, and saying I’ll keep this sweatshirt till it’s threadbare.)
On a larger scale, maybe we’re seeing evidence that resistance isn’t futile. The very existence of a site like Good on You, which rates clothing manufacturers and suggests ethical and sustainable brands, says something. In food, Fair Trade is growing: In 2006, there was just over €800 million in Fair Trade sales; by 2016, that number had grown to €8 billion. Fair Trade membership and sales continue to grow. We’re also seeing emerging trends toward shopping local, supporting small businesses and so on.
So maybe it’s not too late to see similar movements in generative AI. (It’s funny even to think that such a young technology could already be too late for anything.) It’s possible to imagine ethically sourced datasets, where contributors consented and were compensated, and where human reinforcement trainers are paid fairly and given appropriate mental health care.
But the journey may be uphill. Note our current situation: The tech sector assures us that widespread AI technologies are inevitable, that they will revolutionize society, that anyone who does not accept them will be left behind. This is akin to stocking our stores with sweatshop goods, building Walmarts in every town and driving out all the competition, so that it is all but impossible to buy anything else.
Higher education (and surely other industries) is implicated in the hype: It started with fears of cheating early in 2023 and has led to a full year of policy development, conferences, research and so on, all under the assumption that these tools are here to stay and perhaps even good. Instead of addressing the root causes of cheating, we went haywire on AI.
A Wishlist for Ethical AI
Ethical generative AI is not impossible, and perhaps we’re even moving toward it. As we continue on the journey, here’s my wishlist:
Fair datasets: Transparency about what data is included in a given AI model’s training set and how it was obtained. The people who originated the data should know about it, have given consent, and have been fairly compensated.
Fair pay and care for human workers: AI is not magic; it relies on legions of people to, among other things, interpret and moderate the output of AI systems so that, ultimately, that output is as desirable as possible. This work is demanding both physically (sitting in a chair all day is bad for the body) and mentally (these workers encounter all forms of harmful content, just like social media content moderators). They should at the very least be paid fairly and given access to care.
Better models, not larger ones: Currently the trend in generative AI is to build bigger and bigger models, always trying to eke out that last 0.0001% of performance. As a result, we’ve created an AI industry that is wildly inefficient from an environmental perspective. We should begin optimizing not just for performance but also for environmental footprint.
Ports of Call
Questions of Taste: Is good taste one of those things you either have or don’t, or can we develop it? A recent conversation on Ezra Klein’s podcast goes into some of the simple-but-difficult precursors for taste—chief among them, paying attention. The interviewee, Kyle Chayka, has a new book out on how the algorithms of search and social media shape our taste.
On Creativity: A recent episode of The Tim Ferriss Show discusses idea generation, creativity, taking risks and making decisions. A good listen for anyone who does anything creative!
Paper vs. Digital for Learning: A new study adds to the evidence that paper-based learning shouldn’t be thrown out in favor of screens.