I’m Tim Gorichanaz, and this is Ports, a newsletter about design and ethics. You’ll find this week’s article below, followed by Ports of Call, links to things I’ve been reading and pondering this week.
In 1965, the average CEO made 15 times more money than the average worker. In 1989, it was 45 times more. Today, CEOs on average make about 300 times more than their workers. The middle class is disappearing.
The rich get richer and the poor get poorer. This phenomenon occurs not just in economics but all over the place. Academic papers that already have many citations are the ones that attract still more. Children in school who are perceived as smart get more attention and then get even smarter. The same goes for sports.
It’s been called the Matthew effect, after Jesus’ words in the Gospel of Matthew (Mt 25:29) at the end of the parable of the talents: “For to everyone who has, more will be given, and he will have an abundance. But from the one who has not, even what he does have will be taken away.”
AI as the Great Equalizer?
The promise of AI has been ultimate democratization and equality. Everyone will be able to work faster and more productively. Everyone will have access to humanity’s greatest invention. People will work less, and they’ll have to find other things to do with all their free time.
But such promises typically accompany new technologies, and things have never worked that way. (In 1930, the economist John Maynard Keynes predicted that by 2030 people would only work 15 hours a week.)
So we ought to have been skeptical of such claims. Alas, we are easily beguiled. Maybe it’s that we’re always hoping.
Recent developments in generative AI products for academic research have continued to stoke these hopes. This month both OpenAI and Google released products called Deep Research, which promise to retrieve and summarize academic articles to help researchers frame and answer research questions. They purportedly specialize in helping write literature reviews—summaries of the current state of understanding on a topic.
For now, Deep Research is prohibitively expensive for most people—OpenAI’s offering costs $200 a month. That’s another way in which the rich get richer: only those who can already afford it get to benefit from it.
But there are plenty of other AI-powered tools for researchers, some of which offer free or much cheaper features. A couple of months back I learned about SciSpace, an AI chat-based research tool targeted at students and researchers. SciSpace offers an “all-in-one” platform for doing research: search the literature, get relevant articles summarized, generate paraphrased versions, and even revise your text to make it harder for AI detectors to flag!
In most cases SciSpace can’t access the articles themselves, since they are usually behind a paywall. (It should raise some eyebrows that the tool happily summarizes articles it cannot access.) The assiduous SciSpace user can request the full text from the paper’s author right on the site. In fact, I learned about SciSpace when a user emailed me through the platform asking for access to one of my papers. The personalized message read, “Kindly send me the pdf.”
The kicker was that this particular article was open-access anyway, so if the user had just searched on Google, they would have found the full paper. So there’s the heart of the danger of an “all-in-one” tool: it may blinker us from using the rest of our brains.
The Matthew Effect in Generative AI for Research
When the generative AI wave started, the initial research showed a decrease in inequality, giving the most benefits to low-skilled workers. These findings fed into the familiar (and perennially false) narrative and no doubt fueled the hype.
But more recent findings show that the rich get richer. The Economist reviews several illustrative examples:
Aidan Toner-Rodgers of MIT, for instance, found that using an AI tool to assist with materials discovery nearly doubled the productivity of top researchers, while having no measurable impact on the bottom third. … Elite scientists, armed with plenty of subject expertise, could identify promising suggestions and discard poor ones. Less effective researchers, by contrast, struggled to filter useful outputs from irrelevant ones.
The Sam Altmans and Ethan Mollicks of the world might reply that users just need to learn to prompt better. If only they were better users of generative AI, they would reap the benefits.
But the research is showing that becoming a high achiever with generative AI is not (just) about learning to prompt the AI better. Rather, it suggests that an effective AI user must first get the training and build the expertise outside of AI.
So there’s the key danger of using generative AI tools for cognitive work: if we use them too early, we hamstring our ability to use them well.
Alas, we may not be very good at determining when we’re ready. AI users may fall prey to the Dunning–Kruger effect, which is the observation that low-skilled people tend to overestimate their abilities, while highly skilled people tend to underestimate their abilities. People who aren’t ready to add AI into their productivity arsenal may do so too early, to their detriment; and people who are ready may not, perhaps limiting their potential.
Three Pitfalls of Using Generative AI for Research
Using generative AI for research risks three major pitfalls, as summarized in another recent Economist article reviewing the performance of OpenAI’s Deep Research for economics research.
First, while Deep Research works okay for simple queries, such as reporting a country’s unemployment rate in a certain year, it fails at more complex or creative questions. “It wrongly estimates the average amount of money that an American household headed by a 25- to 34-year-old spent on whisky in 2021, even though anyone familiar with the Bureau of Labour Statistics data can find the exact answer ($20) in a few seconds.” Of course, it will very confidently offer that wrong estimate, and a novice user will be none the wiser.
Second, Deep Research offers the consensus, mainstream view of a given topic, in part because it favors easily accessible sources such as newspapers and magazines. This may be appropriate for high school reports, but the intellectual cutting edge of a topic often sees nuances or expresses emerging views that are not yet reflected in the popular press. Deep Research provides the conventional wisdom on topics even in places where the view of experts may differ.
Third and most serious is “the idiot trap.” In short, writing is thinking; so if you are not writing, you are not thinking:
Paul Graham, a Silicon Valley investor, has noted that AI models, by offering to do people’s writing for them, risk making them stupid. “Writing is thinking,” he has said. “In fact there’s a kind of thinking that can only be done by writing.” The same is true for research. For many jobs, researching is thinking: noticing contradictions and gaps in the conventional wisdom. The risk of outsourcing all your research to a supergenius assistant is that you reduce the number of opportunities to have your best ideas.

Invest Your Talents
Let’s return to the parable of the talents, which is the source of the Matthew effect.
In the story Jesus tells, a rich man was going on a trip, and he entrusted three servants with his goods, measured in “talents,” a unit of weight equal to about 80 pounds. Two of the servants invested their talents, doubling their value by the time the rich man returned. The third servant “went off and dug a hole in the ground and hid his master’s money.”
The man was pleased with the first two servants and promised to give them greater responsibilities and rewards in kind. But the third servant had squandered his opportunity. His talent was given to the servant with the most, and he himself was thrown “into the darkness.”
“For to everyone who has, more will be given, and he will have an abundance. But from the one who has not, even what he does have will be taken away.”
The lesson is that taking a little risk is better than playing it safe, because some risk is necessary for reward. “A ship in a harbor is safe, but that’s not what ships are built for,” goes the aphorism.
At the risk of paternalism, using AI for research as a student or entry-level employee is the play-it-safe route. It’s not investing in your skills. Rather, it’s surrendering your cognitive development to a big tech company, burying your talent in the ground.
On the other hand, if you can put in the work to multiply your talents—taking the time to read and digest, undertaking the struggle of creative thinking, internalizing the relevant literature in your field of study—then in the end you may even be given the talents of those who squandered theirs.
Ports of Call
Big River Man: A colleague told me about this heavy-drinking Slovenian man who does ultramarathon swims down the world’s longest rivers. The 2009 documentary Big River Man received numerous accolades when it first came out, and you can watch the whole thing on YouTube. It’s not only fascinating, but unexpectedly hilarious. “He has a special passport from the Slovenian government to enter into a giant cave.” (Shout out to Andrew!)
Global Risks Report: Each year the World Economic Forum publishes a report on the biggest risks facing the world in the coming year(s). The 2025 Global Risks Report is out, inviting us to reflect on the world we’re in. Of note: the continued trend of declining optimism worldwide.