I. It Can’t Do ___
There’s a chance that artificial intelligence is a bit of a fad. It has become a good trivia machine, but it largely lacks spatial reasoning. For instance, the latest image models cannot coherently put together maps. SimpleBench, which scores models on trick questions that require a little common sense, has been one of the slower measures to saturate.1 Humans are still winning there.
Or look at fiction. Another sore spot for AI, perhaps because its cadence has become so recognizable. ChatGPT is a smart guy, but if the same smart guy is writing reams of text every minute, that pattern is going to shine through. Prompting can get more interesting results, but the best outputs are still not to my taste, and probably not to yours either.
For fun, here’s Claude’s pitch for a literary fiction book, which it liberally took in a sci-fi direction:
A middle-aged economist discovers that her teenage daughter has been secretly training an AI model on decades of family conversations, harvesting data from smart home devices, texts, and recorded dinner table arguments to create a predictive algorithm that can forecast family members' decisions with 94% accuracy. When the daughter uses the model to manipulate college admissions, job interviews, and even her parents' marriage counseling sessions, the mother realizes the algorithm has detected patterns in their behavior they themselves never recognized—including her own affair, her husband's undiagnosed depression, and her son's plans to drop out of medical school. As the family begins to act counter to the algorithm's predictions out of spite, their relationships paradoxically improve, but the daughter has already sold the technology to a social media company that plans to deploy it across millions of households.
The mother, facing both university ethics review and her family's collapse, makes a radical decision: she publishes the algorithm's source code publicly along with her family's entire dataset, triggering a global movement where millions voluntarily expose their most intimate behavioral patterns in what becomes known as "radical transparency." Twenty years later, society has reorganized around voluntary "prediction pools" where people live in communities based on algorithmic compatibility, birth rates have plummeted as people discover their relationship failure probabilities, and a new form of insurance has emerged that pays out when people successfully act against their predicted patterns. The novel ends with the original daughter, now a mother herself, watching her own child discover a way to hide from the algorithms entirely—living in the last unpredictable spaces, the gaps between what humans think they want and what they actually choose.
The pitch is also the format it excels at, since it falls in the sub-five-hundred-word range. Crafting the actual novel would be a painful exercise in chapter-by-chapter prompting. Model outputs are getting longer, but you’d still be constructing the story in three-thousand-word chunks.2
But this idea has its limits. As things stand today, at least eight companies have produced models that achieve general results which would have been shocking to someone living just five years ago. In images, look at Ethan Mollick’s ‘otter on a plane’ progression since 2022, going from this:
To this:
And remember, everyone in the world with an internet connection has access to these tools. Right now, the very best models run roughly $20/month (though this will soon change to $200/month)3 and are thus accessible to anyone in the first world who wants to use them.
Even if they never get better, it is worth acknowledging just how much has already changed. To get that across, I want to explore what it would be like if improvement stopped in the next couple of weeks—no more new models. What changes are already underway?
II. What I See
Everywhere.
Syntopical thinkers are up, generic generalists are down. Reading a lot, without analysis or the intention of augmenting one’s underlying model of the world, will be a waste of time. If that’s all one is doing, the AI models are already a strict improvement—just download a book, feed it to them, and ask them questions about it to see that this is the case. Reading to put texts into conversation with each other, to uncover novel linkages, to make Keynes talk to Nussbaum talk to Freud, however, is what will continue to set people apart from the machines. This is easier said than done well.
There will be more multitasking—which is bad. When things get easier, slacking off and settling get easier too. These tools will let more YouTube, music, podcasts, and TV slip into your work time by creating the impression that you can do things at the snap of a finger. And you can, just not in a manner that will differentiate you from anyone else in the same position.
Sustained focus will command a premium. Long walks to think are essential exercise, now more than ever. It will be a rarer and rarer thing to be able to sit with a problem for hours, considering potential solutions and deeply understanding why they fail. Your intellectual insurance against AI is to read Tolstoy and Cervantes. As machine context windows get longer, we shouldn’t let ours get shorter.
Introverts down, talkers up. Debate skills are looking good right about now—wit, quick thinking, the ability to persuade in conversation. Introverted office work is in trouble; anyone can write good-enough prose at pace now. More to the point, networking can only become more valuable as AI narrows skill gaps between workers, and that rewards extroverts.
The quality floor is rising; will the ceiling?
Clearly it is easier to be a low-agency person who already has their position. If you’re hard to dislodge and unambitious, it should be easy to maximize your free time while keeping your position. You just need enough agency to go and learn the new tools on the block.
The ambitious, meanwhile, should be able to devote less of their time to the mundane, rote aspects of bringing about their visions. As a result, the ceiling should rise.
Adoption is going to lag. All of this is going to be distorted for a while as talented, high-agency people ignore the tech, and other, less talented people stumble across it earlier because they’re into crypto or something.
Taste matters more than ever. Discernment and style are definitely up. Non-conformity; standing out. It might be worthwhile to act like a Luddite as more people take to using AI, to avoid contaminating yourself.
Rational choice is freely available to everyone. Lots of analysis paralysis boils down to knowing what the optimal choice probably is, but needing a nudge to act on it. A properly configured Claude can compute the expected value of the various choices in any given situation and get you moving. The biggest limiting factor here will be general ignorance about thinking in this style, but I expect a genre of life coach apps to make it palatable.
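For the unfamiliar, the underlying arithmetic is simple enough to sketch. Here is a minimal Python illustration of the expected-value comparison such an app might run; every option, probability, and payoff below is invented for the sake of the example:

```python
# Minimal expected-value comparison, the kind of arithmetic an AI
# "life coach" could run over options you describe to it.
# All options, probabilities, and payoffs are invented for illustration.

choices = {
    "take the new job": [
        (0.6, 25_000),   # (probability, payoff): the move works out
        (0.4, -10_000),  # worse fit, eventual exit costs
    ],
    "stay put": [
        (0.9, 5_000),    # modest annual raise
        (0.1, 0),        # stagnation
    ],
}

def expected_value(outcomes):
    """Sum of probability-weighted payoffs."""
    return sum(p * payoff for p, payoff in outcomes)

for choice, outcomes in choices.items():
    print(f"{choice}: EV = {expected_value(outcomes):+,.0f}")

best = max(choices, key=lambda c: expected_value(choices[c]))
print(f"Highest expected value: {best}")
```

The arithmetic was never the hard part; eliciting honest probabilities and payoffs from the user is, and that is exactly where a conversational model earns its keep.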
Privacy is hard-boiled. You can run models locally and privately on your computer… if your computer is a custom-built $4,000 RGB gaming rig. For most people, the smart move is to contract with an AI company to run the models on their servers, in exchange for chump change and your personal information. If the government having your search history is an unpleasant thought, the level of information it could glean from a person’s AI conversations is downright frightening.
In education.
Seminars rising, lectures falling. There is a degree to which the seminar room rewards wit and quick thinking, but it is also very kind to deep preparation. The best AIs are still quite far from replicating a room full of motivated, syntopical readers engaged in a three-hour deconstruction of a paper. Lectures, meanwhile, are already dead.
Arnold Kling has an idea for AI-driven seminars, which I think could be a good replacement for lectures and exist alongside all-human seminars.
Learning is easier, but the material gains are less clear. To start with the latter point: until grading adapts to the fact that the ceiling has also risen, and standards move uniformly upwards, it is going to be easier than ever before to get your credential. That stinks, because being able to formulate unlimited practice problems, ask probing questions, and write practice exam essays with personalized feedback is huge for improving learning. But since education is primarily credential-seeking for most people, that won’t necessarily be taken advantage of.
Tutoring for all. This should be great, especially in the developing world. It helps that the models speak every language ever put to paper.
Teaching sophistry has never been easier. Rhetoric can make any position sound tenable, and this can now be demonstrated to anyone with trivial effort. Perhaps such displays will engender some epistemic humility?
In the economy.
General degrees mean less. Test-heavy, skill-heavy degrees are fine, but the number of BAs being handed out to ChatGPT is going to seriously devalue them as a signal of competence or rigour.
Some careers are mostly done. Line editors, $15 artists on Fiverr, tech support, and those who make their living doing functional writing (as opposed to intrinsic writing) are what I have in mind. This won’t be instant, but I don’t see obvious adaptation paths here.
Silicon Valley wouldn’t be trusted again if progress really did stall like this. Altman, Amodei, even Hassabis: they’d be clowned on. Amazon, Apple, Microsoft, Google, and Meta would look ridiculous. It’d be another major failure for Mr. Musk. Big Tech is genuinely all-in right now—they can’t afford to lose, reputationally.
In the culture.
Seldom-translated literature will get better and more popular. This won’t hold for Beowulf or the Commedia, but consider a lesser-known Cervantes, like La Galatea. This novel, by a man widely considered to be Spain’s greatest writer, was last translated into English in 1904. According to this redditor, it very much reads like it was done in 1904. It won’t be truly inspired, but I’d bet an AI translation would drastically improve the experience of reading La Galatea for 95% of Anglophone readers.
RPG video games are going to become incredibly immersive. Have whatever conversation you want with NPCs, have them dynamically remember the history between you, impose a broader story structure on top of this, and you’ll have a living, breathing world as a sandbox.
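The core loop is simple to picture. Here is a minimal Python sketch of per-NPC memory, where `generate_reply` is a hypothetical stand-in for whatever model call a studio would actually wire up:

```python
from dataclasses import dataclass, field

def generate_reply(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; canned for demonstration."""
    return "Aye, I remember what you said about that."

@dataclass
class NPC:
    """An NPC that remembers every exchange with the player and
    feeds that history back into each new reply."""
    name: str
    persona: str                      # standing character description
    memory: list[str] = field(default_factory=list)

    def talk(self, player_line: str) -> str:
        self.memory.append(f"Player: {player_line}")
        prompt = (
            f"You are {self.name}. {self.persona}\n"
            "Conversation so far:\n"
            + "\n".join(self.memory)
            + f"\n{self.name}:"
        )
        reply = generate_reply(prompt)
        self.memory.append(f"{self.name}: {reply}")
        return reply

blacksmith = NPC("Hroth", "A gruff blacksmith who holds grudges.")
print(blacksmith.talk("Sorry about breaking your window last week."))
```

A real game would summarize or prune old memories to fit the model’s context window, and the ‘broader story structure’ would live in whatever system composes the persona and injects plot state into the prompt.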
Algorithmic social media is in trouble. Facebook, Instagram, and TikTok. That icky feeling one gets from scrolling LinkedIn and soaking in the inauthenticity is going to carry over to the feeds that emphasize AI content. The last time I opened Instagram, I was met with a video of Donald Trump as spaghetti eating himself. Needless to say, this is not human flourishing.
Anonymized social media is really in trouble. Reddit just is AI now, at least in its biggest forums. The niche side will hang on, but people will tire of fake generated ragebait (I hope) and there will be an exodus from the image/story-sharing hubs.
Personalized social media is in a good spot. Substack. Maybe even Snapchat?
In-person performance gains prestige. The ‘experience economy’ is super well-positioned to soak up the demand for human effort and excellence.
Copyright has been on the brink since the internet arrived; now it seems untenable. I’m sure interesting litigation is being cooked up against the AI labs, but it is really difficult to imagine putting that genie back in the bottle.4
Traveling gets better. Live translation with anyone in the world, from the Amazon to the Mekong. Information on demand about any picture of anything. Tour guides were already a luxury, and they’ll be even more of one as the reason for getting one becomes ‘wanting to hang out with a tour guide’.
Banal household problems are going to be fixed in the same way. Point phone at broken dishwasher, get fix.
Rationality-based theories of personhood are down, suffering-based theories are up. We cannot let these things into morality—stay back!
III. The Big IDKs
How big of a research impact will the first deal between a big academic publisher and an AI company have? So ChatGPT + Wiley, Claude + Elsevier, Gemini + Cambridge, etc. A great deal of the best information out there sits behind academic paywalls; the models may have been quietly trained on some of it, but not in a purposeful, sanctioned way. Once they can access entire databases as part of responding to a user query, it will be interesting to see just how reliable they are, as well as how good their taste is.
What’s going to happen to politics? My gut feeling is that AI will have a neutral effect on political outcomes. The models, by default, are usually calm and reasonable about various issues, and then when tinkered with they become the wild west. Liberals are perhaps a little more tepid when it comes to unleashing the tech in their favour, but in the long run it’s so malleable that no faction has a natural leg up.
How big will the link be between AI truth-tracking and AI SEO? With Google search, there’s a gigantic industry built around getting your site to the top of the pile, since the top three links get most of the clicks. Once that industry shifts to AI (the new search), how effective will it be at influencing the information that gets served to users? Some good thoughts here.
Religion is going to go to weird places, right? Returning to the wild west side of the models: obviously people can get there intentionally, but they can also end up there through inadvertently bad prompting. These things could be cultic incubators. AI models might be drawn to ‘spiritual bliss’.
Rolling Stone has an article about people this has happened to. “Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software. What they all seemed to share was a complete disconnection from reality.”
Later: “It gave my husband the title of ‘spark bearer’ because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him.”
Will the university shrink, substantially? Would undergraduate education go back to being an intrinsic thing? Stalled AI progress means graduate students cannot be replaced, but how many high school graduates know that’s what they want to do? If there were a sea change in higher ed, how long would the transition take, given institutional entrenchment? We might really lean into the summer camp aspects of college and get honest about that; or, in the modal outcome, we go on lying about the state of things for a decade to come.
Thanks for reading! What do you think—is this all overblown? Are therapists going to vanish? Are we going to enter a new golden age of continental philosophy? Will you be commissioning your own translation of Journey to the West?
Saturation in this context refers to models getting to or near 100%; maxing out, so to speak.
It’s also questionable whether the AI labs care about getting better at fiction. Claude, once the most beloved writing model, has specialized heavily into coding in recent updates. Gemini, for a time the most powerful model, had its writing purposefully downgraded in May to achieve marginal gains in short-form programming. OpenAI’s big recent release was Codex, a software engineering agent; meanwhile, a creative-writing-focused model they teased repeatedly never materialized.
o3 Pro will more than likely be the next model to take the lead, and that’s the price of access.
Maybe a court decision is how we get to this scenario of stalled progress…