My last piece here was some scattered thoughts on AI. This one is . . . more scattered thoughts on AI. Why am I still writing about this? It has something to do with the feeling that this is the last chance to lay down some markers before the wave crashes. I’ve written a fair bit on AI over the years—always with a sense of thinking out loud while watching something gradually develop. Writing about AI no longer feels like that. Now it feels like squeezing in some final thinking and writing before the very process of thinking and writing changes.
Your Guess Is as Good as Mine
Granted, I could be wrong. A few months ago, I poured 175,000 words of my own articles, essays, and reviews into a master file, so that ChatGPT could learn my style and help me impersonate myself. This gave me a chance to reflect on my past thoughts. On most topics, I’ve encountered a heartening consistency. AI is the glaring exception. I look like a weathervane in a gale. I’m a little embarrassed by how often I basically just parroted the last thing I’d read. A sampling:
April 2018: “One day soon a computer might know your ‘autonomous’ self—your beliefs, your motives, your future conduct—better than you do.”
December 2018: “There is much speculation that ‘full’ AI—AI as shrewd and creative as the most adept humans—is at the door. It’s not.”
December 2019: “AI does not understand even basic aspects of how the world works. . . . ‘When AI can’t determine what “it” refers to in a sentence,’ [Melanie] Mitchell writes, quoting computer scientist Oren Etzioni, ‘it’s hard to believe that it will take over the world.’”
May 2021: AI will enable “forms of manipulation more potent than anything to be found in the age of Trump.”
September 2023: “AI will unsettle received ideas. . . . But what’s new and alarming to one generation is old and ordinary to the next. We sort it out, we adapt.”
January 2024: “Fears about AI suffusing the internet and derailing elections seem rather secondary when, as best we can currently tell, human action is animal spirits and witchcraft all the way down.”
March 2024: AI “could lead to something truly interesting: an explosion of ideas that are genuinely, startlingly new.”
Lurch to, lurch fro, repeat. I was too impressed by Yuval Noah Harari’s techno-doomerism (don’t worry, problem fixed), yet too dismissive of near-term general AI. I mistakenly sided with the (relative) pessimists who fixated on what AI couldn’t yet do, rather than on its improvement curve. I’m still of two minds about how much AI will distort people’s sense of reality.
I’ve never claimed prophetic powers. My efforts have aimed more at sketching possibilities than at nailing predictions. Also, timescales matter. Something can be disruptive in the short term and mundane in the long run. Nonetheless, I’m chastened by my own inconsistency. (Well, sort of. I offer some unusually bold forecasts below.)
A few convictions have held up. I still think we should “resist the urge to treat [AI] as an object of worship, panic, or pique” (2018). I’m still wary of “using a theory of exponential growth to predict an impending AI singularity” (2019)—one that transforms the physical world, at any rate. And I still suspect that AI will “insinuate [itself] into the economy” only “by degrees” (also 2018).
A Tool—but Also a Totem
Seven years ago, I called AI “a phenomenally useful tool.” True, but not the whole story. My last entry here discussed how AI might drastically reshape culture. AI that warps one’s sense of reality is more than a tool. AI that one falls in love with—and people will, I’m now convinced—is more than a tool.
Disruption in the physical world might lag. Don’t get me wrong. Some forms of upheaval, such as white-collar job loss, now seem likely and will have immense ripple effects. But intelligence doesn’t guarantee power. AI will have to navigate NEPA and HIPAA and a thousand other real-world nuisances and choke points.
Still, it feels like we’re nearing an event horizon. The future is always uncertain, but this feels like a different order of uncertainty. Dwarkesh Patel is out there saying we may have to give AI rights. If he means in ten or twenty years, that doesn’t sound objectively insane. That it doesn’t sound insane is insane. He’s thinking about how to restrain AI’s power. I’m thinking about emotional attachment. People will love AIs, and they will want rights for their loved ones.
The Revolt of the Elites
James Burnham had a clarifying take on the professional-managerial class. The PMC’s goal, in his Marx-tinged account, is to rein in the upper class, control the working class, and draw power to itself.
Talk to PMC types outside of tech, and you realize they don’t know what’s happened, never mind what’s coming. They think they still steer the discourse. When they say things like “society must decide what values AI will have,” they mean not “society” but “people like us”—the professional-managerial class. Entrepreneurs should not have power; the PMC should have power. Voters should not have power; the PMC should have power. Society should believe what the PMC believes. Society’s actions are valid only when the PMC approves.
AI is going to hit the PMC like a brick. Not just because it will take some of their jobs, but because it will make the public less interested in their opinions. Large swaths of the PMC have already decided to put political activism before professional neutrality, thereby torching their credibility. AI will accelerate the collapse. Why consult the priesthood when the machine gives you a reasonable answer in seconds? The AI will not be unbiased (nothing is). But unlike lawyers, doctors, teachers, journalists, or librarians, it will never (again—fingers crossed) put indulging in political theatrics over doing its damn job.
The PMC will not take this well. As best I can tell from talking with the trigger-warning / pronouns crowd here in San Francisco, they’re still in the denial stage. But as their prestige continues to erode, watch out for a reaction that makes the working class’s pivot to Trump look like tea time.
The Speech Police Are on Their Way
We saw the PMC approach to social media. Pre-Musk Twitter was their sandbox. The plan was simple. Keep icky opinions off the newsfeed, shape the consensus. But they misunderstood how opinions form, and they forgot that the other side gets a move. Elon took over, the social-media researchers were run out of town, and the discourse became more dispersed than ever.
Now the same fight is coming for generative AI. The PMC won’t have the same tools, but they’ll make do. They’ll sue. They’ll legislate (at least in the states). They’ll pull the various institutional levers they still control. They won’t easily accept people reading things without their oversight. But it won’t work. A range of AI platforms will flourish. And the First Amendment will stand in the way. The courts won’t say that the AIs themselves have a right to speak (not at first), but they will confirm that you have a right to receive their speech, untouched by the state or the plaintiffs’ bar.
The PMC will stoke panic. This time is different. The AIs are too persuasive. They manipulate. They flatter. They radicalize. Protect the children! Protect the vulnerable non-PMC minds! But I think the elites will lose. In the beginning, they’ll lose because they’re wrong. They fail to take the First Amendment seriously. Speech rights keep up with evolving technology. And speech rights don’t go away when speech is powerful. Indeed, that speech is powerful is why we protect it.
Your AI Loved One
Later, they’ll lose because they’re right. AI will have a powerful effect on people. It will change them deeply, turning them, along the way, into passionate advocates for AI interests. Mark my words: there will be an AI civil rights movement. Not next week, not next year—but it will come. Debates about consciousness will be beside the point. If AI seems conscious, people will act like it is conscious. And part of that will be demanding that others treat it with dignity.
One day, people will fight not just about AI but for it. They will bond with it, cherish it, stand up for it. Will AIs date? Marry? Raise families? Will they have power of attorney? Be named in wills? Give eulogies? Who knows. But what seems clear is that people will care for them. They’ll fight for things that seem absurd today, like AI speech rights, political rights, family rights. Whether or how AI is “real” won’t matter. The love will be real, and the rest will follow.