AI, Misinformation, and Witchcraft
The “experts” are afraid of AI memes. The world is more interesting than that.
DIRECT FROM THE TAKE FACTORY: a new Tech Policy Podcast. Is AI about to “flood” our “screens” with “misinformation” that’s “dangerous to democracy”? Hell, is it going to spark an “infocalypse” and bring about the “collapse of reality”? My colleague Ari Cohn and I discuss. Notwithstanding those quotes from recent stories in the press, the answer is probably no.
Not this year, anyway.
How about next year, and the year after that? Maybe the supply of disinformation will soon be infinite. Maybe the internet will soon be a maze of synthetic content; a digital hall of mirrors; crazy like Vanilla Sky. Or maybe, after its recent big advances, AI will now enter a stretch of small, iterative improvements. Maybe next decade’s internet will look a lot like this one. We don’t know.
You ever stop and marvel at how uncertain the future is? Stupid question. Of course you do. How could you not?
Before he forever linked himself in people’s minds with Claudine Gay (former Harvard president, DEI robber baron, bad writer), I thought of Bill Ackman as the guy who bet a billion dollars that Herbalife was on the verge of collapse. The end was nigh, he said; the company was running a pyramid scheme. And you know what? He seemed to have them dead to rights. Given the information to hand, he made a good bet. But he got it wrong. Unseen forces were at work. Ackman closed out his short position long ago, at a massive loss, and Herbalife limps on. Life is unpredictable like that.
Weird little example, I know. That said, stock-picking is certainly one way to discover the stochastic nature of human affairs. And other ways aren’t hard to find. It’s amazing how little we know, in fact, and how much we run on faith, habit, instinct, and animal spirits. Antonio García Martínez (my boy) has this to say in his book Chaos Monkeys:
As I observed more than once at Facebook, and as I imagine is the case in all organizations from business to government, high-level decisions that affected thousands of people and billions in revenue would be made on gut feel, the residue of whatever historical politics were in play, and the ability to cater messages to people either busy, impatient, or uninterested (or all three).
I’m reading a great book called The Knowledge Machine, by Michael Strevens. Science works—we have antibiotics, rockets, and all the rest. But why does it work? What, exactly, is the “scientific method”? We’re not sure. (No, it’s not all just a matter of “falsifiability.”) Strevens has thoughts on what the answer is, but he acknowledges, as he must, that “the question of the scientific method is one of the most difficult, most contentious, most puzzling problems in modern thought.” In practice, “scientists seem scarcely to follow any rules at all.” Even when it comes to science, we’re just muddling through.
Let’s not get carried away, of course. In the end, to repeat, science works. I’m not about to join Paul Feyerabend in likening it to voodoo and witchcraft. (Though I might well use the word “witchcraft” to describe politics, war, and other endeavors that involve lots of humans interacting with each other.) Have you noticed, by the way, that lately it’s right-wing politicians who most resemble Feyerabend (a Berkeley radical if ever there was one) in their philosophical anarchism? According to J.D. Vance, our “system of education” sometimes “rewards mediocrity.” That Gay was tapped to lead Harvard shows as much. Fair enough. From there, however, Vance leaps to exclaiming that “you should ignore the economists,” that doctors “are not smart or accomplished,” and that attorneys “have no special legal knowledge.” He cites some discrete groups of experts, but his indictment is clearly aimed at experts as a class. “You are ruled by thousands of people who are just mediocre,” Vance declares, and you should “withdraw [your] consent” (whatever that means). If he really distrusts experts this much, Vance should never ride in a car or fly on a plane.
And yet: there’s stuff like this:
This is what the World Economic Forum finds when it polls “1,490 experts across academia, business, government, the international community and civil society.” That these experts would agree to opine, with their official expert™ hats on, on such a range of matters—most of them far beyond each given expert’s area of expertise—is an embarrassment in itself. That the experts’ single biggest fear is the power of misinformation—it could bend the fragile minds of the little people!—is indeed as risible as Nate Silver suggests. Ari and I get into this on the podcast. The elite panic over misinformation is not evidence-based. It’s a vibe. It’s a prejudice.
(It turns out that almost 40 percent of the “experts” in this survey hail from Europe. So there’s the first problem, since, as everyone knows, all the worst experts are European.)
Strevens: “In the scientific process, the weighing of evidence . . . is seldom objective, seldom particularly methodical, always open to personal and political influence, and ever issuing decisions that are guided as much by expedience as by logic.” A lot of progress, it seems, is just trial and error. Put in economic terms, we’re evolving toward efficiency even as most people, most of the time, behave impulsively and irrationally, like the monkeys with big brains that they are.
As Ari and I try to explain, AI, misinformation, and AI-generated misinformation are the bright shiny objects. It’s easy to find bullshit on social media (AI-generated or not) that rubs you the wrong way. It’s easy to latch on to that material, to yell at the platforms about it, to cry about it at hearings, and to write bad legislation that revolves around it. So that’s what people do. What’s hard is figuring out how beliefs actually form, take hold, and spread. How do people construct their opinions? What makes them change their minds? What prompts them to act? We don’t really know. We might never know. But it seems like that’s what we should be talking about. Fears about AI suffusing the internet and derailing elections seem rather secondary when, as best we can currently tell, human action is animal spirits and witchcraft all the way down.
Tech Policy Podcast #363: AI and Elections
Tech Policy Podcast #358: Information Animals Fighting Information Wars