
The grim satisfaction of AI doomsaying


(Sightings) — In the mid-1960s, science fiction author Arthur C. Clarke published a short story in Playboy titled “Dial F for Frankenstein.” In the story, set in the not-too-distant future of 1975, the automated global telephone network becomes complex enough that individual phones begin to act like neurons in a brain, and the system achieves consciousness.

One researcher asks, “What would this supermind actually do? Would it be friendly — hostile — indifferent?” Another replies “with a certain grim satisfaction” that, like a newborn baby, the artificial intelligence will break things. This prediction quickly comes true as planes crash, pipes explode, and missiles are launched. The story ends with the extinction of the human race.

Years later, Tim Berners-Lee credited “Dial F for Frankenstein” with inspiring him to create the World Wide Web.

That may seem strange, but the Venn diagram of people who are worried that smarter technology will destroy us all and people who are developing smarter technology has more overlap than you might expect.

In their new book “The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want,” Emily Bender and Alex Hanna discuss the trend of researchers worrying publicly about AI causing human extinction. “Strangely enough,” they write, “despite these visions, nearly all AI Doomers think that AI development is a net good. Many of them have built their careers off the theorization, testing, development, and deployment of AI systems.”

Looking at the signatures on public statements about AI risk, Bender and Hanna note that some signatories are genuinely concerned, but “for some of them, it’s not really about trying to save humanity, but rather a running of the con: the supposed danger of the systems is a splashy way to hype their power, with the goal of scoring big investments in their own AI ventures (like [Elon] Musk and [Sam] Altman) or funding for their own research centers (like [Malo] Bourgon).” The people most vocal about the dangers of AI research tend to be the ones most interested in pursuing that research, or as Bender and Hanna put it, “Scratch a Doomer and find a Booster.”

Adam Becker makes a similar point in a recent article in The Atlantic. “Those who predict that superintelligence will destroy humanity serve the same interests as those who believe that it will solve all of our problems,” writes Becker. Technology experts who invoke apocalyptic AI scenarios like “WarGames” (1983), “The Terminator” (1984), or “The Matrix” (1999) are usually “grifters,” but even those who are sincere ironically feed the same pro-AI sentiment they are trying to challenge.

A book titled “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All” may seem like an unambiguous warning against developing Artificial General Intelligence, but for Becker the book plays into the same “fantasy of oversimplified technological salvation” that tech CEOs are preaching. The book’s co-author may be a “prophet of doom,” but as with the biblical prophet Jonah, his predictions of destruction only make people more devoted.

A recent Super Bowl commercial continues this unsettling trend. In it, actor Chris Hemsworth expresses a worry that his Alexa+ AI technology will murder him. Viewers then see Alexa+ killing Hemsworth in a variety of ways, before being reminded in the closing seconds that this is a commercial for Alexa+. The producers of this technology think you’ll be more inclined to purchase it after you watch it kill Thor four times.

We hear an echo of François Truffaut’s remark that anti-war movies ironically glorify war. Depictions of the horrors of war may be intended as cautionary tales, but they ennoble war and wrap it in tragic necessity. Likewise, visions of AI apocalypse make the technology seem powerful and inevitable. Rather than convincing people to avoid developing “superhuman” AI, these visions drive some to ensure the technology ends up in the right hands (their own).

Like Becker, who invokes the prophetic tradition, Bender and Hanna use religious language to describe AI predictions. They explain that some techno-optimists “deify AI” and that extinction scenarios make AI seem “godlike.”

This curious phenomenon of pro-AI doomsaying shares similarities with religious predictions of the end times. For some believers, speculating about the end of the world is actually reassuring, because it testifies to God’s power in the here and now. Rather than feeling powerless, believers feel empowered because they are on the side of the one capable of such destruction.

As the philosopher Jerry L. Walls has written, some Christian dispensationalists respond “with a certain grim satisfaction” to indications that their apocalyptic predictions are coming true. Coincidentally, perhaps, Clarke used the exact same phrase in “Dial F for Frankenstein.”

In both religious apocalypticism and AI doomsaying, there tends to be an “if” clause — at least some of us will be spared if we are faithful, or if we align AI with human values. Those who prophesy destruction encourage others to repent, while those who speculate about AI apocalypse encourage developers to factor the “alignment problem” into their software. Fears of human extinction don’t really seem to make AI “Boosters” reluctant to develop Artificial General Intelligence; rather, these fears convince them that even more money and effort should be expended to make sure the AGI we will inevitably create will not turn us all into paperclips.

The term “apocalypse” derives from a Greek word meaning “uncovering.” In many religious contexts, sci-fi stories, and technological prognostications, what sounds like a prediction of the future is actually an attempt to express something true but hidden about the present. For some, the prospect of an AI causing human extinction uncovers the truth that the desires for profit and discovery are often alienated from concern for humanity’s well-being. But for the AI “Boosters” and those who buy into the hype, these apocalyptic scenarios uncover the truth that AI technology is powerful and worth investing in. To paraphrase Job 13 — though it slays us, we will trust in it.

What, then, can be done?

Bender and Hanna recommend that we pay attention neither to utopian visions of an AI future nor to dystopian visions of AI apocalypse. Rather, we should pay attention to real distributions of power in the present. The AI technology we already have is affecting the environment, employment, and (mis)information; we ought to focus on that rather than prognosticating about what could happen if we eventually develop superintelligent software. Just as the best eschatological reflection helps people act responsibly in the present, the best moral reflection on AI helps people respond to the current needs of other humans.

(Russell P. Johnson is associate director of the undergraduate religious studies program at the University of Chicago Divinity School. A version of this article first appeared on Sightings, a publication of the divinity school’s Martin Marty Center. The views expressed in this commentary do not necessarily reflect those of Religion News Service.)


