
Apocalypse How? Carrboro’s Phil Torres on Nanobots, Biotech, A.I., and Other Onrushing Threats to Our Species 

Phil Torres (photo by Adam Graetz)

Humanity has long been obsessed with the end of the world. All of the major religions are built on eschatological bedrock; parsing the apocalypse as an act of God befitted a world threatened mainly by natural disasters. But as these metaphysical beliefs wane in the uniquely imperiled twenty-first century, we are waking up to a new, almost incredible, but very real set of threats to our species' survival.

Instead of a volcano eruption or meteor strike, we have to worry—the day after tomorrow, if not quite today—about being wiped out by designer pathogens, subjugated by superintelligent computers, or mulched into gray goo by self-replicating nanobots. Because these dangers emerge from our own godlike technological powers, we can no longer turn to a distant deity, or even something as vague as fate, for succor or recrimination. If the worst comes to pass, we'll only have ourselves to blame.

Carrboro's Phil Torres illuminates this fascinating, unnerving terrain in his new book, The End: What Science and Religion Tell Us About the Apocalypse. Torres, who specializes in existential risk studies, trained in philosophy at the University of Maryland and Harvard and in neuroscience at Brandeis. His learning fuels a book that is academically rigorous but accessible to a general audience, aided by his personal story of coming to a scholarly interest in the apocalypse through a religious upbringing. The INDY recently sat down with Torres for a pleasant chat about humanity's extinction.

INDY: How did you get interested in the apocalypse?

PHIL TORRES: I grew up in a fundamentalist evangelical family; the philosophy was sort of Dispensationalist. There was always this background of eschatological expectation—lots of discussion about the Rapture. I have a vivid memory of being afraid Bill Clinton was the Antichrist. Eschatology is the heart of the major world religions. If you were to excise it, there really wouldn't be anything left. It's the ultimate hope.

Around 2001, a new field, existential risk studies, began to pop up in universities, notably Oxford and Cambridge. To an extent, their warnings are similar to those of religious people, but with a completely different basis: one in science and evidence. Apocalyptic tendencies go back at least to Zoroaster, two millennia before Christ. But since 1945 and the atomic bomb, when the possibility of secular apocalypse emerged on the world stage, there has been a proliferation of risks associated with biotechnology, nanotechnology, and artificial intelligence. So first I was drawn in by religion, and then, ultimately, fascinated by the possibility of human extinction through scenarios based on legitimate science.

Tell us more about existential risk studies.

We used to be haunted by improbable worst-case scenarios, like volcanic eruptions, but the likelihood of extinction was quite low. Many say 1945 marked the first big-picture hazard (though you could argue it came earlier, with the mass adoption of automobiles burning fossil fuels), and a lot of scholars say the fact that we survived the Cold War was luck. We entered a new epoch in which self-annihilation became a real possibility, with climate change right behind it. Peering into the future, you can see the threat rising. Existential risk studies is focused on understanding these risks and determining strategies for eliminating them. It's necessarily partly speculative, but if feeling around in the dark is the best we can do, let's do it. Now is the time to think as clearly as we can about these threats.

What are the existential risks of biotechnology and synthetic biology?


The primary danger is the creation of designer pathogens. Nature has a check on lethality: a germ that kills too quickly doesn't reach its next host, so natural selection keeps it in bounds. But if you were to weaponize Ebola to make it more contagious, you don't need selfish genes; you can give it properties it would never naturally acquire. Biotechnology is about modifying natural structures; synthetic biology is about creating new ones, which allows you to create entirely new germs. It's entirely plausible that the lethality of rabies, the contagiousness of the common cold, and so on could be consolidated in a single microbe. If such a microbe were effectively aerosolized, it could create a pandemic of unprecedented proportions.

What about nanotechnology?

One danger is the creation of nanofactories, which, theoretically, could be small enough to fit on your desktop. You'd feed one a really simple molecule and, moving atoms individually, it would assemble objects. All the properties around us (transparency, hardness, softness) are just built up combinatorially from the atomic level, so a nanofactory could build virtually any object: a new computer, clothes. Those are beneficial things, but you could also print out weapons. This will enable individuals to acquire arsenals.

How is it different from 3-D printing?

3-D printing is also called additive manufacturing; you feed the printer plastic and it builds up an object. A nanofactory, by contrast, synthesizes the material itself from the atom up: plastics, metals, whatever. The other risk of nanotechnology is building immensely more powerful computers, which could feed into the danger of superintelligence. And one more risk involves autonomous nanobots rather than a nanofactory.

Superintelligent nanobots?

Kind of the opposite, actually—these nanobots are really dumb. Rather than moving molecules around in a factory, these are microscopic robots you can program. Imagine swallowing the surgeon so it can fix your heart. But the other possibility is nanobots designed to self-replicate. You drop them on a table and they start manipulating its molecules to create clones. A terrorist with the ultimate suicide wish could release a swarm of mindlessly, exponentially reproducing micromachines and not implausibly turn the Earth to dust. Scientists call this the "gray goo scenario." That's an issue of extraordinary power and increasing accessibility, which also applies to biotechnology and synthetic biology.
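
To make the replication arithmetic concrete, here is a minimal Python sketch of the doubling math behind the gray goo scenario. Earth's mass is a well-established figure; the nanobot mass and doubling time are purely hypothetical assumptions, not numbers from Torres or his book.

```python
import math

# Back-of-the-envelope gray goo arithmetic: how many doublings would
# unchecked self-replication need to consume the planet's mass?

EARTH_MASS_KG = 5.97e24     # mass of the Earth (well-established)
NANOBOT_MASS_KG = 1e-15     # mass of one nanobot (hypothetical assumption)
DOUBLING_TIME_S = 3600.0    # one replication cycle per hour (hypothetical)

# Each generation doubles the swarm, so consuming all of Earth's mass
# takes log2(total mass / seed mass) doublings.
doublings = math.log2(EARTH_MASS_KG / NANOBOT_MASS_KG)
hours = doublings * DOUBLING_TIME_S / 3600.0

print(f"~{doublings:.0f} doublings, ~{hours:.0f} hours at one doubling per hour")
```

Under these assumptions, roughly 132 doublings suffice; the alarming feature is not the starting size but the exponent.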

The risks of nuclear weapons seem pretty self-evident.

The existential issue is primarily the creation of firestorms, conflagrations driven by gale-force winds. In Hiroshima, they were actually responsible for many of the deaths. If you were to have several of these burning on the planet at once, the soot they would release could potentially cause a nuclear winter, which would result in global agricultural failures, starvation, and infectious disease.

And what about the dangers of artificial intelligence, or superintelligence?

One is that electrons in circuitry move much faster than signals in our biological nervous systems. If you were to upload a mind—which would simply involve replicating the three-dimensional structure of the brain in a supercomputer simulation—virtually everybody agrees that you'd get a conscious mind, and there's no reason to think it wouldn't run many orders of magnitude faster than our brains. So immediately you'd have a quantitative superintelligence, looking out on a world moving so slowly it would look frozen.
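
The scale of that speed gap can be sketched with order-of-magnitude textbook figures; neither number below comes from Torres, and both are rough approximations.

```python
# Rough comparison of biological and electronic "clock speeds".
NEURON_RATE_HZ = 200.0   # approximate peak firing rate of a neuron
CPU_CLOCK_HZ = 3.0e9     # approximate clock of a modern processor

speedup = CPU_CLOCK_HZ / NEURON_RATE_HZ
print(f"~{speedup:.1e}x")  # ~1.5e7: tens of millions of ticks per biological one

# At that ratio, one subjective year for an uploaded mind would pass in
# about two seconds of wall-clock time for the rest of us.
seconds_per_subjective_year = (365 * 24 * 3600) / speedup
print(f"{seconds_per_subjective_year:.1f} s")
```

A clock-rate ratio is a crude proxy for thinking speed, but it conveys why even a merely quantitative superintelligence would experience our world as nearly frozen.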

It's also possible you could modify the architecture of the mind so that it has access to thoughts that are inaccessible to us. Just as the dog can't possibly comprehend the electron, there's no reason to think there aren't concepts we don't have access to. This could enable a superintelligence to manipulate the world in ways we find baffling.

There's the problem of a superintelligence whose values don't align with ours. It doesn't care about us or like us; it isn't moral as we understand it. This may sound like an easy problem to solve, but it's incredibly hard—how to instill values that don't end up annihilating the human species. It may also be that the superintelligence doesn't hate us but is just indifferent. It wants to harvest energy from the sun, so it covers the Earth with solar panels. Or it looks at you and goes, "You've got a lot of molecules I could use," the way we bulldoze a forest to build a house.

Given the power a superintelligence would wield in the world, it could be incredibly dangerous. Our dominance on the planet is entirely attributable to our intelligence, with a few enabling factors like opposable thumbs and bipedalism. If we lost our seat atop this pyramid of intelligence, it could be the best thing that ever happened to us, but, as Stephen Hawking put it, it could also be the worst.

There was a great article on these topics, "The Doomsday Invention," in The New Yorker recently, where Bill Gates said that even if the chance of these outcomes is very small, they're so dire that we're morally obligated to take them seriously.

It follows that even with a very small probability, if the consequences are massive, it could still be a significant risk. Indeed, one of the most robust prognostications about these technologies is that they're going to be immensely powerful, allowing us to manipulate the physical world from the level of the atom up in ways that are entirely unprecedented. Sir Martin Rees, one of the most respected scientists in existential risk studies, has estimated that our civilization has a fifty-fifty chance of surviving this century.
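
The underlying logic is plain expected value: multiply the probability of an outcome by the size of the loss. A minimal sketch with purely illustrative numbers:

```python
def expected_loss(probability: float, lives_lost: float) -> float:
    """Expected loss: probability of the event times its cost."""
    return probability * lives_lost

# A 0.1% chance of an extinction-level event (illustrative figures)
# carries a larger expected toll than a 50% chance of a disaster
# that kills one million people.
print(expected_loss(0.001, 7e9))  # 7,000,000 expected deaths
print(expected_loss(0.5, 1e6))    # 500,000 expected deaths
```

This is why a tiny probability attached to an enormous loss can still dominate the moral calculus.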

It seems that with these technologies, the potential risks rise in step with the potential benefits.

Yes, without a doubt. There's a book called Abundance by Peter Diamandis that is the flip side of my book. He makes compelling, scientifically grounded claims for the benefits of these technologies. But I wrote this book because I feel that all these technologies are Janus-faced, and without understanding the challenges they pose, we could be more vulnerable. If we do meet those challenges, I don't think it's crazy to anticipate a world that, from our present vantage, is utopian. It's quite possible that good things are in the future. But I wanted to write a book that didn't succumb to panic or fatalism, one that gave greater visibility to risks that people in the ivory tower are taking seriously.

This article appeared in print with the headline "Apocalypse How?"
