If that sounds callous too, I apologize again. But my point is serious. For most scientists, futurists and kindred commentators who've been studying the matter, the new century was sure to bring forms of warfare in which malevolent individuals or cliques wreak extravagant damage on First World societies, especially among civilian populations in big cities. But their weapons of choice weren't supposed to be jumbo pieces of hardware that, with luck, Bruce Willis could steer to a safe landing at the very last second. They were supposed to be much more numinous and technologically refined, including the terrible toxins of biological warfare.
We didn't have to wait long. Only a bit over a month into this strange new era, anthrax is, pardon the expression, in the air. Happily, there's so far been no outbreak of mass hysteria similar to that occasioned by Orson Welles' 1938 radio broadcast of a Martian invasion. As I write this, the public remains commendably calm, having accepted official explanations that the threatening agents are localized, noncontagious and difficult to distribute widely. Of course, our collective equanimity could crumble with an attack on Times Square, say, or Madonna. But people evidently identify less with their news anchors and Congressional staffers than might have been imagined, and for that, perhaps, we are fortunate.
In panic's place, what reigns is a kind of fuzzy, dissociative weirdness. Americans, it appears, want to muster all the resolve and ardor necessary for a vigorously prosecuted, widely advertised "new war," while also maintaining a lifestyle of unblinking business-as-usual normalcy. This distantly recalls Lyndon Johnson's promise of guns and butter, a promise that ended in a bitterly divided nation. Geopolitically, bombs are raining on Afghanistan, yet the press is on such a short leash that I've heard no one ask certain basic questions: How much of this lethal campaign is for show, an expression of the administration's desire to do something vivid and violent even though many innocents will die and Sept. 11's perpetrators may well go untouched? For that matter, do we recall that the Taliban aren't the enemy, merely the putative friends of the presumed enemy? And does anyone imagine that blowing Afghanistan off the map will do anything to stop the U.S. Mail's deliveries of anthrax?
In fact, I've thus far not encountered the words "Osama bin Laden" and "anthrax" in the same sentence. This may be a side effect of the government's campaign to keep bin Laden off our airwaves, which resulted in the single most shameful journalistic retreat of the last decade: the great acquiescence of Oct. 9, when the networks caved in to administration pressure faster than the Twin Towers had to assaulting aircraft. As it stands, the anthrax attacks are being reported almost as if they were a particularly odd flu; though they're presumably part of the new war, they seem strangely innocent of identifiable perpetrators. Is it possible that they're not the handiwork of bin Laden's baddies at all, but of some disgruntled domestic force--the disciples of Timothy McVeigh, say?
We have no idea, not only because the government is maintaining a strategic silence ("strategic" gives them the benefit of the doubt), but also because the press now isn't saying anything more than it's told to say. In terms of ongoing criminal investigations, that reticence may be appropriate even if the underlying principle still deserves debate. But the U.S. media are also resisting the opportunity to broaden their coverage to discuss the essential issues evoked by these events, and that strikes me as vastly irresponsible. It also reminds me of the lack of any probing public discussion 10 months ago, when the millennial clock ticked over.
Granted, calendars are arbitrary measures, but the onset of the new century seemed an ideal opportunity for some valuable stock-taking. Of those essential issues mentioned above, easily the most crucial--and the one related to anthrax--concerns technology and human survival. Looking back at the 20th century, the single most decisive reality was that humanity brought itself to the brink of planetwide extinction through the manipulation of nuclear energy. The two superpowers stepped away from the brink, but that should have left no one feeling safer. The new century holds infinitely greater potential for the species to destroy itself.
The only really trenchant and widely available discussion of these matters I've seen remains the celebrated April 2000 Wired magazine article, "Why the Future Doesn't Need Us," by Bill Joy, chief scientist at Sun Microsystems. In Joy's assessment, anthrax and its like don't even belong to the dangerous future. Like nuclear and chemical weaponry, they're creatures of the relatively idyllic 20th century, when the means of mass destruction were mostly so complex and costly they usually remained in the control of large nation-states. At this juncture, however, bioterrorism might be added to the concerns Joy voices over newer agents of destruction:
The 21st-century technologies--genetics, nanotechnology, and robotics (GNR)--are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.
Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD). ...
I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.
The successor to the 20th century's MAD (mutually assured destruction), the acronym KMD belongs in every citizen's vocabulary. It describes just what we've seen since Sept. 11. A mere year and a half passed between the publication of Joy's article and the first major realization of his prediction of "a surprising and terrible empowerment of extreme individuals." But please note: The destructive technologies employed in recent weeks are puny, feeble and laughably antique compared to the ones Joy most fears.
Though the quote above cites genetics, nanotechnology and robotics, all of which Joy shows to have vastly devastating potentialities, his argument centers on the development of artificial intelligence and the impending arrival (within 30 years, according to Moore's Law) of computers more intelligent than humans. Techno-optimists like the inventor Ray Kurzweil contemplate that prospect with relish, asserting that it will exalt humanity to new levels of being. Joy, more pessimistically, envisions humans overwhelmed and eventually eradicated by their machines. (This was also the concern of Theodore Kaczynski, the Unabomber, one of those nutcases, like John Brown or bin Laden, whose actions nonetheless signal new realities.)
In other words, the coming opportunities for wide destruction or even human extinction are dauntingly numerous, and virtually none involve the quaint specter of nation-states shooting missiles at each other. Deliberately malevolent individuals or groups may be involved; indeed, we might well imagine ones that are just as malign but far more technologically sophisticated than bin Laden and al-Qaeda. Then again, human agencies may not be directly involved; Joy, like Kaczynski, seems to reserve his greatest worries for the capacities of intelligent robots and such to self-replicate.
It is, admittedly, difficult to get one's mind around such scenarios, but we can't say they're outside the realm of human thought. The Greek myths of Prometheus, Pandora, Icarus and others all speak of the consequences of exceeding divinely ordained limits. Goethe's Faust and Mary Shelley's Frankenstein monster notwithstanding, science-happy modernity has few profound cautionary emblems to equal those. But kindred messages have been articulated by thoughtful scientists like Carl Sagan, who in this 1994 excerpt (quoted by Joy) imagines a common cosmic event:
... a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, both on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.
Clearly, as the events of Sept. 11 and since have underscored, "the time of perils" is upon us. The question concerning technology (to borrow Martin Heidegger's phrase) is whether it will survive us, whether we will survive it, or whether it will perish along with us. The determining factor, Sagan and the ancient Greeks agree, lies in whether we learn to "place limits on what may and what must not be done." Will we?
That's the big question, but it points to several subordinate questions, each worth extended discussion. Concerning ideology: Aren't all of our standard ideological distinctions basically irrelevant to the challenges implied above? (If you think "left" and "right" still apply, and that one is better than the other, I submit you've got more invested in 18th-century Manichaeism than in your own survival.) Concerning government: Can democracy deal with the problems described here? (That's a tough one, given the dearth of any demonstrably superior system, and the oft-proved failure of Platonic republics.) Concerning mass communications: Can we really afford to have our primary means of discussing such crucial issues wholly consumed with commercial inanity and vacuous "news"?
Finally, a question concerning the place of religious modes of understanding in all of this. As Bill Joy notes, Friedrich Nietzsche not only announced the "death of God" but pointed to the dangers of investing one's remaining faith in science. "The truth that science seeks," Joy concludes, "can certainly be considered a dangerous substitute for God if it is likely to lead to our extinction." Is it, then, mere coincidence that the conflict we see being played out on the world's stage right now so closely and dauntingly intertwines religion and technology, as if demanding that we finally seek a way to reconcile science and the sacred?