|The media gets flak for hyping up bad news. Everything you thought was increasing longevity in fact causes heart disease. Swine flu is coming to a city near you. And this, the latest: This may well be humanity’s final century.
But this isn’t media hype, says a team of philosophers (naturally), scientists and mathematicians at Oxford University’s Future of Humanity Institute, who are finding more and more evidence for this conclusion.
"There is a great race on between humanity’s technological powers and our wisdom to use those powers well," institute director Nick Bostrom said. "I’m worried that the former will pull too far ahead."
In a forthcoming paper, "Existential Risk Prevention as Global Priority," Bostrom and his colleagues explore today’s risk of premature human extinction.
"Several key concepts and insights have recently started falling into place," Bostrom says. "They make it possible to look at big picture questions in a different way than before, and to start doing serious analysis on some of these questions."
This could be the final century for humanity, Bostrom told the BBC, because we have reached a point at which our technology is far more sophisticated than we are.
This is not news to science fiction authors and Hollywood directors — Isaac Asimov’s 1950 story collection "I, Robot" and Danny Boyle’s 2002 box-office hit "28 Days Later," to name just two — who have long imagined, sometimes presciently, worlds in which our meddling with technology sends us into extinction.
Now Bostrom and contemporaries are removing the fiction from the predictions and warning that we might be almost there.
Legendary physicist and cosmologist Stephen Hawking, who has been talking about our vulnerability for years, is still on a mission to wake us up. "We won't survive another 1,000 years without escaping our fragile planet," Hawking told an audience in Los Angeles last week, according to the Los Angeles Times. "We must continue to go into space for humanity."
Hawking, who is on the board of the new Centre for the Study of Existential Risk at Cambridge University, has always said that it’s our curiosity that might save us. "If you understand how the universe operates, you control it in a way," he told the LA audience.
Synthetic biology, nanotechnology and machine intelligence have greatly improved our quality of life, but they also pose grave threats to our future. We face nuclear destruction and, as we’ve been hearing for years, environmental Armageddon.
"With any new powerful technology, we should think very carefully about what we know," the Institute for the Future of Humanity's Dr. Sean O'Heigeartaigh told the BBC last week. "But it might be more important to know what we don't have certainty about."
O'Heigeartaigh points out a conundrum in the Hawking/Carl Sagan theory that knowledge is power: the more we investigate and experiment with technology, the less control we seem to have.
"We are developing things that could go profoundly wrong," he told BBC.
Can knowledge save us? Maybe. Maybe not.
"It's very hard to be completely certain about complex risks," O'Heigeartaigh told MSN News. "Some factors are hard to predict, and there's always a chance that we've missed something important or made an error."
What does this mean? It means we might not even know how high humanity’s risk of premature extinction is.
"Estimates of 10-20 percent total existential risk in this century are fairly typical among those who have examined the issue," the Institute for the Future of Humanity paper states. "The most reasonable estimate might be substantially higher or lower."
This uncertainty is precisely the motivation behind such research. "This becomes especially important when we look at existential risks, many of which are high-impact, low-probability events," says O'Heigeartaigh.
That is, events that could determine whether or not we’ll be around in the next century.