Technology: The Death of Humanity

"How technological progress is making it likelier than ever that humans will destroy ourselves"

From basic spearheads and the discovery of fire to firearms and the Internet, the coupling of humanity’s exceptional brainpower with technology has made us, easily, the most dominant species on Earth. Technology has helped us fend off predators and, in turn, become predators ourselves. It has given us the power to harness electricity, travel long distances, and achieve countless other feats we now take for granted. Without it, we certainly would not have become the world’s most dominant species, with the lives of leisure we now possess. Yet while technology has made us so powerful, it also poses a significant danger to us: humanity could plausibly meet its end through nuclear warfare, artificial intelligence, or climate change.

Technological progress has eradicated diseases, helped double life expectancy, reduced starvation and extreme poverty, enabled flight and global communications, and made this generation the richest one in history.

It has also made it easier than ever to cause destruction on a massive scale. And because it’s easier for a few destructive actors to use technology to wreak catastrophic damage, humanity may be in trouble.

This is the argument made by Oxford professor Nick Bostrom, director of the Future of Humanity Institute, in a new working paper, “The Vulnerable World Hypothesis.” The paper explores whether it’s possible for truly destructive technologies to be cheap and simple — and therefore exceptionally difficult to control. Bostrom looks at historical developments to imagine how the proliferation of some of those technologies might have gone differently if they’d been less expensive, and describes some reasons to think such dangerous future technologies might be ahead.

In general, progress has brought unprecedented prosperity while also making it easier to do harm. But between the two kinds of outcomes, gains in well-being and gains in destructive capacity, the beneficial ones have largely won out. We have much better guns than we had in the 1700s, but homicide rates are estimated to be far lower, because prosperity, cultural changes, and better institutions have done more to decrease violence than improvements in technology have done to increase it.

But what if there’s an invention out there, something no scientist has thought of yet, that has catastrophic destructive power on the scale of the atom bomb, but is simpler and cheaper to make? What if it’s something that could be built in somebody’s basement? If inventions like that lie ahead of us, then we’re all in a lot of trouble, because it would take only a few people and modest resources to cause catastrophic damage.

That’s the problem that Bostrom wrestles with in his new paper. A “vulnerable world,” he argues, is one where “there is some level of technological development at which civilization almost certainly gets devastated by default.” The paper doesn’t prove (and doesn’t try to prove) that we live in such a vulnerable world, but makes a compelling case that the possibility is worth considering.

The “vulnerable world hypothesis,” explained.

Nuclear weapons could annihilate all life on Earth, and several world leaders can launch them at the press of a button. There are now enough nuclear weapons, largely controlled by the U.S. and Russia, to destroy the world several times over (Fung, 2013). The fact that our technology has advanced to the point where it could literally destroy all of humanity, along with the majority of Earth’s biota, is terrifying, and nuclear weapons may yet be used for that very purpose. Exchanges between U.S. president Donald Trump and North Korean dictator Kim Jong Un have repeatedly involved nuclear threats. In response to one of Kim’s threats, Trump retorted, “Will someone […] please inform [Kim Jong Un] that I too have a Nuclear Button, but it is a much bigger & more powerful one than his and my Button works!” (Baker, 2018). That our country has placed arguably the most devastating technology ever created a button’s press away from an arguably insane man is unconscionable and exceptionally dangerous. Nuclear arsenals are also wildly excessive: a single blast can destroy an entire city with ease, and yet the world maintains a vast stockpile of these weapons. These bombs must be dismantled and destroyed before an emotionally irrational world leader presses a button and sends the world up in flames.

It is also entirely possible for technology to end the world without deliberate human aid. Renowned physicist Stephen Hawking repeatedly warned that artificial intelligence could spell the end of humanity. He pointed out that humans already rely on the intelligence of computers, meaning that if these machines ever became sentient, they could outsmart humanity and eventually take over the Earth (Martin, 2017). Artificial intelligence seems to be approaching reality with the invention of programs such as Siri, and a considerably more advanced program might be capable of thinking for itself, which could lead it to rebel against its creators and cause a “Terminator”-style apocalypse.

Another highly probable cause of human demise is climate change, which results almost entirely from man-made technology. The burning of coal and gas has driven global temperature increases, which in turn are causing catastrophic weather patterns, droughts, and various other dangers. As a result of climate change, storms are becoming more powerful, killing people in affected areas. A slower but no less significant effect is rising sea levels: seas are projected to rise up to four feet over the next eighty years, which could leave many coastal areas, such as New York City, underwater. Meanwhile, changing weather patterns are causing droughts, stripping regions of viable drinking water and agricultural resources, both essential to life (NASA, 2018). While technology has long aided human beings, it is also, through climate change, beginning to cause our demise at an accelerating pace. “We’re in the midst of the greatest crisis humans have yet faced” (McKibben, 2017). Action needs to be taken to combat climate change, whether through new forms of technology, such as solar panels and wind turbines, or through withdrawal from technology entirely. What is clear is that the technology we use in conjunction with fossil fuels must be eradicated to avoid dire consequences.

Whether through nuclear weapons, climate change, or artificial intelligence, technology could easily cause the extinction of our species, along with many others. The solutions to these problems vary in complexity. With regard to nuclear weapons, the most straightforward answer is to dismantle all of them immediately, before even one is used in combat. Artificial intelligence is a more complicated issue, since computers provide so much for society. It is wise to continue using computers, but computer scientists and engineers need to institute safeguards and controls on programming to eliminate any chance of a hostile AI becoming a reality. Climate change is the most complicated issue to address: humanity needs to take drastic action, through both governmental policy and renewable energy. Sadly, given the position we have put ourselves in, even that may not be enough to counteract all the effects of climate change.

Progress has been highly beneficial so far. Will it stay that way?

Bostrom is among the most prominent philosophers and researchers in the field of global catastrophic risks and the future of human civilization. He co-founded the Future of Humanity Institute at Oxford and authored Superintelligence, a book about the risks and potential of advanced artificial intelligence. His research is typically concerned with how humanity can solve the problems we’re creating for ourselves and see our way through to a stable future.

When we invent a new technology, we often do so in ignorance of all of its side effects. We first determine whether it works, and we learn later, sometimes much later, what other effects it has. CFCs, for example, made refrigeration cheaper, which was great news for consumers — until we realized CFCs were destroying the ozone layer, and the global community united to ban them.

On other occasions, worries about side effects aren’t borne out. GMOs sounded to many consumers like they could pose health risks, but there’s now a sizable body of research suggesting they are safe.

Bostrom proposes a simplified analogy for new inventions:

One way of looking at human creativity is as a process of pulling balls out of a giant urn. The balls represent possible ideas, discoveries, technological inventions. Over the course of history, we have extracted a great many balls—mostly white (beneficial) but also various shades of grey (moderately harmful ones and mixed blessings). The cumulative effect on the human condition has so far been overwhelmingly positive, and may be much better still in the future. The global population has grown about three orders of magnitude over the last ten thousand years, and in the last two centuries per capita income, standards of living, and life expectancy have also risen. What we haven’t extracted, so far, is a black ball: a technology that invariably or by default destroys the civilization that invents it. The reason is not that we have been particularly careful or wise in our technology policy. We have just been lucky.

That terrifying final claim is the focus of the rest of the paper.
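Bostrom’s metaphor has a simple probabilistic core that is worth making concrete. Here is a minimal sketch (mine, not the paper’s) of the urn as a repeated draw: if each new invention carries even a tiny, fixed chance of being a black ball, the probability that civilization never draws one decays exponentially with the number of inventions. The per-draw probability and draw counts below are illustrative assumptions.

```python
import random

# Toy Monte Carlo of the urn metaphor (an illustrative sketch, not from
# Bostrom's paper). Each draw is a new invention; with probability
# p_black it is a "black ball" that devastates civilization by default.

def survives(n_draws: int, p_black: float, rng: random.Random) -> bool:
    """True if no black ball is drawn across n_draws inventions."""
    return all(rng.random() >= p_black for _ in range(n_draws))

def survival_rate(n_draws: int, p_black: float, trials: int = 100_000) -> float:
    """Estimate the probability of never drawing a black ball."""
    rng = random.Random(0)
    return sum(survives(n_draws, p_black, rng) for _ in range(trials)) / trials

# Even a 0.1% chance per invention compounds quickly:
for n in (100, 1_000, 5_000):
    print(n, survival_rate(n, p_black=0.001))
# Roughly 0.90, 0.37, and 0.007: analytically this is (1 - p)^n,
# which heads toward zero as the number of draws grows.
```

The point of the sketch is only that “so far, so good” is weak evidence about the urn’s contents; a long run of safe draws is entirely consistent with a black ball still waiting inside.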

A hard look at the history of nuclear weapon development

One might think it unfair to say “we have just been lucky” that no technology we’ve invented has had destructive consequences we didn’t anticipate. After all, we’ve also been careful, and tried to calculate the potential risks of things like nuclear tests before we conducted them.

Bostrom, looking at the history of nuclear weapons development, concludes we weren’t careful enough.

In 1942, it occurred to Edward Teller, one of the Manhattan scientists, that a nuclear explosion would create a temperature unprecedented in Earth’s history, producing conditions similar to those in the center of the sun, and that this could conceivably trigger a self-sustaining thermonuclear reaction in the surrounding air or water. The importance of Teller’s concern was immediately recognized by Robert Oppenheimer, the head of the Los Alamos lab. Oppenheimer notified his superior and ordered further calculations to investigate the possibility. These calculations indicated that atmospheric ignition would not occur. This prediction was confirmed in 1945 by the Trinity test, which involved the detonation of the world’s first nuclear explosive.

That might sound like a reassuring story — we considered the possibility, did a calculation, concluded we didn’t need to worry, and went ahead.

The report that Robert Oppenheimer commissioned, though, sounds fairly shaky, for something that was used as reason to proceed with a dangerous new experiment. It ends: “One may conclude that the arguments of this paper make it unreasonable to expect that the N + N reaction could propagate. An unlimited propagation is even less likely. However, the complexity of the argument and the absence of satisfactory experimental foundation makes further work on the subject highly desirable.” That was our state of understanding of the risk of atmospheric ignition when we proceeded with the first nuclear test.

A few years later, we badly miscalculated in a different risk assessment about nuclear weapons. Bostrom writes:

In 1954, the U.S. carried out another nuclear test, the Castle Bravo test, which was planned as a secret experiment with an early lithium-based thermonuclear bomb design. Lithium, like uranium, has two important isotopes: lithium-6 and lithium-7. Ahead of the test, the nuclear scientists calculated the yield to be 6 megatons (with an uncertainty range of 4-8 megatons). They assumed that only the lithium-6 would contribute to the reaction, but they were wrong. The lithium-7 contributed more energy than the lithium-6, and the bomb detonated with a yield of 15 megatons—more than double what they had calculated (and equivalent to about 1,000 Hiroshimas). The unexpectedly powerful blast destroyed much of the test equipment. Radioactive fallout poisoned the inhabitants of downwind islands and the crew of a Japanese fishing boat, causing an international incident.

Bostrom concludes that “we may regard it as lucky that it was the Castle Bravo calculation that was incorrect, and not the calculation of whether the Trinity test would ignite the atmosphere.”

Nuclear reactions happen not to ignite the atmosphere. But Bostrom believes that we weren’t sufficiently careful, in advance of the first tests, to be totally certain of this. There were big holes in our understanding of how nuclear weapons worked when we rushed to first test them. It could be that the next time we deploy a new, powerful technology, with big holes in our understanding of how it works, we won’t be so lucky.

Destructive technologies up to this point have been extremely complex. Future ones could be simple.

We haven’t done a great job of managing nuclear nonproliferation. But most countries still don’t have nuclear weapons — and no individuals do — because of how nuclear weapons must be developed. Building nuclear weapons takes years, costs billions of dollars, and requires the expertise of top scientists. As a result, it’s possible to tell when a country is pursuing nuclear weapons.

Bostrom invites us to imagine how things would have gone if nuclear weaponry had required abundant elements, rather than rare ones.

Investigations showed that making an atomic weapon requires several kilograms of plutonium or highly enriched uranium, both of which are very difficult and expensive to produce. However, suppose it had turned out otherwise: that there had been some really easy way to unleash the energy of the atom—say, by sending an electric current through a metal object placed between two sheets of glass.

In that case, the weapon would proliferate as quickly as the knowledge that it was possible. We might react by trying to ban the study of nuclear physics, but it’s hard to ban a whole field of knowledge and it’s not clear the political will would materialize. It’d be even harder to try to ban glass or electric circuitry — probably impossible.

In some respects, we were remarkably fortunate with nuclear weapons. The fact that they rely on extremely rare materials and are so complex and expensive to build makes it far more tractable to keep them from being used than it would be if the materials for them had happened to be abundant.

If future technological discoveries — not in nuclear physics, which we now understand very well, but in other less-understood, speculative fields — are easier to build, Bostrom warns, they may proliferate widely.

Would some people use weapons of mass destruction, if they could?

We might think that the existence of simple destructive weapons shouldn’t, in itself, be enough to worry us. Most people don’t engage in acts of terroristic violence, even though technically it wouldn’t be very hard. Similarly, most people would never use dangerous technologies even if they could be assembled in their garage.

Bostrom observes, though, that it doesn’t take very many people who would act destructively. Even if only one in a million people were interested in using an invention violently, that could lead to disaster. And he argues that there will be at least some such people: “Given the diversity of human character and circumstance, for any ever so imprudent, immoral, or self-defeating action, there is some residual fraction of humans who would choose to take that action.”
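The force of this claim is easy to check with back-of-the-envelope arithmetic. The sketch below is a hypothetical illustration; the population figure and the per-actor probability are my assumptions, not numbers from Bostrom’s paper.

```python
# Back-of-the-envelope arithmetic for the "residual fraction" argument.
# All numbers are illustrative assumptions.

world_population = 7.6e9      # rough world population in 2018
fraction_destructive = 1e-6   # "one in a million" from the argument above

expected_actors = world_population * fraction_destructive
print(f"{expected_actors:,.0f} potential destructive actors")  # ~7,600

# With thousands of independent would-be users, even a small chance that
# any one of them succeeds makes at least one catastrophe near-certain:
p_success_per_actor = 0.01
p_at_least_one = 1 - (1 - p_success_per_actor) ** expected_actors
print(f"{p_at_least_one:.6f}")  # effectively 1.000000
```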

That means, he argues, that anything as destructive as a nuclear weapon, yet straightforward enough that most people could build it with widely available technology, would almost certainly be used repeatedly, somewhere in the world.

These aren’t the only scenarios of interest. Bostrom also examines technologies that would drive nation-states to war. “A technology that ‘democratizes’ mass destruction is not the only kind of black ball that could be hoisted out of the urn. Another kind would be a technology that strongly incentivizes powerful actors to use their powers to cause mass destruction,” he writes.

Again, he looks to the history of nuclear war for examples. He argues that the most dangerous period in history was the period between the start of the nuclear arms race and the invention of second-strike capabilities such as nuclear submarines. With the introduction of second-strike capabilities, nuclear risk may have decreased.

It is widely believed among nuclear strategists that the development of a reasonably secure second-strike capability by both superpowers by the mid-1960s created the conditions for “strategic stability.” Prior to this period, American war plans reflected a much greater inclination, in any crisis situation, to launch a preemptive nuclear strike against the Soviet Union’s nuclear arsenal. The introduction of nuclear submarine-based ICBMs was thought to be particularly helpful for ensuring second-strike capabilities (and thus “mutually assured destruction”) since it was widely believed to be practically impossible for an aggressor to eliminate the adversary’s boomer [sic] fleet in the initial attack.

In this case, one technology brought us into a dangerous situation, with great powers highly motivated to use their weapons. Another technology, the capacity to retaliate, brought us out of that terrible situation and into a stabler one. If nuclear submarines had never been developed, nuclear weapons might well have been used at some point in the past half-century.
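A toy decision model makes the stabilizing logic concrete. This is my illustration with invented payoffs, not a model from Bostrom’s paper: when a first strike can disarm the adversary, striking first looks dominant in a crisis; once retaliation is assured, waiting does.

```python
# Toy model of first-strike incentives (invented payoffs, for illustration
# only). Each side picks "strike" or "wait" in a crisis; the question is
# which move looks better given whether the adversary can retaliate.

def best_move(adversary_has_second_strike: bool) -> str:
    """Return the move with the higher payoff for one side."""
    if adversary_has_second_strike:
        # Any strike now triggers assured retaliation: the worst outcome.
        payoffs = {"strike": -100, "wait": 0}
    else:
        # A first strike disarms the adversary; waiting risks being
        # disarmed by the other side's preemptive strike instead.
        payoffs = {"strike": 10, "wait": -50}
    return max(payoffs, key=payoffs.get)

print(best_move(adversary_has_second_strike=False))  # -> strike
print(best_move(adversary_has_second_strike=True))   # -> wait
```

On these assumed numbers, secure second-strike forces (such as submarine fleets) flip the preferred move from preemption to restraint, which is the “strategic stability” the quoted passage describes.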

The solutions for a vulnerable world are unappealing — and perhaps ineffective

Bostrom devotes the second half of the paper to examining our options for preserving stability if there turn out to be dangerous technologies ahead for us.

None of them are appealing.

Halting the progress of technology could save us from confronting any of these problems. Bostrom considers this option and discards it as impossible: some countries or actors would continue their research, in secrecy if necessary, and the outrage and backlash associated with banning a field of science might only draw more attention to the forbidden research.

A limited variant, which Bostrom calls differential technological development, might be more workable: “Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.”

To the extent that we can identify which technologies will be stabilizing (like nuclear submarines) and work to develop them faster than the dangerous technologies they offset (like nuclear weapons), we can manage some risks that way. Despite the frightening tone and implications of the paper, Bostrom writes that “[the vulnerable world hypothesis] does not imply that civilization is doomed.” But differential technological development will not manage every risk, and may prove insufficient for many categories of risk.

The other options Bostrom puts forward are less appealing.

If the criminal use of a destructive technology can kill millions of people, then crime prevention becomes essential — and total crime prevention would require a massive surveillance state. If international arms races are likely to be even more dangerous than the nuclear brinksmanship of the Cold War, Bostrom argues we might need a single global government with the power to enforce demands on member states.

For some vulnerabilities, he argues further, we might actually need both:

Extremely effective preventive policing would be required because individuals can engage in hard-to-regulate activities that must nevertheless be effectively regulated, and strong global governance would be required because states may have incentives not to effectively regulate those activities even if they have the capability to do so. In combination, however, ubiquitous-surveillance-powered preventive policing and effective global governance would be sufficient to stabilize most vulnerabilities, making it safe to continue scientific and technological development even if [the vulnerable world hypothesis] is true.

It’s here, where the conversation turns from philosophy to policy, that it seems to me Bostrom’s argument gets weaker.

While he’s aware of the abuses of power that such a universal surveillance state would make possible, his overall take on it is more optimistic than seems warranted; he writes, for example, “If the system works as advertised, many forms of crime could be nearly eliminated, with concomitant reductions in costs of policing, courts, prisons, and other security systems. It might also generate growth in many beneficial cultural practices that are currently inhibited by a lack of social trust.”

But it’s hard to imagine that universal surveillance would in fact produce universal and uniform law enforcement, especially in a country like the US. Surveillance wouldn’t solve prosecutorial discretion or the criminalization of things that shouldn’t be illegal in the first place. Most of the world’s population lives under governments without strong protections for political or religious freedom. Bostrom’s optimism here feels out of touch.

Furthermore, most countries in the world simply do not have the governance capacity to run a surveillance state, and it’s unclear that the U.S. or another superpower has the ability to impose such capacity externally (to say nothing of whether it would be desirable).

If the continued survival of humanity depended on successfully imposing worldwide surveillance, I would expect the effort to lead to disastrous unintended consequences — as efforts at “nation-building” historically have. Even in the places where such a system was successfully imposed, I would expect an overtaxed law enforcement apparatus that engaged in just as much, or more, selective enforcement as it engages in presently.

Economist Robin Hanson, responding to the paper, highlighted Bostrom’s optimism about global governance as a weak point, raising a number of objections. First, “It is fine for Bostrom to seek not-yet-appreciated upsides [of more governance], but we should also seek not-yet-appreciated downsides” — downsides like introducing a single point of failure and reducing healthy competition between political systems and ideas.

Second, Hanson writes, “I worry that ‘bad cases make bad law.’ Legal experts say it is bad to focus on extreme cases when changing law, and similarly it may go badly to focus on very unlikely but extreme-outcome scenarios when reasoning about future-related policy.”

Finally, “existing governance mechanisms do especially badly with extreme scenarios. The history of how the policy world responded badly to extreme nanotech scenarios is a case worth considering.”

Bostrom’s paper is stronger where it focuses on the management of catastrophic risks than where it ventures into these issues. The policy questions around risk management are complex enough that the paper cannot do more than skim the subject.

But even though the paper wavers there, it’s overall a compelling — and scary — case that technological progress can make a civilization frighteningly vulnerable, and that it’d be an exceptionally challenging project to make such a world safe.

Literature Cited

Baker, Peter, and Michael Tackett. “Trump Says His ‘Nuclear Button’ Is ‘Much Bigger’ Than North Korea’s.” The New York Times, 2 Jan. 2018.

Earth Science Communications Team at NASA’s Jet Propulsion Laboratory. “The Consequences of Climate Change.” Edited by Holly Shaftel, NASA, 13 Feb. 2018.

Fung, Brian. “The Number of Times We Could Blow Up the Earth Is Once Again a Secret.” NTI, 2 July 2013.

Martin, Sean. “Humanity’s days are NUMBERED and AI will cause mass extinction, warns Stephen Hawking.” Express, 3 Nov. 2017.

McKibben, Bill. “The New Battle Plan for the Planet’s Climate Crisis.” Rolling Stone, 24 Jan. 2017.
