The apocalyptic cult Aum Shinrikyo sought to ignite a cataclysmic war between the United States and the Soviet Union. Asahara Shoko, the cult’s leader, believed Aum would incite the apocalypse and that he would emerge from the wreckage of humanity as a new Jesus Christ. This belief motivated Aum’s pursuit of chemical, biological, and nuclear weapons. Although Aum’s ambitions often outstripped its capabilities (it even sought an earthquake-making device), on March 20, 1995, the cult carried out an attack on the Tokyo subway system using sarin gas, killing thirteen people and injuring thousands more.
Unfortunately, Aum Shinrikyo is not the only terrorist group with apocalyptic ambitions.
Terrorists could seek to destroy humanity out of apocalyptic belief, environmental fatalism, or a desire to eliminate suffering (negative utilitarianism). The history of human thought provides no shortage of apocalyptic tropes to draw upon, from the Revelation of John, through the Ragnarok of the Norse, to the Zoroastrian Frashokereti. Like Aum Shinrikyo, a group with activist millenarian beliefs may turn to terrorism, believing that it will be the one to incite a global apocalypse and that such a catastrophe would be desirable. Alternatively, terrorists might draw upon biocentrist views that humanity is a plague, desiring the destruction of humanity so that the natural world can thrive. For example, the Chicago-based group RISE sought in the 1970s to destroy humanity and repopulate the Earth with a small cadre of environmentally conscious revolutionaries. Or terrorists may believe that life is nothing more than suffering, adopting an extreme negative utilitarian view in which human existence is inevitably horrible and ending all human life is the only way to reduce that horror.
While such motivations are by no means common among terrorists, there are those who might develop the motivation to wipe out humankind. The grim reality is that there are three pathways whereby terrorists could, in fact, do so: existential attacks, spoiler attacks, and systemic harm. The silver frame on this dark portrait is that each pathway to existential harm requires the alignment of rare contingencies, whether temporal, political, or technological, or demands extreme terrorist capability. Nonetheless, some degree of concern, and certainly global vigilance, is justified.
Pathways to Global Harm
Terrorists can cause existential harm by 1) developing their own “super-weapon;” 2) obstructing risk mitigation measures to allow other existential risks to manifest; or 3) causing sufficiently broad or acute harm that global governments fail to mitigate existential risks. The risk dynamics differ greatly for each pathway. For example, a terrorist group mounting a genetically engineered biological weapons attack of sufficient size to wipe out humanity would likely require significant (and historically unique) levels of technological, organizational, and financial resources. By contrast, a terrorist attack to disrupt a NASA planetary defense mission against an incoming planet-killing asteroid would require only limited capability, but could generate existential harm only in the rare contingency that such an asteroid is imminently inbound and that a delayed defensive mission would come too late.
Life is suffering, and Dr. Louis Therman was ready to end that suffering. He had finally engineered the perfect virus: a highly contagious pathogen that would quickly spread all around the world and do…nothing at all. Well, at least for the first few years. By then, the virus will have spread to the remotest parts of the world. It will be too late to shut down airports and border crossings; the virus will have already penetrated. Then the genetic kill switch will engage. The virus will become uniformly lethal, and humanity’s long suffering will end.
Existential attacks are essentially the plots of movie supervillains. A terrorist somehow creates a genetically engineered biological weapon that spreads through the world and manages to kill everyone. Or they manipulate global powers into starting a nuclear war, perhaps tricking early-warning systems so an attack appears imminent at a time of high crisis. Or, perhaps, the terrorist creates an artificial superintelligence that is designed to be existentially harmful (something that academics and industry leaders increasingly warn could occur accidentally).
Although plausible, an existential attack would require the perpetrators to possess extraordinary amounts of scientific know-how and technical capability, together with all the logistical resources required to pull off such an endeavor. A genetically engineered pathogen appears to be the only known vector by which a terrorist could directly bring about existential harm. Causing that harm would require the terrorists not only to acquire and modify the pathogen (or create it from scratch), but also to overcome the inherent tradeoff between disease virulence and transmissibility. The terrorists would also need to scale up and deliver the agent, modifying or designing it so that it could continue to cause harm despite global countermeasures like quarantines and vaccines, or remain undetected as it spreads until suddenly manifesting lethal qualities. There are, of course, hypothetical technologies like artificial superintelligences or nanorobots that might provide an alternative route, but because they remain theoretical, ground-breaking advances in science and technology would be needed to realize them. Perhaps, as others create the necessary fundamental breakthroughs and develop these technologies for benign purposes, the barriers to terrorist use will become surmountable. But the timeline is uncertain and successful acquisition remains speculative.
It is also possible that a terrorist could indirectly cause existential harm by, for example, spoofing early-warning systems to trigger a nuclear war. The risk dynamics would be similar to those of direct existential attacks, as the terrorist would need an extremely sophisticated understanding of the technical details of early-warning systems, along with the capability to create plausible simulacra of an attack that would trick those systems.
An asteroid 20 km in diameter has been detected on a collision course with Earth. Astronomers judge that impact could occur within three months. The National Aeronautics and Space Administration prepares to launch a planetary defense mission: a kinetic impactor is set to crash into the asteroid and reroute it to a safer trajectory. Over the ensuing weeks, news media cover the impending strike nonstop, with partisan commentators pointing fingers at each other for underfunding and undervaluing scientific research. The global cacophony of bickering talking heads leads the leader of a Los Angeles-based spiritual group to conclude that the asteroid was sent by a higher power to lead humanity into its next state of being. He gets to work on a plan.
The day before impact, the group leader rents a truck, his followers fill it with nitrogen fertilizer explosives, and he drives to the launch site in Santa Barbara. The leader barrels through the gate; security guards open fire, and even though he takes a bullet to the shoulder, he keeps going. The truck makes it to the launch pad where the rocket is being fueled, and the leader detonates the explosives. Only about half explode, but that is enough. The damage is not extensive, but repairs, or re-equipping another rocket at an alternate location, would take at least three weeks. Three weeks too long.
Existential spoilers are terrorist attacks that disrupt measures aimed at reducing or preventing sources of existential risk. Besides disrupting planetary defenses, a terrorist might spoil peace talks between two warring nuclear powers, disrupt a major geoengineering project meant to counter climate change, or remove safeguards from an artificial superintelligence.
Whether an existential spoiler causes actual existential harm is likely to be highly contingent on temporal, spatial, astronomical, political, and other factors. Disrupting a planetary defense mission creates existential harm only when a planet-killing asteroid is inbound and when there is no redundancy in the mitigation measure. A terrorist could plausibly undertake an existential spoiler attack without intending to cause existential harm, as in the hypothetical case of an ethnonationalist group that, upset over the terms of a prospective peace deal between two warring nuclear powers, disrupts the precarious agreement and renews a spiral of nuclear instability. However, certain spoilers will almost certainly require apocalyptic motivations, as most politically inspired violent groups have no interest in having the entire strategic playing field erased by an asteroid. Importantly, at least in theory, an existential spoiler may require only modest capabilities to succeed.
When global astronomers sounded the alarm in 2052 that the big one was coming, no one was ready. Only a few years previously, the resurgent Islamic State had managed to detonate a nuclear weapon in New York City. Manhattan was flattened, millions died, global financial markets were in ruins, and the United States was hellbent on vengeance. If a program had nothing to do with rebuilding the country or killing those responsible, funding was slashed to ribbons, if not cut entirely. NASA was a major victim of the budget cuts. Why should Americans look up at the stars when the world around them was crumbling? Any public or political attention to planetary defense had faded decades ago.
Systemic harms are attacks that are not necessarily intended to cause existential harm, but that inflict enough damage that the international community becomes unwilling or unable to mitigate existential risks. Here, the harm is generated primarily through the reaction to terrorist attacks. For example, an extreme attack on the United States akin to 9/11 might cause the country to focus entirely on counterterrorism, reducing budgetary and legislative attention to existential risks. Alternatively, cycles of government oppression and terrorist response might destabilize society enough to weaken global cooperation and the capacity to adequately mitigate existential risks. Whether systemic harm translates into actual extinction will be highly contingent, however, because extinction requires an existential risk scenario (e.g., global nuclear war or an incoming asteroid) to manifest while the global community is distracted or weakened. Once the world restabilizes, existential risks could again receive their due attention.
Because – at least for now – existential terrorism would manifest only in the case of extremely capable terrorists or highly contingent circumstances, the likelihood of terrorists bringing about existential levels of harm is quite low. However, given the inordinate consequences represented by the end of humanity, the overall risk cannot be completely ignored, especially since actors, technologies, and environments can change, sometimes rapidly. Policymakers should take prudent and practical measures to reduce the threat of existential terrorism.
First, while continuing to devote the bulk of counterterrorism resources to extant extremist threats, intelligence and law enforcement agencies should reserve some capacity to explicitly monitor for signs of an increased existential threat, whether the rise of a group with apocalyptic or negative utilitarian motives or improvements in technology that would bring existential harm within the competencies of a broader range of actors. Interagency coordination is especially vital when trying to discern what are often likely to be “weak signals.” International information sharing is likewise critical, as the threat is necessarily transnational and global. This could include new mechanisms for intergovernmental policy coordination and response when potential existential terrorism threats are identified.
Second, both counterterrorism agencies and the broader national security community should plan for spoilers whenever crucial existential risk mitigation measures are planned or undertaken. In many cases, this will simply mean recognizing the potential for disruptive terrorist attacks and increasing security measures accordingly. States should also consider building redundancy and resilience into existential risk prevention and mitigation measures wherever possible, which has value even beyond existential terrorism: missing parts, natural hazards, and human error could all spoil such measures.
Last, since systemic harm depends on a kind of autoimmune reaction on the part of global society, whatever actions states take to counter (non-existential) terrorism should be proportionate and should not blind policymakers to actual existential risks. Simply recognizing the dangers of distraction and overreaction can go a long way toward addressing the possibility of terrorist actions resulting in systemic harm.
Several plausible pathways exist for terrorists to destroy human civilization, or at least to exacerbate overall existential risk. While these pathways are admittedly extremely unlikely at present, they may not remain so forever, and the grave consequences if they were to occur justify serious consideration of the threat. If a terrorist causes existential harm, humanity does not have a second chance.
The following article is derived from “Existential Terrorism: Can Terrorists Destroy Humanity?” recently published in the European Journal of Risk Regulation.
Zachary Kallenborn is an adjunct fellow (Non-resident) with the Center for Strategic and International Studies (CSIS), Policy Fellow at the Schar School of Policy and Government, Fellow at the National Institute for Deterrence Studies, Research Affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), an officially proclaimed U.S. Army “Mad Scientist,” and national security consultant.
Gary A. Ackerman is an Associate Professor and Associate Dean for Research in the College of Emergency Preparedness, Homeland Security and Cybersecurity at the University at Albany (SUNY), where his research focuses on assessing emerging threats and understanding how terrorists and other adversaries make tactical, operational, and strategic decisions, particularly regarding innovating in their use of weapons and tactics.
Main image: U.S. Army Reserve Staff Sgt. Eric Huggins with the 468th Engineer Detachment, 368th Engineer Battalion, 302d Maneuver Enhancement Brigade, 412th Theater Engineer Command, based in Danvers, Mass., directs his team where to wheel a “victim” after a Vehicle Extrication during the New York City Chemical, Biological, Radiological, and Nuclear Response Joint Training Exercise in New York City, July 10, 2018. (Clinton Wood/Army)