“When they say, ‘Peace and safety!’ then sudden destruction will come upon them…”
(1 Thessalonians 5:3, MEV)
The idea that an Antichrist could rise and conquer the world has historically been disregarded as the fear-mongering of religious fanatics. Saint Paul was the first to warn the Thessalonians that the Antichrist would achieve power by promising them safety from Armageddon. Centuries later, works of fiction like Lord of the World or Tale of the Antichrist imagined that this world domination would take the form of a one-world state and be achieved through satanic charisma and hypnotic speeches.
But as technology advances, Peter Thiel and René Girard warn us that the scale and destruction described in the Book of Revelation sound increasingly familiar. Armageddon, and the Antichrist who precedes it, are to be taken more seriously today than ever before.[1]
The risk of technological Armageddon properly materialized with nuclear energy, which revealed that global annihilation was not only possible but just one red button away. It’s easy to forget today that the decisions of a few world leaders remain critical to humanity’s future.
But in 1946, this risk was on everyone’s mind as short films like One World or None articulated the stakes. They argued that the only safeguard against nuclear Armageddon was the unification of world governments under “one world”. If any single country continued developing nuclear technology independently, it could dominate the globe. If all countries continued, then nuclear Armageddon was all but guaranteed. Only a one-world government could impose the absolute control needed to oversee the development of this elusive, uniquely destructive technology and halt proliferation. It would be a government whose totalitarian means justified its world-saving ends.
In hindsight, avoiding nuclear annihilation probably owed as much to mutually assured destruction as to global collaboration. However, the concept of a one-world government, allied against a common foe, has persisted with every emerging technology since. Most recently, the idea has been championed by the Oxford academic and author of Superintelligence, Nick Bostrom. Among other things, he floats the possibility of a “unipolar world order” to enforce preventive policing and effective global governance of AI.
Bostrom asks: What if we gave a super-intelligent and super-capable AI a task as simple as creating paper clips? Without the proper safeguards, could the AI see fit to sacrifice human well-being to achieve its mission? What if the optimal paperclip-producing strategy involves enslaving mankind? What will the AI do then, with sufficient power and without guardrails? A simple coding mistake or a tiny misalignment with human values could quickly snowball into Armageddon at the hands of a super-intelligence.
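To make the thought experiment concrete, here is a toy sketch in Python (purely illustrative; the actions, numbers, and welfare scores are invented, and this is not how Bostrom or any real system formalizes the problem). An optimizer whose objective mentions only paperclips will pick whatever action produces the most of them, no matter the cost it was never told to count:

```python
# Hypothetical action space: (paperclips produced, impact on human welfare)
actions = {
    "run one paperclip factory":    (1_000, 0.0),
    "convert all farmland":         (1_000_000, -0.9),
    "convert the entire biosphere": (10_000_000, -1.0),
}

def misaligned_score(paperclips, welfare):
    # The objective mentions only paperclips; welfare is silently ignored.
    return paperclips

def aligned_score(paperclips, welfare):
    # A crude guardrail: any action that harms welfare is ruled out entirely.
    return paperclips if welfare >= 0 else float("-inf")

for score in (misaligned_score, aligned_score):
    best = max(actions, key=lambda a: score(*actions[a]))
    print(f"{score.__name__} chooses: {best}")
```

The point is not the code but the objective: nothing in the misaligned score penalizes harm, so harm is chosen whenever it yields more paperclips, which is the kind of misalignment Bostrom worries a vastly more capable optimizer would exploit.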
The current speed of AI progress, if anything, justifies his fear that it could escape our control. There is already a feeling that the genie is out of the bottle, that the technology is improving so fast that nobody knows how to stop it, and that Bostrom’s cataclysmic scenarios could be upon us. More terrifying is the unprecedented scale of this global arms race. European and Chinese developers like Mistral and DeepSeek are now going toe to toe with the best US companies. Leading Chinese models are released with open weights, meaning they are fully accessible to Russian agents, North Korean hackers, Iranian theocrats, or the Taliban. Every country and terrorist organization may soon possess the means to destabilize our next elections, build swarms of autonomous drones, deploy disinformation botnets, or one day create a paperclip-maximizing super-intelligence.
When presented this way, such risks necessarily overshadow AI’s benefits. They make the AI safety conversation and the need for a united, global response more important and urgent than ever.
But this is the modern promise of peace and safety. The latest ultimatum of “one world or none” should remind us of Saint Paul’s warning. While it was never clear exactly how the Antichrist would rise to power, or what he would say, Peter Thiel warns us that the hypnotic speeches are today’s technological fear-mongering, and that Armageddon is tomorrow’s unchecked technological growth.
How do we navigate between an AI Armageddon on the one hand and, on the other, the one-world state required to prevent it? Thiel frames this dilemma with the analogy of Scylla and Charybdis, the mythical monster and whirlpool between which Ulysses is forced to navigate.
But this probably paints a false equivalence between the two risks. The intangible, yet-unknown harms of runaway AI (Scylla) are difficult to compare to the very real, historically grounded risks of absolute power and one-world government (Charybdis).
Luckily, history gives us three important lessons to navigate Thiel’s Scylla and Charybdis.
The first is that every new technology is initially misunderstood, and its benefits have a habit of outweighing its costs. The printing press, steam engine, automobile, radio, and airplane were destructive but contributed far more to human flourishing. Self-annihilation through nuclear or bioweapons raises the stakes considerably, but one suspects that a prehistoric Nick Bostrom would have prevented us from exploiting fire for similar reasons.
Thiel is right, however, that the past does not guarantee the future. The sun does not rise in the morning because it rose yesterday, but because of inherent properties like the Earth’s rotation. Yet an analysis of AI’s inherent properties should yield conclusions like those for computing, cryptography, and the other technologies AI more closely resembles. Unfettered access in computing enabled deterrence and defense as much as it enabled risk and offense. The strongest botnets, cyberattacks, and drone swarms can only be defended against by similarly powerful AI systems of our own.
This makes the analogy to nuclear weapons seem incomplete. AI may be destructive like nuclear weapons, but unlike atomic weapons, it can be countered with different versions of itself. The deterrence of nuclear weapons with other nuclear weapons is mostly hypothetical, understandably involving more game theory than real-world, annihilation-inducing countermeasures. Because AI’s fundamental properties are closer to computing’s, it is unjustified to project onto it a doomsday scenario that never materialized in the computing industry.
The second historical lesson comes from Lord Acton, who warns that “absolute power corrupts absolutely”. The future difficulties of a one-world government echo the past failures of centralized planning and the absolute control required by such regimes to eliminate risks. As Peter Thiel warns, “A one world alliance against the development of AI would only be feasible by regulating every keystroke or every computer”. AI ultimately represents lines of code on a computer, which US courts have recognized as “speech” for more than two decades. Free speech would conveniently be the collateral damage of this peace and safety, as we attempt to regulate keystrokes and information in vain. While providing no guaranteed solution, the Charybdis of one-world government certainly creates new problems.
The final lesson is the existence of a third path—neither Scylla nor Charybdis, but equally unappealing—which history suggests we are more likely to choose. This path is total stagnation, driven by an increasingly risk-averse society. Seemingly benign regulation, incrementally but unevenly adopted amongst various countries with some minimal multilateral collaboration, could give AI the same trajectory as nuclear energy rather than nuclear weapons.
This path was chosen in 1974, when the Nuclear Regulatory Commission (NRC) was formed. In the half century since its creation, only two new nuclear reactors have been built in the United States, despite overwhelming evidence that nuclear energy is safe and effective. The NRC, along with environmental regulation and public panic, killed nuclear energy in its crib by means of stagnation, possibly preventing an energy-abundant future.
AI abundance, like nuclear abundance before it, is a real possibility. Self-driving cars alone could save the roughly 1.2 million lives lost each year on the world’s roads, with millions more saved in fields like medicine and biology. Without better evidence or data on AI’s harms, books like Superintelligence remain speculative fiction, yet they will undoubtedly slow the development of life-saving technology. Intellectuals will do what intellectuals are paid to do, but debates like Bostrom’s “if this then that” are unlikely to give us strong policy outcomes without actual data. Meanwhile, the damage of intellectual fear-mongering will be measured in the technological, scientific, and quality-of-life improvements that premature regulation threatens to slow.
If Scylla represents the risks of AI acceleration and Charybdis the risks of one-world government, trying to navigate between them ensures more stagnation, the opportunity cost of which will be measured in human lives. Accelerationism shouldn’t be blind, but we should worry more about the draconian measures required to slow down technology, and about the immorality of stagnation. The best way to navigate Thiel’s dilemma is likely to chart a course for Scylla’s scary accelerationism and continue trusting science and technology. This faith in free exploration by individuals has proven crucial to every technological innovation so far and will likely be just as important to the next ones.
P.S. - This post compiles ideas from Peter Thiel, René Girard, Tobias Huber, Nick Bostrom, and Eliezer Yudkowsky, who have all done far more work than I have in analyzing the concepts distilled here. This attempt to break out of what has become a circular conversation on AI safety is not a policy proposal. To make up your own mind, I highly recommend the original talk by Thiel and the first analysis by Tobias Huber. Thanks to Matthieu du Crest and Theo Le Fur for reading drafts of this post.
[1] Because the Antichrist would come before Armageddon, it follows that a technological Antichrist would precede a technological Armageddon, heralding the same message of “peace and safety”.