
EXCLUSIVE: Defense, Tech Leaders Warn AI Could Unleash A New Cold War

In the latter half of the 20th century, global security hinged on nuclear deterrence — now a group of leading AI experts warns that advanced artificial intelligence demands a similar national security strategy.

Just as the specter of mutual assured destruction (MAD) once kept superpowers in check, the experts argue, the threat of a runaway AI catastrophe must spur a new doctrine of mutual assured AI malfunction (MAIM). In a new paper coauthored by Dan Hendrycks, former Google CEO Eric Schmidt and Scale AI CEO Alexandr Wang, the authors call for treating frontier AI development as an urgent national security priority on par with the nuclear arms race. (RELATED: ‘Rapid Expansion’: China’s Nuclear Weapon Stockpile Is Way Bigger Than The Pentagon Previously Thought)

“Whether Iran gets the bomb depends on what we do and how we strategize,” Hendrycks said in an interview with the Daily Caller News Foundation. “Whether AI goes well or not also depends on how we handle these issues.”

Hendrycks, who serves as a safety advisor for Elon Musk’s xAI and leads the nonprofit Center for AI Safety, and his coauthors lay out a three-pronged framework for this new AI strategy: deterrence through MAIM, nonproliferation of dangerous AI capabilities and bolstering national competitiveness in AI. At its core, their argument is that extremely advanced AI — especially “super-intelligent” systems that could outthink humans — poses unprecedented risks that no nation can afford to ignore.

An AI that surpasses humans in virtually every intellectual domain “would amount to the most precarious technological development since the nuclear bomb,” the paper says. Like the atomic scientists of the Manhattan Project, today’s AI leaders are urging policymakers to plan for worst-case scenarios before potential catastrophe.

Pillar I: Deterrence Through MAIM

If any one state rushes to attain unchecked, superhuman AI dominance, the result could be disastrous, the authors argue. In a hasty bid to pull ahead, a country might accidentally lose control of a powerful AI and stir global chaos. Conversely, if that country succeeded in monopolizing a super-AI model, it could wield destabilizing power over its rivals.

Either a rogue AI or a single nation armed with supreme AI threatens the survival of other states, the authors argue. Just as rivals plotted to sabotage nascent nuclear programs in the past, nations today may be compelled to preemptively sabotage each other’s AI projects to preserve the balance of power.

“We want to develop our ability to do cyber attacks on data centers as a deterrent,” Hendrycks said. “The ability to disrupt adversarial AI projects could be essential for maintaining stability.”

The proposed concept of MAIM echoes Cold War deterrence strategies. The goal would not be to encourage physical strikes, but rather to pose a credible threat that any attempt at an AI monopoly will be met with force. By making it clear that an unchecked AI arms-grab would invite a debilitating response, states could dissuade each other from ever trying. To keep such a fragile peace stable, the paper suggests measures reminiscent of nuclear arms control protocols: clearly communicated “escalation ladders” so nations know the red lines, placing AI research labs and data centers far from population centers and increasing transparency (monitoring rivals’ AI compute facilities, for example) to reduce the chance of miscalculation.

Pillar II: Nonproliferation

The second pillar of the strategy focuses on preventing advanced AI capabilities from spreading to what Hendrycks described as “aggrieved individuals”: terrorists, criminal networks or “random states” that might use them recklessly.

“When AI is more capable at cyberattacks, if those skills are democratized, they can run 100,000 hacking AIs simultaneously to attack our critical infrastructure. This is a capability they do not have now,” Hendrycks continued. “They’ve maybe got, like, a handful of OK hackers — that’s not a capability we want them having. At all.” (RELATED: Chinese Hackers Have Been Feasting On Key US Infrastructure On Biden’s Watch)

Just as global regimes tightly control fissile materials like enriched uranium and plutonium, the authors call for strict controls on the key ingredients of powerful AI — especially high-end AI chips and the data to train top-tier models. Advanced semiconductor chips, they note, are the strategic resource fueling AI progress, and like uranium and plutonium, are physical objects that can be tracked, counted and intercepted unlike intangible software algorithms.

Controlling Access To AI’s Physical Resources

By aggressively tightening export controls and surveillance on semiconductor shipments, governments can “know where the chips are at — that way, we can detect if they’re being smuggled, and this will help us stop the bleeding much earlier,” Hendrycks said. He suggests working directly with chipmakers on counter-proliferation; for example, asking companies like Nvidia to embed security features that report a chip’s location or prevent unauthorized use, helping authorities detect if processors are being diverted to illicit buyers or underground labs.

The Commerce Department already moved in this direction by banning exports of advanced AI chips to China in 2022, but Chinese buyers are finding workarounds. Chinese military research institutes and other buyers acquired small batches of Nvidia’s banned A100 and H100 AI chips through gray markets, a Reuters report found in February 2024. Smuggling networks have reportedly rerouted high-performance GPUs, or graphics cards, via intermediary firms in places like Singapore.

Hendrycks and his coauthors emphasize shoring up these export controls to keep “weaponizable” AI tools out of the wrong hands. During the Cold War, rival superpowers quietly cooperated to prevent nuclear theft and terrorism — and Hendrycks suggests a similar dynamic may be possible with AI nonproliferation.

“The purpose of the nonproliferation part is not about restricting access to other superpowers. That seems harder … there’s plenty of incentives — you might even want to coordinate with China on that type of thing,” he said.

That said, the authors recognize the limits of how far AI containment can go. Trying to completely deny peer competitors like China access to advanced AI hardware or algorithms may be infeasible, and potentially dangerous.

The logo of Deepseek is seen during the Global Developer Conference, organised by the Shanghai AI Industry Association, in Shanghai on February 21, 2025. (Photo by Hector RETAMAL / AFP)


“You might incentivize the Taiwan invasion eventually, because if they have no chips, and if AI is important, then they might want to just make it so the U.S. doesn’t have this huge advantage anymore,” Hendrycks said. An attempt to strangle China’s AI progress, he argued, might prompt a bid to seize Taiwan’s semiconductor manufacturing facilities, which produce 44% of the chips used in American technology, according to a U.S. International Trade Commission report. For the great powers, the emphasis is not on nonproliferation, but on deterrence and competition.

Pillar III: Competitiveness

The final prong of the strategy is ensuring the U.S. and its allies maintain a lead in AI capabilities and safely reap AI’s benefits.

“Successful AI adoption will be a determining factor in national strength,” the paper argues.

“Adoption” goes beyond just training the most powerful algorithms; it means integrating AI across the economy and military so the U.S. wins the contest for future power without a hot war. Hendrycks and his colleagues highlight several arenas of competition. One is the AI chip supply itself — by investing in domestic semiconductor manufacturing, the U.S. can ensure it has steady access to the resources necessary for AI’s progression.

Recent bipartisan policy moves indicate Washington’s awareness of this — the 2022 CHIPS Act authorized roughly $280 billion for U.S.-based semiconductor manufacturing and research, alongside the tightening of export rules that year to slow China’s progress. For now, America holds crucial advantages: over 90% of the value-add in advanced AI chip production occurs in the U.S., its partners or Taiwan, Hendrycks said, thanks to choke points like the Netherlands’ ASML, which has a de facto monopoly on the extreme ultraviolet lithography equipment needed to make cutting-edge chips. (RELATED: How Freaked Out Should We Really Be About Communist China’s New AI Model?)

Intel CEO Pat Gelsinger (L) and Intel Factory Manager Hugh Green (R) watch as US President Joe Biden (C) looks at a semiconductor wafer during a tour at Intel Ocotillo Campus in Chandler, Arizona, on March 20, 2024. (Photo by BRENDAN SMIALOWSKI/AFP via Getty Images)


Hendrycks said China is spending some $50 billion a year to catch up in chip manufacturing, but explained that “it’s very difficult to just spend your way out of this, and it will take them many years, and they may not succeed” in reaching parity.

Another competitive arena is military adoption of AI. Schmidt, the inaugural chair of the National Security Commission on AI, has long pressured the Pentagon to modernize with AI-enabled systems — deploying autonomous drone swarms, for example, rather than relying on conventional hardware like tanks or fighter jets. The new paper echoes this, arguing that adopting AI in command, control and weapons will be “increasingly essential for military strength” going forward. The authors urge the U.S. to expedite integration of technologies like uncrewed aircraft and AI-driven analysis, lest it fall behind more innovative adversaries.

“It would be extremely bad news if two years from now, there was not substantial headway in securing our drone supply chain or capacity to manufacture them,” Hendrycks warned, citing the lessons of Ukraine’s drone-heavy conflict and the pace of Chinese improvements in this area. (RELATED: REPORT: Ukraine Sent Drone Operators And Drones To US-Designated Terrorist Group)

Competitiveness also has a domestic stability angle — the paper suggests governments invest in measures to manage AI’s disruptive impacts (like job displacement or misinformation campaigns) so that rapid automation doesn’t undermine society from within. In short, winning the AI race isn’t just about scoring higher on benchmarks — it’s about keeping the technological revolution stable.

After decades during which technology was largely seen as a commercial domain, Hendrycks and his coauthors are making a case that frontier AI development must be treated with the gravity of a defense program — and the national security establishment seems to be listening.

“They aren’t just going to hope that AI will be irrelevant,” Hendrycks observed of the national security community; he said they recognize that if AI can eventually cure cancer, it can also eventually create new plagues, and if it can write software, it can also hack systems. “I think it’s been pretty easy for them to understand that many of these capabilities are dual-use, and as a consequence, they’ve been more receptive to contemplating these sorts of risks.”

Policymakers are now grappling with questions familiar to Cold War strategists: how to verify what rival labs are doing, how to respond if an adversary breaks the rules and how to negotiate limits on a technology that could end civilization as we know it if misused. This time, though, the threat doesn’t come from warheads or missiles, but from lines of code and clusters of GPUs. Crafting agreements and norms for AI will be tricky — defense officials can’t count algorithms the way they counted warheads — but the alternative, according to Hendrycks, Schmidt and Wang, is to blunder forward blindly into an AI arms race with potentially dire consequences.

The stakes, they argue, could not be higher.

All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact licensing@dailycallernewsfoundation.org.


