
AI at a dangerous threshold: An interdisciplinary call for disarmament


Officials from the United Nations Institute for Disarmament Research (UNIDIR) and representatives from leading tech companies called for slowing the rapid development of artificial intelligence, citing in particular the risks of misuse in military applications. They emphasized the need for a comprehensive approach to mitigating these risks.

UNIDIR co-deputy head Gosia Loy underscored the importance of involving the tech community early in defense policy. Speaking at the Global Conference on AI Security and Ethics in Geneva in March 2025, she referred to the “Oppenheimer moment of AI” and called for robust oversight grounded in human rights and international law. She described such collaboration as essential to the safe and ethical development of AI.

AI’s dual-use nature in civilian and military sectors has created a global security dilemma. 

Arnaud Valli, Head of Public Affairs at Comand AI, warned that developers may lose touch with battlefield realities, risking life-threatening errors. Concerns are growing over AI making autonomous combat decisions, prompting urgent calls for regulation. David Sully, CEO of Advai, noted the fragility of current AI systems, stating they "fail constantly" and are easily misled.

Sulyna Nur Abdullah, Head of Strategic Planning at ITU, highlighted the “AI governance paradox,” where AI's rapid evolution outpaces countries’ ability to manage its risks. She called for sustained dialogue between policymakers and technical experts and stressed including developing countries in these discussions.

Over a decade ago, Christof Heyns warned that removing humans from lethal decision-making risks removing humanity itself. Today, OHCHR’s Peggy Hicks reiterates that life-and-death decisions must remain with humans, as legal judgments cannot easily be delegated to software.

While there is broad consensus on AI defense principles, these often clash with profit motives, noted Comand AI’s Valli. He stressed the immense responsibility involved in building trustworthy systems.

Despite developers’ commitment to “fair, safe, and robust” AI, Sully, a former employee of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) and an advocate for AI regulation, warned of a lack of clear implementation guidance. He reminded policymakers that “AI is still in its early stages” and questioned what true robustness really means. Sully also pointed out a major regulatory challenge: unlike nuclear weapons, AI-guided weapons leave no forensic signature, complicating international oversight and verification. He emphasized that this compliance gap remains largely unaddressed.

Jibu Elias of Mozilla stressed that future developers must understand the societal impacts of the technologies they build. Dr. Moses B. Khanyile from Stellenbosch University highlighted academia’s ethical responsibilities and the need to align military and regulatory interests.

Diplomats from countries including China, the Netherlands, Pakistan, France, Italy, and South Korea called for stronger export controls and greater international trust. Dutch Ambassador Robert in den Bosch emphasized the importance of understanding AI in relation to other domains like cyber, quantum, and space, due to the complexity of global security challenges.

Keywords

artificial intelligence (AI), disarmament, United Nations, illicit weapons technology