President of Ukraine Volodymyr Zelenskyy speaks during the United Nations General Assembly (UNGA) at the United Nations headquarters on September 24, 2025 in New York City. (Getty Images)
In his September 24th address to the UN General Assembly, Ukrainian President Volodymyr Zelenskyy issued one of the sharpest calls yet for global rules on military AI.
“Weapons are evolving faster than our ability to defend ourselves,” Zelenskyy told world leaders.
For Zelenskyy, this is not theory, but lived reality. Ukraine has spent more than three years fighting a war against Russia that has evolved into one of the most technologically sophisticated conflicts in modern history. Precision artillery, autonomous drones, and algorithm-driven targeting systems are no longer niche. They are shaping the outcome of daily battles.
His appeal to the UN is simple: create rules before these technologies escape human control.
“We need global rules – now – for how AI can be used in weapons. And this is just as urgent as preventing the spread of nuclear weapons,” he said.
Why AI Weapons Are Different
Unlike nuclear or chemical arms, AI-driven weapons don’t require regulated materials, advanced research facilities, or enormous industrial capacity. Algorithms can be copied, shared, and deployed at scale, and attack drones can be assembled from low-cost, commercially available components. The barriers to entry are lower, the costs are smaller, and the spread is faster. That creates the conditions for a fast-moving arms race, one Zelenskyy argued could destabilize the globe.
“Now, there are tens of thousands of people who know how to professionally kill using drones. Stopping that kind of attack is harder than stopping any gun, knife, or bomb. This is what Russia has brought with its war,” he said.
International treaties have successfully banned or restricted entire classes of arms before, from chemical weapons to anti-personnel landmines and cluster munitions. But AI has so far slipped through the cracks of regulation.
Current international law, rooted in post-WWII conventions, doesn’t directly address autonomous weapons. Efforts at the UN to ban or limit “lethal autonomous weapons systems,” conducted largely under the Convention on Certain Conventional Weapons in Geneva, have stalled, largely due to resistance from major military powers.
Instead, what exists today are voluntary guidelines. The U.S. Department of Defense has adopted AI ethical principles. The EU has published frameworks for “trustworthy AI.” Yet none carry enforcement mechanisms, and none specifically outlaw weapons that can select and engage targets without human intervention.
From Fiction to Fact
For decades, the image of machines making kill decisions lived in science fiction, from Terminator to Black Mirror. But as Zelenskyy made clear, this is no longer fiction. The examples already exist on the battlefield.
What began as quadcopters modified to drop grenades has evolved into long-range kamikaze drones that fly hundreds of miles. Russia’s Lancet drones are believed to use semi-autonomous targeting capabilities. Ukraine has employed AI-assisted systems to process satellite imagery and battlefield data in seconds. Both sides rely on drone swarms for reconnaissance and strikes.
For policymakers, the takeaway is clear. Just as generative AI disrupted media, design, and office productivity faster than regulators could respond, military AI is advancing without legal guardrails.
Zelenskyy’s concern is not only ethical but strategic. If one nation demonstrates battlefield success with autonomous weapons, others will rush to match or surpass it. That dynamic mirrors the nuclear arms race of the 20th century, but this time the pace could be far quicker.
“We are now living through the most destructive arms race in human history – because this time, it includes artificial intelligence,” said Zelenskyy. “And if there are no real security guarantees – except friends and weapons, and if the world can’t respond even to old threats, and if there’s no strong platform for international security – will there be any place left on Earth that’s still safe for people?”
