The UN Security Council debates “Artificial intelligence: opportunities and risks for international peace and security” at UN headquarters in New York City. (Photo by ED JONES/AFP via Getty Images)
As the United Nations General Assembly convened this week in New York, the global body stepped up its efforts to formalize multilateral oversight of artificial intelligence, marking a notable evolution in the UN’s engagement with this technology.
Leaders from around the world gathered to debate and define what “responsible AI” might look like on an international stage, and two newly established bodies were discussed: the Independent International Scientific Panel on AI, designed to provide rigorous, impartial scientific assessment, and the Global Dialogue on AI Governance, a forum meant to bridge states, industry and civil society. Together, these comprise the UN’s first coordinated, multilateral, institutionalized effort to translate the rapid, chaotic evolution of AI into a framework for shared oversight.
UN Secretary-General António Guterres framed the moment in similarly serious terms. “This is a significant step forward in global efforts to harness the benefits of artificial intelligence while addressing its risks,” he said in a statement, highlighting the need to “discuss the critical issues concerning AI facing humanity today.”
Over the past year, AI systems have increasingly influenced everything from financial markets to news consumption and even public health. With power concentrated in a handful of corporations and nations, the potential for misuse, whether through autonomous weaponry, mass surveillance or deepfakes capable of destabilizing democracy, has grown.
Nobel laureate Maria Ressa framed the current discussion as a fight for the very foundations of informed public life. “Information integrity is the mother of all battles. Win this, and we can win the rest. Lose this, and we lose everything,” Ressa said in her speech.
This week, several Nobel laureates, tech pioneers and global policymakers converged virtually and in person, issuing a “Global Call for AI Red Lines” — a plea for binding international agreements on what AI should never be allowed to do. Among the signatories were Geoffrey Hinton and Yoshua Bengio, two of AI’s so-called “godfathers,” alongside Nobel Prize winners in chemistry, economics, peace and physics, and authors including Yuval Noah Harari.
“With AI, we may not get a chance to learn from our mistakes, because AI is the first technology that can make decisions by itself, invent new ideas by itself and escape our control,” said Harari. “Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.”
The UN’s approach reflects a recognition that AI is a societal concern. The Scientific Panel, officials say, will act as a bridge between cutting-edge research and policymaking, anticipating emerging threats before they escalate into crises, while the Global Dialogue will attempt to build consensus among states with widely divergent priorities, from AI superpowers to countries just beginning to build their digital infrastructure.
For consumers and the tech industry, the assembly offers a glimpse of the pressures to come, as voluntary guidelines and corporate safety pledges may have to yield to formal international agreements. The questions, however, are enormous: Which applications of AI will be globally sanctioned? How will accountability be framed and enforced? Perhaps most importantly, who gets to decide?
Furthermore, while these initiatives, outlined in last year’s Pact for the Future, envision consistent evidence-based assessment and policy-relevant recommendations, their ultimate success will hinge on meaningful leadership from the Global South, particularly Africa, according to the Brookings Institution.
“By 2050, one in four people will be African, and by 2035, Africa will be home to the world’s largest youth workforce influx, outpacing the rest of the world combined,” the Brookings team stated in an analysis.
“Despite this potential, Africa lags behind with less than 1% of the world’s data centers and less than 1% of AI research and development, while only 0.02% of internet content is in African languages. The new AI initiatives could usefully take on these issues for Africa’s large and growing share of humanity. Doing so would show results from multilateralism in a period where support by some is waning.”
They also noted a “pacing problem,” where technology evolves in weeks while governance lags for years, which brings to mind Alvin Toffler’s famous warning: “The future always arrives too fast, and in the wrong order.”
“Instead of annual check-ins, these sessions need to become ongoing policy processes with built-in forecasting tools to spot emerging risks early and real-time monitoring dashboards to track how AI deployment affects people in real time,” the institution recommended.
These debates could determine the technologies that enter daily life, from the AI that recommends what consumers buy to the systems that moderate what they see online. Even as the UN takes these first formal steps, the challenge ahead is immense. Establishing frameworks and dialogues is one thing; enforcing them across nations with divergent priorities, economic stakes and digital capabilities is another.
AI governance is unlikely to be solved in a single session, or even a single year. The Scientific Panel and Global Dialogue may offer the architecture for anticipatory regulation, but their effectiveness will depend on transparency, sustained funding and the willingness, as well as the ability, of states to abide by collectively agreed standards.