The UNGA Science Summit 2025 Offered A Glimpse Of The Future Of AI

As artificial intelligence shapes ever more aspects of human life, we must ask whether these powerful technologies will serve humanity’s best interests or inadvertently deepen existing inequalities and environmental degradation. The answer lies not in the technology itself, but in how we design, deploy and govern it. ProSocial AI offers a pragmatic framework for ensuring that AI serves NI – natural intelligence – as a force for collective flourishing rather than another driver of division. Four panels at this year’s United Nations General Assembly Science Summit offered a glimpse of what this means.
Beyond Efficiency: AI With Regenerative Intent
ProSocial AI represents a departure from traditional approaches to AI that prioritize speed, efficiency and profit maximization. Instead, it encompasses AI systems that are deliberately tailored, trained, tested and targeted to bring out the best in and for both people and planet. This approach recognizes that AI is not an end in itself, but a means to create more equitable, sustainable and compassionate societies.
The shift from return on investment to return on values represents more than a semantic change: it’s a radical reorientation of how we measure success in a hybrid society. While traditional metrics focus on financial gains and operational efficiency, return on values considers broader impacts: human wellbeing, environmental sustainability, social cohesion and the preservation of human agency in an increasingly automated world.
To be sustainable, this transformation requires hybrid intelligence – the intentional synthesis of natural human intelligence with artificial capabilities. Unlike pure automation that replaces human judgment, hybrid intelligence draws on human values while expanding individual experience and creativity through AI’s computational power and pattern recognition abilities.
The Individual Level: Compassionate Care In Crisis
At the personal level, ProSocial AI manifests as tools that support individuals during their most vulnerable moments. Consider the challenges faced by caregivers of people with disabilities — a population often overwhelmed, under-resourced and emotionally depleted. Traditional support systems frequently fail these essential care providers, leaving them to navigate complex medical, educational, and behavioral challenges largely alone.
Emerging AI-powered companion tools like the Compassionate Caregiver Companion Coach (C4) offer personalized, real-time guidance during crisis moments when human support isn’t immediately available. These systems provide practical advice for managing difficult behaviors, recommend self-care strategies to prevent burnout, and connect caregivers with peer networks and evidence-based resources. Crucially, they’re designed not to replace human connection but to supplement it, offering 24/7 availability when family, friends, or professionals aren’t accessible.
The sophistication of these tools extends beyond simple information retrieval. They can detect emotional states, recognize patterns in caregiving challenges and provide tailored coaching that adapts to individual circumstances. By reducing caregiver burnout and increasing confidence, these AI systems create ripple effects that improve outcomes for both caregivers and care recipients, strengthening the foundation of community care networks.
The Organizational Level: Governance With Artificial Wisdom
Moving from individual applications to organizational contexts, ProSocial AI is transforming how institutions make decisions and govern themselves. Traditional corporate governance often suffers from cognitive biases, information silos and short-term thinking that can lead to decisions benefiting shareholders at the expense of broader stakeholder wellbeing. AI agents have no personal agendas, a neutrality that can be harnessed to compensate for blind spots in human foresight.
Drawing on this potential, AI agents are now being integrated into boardroom operations to expand and augment human judgment with unbiased analysis and a long-term perspective. These systems can process vast amounts of data to identify patterns people might miss, raise uncomfortable ethical considerations that might otherwise be overlooked and model the long-term consequences of strategic decisions for various stakeholder groups.
The challenge lies in encoding moral considerations into these systems while maintaining transparency about their decision-making processes. When properly implemented, AI governance tools can help organizations move beyond reactive management toward anticipatory statecraft — identifying systemic risks and opportunities before they manifest as full-blown crises.
This requires careful attention to the rules and guidelines governing AI agents, particularly regarding bias mitigation and value alignment. Double alignment is needed to make this balance work, because the alignment of aspirations and algorithms can follow only after human aspirations and actions are in sync. As long as there is a mismatch between values, words and behavior, that incoherence is amplified by AI.
The goal is not to automate human governance but to create more savvy, ethical and forward-thinking decision-making processes that consider societal impact alongside financial performance. The transition to a hybrid boardroom is an opportunity to put things into perspective and address some of the elephants that have been sleeping under the boardroom table for too long.
The National Level: Building Future Brain Capital
At the country level, ProSocial AI’s most critical application may be in early childhood development – the foundation upon which all future human potential is built. How societies integrate AI into children’s cognitive, emotional and social development will largely determine whether future generations possess the skills and resilience needed to thrive in an AI-enhanced world.
Brain capital, the cognitive, emotional and social resources that individuals and societies possess, becomes crucial in this context. AI can enhance early childhood development through personalized learning experiences, inclusive educational environments for children with disabilities, neuro-developmental monitoring and parenting support systems.
But this big promise comes with big risks. Privacy concerns, cognitive dependency, inequitable access and the potential for AI systems to manipulate young minds all demand careful consideration. Perhaps most concerning are the risks of agency decay and empathy gaps – the possibility that excessive reliance on AI interactions during critical developmental periods could impair children’s ability to form authentic human connections and weaken their capacity and appetite for cognitive effort.
Part of the solution lies in human-centered approaches that use AI to build rather than abolish interpersonal interaction in children’s lives. This requires double literacy, combining human literacy (a holistic understanding of self and society) with algorithmic literacy (a candid comprehension of artificial assets) to guide both children and their caregivers. Only such 360º literacy equips societies to harness AI’s potential to give every individual a fair chance to fulfill their inherent potential.
The Planetary Level: Health Within Boundaries
At the global level, ProSocial AI enters the delicate balance between humans and nature, a balance that is increasingly out of sync. Despite increased life expectancy in many regions, quality of life faces growing threats from non-communicable diseases, environmental degradation and climate change. AI systems themselves contribute to these challenges through massive energy consumption and resource requirements.
The intersection of AI acceleration, planetary boundary transgression, institutional adaptation lag and the dissolution of human agency constitutes a hybrid tipping zone, one that we are navigating half-blind.
A commitment to planetary health demands a systemic approach that considers AI as both an underlying driver of environmental problems and a potential catalyst for solutions. ProSocial AI applications can monitor environmental patterns, predict ecological crises before they reach tipping points and optimize resource use across complex systems.
However, realizing this potential requires shifting from centralized AI systems controlled by a few powerful actors toward more decentralized approaches that empower local communities and preserve data ownership. This transition involves defining shared benefits, making incentive structures transparent, distributing intelligence to local levels, and rigorously tracking impacts to ensure equitable and sustainable outcomes.
The ultimate goal is creating AI systems that contribute to regenerative rather than extractive economic models — technologies that cultivate and heal the natural and social systems they operate within.
UNGA Systems Thinking: The Interconnected Web
These four levels — individual, organizational, national and planetary — are interconnected. They form an organically evolving kaleidoscope where changes at one level cascade through the others. A caregiver supported by compassionate AI tools contributes to stronger community networks. Organizations practicing ethical AI governance create market pressures that influence industry standards. National investments in children’s brain capital determine future societies’ capacity to govern AI responsibly. And planetary health considerations shape the constraints within which all other applications must operate.
This systems perspective reveals why ProSocial AI cannot be achieved through technological solutions alone. It requires coordinated changes in values, policies, economic incentives and social norms across all levels of society. The shift from return on investment to return on values must happen simultaneously in corporate boardrooms, government agencies, educational institutions and individual decision-making processes.
Perhaps most importantly, this approach recognizes that AI development and deployment are not neutral technical processes but deeply political and moral choices about what kind of future we want to create. ProSocial AI offers a framework for making those choices deliberately and inclusively, ensuring that as artificial intelligence becomes more powerful, it also becomes more aligned with humanity’s highest aspirations.
The path forward requires meaningful collaboration between all parts of society. It is a challenge and an adventure that is intersectoral, transdisciplinary, multicultural and intergenerational. None of us can step back and treat it as a task to be tackled by others. The stakes are too high.
This article draws insights from presentations delivered at four ProSocial AI panels held during the UNGA Science Summit 2025.