Geoff Hinton Warns Humanity’s Future May Depend On AI ‘Motherly Instincts’

Geoff Hinton and Shirin Ghaffary speak at the Ai4 2025 conference in Las Vegas
Ron Schmelzer
Speaking at the recent Ai4 Conference in Las Vegas, Geoff Hinton, one of the most influential voices in artificial intelligence, warned that humanity is running out of time to prepare for machines that outthink us. He now believes artificial general intelligence, or AGI, could be here within a decade.
Shirin Ghaffary of Bloomberg News opened their conversation with a light jab about a robots-versus-human boxing match staged before the session. The human won handily, “for now,” she joked. Hinton grinned at the banter, but his tone shifted once talk turned to the central question of his later career: when will AI surpass the human mind?
“Most experts think sometime between five and twenty years,” he said. His own timeline has shortened dramatically. “I used to say thirty to fifty years. Now, it could be more than twenty years, or just a few years.”
Hinton isn’t picturing minor upgrades. He’s thinking of systems far more capable than any person alive, and he doubts we can control them once they arrive.
Why human dominance over AI won’t work
In much of the tech world, the future of AI is framed as a contest for control: humans must keep the upper hand. Hinton calls that a false hope. “They’re going to be much smarter than us,” he said. “Imagine you were in charge of a playground of three-year-olds and you worked for them. It wouldn’t be very hard for them to get around you if they were smarter.”
His solution turns the usual script upside down. Instead of fighting to stay in charge, he believes we should design AI to care about us. The analogy he uses is a mother and her child: the stronger being is naturally committed to the weaker one’s survival. “We need AI mothers rather than AI assistants. An assistant is someone you can fire. You can’t fire your mother, thankfully.”
That means building “maternal instincts” into advanced systems, a kind of embedded drive to protect human life. Hinton admits he doesn’t know how to engineer it yet, but he insists it’s a research priority as important as improving raw intelligence. He emphasized that this is a different kind of research, not about making systems smarter, but about making them care. He also sees this as one of the few areas where countries might truly work together, since no nation wants to be ruled by its machines.
Technical unknowns, political possibilities
Hinton doesn’t expect collaboration to stretch far. The AI race, especially between the U.S. and China, is accelerating, and neither side is likely to slow down. He does believe there’s a chance for agreement on curbing risky biotech applications, such as synthetic virus creation, and on exploring ways humans could coexist with more powerful systems.
Part of his conviction that dominance won’t work comes from how AI is built. Digital models can share what they’ve learned instantly with thousands of copies. “If people could do that in a university, you’d take one course, your friends would take different courses, and you’d all know everything,” he said. “We can share just a few bits a second. AI can share a trillion bits every time they update.”
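The mechanism Hinton is describing corresponds to what practitioners call data-parallel training with gradient sharing: identical copies of a model each process different data, then pool their updates so every copy learns from all of it. The Python sketch below is a minimal illustration of that idea, not anything shown at the talk; the toy linear model, shard sizes, and learning rate are assumptions chosen for brevity.

    import numpy as np

    # Illustrative sketch only (nothing Hinton presented): two identical model
    # copies do data-parallel training. At every step each copy computes a
    # gradient on its own data shard, the gradients are averaged (the "shared
    # bits"), and both copies apply the same update -- so each effectively
    # learns from data it never saw.
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -3.0])  # hidden relationship to be learned

    def make_shard(n):
        X = rng.normal(size=(n, 2))
        return X, X @ true_w + rng.normal(scale=0.1, size=n)

    def grad(w, X, y):  # gradient of mean squared error for a linear model
        return 2 * X.T @ (X @ w - y) / len(y)

    Xa, ya = make_shard(100)  # copy A's private experience
    Xb, yb = make_shard(100)  # copy B's private experience
    w = np.zeros(2)           # both copies start from identical weights
    for _ in range(200):
        g = (grad(w, Xa, ya) + grad(w, Xb, yb)) / 2  # exchange and average
        w -= 0.1 * g          # every copy applies the same update

    print("jointly learned weights:", w, "vs. truth:", true_w)

Each such exchange moves the full update vector, which in frontier models runs to billions or trillions of numbers, the bandwidth Hinton contrasts with the few bits per second of human speech.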
This ability to learn collectively means AI could outpace human progress by orders of magnitude. Coupled with the vast sums being invested, that collective learning is why Hinton doubts the climb toward superintelligence can be stopped.
Regulation’s limits
When asked if rules could head off the worst risks, Hinton was direct. “If the regulation says don’t develop AI, that’s not going to happen.” He supports specific safety measures, especially those aimed at blocking small groups from producing dangerous biological agents, but sees sweeping pauses as unrealistic.
His frustration with U.S. politics is clear. Even simple proposals, such as requiring DNA synthesis labs to screen for lethal pathogens, have failed in Congress. “The Republicans wouldn’t collaborate because it would be a win for Biden,” he said.
Winners, losers, and the state of research
Hinton left Google in 2023, partly, he insists, because he felt too old for code debugging sessions, but also so he could speak more openly about AI’s dangers. He still credits several major labs, including Anthropic and DeepMind, for taking safety seriously. Yet he worries about deep cuts to U.S. basic research funding, which he sees as the seedbed for future breakthroughs. “The return on investment from funding basic research is huge. You’d only cut it if you didn’t care about the long-term future.”
Private labs can play a role. Hinton likens their potential to Bell Labs at its peak, but he argues that universities remain the best source of transformative ideas.
Guarded optimism
Despite his warnings, Hinton finds reasons to be hopeful. He points to healthcare as an area where AI could make a decisive difference. By unlocking the rich but underused data in medical scans and patient records, AI might deliver faster diagnoses, more targeted drugs, and treatments tailored to each patient.
As for erasing aging altogether, Hinton is doubtful. “Living forever would be a big mistake. Do you want the world run by 200-year-old white men?” he asked with a wry smile.
Still, he returns to his central belief: if we succeed in building AI with genuine care for its human “children,” the species might not only survive superintelligence but also prosper under its watch. “That’ll be wonderful if we can make it work.”