Jane Goodall’s Lesson: Rethinking Intelligence

When Jane Goodall arrived at Tanzania’s Gombe Stream in 1960, she carried neither a doctoral degree nor conventional scientific training. What she possessed was something far more useful: the willingness to see intelligence where others saw only instinct, to recognize personhood where convention saw specimens and to understand that the boundaries we draw around cognition say more about our current limitations than about nature’s possibilities.

Dr. Goodall’s passing this week at age 91 closes a chapter in scientific history – and it opens a question for our present moment: As we forge partnerships between natural and artificial intelligence, have we absorbed the most vital lesson of her six decades of research: that intelligence itself is not singular but magnificently plural?

The Revolutionary Recognition

Goodall’s most famous discovery came early. When she observed a chimpanzee named David Greybeard fashioning tools to fish for termites, she didn’t merely document the behavior. She shattered a foundational assumption of Western science: that toolmaking was the exclusive province of humans, the bright line separating us from “mere” animals.

Her mentor, Louis Leakey, reportedly responded: “Now we must redefine tools, redefine Man, or accept chimpanzees as human.” But Goodall’s insight went deeper than tool use. Over decades of patient observation at Gombe Stream National Park, she documented something more intriguing: chimpanzees possessed emotional intelligence, social intelligence, practical intelligence and what we might call ecological intelligence – a refined understanding of their environment and their place within it.

She discovered that chimps wage war and make peace, grieve their dead and celebrate reunions, form political alliances and nurse grudges, solve complex problems and pass cultural knowledge across generations. Each of these capacities represents a distinct form of intelligence – and none of them mapped neatly onto the narrow, often anthropocentric definitions that dominated twentieth-century science.

A Dangerous Hierarchy

For centuries, Western thought organized intelligence hierarchically: humans at the apex, other primates below, then mammals, then other creatures in descending order of presumed sophistication. This Great Chain of Being wasn’t merely descriptive; it was prescriptive, justifying exploitation, experimentation and the wholesale destruction of beings deemed “less intelligent.”

Goodall’s work systematically dismantled this hierarchy. She revealed that intelligence operates across multiple dimensions – emotional, social, kinesthetic, spatial, creative – and that different species, and different individuals within species, excel in different domains. A chimpanzee’s working memory can outperform a human’s in certain spatial tasks. Octopuses demonstrate problem-solving abilities that emerge from a radically decentralized nervous system. New Caledonian crows, a corvid species, can fashion tools without ever having seen them used.

The lesson wasn’t that all intelligences are identical. It was that they’re complementary, evolved for different ecological niches, optimized for different challenges, valuable in different contexts. This recognition of cognitive diversity, what we might call “intelligences” rather than “intelligence,” remains Goodall’s most enduring scientific legacy.

The AI Parallel

Now consider our current moment. We’re witnessing the rapid emergence of artificial intelligence systems that demonstrate remarkable capabilities: pattern recognition at superhuman scales, natural language processing, complex game-playing, even nascent forms of reasoning. Yet we’re making the same categorical error that Goodall spent her life correcting.

We ask whether AI is “intelligent” or not, whether it “understands” or merely “processes,” whether it “thinks” or simply “computes.” These binary questions repeat the mistake of assuming intelligence is a single, scalar property that you either possess or lack. They ignore what Goodall taught us: that intelligence is multidimensional, contextual and collaborative.

Artificial intelligence excels in domains where humans may struggle: processing vast datasets, identifying subtle patterns across millions of variables, maintaining perfect consistency, operating without fatigue or bias drift. Natural intelligence excels where AI currently falters: contextual judgment, ethical reasoning, creative synthesis, emotional attunement, understanding ambiguity and nuance.

The salient question isn’t whether AI matches human intelligence on some imagined universal scale. It’s how different forms of intelligence, natural and artificial, can complement each other to create something neither could achieve alone: hybrid intelligence.

Prosocial AI And Goodall’s Vision

Goodall dedicated her later decades to conservation advocacy, recognizing that understanding intelligence meant little if we destroyed the ecosystems in which it flourished. This holistic view, that cognitive capacity exists embedded in social and ecological contexts, provides the essential framework for developing prosocial AI.

Prosocial AI represents a fundamental reorientation: artificial intelligence designed not to replace human intelligence but to amplify our collective capacity for flourishing. It’s pro-people (enhancing human agency and wellbeing), pro-planet (supporting ecological stability and restoration), pro-profit (creating sustainable economic value) and pro-potential (expanding rather than constraining future possibilities). This quadruple win isn’t utopian fantasy; it’s pragmatic systems thinking.

Consider precision conservation, where AI-powered monitoring systems help track endangered species across vast territories, while human rangers provide the contextual judgment about when and how to intervene. Or collaborative medical diagnosis, where AI detects subtle patterns in imaging while physicians integrate patient history, values, and lived experience. Or climate modeling, where machine learning identifies tipping points while human judgment navigates political and ethical tradeoffs.

In each case, hybrid intelligence emerges from complementarity: artificial systems handle computational intensity while human intelligence provides contextual wisdom, ethical grounding, and creative synthesis. Neither replaces the other; together, they become capable of addressing challenges that neither could tackle alone.

From Observation To Action

Goodall’s methodology was as revolutionary as her findings. She named her subjects rather than numbering them, acknowledged their personalities and recognized her own subjectivity as inseparable from the research process. This approach, once dismissed as unscientific anthropomorphism, is now understood as a more sophisticated epistemology, one that accepts observer and observed as mutually constituted.

The same methodological humility must inform our development of AI systems. We cannot design artificial intelligence from a position of presumed neutrality. Our values, biases, priorities, and blind spots are encoded into every training dataset, every objective function, every deployment decision. Acknowledging this doesn’t weaken AI development; it strengthens it, forcing us to make our normative commitments explicit and contestable.

Prosocial AI requires Goodallian patience: long-term observation of how AI systems actually behave in complex social ecologies, not just how we imagine they’ll behave in controlled environments. It requires naming and tracking individual systems, understanding their “personalities” (behavioral patterns), and accepting that we’re participants in, not external controllers of, the sociotechnical systems we’re creating. That requires double literacy: a holistic understanding of self and society, of people and planet, and of the mutual influence of natural and artificial intelligences on each other.

The Stakes Of This Transition

We stand at a threshold Goodall would recognize. She arrived at Gombe as chimpanzee populations were beginning their catastrophic decline under habitat loss, human disease and internal conflict that split the Gombe community. Today we are navigating a hybrid tipping zone, in which the societal integration of AI accelerates exponentially. Yet while our technology becomes ever more powerful, our individual agency erodes, as does the resilience of our planet: seven of the nine planetary boundaries have now been crossed, with potentially irreversible damage.

The next decade will determine whether artificial intelligence amplifies the best of natural intelligence – creativity, compassion and collaboration – or whether it entrenches our worst tendencies: exploitation, ego and ecological destruction.

Rather than debating whether AI is good or bad, the question now is how to design, deliver and deploy it to bring out the best in and for people and planet – and whether we develop it with the intellectual humility Goodall modeled: recognizing multiple valid forms of intelligence, resisting hierarchical thinking, acknowledging our cognitive limitations and committing to the patient, rigorous work of understanding how different intelligences can support rather than supplant each other.

Goodall spent 60 years teaching us that other minds don’t need to resemble human minds to deserve recognition, respect, and protection. The parallel lesson for our AI age: artificial minds don’t need to replicate human cognition to be valuable partners in addressing our shared challenges. What matters is complementarity, not comparability.

Four Essential Commitments For Decision-Makers

As we honor Dr. Goodall’s legacy, policymakers, business leaders and technologists can translate her insights into concrete action:

1. Mandate Cognitive Diversity Assessments In AI Development

Require AI developers to explicitly map which dimensions of intelligence their systems enhance and which they diminish. Before deployment, teams must demonstrate how artificial and human intelligence will complement each other in practice, not just in theory. Reject the dichotomy of “humans or machines” in favor of “natural and artificial intelligences curated together.”

2. Establish Long-Term AI Behavioral Observatories

Create institutional mechanisms, modeled on Gombe’s continuous chimpanzee monitoring, to track how AI systems actually behave over extended periods in diverse contexts. Short-term testing misses emergent properties, just as brief encounters with chimpanzees miss their complex social dynamics. Fund multi-year, multi-site studies of AI impacts on human cognition, social structures, and ecological systems.

3. Implement Prosocial AI Certification Standards

Develop measurable criteria for the quadruple win: demonstrable benefits for people (enhanced agency, wellbeing), planet (reduced environmental impact, supported conservation), profit (sustainable economic value) and potential (expanded future capabilities). Make prosocial certification a prerequisite for public procurement and create market incentives for AI that serves collective flourishing, not just extraction.

4. Invest in Hybrid Intelligence Infrastructure

Redirect resources from winner-take-all AI races toward building the sociotechnical infrastructure for human-AI collaboration. This means redesigning education systems to cultivate uniquely human capacities (ethical reasoning, creative synthesis, contextual judgment), creating new professional roles that bridge human and artificial intelligence, and ensuring AI systems amplify rather than replace human decision-making in critical domains.

Jane Goodall didn’t just study chimpanzees. She studied intelligence itself, and discovered it was richer, stranger and more distributed than anyone imagined. As we navigate our cohabitation with artificial minds, her legacy offers more than inspiration. It provides a rigorous framework for recognizing cognitive diversity, a methodological commitment to long-term observation and an ethical insistence that understanding intelligence means protecting the conditions in which it flourishes.

Can we draw on the best of the past and present to curate a hybrid future in which everyone has a fair chance to fulfill their inherent potential? Are we wise enough to let different forms of thinking, primate and algorithmic, emotional and computational, evolved and engineered, teach each other, challenge each other, and ultimately serve the flourishing of all intelligence on this magical living planet?

That would be the tribute she deserves.



Forbes
