Happy Birthday, ChatGPT. Are The Past 3 Years A Reason To Celebrate?

Three years ago, ChatGPT entered the public space. On November 30, 2022, ChatGPT didn’t just launch a product; it launched an era. Today, as we mark its third birthday, the hype is far from over. Every day sees new tools and updates from OpenAI and its competitors. While we are still marveling at the prowess of this creation, something more pervasive is happening beneath the surface. Our expanding artificial assets have begun to affect our sense of self and our ability to act autonomously.

Three years ago, we entered the hybrid tipping zone, and we have made substantial progress within it. That is not a reason for celebration but for commemoration. Agency decay, AI mainstreaming, the race toward AI supremacy and planetary damage are moving forward at full speed, mutually accelerating each other’s pace.

This day should serve as a wake-up call to step up and intervene. We are navigating a critical historical window, likely spanning only a few more years, where the integration of natural and artificial intelligences becomes gradually irreversible – and invisible. The question today is not if our cognitive processes are affected by AI (they are) but if and how we are stepping up to protect and preserve them. Will this hybridity be symbiotic, enhancing our biological potential? Or will it be parasitic, leading to the atrophy of the very qualities that make us human?

Looking back at the past 1,096 days, the trend is not promising. We are trading agency for convenience at a discount rate that future generations cannot afford.

From The Outside In And Vice-Versa

The expansion of AI, an external asset, is affecting our ability to think and feel. The interface of the human mind is being rewritten, and we are barely aware of it. Two examples:

In 2022, a student used AI to cheat on an essay. In 2025, the writer as an identity is dissolving. We are seeing the rise of cognitive atrophy, where the process of writing, the uncomfortable, non-linear struggle that forces us to structure our thoughts, is being offloaded. By skipping the struggle, we forgo the chance to learn and grow.
In the coding world, the entry-level barrier has collapsed, and so has the barrier of understanding. We are breeding a generation of algorithmic architects who have never laid a brick. They can prompt a system to build a bridge, but they lack the granular intuition to know why it stands, or when it will fall.

We do more but think less. Quantity trumps quality.

Expanding The ABCD Of AI Issues: EFGH

Four systemic feedback loops have quietly solidified while we were busy marveling at our tools. They expand the ABCD of underappreciated AI issues examined previously (agency decay, bond erosion, climate conundrum, division of society):

E – Eccentric Or Equalized

Just as social media optimized for engagement, generative AI optimizes for plausibility. It feeds us the average of human thought, smoothing out the edges of creativity and eccentricity. We risk trapping ourselves in a feedback loop in which we train the AI and the AI trains us back to be more predictable, generic and machine-readable. (Are we repeating in the cognitive sphere the phenomenon of American fast-food culture, which has taken over palates around the world, accustoming us to food that is overly sweet and salty, yet bland?)

F – Fast Or Fantastic

We are witnessing a decoupling of knowledge: we have instant access to infinite information online but are losing the internal scaffolding to assess its truth and store it for the future. Yet new ideas are born when the brain connects present experiences with stored knowledge and past memories, making unexpected connections. If everything is stored externally, there is nothing to draw from inside.

G – Guardian Or Guest

We are moving from being guardians of AI to being guests in our own cognitive house. When an algorithm predicts your email reply, your code block, or your dinner choice, it is not only saving you time; it is gently pruning the tree of your volition. At the current stage, the danger is not that AI will deliberately disobey (although it is prone to errors), but that we gradually cease to command it, preferring the path of least resistance offered by the predictive engine.

H – Hyperbolic Or Holistic

The cost of generating nonsense has dropped to zero. The public sphere is flooded with synthetic reality. Combined with the polarization of society and the decline of our appetite for critical thinking, this environment is fertile ground for populism. When everything on screen can be fake, our built-in tendency to believe what we see becomes dangerous. When information dissolves into millions of personalized, AI-curated realities, does truth become relative?

The “use it or lose it” logic applies to the brain as much as to the bicep. Excessive reliance on AI for critical-thinking tasks correlates with a decline in those very skills. Knowing this does not protect us from the consequences unless we take action to protect our biggest capital in a hybrid future: natural intelligence.

Déjà Vu – The Social Media Echo

We have seen this movie before. Ten years ago, we let the attention economy take over, fueled by Silicon Valley’s mantra “move fast and break things.” We broke our attention spans, our teenagers’ mental health and our civic discourse.

Today, we are at risk of letting the intention economy make the same mistake.

Social media hijacked our attention.

Generative AI hijacks our agency.

We cannot repeat the mistakes of the past decade. The choices we make now are the chances we do (not) take. A wait-and-see approach, justified by the claim that overregulation stifles innovation, is a decision to let commercial incentives dictate the architecture of the human mind.

Another Move Forward: Two Systemic Interventions

To fulfill the ambition of bringing out the best in people and planet, we must move from passive adoption to active architectural design.

1. Invest in “Double Literacy” (Cradle To Grave)

We need a radical update to our educational OS. We are currently teaching “prompt engineering,” which is fleeting. We need double literacy:

Human Literacy: A holistic understanding of self and society, people and planet. To master assets that can answer any question, we must become experts at asking questions and questioning answers. We must double down on the skills that AI cannot replicate: compassion, moral reasoning and chaotic creativity. It is time to identify and cultivate what makes us unique, individually and as a species.

Algorithmic Literacy: Not just how to use it, but how it works and why. Understanding the black box, the biases of training data and the economic incentives behind the bot. The UNESCO Competency Frameworks are a start, but we need to go beyond them, toward a more holistic understanding of the impact that generative AI has on our ability to think.

2. Design An Algorithmic Architecture For Social Potential

AI development must move from being user-friendly to being humanity-centric and fair to the planet.

Friction by Design: We need interfaces that introduce friction when it matters. If a student asks an AI to write an essay, the AI should act as a Socratic coach, asking guiding questions rather than dispensing finished answers.

Optimizing for Flourishing, Not Engagement: We need design principles that measure success not by time spent or tokens generated, but by the user’s growth in capability and autonomy. The system should aim to make itself less necessary over time, not more. New metrics are needed; the prosocial AI index is a point of departure.

Three years in, the toddler is walking. Soon, it will run. We cannot put it back in the womb, but we can decide how we raise it and how we fortify ourselves alongside it. The hybrid future is not something that happens to us. It is something we design. Let’s design for the best in us.

Practical Takeaway: Friction Audit

For the next week, identify one cognitive task you have recently offloaded to AI (writing emails, summarizing reports, coding, brainstorming).

The Challenge: Reclaim the task. Do it entirely manually.

  • Observe: Notice the “pain” or “boredom” you feel. That sensation is your brain doing the heavy lifting required to maintain the skill.
  • Reflect: Ask yourself, “If I stop doing this task forever, what part of my thinking capability will atrophy?”

At the onset of our fourth year with ChatGPT and its algorithmic peers, let’s decide to use them as assistants to clear clutter, not as crutches to skip the climb toward a higher version of ourselves.



