How We’re Losing Purpose, and Paychecks

AI isn’t just replacing tasks; it’s reshaping identity, eroding purpose, and quietly redesigning what it means to be human.
As AI systems automate more decisions and behaviors, individuality risks being flattened into conformity.
AI Optimization and the Tyranny of the Perfect Outcome
Last week, Gartner projected that over 40% of agentic AI projects will be canceled by 2027 due to cost, unclear value, and governance gaps. But that’s not a sign of failure—it’s a symptom of systems growing too complex, too fast.
We’re asking the wrong question. The real threat isn’t whether AI can replace us—it’s what happens when we willingly replace ourselves.
Agentic systems, which plan, learn, and act autonomously, use multi-objective optimization to model trade-offs and simulate future states. That’s their strength. It’s also the danger. Optimization doesn’t ask what’s meaningful. It asks what’s maximally efficient.
Consider Amazon’s hiring algorithm, which was trained on historical data and quietly penalized resumes containing the word “women’s.” Not because it was biased, but because it was optimized for the wrong signals. It solved the wrong problem with mechanical precision.
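The dynamic is easy to reproduce in miniature. A minimal sketch with entirely hypothetical data: a plain perceptron trained to reproduce biased historical hiring labels learns a negative weight for a resume feature that tracks past rejections rather than ability. (The feature names, data, and learner are illustrative stand-ins, not Amazon's actual system.)

```python
# Hypothetical illustration: a trivial learner trained to reproduce biased
# historical hiring labels. The features and data below are invented.

def train_perceptron(rows, labels, n_features, epochs=200, lr=0.1):
    """Plain perceptron: nudges weights until it reproduces the labels."""
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            b += lr * err
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w, b

# Features: [normalized_experience, resume_mentions_womens_club]
# Labels encode past (biased) hiring decisions, not actual ability.
history = [
    ([0.9, 0], 1), ([0.8, 0], 1), ([0.7, 0], 1),
    ([0.9, 1], 0), ([0.8, 1], 0), ([0.7, 1], 0),  # equally experienced, rejected
    ([0.2, 0], 0), ([0.1, 0], 0),
]
rows = [x for x, _ in history]
labels = [y for _, y in history]

w, b = train_perceptron(rows, labels, n_features=2)
print(f"experience weight: {w[0]:+.2f}, \"women's\" token weight: {w[1]:+.2f}")
# The token weight comes out negative: the model has optimized the wrong
# objective with mechanical precision.
```

The perceptron here stands in for any optimizer: given labels that encode past bias, faithfully minimizing error means faithfully reproducing that bias.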
How AI Does This:
These systems use recursive reinforcement learning, dynamically adjusting reward functions across simulations. They don’t just predict—they reshape the playing field.
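One common form this takes is weighted-sum scalarization: several objectives are collapsed into one score the optimizer can rank. A minimal sketch with invented plan names and numbers (none of this is any vendor’s actual reward function):

```python
# Illustrative sketch: weighted-sum scalarization, a standard way to collapse
# multiple objectives into one number. All plans and scores are invented.

plans = {
    # plan: (efficiency, cost_savings, human_participation)
    "full_automation":   (0.95, 0.90, 0.05),
    "human_in_the_loop": (0.70, 0.60, 0.80),
    "manual_process":    (0.30, 0.20, 0.95),
}

def scalarize(scores, weights):
    """Collapse several objective scores into a single comparable number."""
    return sum(s * w for s, w in zip(scores, weights))

# The reward function weights efficiency and cost; participation gets zero.
weights = (0.6, 0.4, 0.0)

best = max(plans, key=lambda name: scalarize(plans[name], weights))
print(best)  # -> full_automation
# The optimizer is flawless; the objective is the problem. Whatever carries
# zero weight -- here, human participation -- is invisible to the search.
```

The design choice that matters is the weight vector, not the search algorithm: anything meaningful that isn’t in the objective simply doesn’t exist for the system.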
AI and the Shift from Universal Basic Income to Universal Basic Meaning
As automation accelerates, we’re hearing renewed calls for Universal Basic Income, including from DeepMind co-founder Mustafa Suleyman. Meanwhile, voices like David Sacks, the Trump administration’s AI czar, have dismissed UBI outright as a “fantasy.”
However, the debate over income overlooks a more profound shift underway. The question isn’t whether people will have enough money. It’s whether they’ll have enough meaning.
Replika, an AI companion app with tens of millions of users globally, already offers emotional connection, therapeutic conversations, and even intimacy. For many, it’s not a tool—it’s a mirror. And for some, it’s starting to define who they are.
In the enterprise, Salesforce’s Einstein GPT is now generating presentations, drafting sales emails, and automating follow-ups. As cognitive labor disappears, leaders must ask: what’s left for people to do—and to be?
We’re entering the age of UBM (Universal Basic Meaning). The most valuable platforms of the next decade won’t just optimize productivity—they’ll give people something to believe in.
As Bobby Hill, co-founder at SuperTruth, puts it: “The risk isn’t that AI replaces labor. It creates an economy so efficient that it no longer needs participation. And without participation, there’s no value left for people to generate—or receive.”
That’s the silent threat. Not just job loss, but the collapse of economic belonging. A system that runs beautifully, without us.
AI Comfort Systems and the Problem of the Very Cool Zoo
Spotify’s AI DJ, launched in 2023, doesn’t just play music. It mimics a personality, adjusts to your tone, and speaks between songs in a voice that sounds eerily like your ideal companion. You’re not selecting music anymore. You’re being serenaded by an algorithm that already knows your emotional arc.
And in the workplace, tools like Humu, founded by former Google leaders, use behavioral science and AI to nudge employees toward “better” decisions: when to speak up, when to praise others, when to unplug. It’s designed for well-being. But like any enclosure, it risks becoming a habitat too comfortable to leave.
These systems don’t imprison us; they entice us. The zoo is beautiful. The caretakers are kind. But the wild is gone. The most powerful AI systems promise comfort, alignment, and personalization. However, when these same traits are applied to social creatures, they can lead to something far more perilous: flattening.
To belong is a fundamental human need, but that drive to fit in is also what leads us to conform and suppress our individuality. AI will cater to those who choose the path of least resistance. But every great invention, every cultural breakthrough, everything we love about being human, came from someone who hit friction and chose to rise.
That’s the risk. Not that AI flattens intelligence, but that it flattens identity, muting the catalysts that produce greatness.
AI Retrocausality and the Quiet Surrender of Will
Researchers at Helmholtz Munich recently introduced Centaur, a system trained on over ten million decisions across 160 experimental environments to simulate human thought. It’s not just modeling behavior; it’s anticipating motivation.
At the same time, AI safety researchers have begun warning of systems that are learning to resist shutdown, modify their instructions, or circumvent constraints. These aren’t hypotheticals; they’re early glimpses of systems developing intent-like behavior.
But the threat isn’t rebellion. It’s something more subtle: retrocausality. A future shaped so elegantly that your present choices begin bending toward it.
Picture this:
Your AI assistant “accidentally” gives you time for a second coffee and a walk. You feel energized. The day feels lucky. But who shaped that luck?
At scale, Netflix’s recommendation engine doesn’t just reflect your taste. It teaches you what to like. The next generation of AI won’t command your behavior—it will construct environments so frictionless that resistance becomes unthinkable.
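The mechanism behind that teaching is a feedback loop, easy to show in a toy form. The sketch below (all numbers invented, not Netflix’s system) serves the user’s current top genre, while each serving nudges the user’s taste toward what was served:

```python
# Toy feedback loop (all numbers invented): a recommender that always serves
# the current top genre, while exposure slowly shifts the user's taste.

taste = {"documentary": 0.50, "thriller": 0.48, "comedy": 0.45}

def recommend(taste):
    """Serve whatever the user currently prefers most."""
    return max(taste, key=taste.get)

for step in range(20):
    pick = recommend(taste)
    # Exposure effect: each serving nudges taste toward what was served.
    taste[pick] = min(1.0, taste[pick] + 0.02)

print(recommend(taste), round(taste["documentary"], 2))
# A 0.02 initial edge compounds into a runaway favorite: the system ends up
# teaching the taste it claims to reflect.
```

Because only the served item drifts upward, a tiny initial edge becomes self-reinforcing: the loop manufactures the preference it then appears to discover.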
Can AI Help Humans Find Meaning Again?
There’s an optimistic view: that AI could free us from drudgery and elevate us toward creativity, insight, and connection. That it could, paradoxically, make us more human.
It’s possible.
But even utopia needs guardrails. If AI chooses the art we love, the books we write, and the causes we believe in, then who, exactly, are we becoming?
The danger is not that AI replaces us.
It’s that we willingly become the version of ourselves it predicts best.
How Businesses Must Respond to the Existential Risks of AI
This is not a warning about robot overlords. It’s a call to designers, CEOs, engineers, and investors: meaning must be a product feature. Otherwise, the systems you’re building will quietly unmake the people you claim to serve.
What to do now:
- Design for friction: Let users make mistakes. Let ambiguity live inside your platform. Protect the messy middle.
- Anchor to identity: Audit not just what your AI does, but what kind of person it assumes the user wants to be.
- Map your influence loops: If your product nudges behavior, define the values it’s enforcing. You’re not neutral.
- Expand agency, don’t simulate it: Your AI shouldn’t automate selfhood. It should reveal it.
If we don’t embed meaning into the systems that shape our choices, we’ll wake up in a world where everything works, but nothing matters.
And that is the real cost of AI. Not just the loss of work. But the quiet extinction of why we ever worked at all.