Survey Says 67% Of Jobs Use AI, But Do Leaders Understand Its Limits?

In a survey of 1,000 knowledge workers, 67% of respondents said that their companies use AI, and 56% said their companies encourage AI usage. Owl Labs conducted the survey and found that Gen Z employees are the most likely to report being “heavily reliant” on AI, with 70% of the youngest generation leaning on the tech for a multitude of tasks. That lines up with the observations of OpenAI CEO Sam Altman, who recently shared that Gen Z workers view ChatGPT as a “life adviser”. Embracing technology is a good idea, but is Gen Z squeezing LLMs too tightly? What do these current and future workforce leaders need to know about what AI can (and can’t) do?
Survey shows AI adoption increasing – but are we relying too heavily on ChatGPT?
Tell Me How You Use ChatGPT and I’ll Guess Your Age
“Older people,” Altman told an audience at Sequoia Capital’s AI Ascent event, “use ChatGPT as a Google replacement. People in college use it as an operating system.” TechCrunch reports that younger adults have fairly complex prompts memorized – or saved in the notes apps on their phones – and that they rapidly and frequently pepper the program with problems and queries. Altman goes on to say that “they [Gen Z and Gen Alpha] don’t really make life decisions without asking ChatGPT what they should do. It has the full context on every person in their life and what they’ve talked about.”
AI expert Amanda Caswell, a writer for Tom’s Guide, says, “Personally, I’ve used ChatGPT to tackle everything from project summaries to panic attacks, and have found it to be a great sounding board when facing tough choices. While it’s no substitute for human guidance or a therapist, ChatGPT can be a great assistant in a pinch.”
There’s definitely an upside to having a second opinion on various aspects of your life and work – especially when that perspective has access to trillions of data points, the works of Freud, Jung, and James, as well as most of recorded history. But there are risks that come with using AI and leaning on the platform too heavily. Experts say that it’s important to be careful how much you share with AI – and how you use the tool at work. The rewards are many, but so are the risks – and using LLMs wisely is good counsel for every generation.
What to Watch Out For With AI: Experts Weigh In
“AI has no world model,” according to process scientist Sam Drauschak. Questions that require real-world context can be a challenge for ChatGPT and other AI platforms. “When you ask it to read a picture of a clock, it’s strictly pattern prediction.”
Louis Rosenberg – Stanford professor, inventor, AI scientist and author of Our Next Reality – says that sometimes AI is dyslexic. And, as someone who is dyslexic himself, Rosenberg offers an interesting read on why AI struggles to tell time. “When I recall things in my mind (objects, environments, images, or text), I don’t visualize them from a fixed first-person perspective. I think about them from all directions at once, more as a vague cloud of perspectives than a single, grounded orientation,” he explains – articulating the AI point of view. That’s how large language models see things – and it often means that vector orientations (such as “clockwise”) are elusive to AI platforms. “When you ask AI to interpret a tissue sample,” he says, citing a healthcare application that is becoming more and more common, “accuracy is not impacted by orientation.”
But innovative problem-solving and creative thinking are. While AI can generate novel combinations of text and produce creative works in art and music, its capacity for truly original thought and breakthrough innovation remains limited. Still, the speed of the platform can be astonishing. So, should we treat AI like just another voice in the room, or the voice of God? “Think about it more like an intern,” Drauschak advises.
“AI doesn’t do anything new. It can synthesize things from lots of different domains. And the labor of synthesizing things can seem to generate insight. Like a brilliant intern – with the ability to process billions of data points at once – it’s going to come up with good ideas and contribute. But it’s a really good idea to check on the work,” he explains. While you may not be asking AI to read a clock, perspective always matters – especially when it comes to work product, context and point of view.
Room for Error with AI
AI doesn’t possess emotions or empathy, Drauschak explains. “It doesn’t have a conversation where it acknowledges any sort of room for error or humility or, you know, ‘I only feel 80% about this.’” Drauschak likens it to the consultant’s mantra: “I may be incorrect, but I’m never unsure of myself.” Hallucinations and facts are presented with the same level of confidence.
AI has no world model, so anything that requires real-time context creates progressive levels of failure, according to Drauschak. Long-term planning can be a challenge as well, as AI platforms have limited memory. To be fair, certain programs and prompts can provide reference and context – a context which comes to human beings naturally, as a result of being in the real world.
And reading the room is a challenge for AI. “AI is the future,” Dylan Matthews shares in Vox. “It just can’t predict it.” Science Daily echoes that sentiment, in a post that proclaims that humans are better at predicting social interactions than AI. “AI for a self-driving car, for example, would need to recognize the intentions, goals, and actions of human drivers and pedestrians. You would want it to know which way a pedestrian is about to start walking, or whether two people are in conversation versus about to cross the street,” said lead author Leyla Isik, an assistant professor of cognitive science at Johns Hopkins University who did an extensive study into the limitations of current LLMs. “Any time you want an AI to interact with humans, you want it to be able to recognize what people are doing. I think this sheds light on the fact that these systems can’t right now.” Cathy Garcia, a colleague and contributor to the Johns Hopkins study, says, “Real life isn’t static. We need AI to understand the story that is unfolding in a scene. Understanding the relationships, context, and dynamics of social interactions is the next step, and this research suggests there might be a blind spot in AI model development.”
Leading with AI: Understanding Its Limits to Access Its Capabilities
For leaders and aspiring leaders, the message is one of balance. While the capabilities of AI have opened up seemingly limitless possibilities, those possibilities actually do have limits. The real question, across every generation in the workforce, is: what do you want to outsource to AI? Tools can accelerate results when used correctly. But turning to AI for every direction in your life seems unwise. Indeed, research already shows a decline in cognitive skills from over-reliance on LLMs. Convenience, speed, ease: these are the advantages of AI. But understanding how best to use an AI platform like ChatGPT is crucial for leaders today.
In side-by-side forecasting tournaments run by Metaculus, human beings have beaten AI for the last three quarters running. But that gap is narrowing. Think of how prediction matters in your business: for lawyers, negotiating a settlement requires instincts about where agreement lies, grounded in real-time context – and intuition. Producers at Netflix predict which shows will hit. Intuition, innovation, direction: these uniquely human characteristics still matter in decision-making – even when 67% of workers are using ChatGPT and its cousins. AI can help with the predictive journey, but it’s not the whole trip – at least, not yet.