Starved Of Context, AI Is Failing Where It Matters Most

By Kolawole Samuel Adebayo, Contributor


In late 2024, Texas Attorney General Ken Paxton announced a first-of-its-kind settlement with Pieces Technologies, a Dallas-based health-AI company that had marketed its clinical assistant as nearly flawless — touting a “severe hallucination rate” of less than one in 100,000.

But an investigation by the AG’s office found those numbers lacked sufficient evidence. The state concluded that Pieces had misled consumers — specifically hospital systems — into believing the tool could summarize medical records with a level of precision it simply didn’t possess.

Although no patients were harmed and no fines were issued, Pieces agreed to new disclosures about accuracy, risk and appropriate use — an early legal signal that performance on paper isn’t the same as performance in the world.

Critics like cognitive scientist and AI expert Gary Marcus have long warned that today’s large language models are fundamentally limited. As he put it, “they are approximations to language use rather than language understanding” — a distinction that becomes most dangerous when models trained on general data are dropped into highly specific environments and misinterpret how real work is done.

According to Gal Steinberg, cofounder and CEO of Twofold Health, the problem at the heart of many AI disappointments isn’t bad code. It’s context starvation: the gap between how a model performs on paper and what the job actually demands. “Because the ‘paper’ only sees patterns, not purpose,” he told me. “A model can rank words or clicks perfectly, yet still miss the regulations, workflows, and unspoken norms that govern a clinic or any business. When the optimization target ignores those constraints, the AI hits its metric and misses the mission.”

Context: The Missing Ingredient

Steinberg described context as “everything the spreadsheet leaves out — goals, guardrails, jargon, user emotions, compliance rules and timing.”

When AI tools fail, it’s often not because they’re underpowered but because they’re underinformed. They lack the cultural cues, domain nuance, or temporal awareness that human teams take for granted. For example, a 90-second silence during a medical therapy session might be a red flag. In an AI transcript, it’s just dead air. In financial reporting, a missing initialism could signify fraud. To a model trained on public language, it could just be another acronym.
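To make that concrete, here is a minimal, hypothetical sketch (not Twofold’s actual pipeline) of how a transcription workflow could surface that 90-second silence instead of discarding it. The Segment structure, the threshold and the sample session are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One timestamped utterance from a speech-to-text transcript (illustrative)."""
    start: float  # seconds from session start
    end: float
    text: str

def flag_long_silences(segments: list[Segment], threshold_s: float = 90.0) -> list[tuple[float, float]]:
    """Return (gap_start, gap_end) pairs where the pause between consecutive
    utterances exceeds the threshold -- the context a raw transcript drops."""
    gaps = []
    for prev, nxt in zip(segments, segments[1:]):
        if nxt.start - prev.end >= threshold_s:
            gaps.append((prev.end, nxt.start))
    return gaps

# A 95-second pause mid-session gets flagged for human review instead of
# being summarized away as uneventful.
session = [
    Segment(0, 12, "How have you been sleeping?"),
    Segment(14, 20, "Not great, honestly."),
    Segment(115, 121, "Can we talk about something else?"),
]
print(flag_long_silences(session))  # [(20, 115)]
```

The point is not the code itself; it is that someone who knows the domain has to decide that a long pause is signal rather than noise before any model can treat it that way.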

That’s why at Twofold Health, he noted, the company maps context by asking a simple set of questions: Who is in the room? What are they trying to get done? And what happens if we get it wrong?

Another big problem, he argued, is that most companies treat context like it’s something you just upload once and forget about. But things change. Rules change. Requirements change. “If you don’t update the prompts and training, the AI will get off track,” Steinberg told me.
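What keeping context fresh might look like in practice is necessarily a sketch. The snippet below assumes a made-up registry of context documents, each tagged with its last review date, and refuses to build a prompt from anything past a 90-day window; the names, dates and window are all invented for illustration.

```python
from datetime import date, timedelta

# Illustrative registry: each piece of injected context carries a review date.
CONTEXT_DOCS = {
    "billing_codes":   {"reviewed": date(2025, 1, 15), "text": "..."},
    "intake_workflow": {"reviewed": date(2024, 6, 2),  "text": "..."},
}
MAX_AGE = timedelta(days=90)

def build_prompt_context(today: date) -> list[str]:
    """Assemble prompt context, refusing to ship anything past its review window."""
    stale = [name for name, doc in CONTEXT_DOCS.items()
             if today - doc["reviewed"] > MAX_AGE]
    if stale:
        # Surface drift loudly instead of quietly feeding the model outdated rules.
        raise RuntimeError(f"Context needs re-review before use: {stale}")
    return [doc["text"] for doc in CONTEXT_DOCS.values()]

try:
    build_prompt_context(date(2025, 3, 1))
except RuntimeError as err:
    print(err)  # Context needs re-review before use: ['intake_workflow']
```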

That’s why a lot of early AI projects are now sitting unused. The RAND Corporation estimates that over 80% of AI projects fail or stall, often not because the models don’t work, but because the context they were trained in no longer matches the environment they’re deployed in. The AI looks right on paper but performs badly in practice, like an actor reciting lines from the wrong play.

Building True Intelligence

The solution, according to Steinberg, isn’t just to make AI models smarter but to make them better understand their environments of deployment. “This starts with putting people who know the field into the AI process. At Twofold, clinicians, not engineers, do some of the most important work. They help the AI understand language, ethics, and rules based on experience,” he said.

And then there’s the unglamorous work that people rarely talk about: Choosing which edge cases matter, deciding how to standardize informal language, or recognizing when a form’s structure matters more than its content. These decisions often seem too small to matter until they compound into system-level failure.

Earlier research has shown that AI models trained on generalized datasets often perform unpredictably when deployed in more specialized environments — a phenomenon known as domain shift. In one widely cited paper, researchers from Google and Stanford noted that modern machine learning models are often “underspecified,” meaning they can pass validation tests but still fail under real-world conditions.
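A toy illustration of that failure mode (not the cited paper’s experimental setup, and using invented data): a classifier that learns a spurious shortcut sails through a validation set drawn from the same distribution as training, then degrades once deployment data breaks the shortcut.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n: int, shortcut_reliability: float):
    """Two features: a weak genuine signal, and a 'shortcut' that agrees with
    the label only as often as shortcut_reliability allows."""
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 1.0, n)
    shortcut = np.where(rng.random(n) < shortcut_reliability, y, 1 - y) + rng.normal(0, 0.1, n)
    return np.column_stack([signal, shortcut]), y

X_train, y_train   = make_data(2000, 0.95)  # the shortcut almost always matches the label
X_val, y_val       = make_data(500,  0.95)  # validation drawn the same way: looks fine
X_deploy, y_deploy = make_data(500,  0.50)  # in deployment, the shortcut is useless

clf = LogisticRegression().fit(X_train, y_train)
print("validation accuracy:", round(clf.score(X_val, y_val), 2))       # high -- looks ready
print("deployment accuracy:", round(clf.score(X_deploy, y_deploy), 2)) # far lower under shift
```

Nothing about the model changed between the two scores; only the environment did.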

In healthcare and finance, where stakes are high and decisions carry liability, that margin of error isn’t tolerable. It’s a lawsuit waiting to happen.

Even Meta’s chief AI scientist, Yann LeCun, has argued, sometimes bluntly, that today’s large models lack common sense, and has warned that the industry is moving too fast in deploying general-purpose models without domain grounding. Speaking at the National University of Singapore in April 2025, LeCun challenged the prevailing belief that larger models mean smarter AI: “You cannot just assume that more data and more compute means smarter AI.”

He argued that while scaling works for simpler tasks, it fails to address real-world complexity: the nuance, ambiguity and change of actual environments. He called instead for “AI systems that can reason, plan and understand environments in a human-like way.”

And yet, in Cisco’s 2024 AI Readiness Index, a staggering 98% of business leaders reported increased urgency to deploy AI solutions in the past year, often without a clear framework for measurement or accountability. In that climate, it’s easy to see how context falls to the bottom of the checklist.

That’s the risk Steinberg is trying to flag: Not just that models might hallucinate, but that no one inside the business is prepared to take ownership when they do. “We talk a lot about accuracy and too little about accountability,” he said. “Context is not only knowing the right answer; it’s knowing who owns the consequence when the answer is wrong. Build that accountability path first, and your AI will have a healthier diet of context from day one.”
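What that accountability path could look like is open-ended, but as a minimal sketch under invented roles and thresholds: every category of AI-generated artifact maps to a named owner, and anything below a confidence bar is held for that person’s sign-off rather than shipped.

```python
from dataclasses import dataclass

# Illustrative ownership map: no artifact type exists without a named human owner.
OWNERS = {"clinical_note": "lead clinician", "billing_summary": "billing manager"}
REVIEW_THRESHOLD = {"clinical_note": 0.90, "billing_summary": 0.75}

@dataclass
class Draft:
    kind: str
    text: str
    model_confidence: float

def route(draft: Draft) -> str:
    """Release only above the bar; otherwise hold for the accountable owner."""
    owner = OWNERS[draft.kind]
    if draft.model_confidence < REVIEW_THRESHOLD[draft.kind]:
        return f"HOLD for {owner} (confidence {draft.model_confidence:.2f})"
    return f"Release; accountable owner on record: {owner}"

print(route(Draft("clinical_note", "Patient reports improved sleep...", 0.82)))
# HOLD for lead clinician (confidence 0.82)
```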

Your AI Needs Better Anchoring

Context doesn’t come from adding more layers or more compute. It comes from treating AI like a living, evolving system that needs guidance — not just training. And it comes from putting humans — not just prompts — in the loop.

AI isn’t dumb. But if you starve it of context, it will act that way. The solution isn’t to trust it blindly. It’s to feed it better, check it often and make sure someone is watching when it gets too confident.

“Because a model that hits its metric but misses the mission isn’t just expensive. It’s dangerous,” Steinberg said.


