Humans And AI Share The Same Flawed Ways Of Thinking

Posted by Ted Ladd, Contributor


Not long ago, a fintech startup ran a simple experiment. They fed their AI loan-approval model the same applicant twice. In one file, the applicant was described as having “a steady record of employment with two short gaps.” In the other, the wording flipped: “multiple breaks in employment.” Same facts, different frame.

The result? The AI approved one loan and denied the other.

That’s when it hit the founders: the machine wasn’t neutral. It was reacting to words, just like humans do.

Psychologists Amos Tversky and Daniel Kahneman, whose work launched behavioral economics, showed decades ago that humans aren’t rational calculators. We rely on shortcuts called heuristics that help us make quick decisions but often lead us astray. The unsettling twist is that artificial intelligence, built on our data and decisions, contains those same flaws.

Representativeness: The Stereotype Trap

We confuse resemblance with reality.

We often judge probability by resemblance rather than logic. That’s why people assume “feminist bank teller” is more likely than “bank teller”—a classic Tversky and Kahneman scenario known as the Linda problem. Or we assume tall people must be good basketball players, even though most people – including tall people – do not play basketball.

AI isn’t immune to this trap. Amazon once shelved a résumé-screening tool because it kept favoring male candidates for coding jobs. The reason? It had been trained on years of historical hiring data, and because most past hires were men, the model learned to treat “male” as what a good engineer looks like. In another notorious case, an image recognition system labeled Black people as “gorillas.” The machine didn’t invent the bias. It absorbed it from human decisions.

Availability: Vivid Feels Common

What’s easy to remember feels more likely than it is.

We overestimate risks when examples are vivid and memorable. After 9/11, many Americans drove instead of flying, vastly overstating the danger of planes. Shark attacks get headlines, so people fear them far more than drowning or car accidents, which are statistically far more common.

AI makes the same mistake. Chatbots love to serve up splashy celebrity scandals because those stories dominate the web, while more mundane but common events get buried. Predictive policing systems over-target neighborhoods with long arrest records, not necessarily because they’re more dangerous, but because the “data” is more visible there. Ask AI for a picture of a field and it will deliver a spectacular landscape with a mountain range or a glimmering ocean rather than the far more common patch of dirt, because people rarely photograph patches of dirt.

Anchoring: First Numbers Stick

The first figure we see tilts everything after it.

The first number we see lingers in our minds, skewing everything after it. Shoppers overvalue discounts by fixating on inflated “original prices.” Even judges have been shown to hand down harsher sentences after rolling higher numbers on dice, an anchor that should have been irrelevant. Ask students to estimate the revenue of a Starbucks store, but first ask them to write down the last two digits of their cell phone number: those with higher digits consistently deliver higher revenue estimates, even though the two values are obviously unrelated.

AI responses anchor just as easily. Ask a model, “Why is remote work failing?” and it will generate a list of failures. Reframe it as, “Why is remote work thriving?” and you’ll get the opposite. The anchor is set in the very first phrase.
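The anchoring is easy to probe for yourself: pose the same topic with opposite opening frames and compare the answers. The sketch below is one hypothetical way to run that check, assuming the OpenAI Python SDK and an API key in the environment; the model name is illustrative, not a recommendation, and any chat model would show the same pattern.

```python
# Hypothetical probe: how the opening frame anchors an LLM's answer.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Why is remote work failing?",   # anchored on failure
    "Why is remote work thriving?",  # anchored on success
]

for prompt in prompts:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    # Same topic, but each answer tends to follow the frame it was handed.
    print(prompt, "->", resp.choices[0].message.content[:200], "\n")
```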

Framing: Same Facts, Different Choices

It’s not just the numbers—it’s how you spin them.

The way information is presented changes decisions—even if the facts are identical. A surgery with a “90% survival rate” sounds far more appealing than one with a “10% mortality rate.” Investors lean toward a stock pitched as having “20% upside potential” over one with “an 80% chance of no gain.”

AI picks up the same cues. A sentiment model tilts differently depending on whether you say “the outlook is improved” or “the outlook is less worse.” AI marketing systems shift tone if a product is described as “saving money” rather than “avoiding losses.” The words change the frame—and the output follows.
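That tilt is simple to observe with an off-the-shelf sentiment classifier. The sketch below uses the Hugging Face transformers pipeline with its default sentiment model; the library choice is an assumption here, and any classifier would illustrate the same point.

```python
# A minimal framing check: identical facts, different wording, compare the scores.
# Assumes the Hugging Face `transformers` library and its default sentiment model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

framings = [
    "The outlook is improved.",
    "The outlook is less worse.",
]

for text in framings:
    # The underlying situation is the same; the label and score can still tilt.
    print(text, "->", classifier(text))
```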

Loss Aversion: Losses Hurt More Than Gains Help

The pain of losing outweighs the joy of winning.

Losing stings more than winning pleases. Investors hang onto losing stocks because selling feels like locking in pain. People turn down fair-market offers for concert tickets that were given to them for free because selling them feels like giving up something valuable—even if they wouldn’t have bought them at market price.

AI shows its own version of loss aversion. Recommendation systems lean heavily toward safe, mainstream choices like pop hits, rather than risk suggesting obscure gems that could flop. And once a model has been trained, it clings to its old “priors,” resisting updates even when new evidence suggests a better path.

Overconfidence: The Illusion of Precision

We sound more certain than we should be.

We consistently overrate our knowledge. Entrepreneurs exaggerate their odds of startup success. Doctors make bold diagnoses with unwarranted certainty, even when the data doesn’t support it. People put more faith in a precise value like 1,249.36 than in a rounded value like 1,250, even when both come from the same data.

AI behaves the same way. Generative models spin out fabricated citations or other “hallucinations” with total confidence, as if delivering gospel truth. Forecasting systems spit out narrow probability ranges that miss the messy uncertainty of real life. Like us, they sound sure when they shouldn’t be.

Probability Weighting: Distorted Odds

Tiny chances look bigger, and big chances look smaller.

Humans exaggerate small chances and downplay large ones. That’s why millions buy lottery tickets despite absurd odds, or overpay for extended warranties on electronics.
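Tversky and Kahneman later put a curve to this distortion: the probability weighting function from their cumulative prospect theory. The sketch below uses their commonly cited curvature estimate of about 0.61 for gains, a simplifying assumption here; it shows a 1% chance feeling like roughly 5% and a 95% chance feeling like roughly 79%.

```python
# Probability weighting from cumulative prospect theory (Tversky & Kahneman, 1992):
# w(p) = p^g / (p^g + (1 - p)^g)^(1/g), with curvature g ≈ 0.61 for gains.
def weight(p: float, g: float = 0.61) -> float:
    return p**g / (p**g + (1 - p)**g) ** (1 / g)

for p in (0.01, 0.50, 0.95):
    # Small chances are inflated; near-certainties are shaved down.
    print(f"objective {p:.0%} feels like roughly {weight(p):.1%}")
```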

AI misreads odds too. Fraud detection systems raise alarms over rare anomalies, while chatbots sometimes give dramatic but unlikely answers—underplaying the boring, common outcomes that are far more probable.

Status Quo: The Power of Default

We stick with what we know, even when better options exist.

Change feels risky, so we cling to what we know. Retirement savings surge when enrollment shifts from requiring people to opt in to enrolling them automatically and letting them opt out. Many households stick with expensive cable packages even after cheaper streaming options emerge.

AI also favors the familiar. Autocomplete functions recycle common word sequences, and even repeat common typos, locking language into predictable grooves. Legal AI tools lean heavily on precedent, echoing “how it’s always been done” instead of breaking new ground.

The Mirror We Built

Human biases evolved as survival shortcuts. In today’s world, they often backfire. AI’s biases, meanwhile, are the shadows of our data and our design choices. When combined, they form a feedback loop: human distortions train the machines, and the machines reinforce those distortions back onto us. We built AI in our image, and it shows.

The danger isn’t just that AI shares our biases. It’s that it does so at scale, with speed, polish, and authority. A single human bias might cost a bad investment or a poor judgment call. An AI bias, amplified across millions of interactions, can quietly shift markets, policies, and beliefs.

The question isn’t whether AI will be biased. It already is. The real question is: what do we do about it?

This is the question that Priyanka Shrivastava and I are exploring in our project on Skeptical Intelligence.


