Is TikTok Making Us Sicker? Inside Medical Misinformation

The priciest thing in American healthcare right now isn’t a biologic or a breakthrough device. It’s bad information. Medical misinformation from social media platforms isn’t just a battle in the culture wars; it’s an expanding cost center. AI has supercharged the volume and velocity of plausible nonsense, platforms still reward novelty over accuracy, and the bill shows up as ER visits that never had to happen, delayed care, and counterfeit meds. Slowing the spread of misinformation is practical: prebunk early, vet before sharing, require ad transparency, and crack down on obvious scams. These are steps families, employers, platforms, and health systems can take now.

Why This Feels Different Now

If it feels like falsehoods move faster than facts, that’s because they do. An M.I.T. study of Twitter (now X) rumor cascades found false news is 70% more likely to be reshared and travels farther, faster, and deeper than the truth, because people, not bots, boost novelty and emotion. Swap in “cancer cures” or “weight-loss hacks,” and the mechanics don’t change. The attention economy rewards outrage; falsehoods were already spreading faster, and algorithmic optimization supercharges the spread.

The World Health Organization calls it an infodemic: an excess of information, some accurate and some not, that makes it difficult for people to find trustworthy guidance when it matters most. You manage an infodemic with discipline, critical thinking, and public-health tools: monitor, listen, intervene. A person’s inherent bias, of course, will shape what they choose to believe or discredit. During crises, when things move fast, bold claims and confident storytelling set up shop before the facts arrive, which means your health may hinge on your ability to think critically.

Doctors are feeling it at the point of care. In a 2025 survey of more than 1,000 U.S. physicians, 61% said their patients were influenced by misinformation at least a moderate amount in the past year; 86% said the overall incidence has increased compared with five years ago; and 57% said it significantly undermines their ability to deliver quality care. None of this is new, of course. Snake oil and healing waters have been around forever, but AI and social networks have put misinformation into hyperdrive.

Where Misinformation Hurts Most

Sadly, the resurgence of infectious disease is real. The U.S. declared measles eliminated in 2000; case closed, or so we thought. Yet as of August 26, 2025, the CDC reported 1,408 confirmed cases across 43 jurisdictions, already one of the highest tallies in a quarter-century, with earlier federal data noting hundreds hospitalized and three deaths by mid-April.

Systematic reviews show cancer misinformation (with clickbait like “alkaline diets cure tumors”) circulates widely online and often rakes in more engagement than accurate content — exploiting fear and uncertainty, with measurable exposure across Facebook, YouTube, Instagram, and TikTok.

And let’s not forget the brick-and-mortar monetization: the FTC recently banned a network behind deceptive stem-cell claims and ordered more than $5 million in penalties and refunds. Enforcement is necessary, but it’s reactive by design.

Mental Health: Misused Terms, DIY Diagnoses, Delayed Care

Social platforms have helped raise awareness and reduce stigma — that’s the upside. The downside: psychological terms are getting hollowed out in the feed. As Lebanon Valley College counseling professor Dr. Kathy Richardson notes, words like “gaslighting,” “boundaries,” “toxic,” and even “trauma” are used so loosely that they lose clinical meaning. Calling a minor inconvenience “traumatic” doesn’t just water down language; it makes it harder to recognize real abuse and serious conditions. Mislabeling a disagreement as “gaslighting,” for instance, obscures a very specific manipulation tactic designed to make someone doubt their reality.

Clinicians are also seeing a spike in self-diagnosis and self-prescribed “protocols” sourced from short videos and influencer threads. Richardson says new clients increasingly arrive with a diagnosis and treatment plan in hand, courtesy of social media. That creates friction in care, delays proper evaluation, and, in some cases, keeps people from seeking professional help at all as they chase unproven or experimental fixes. The pattern matches the rest of the infodemic: high-arousal language and confident storytelling from young, camera-ready faces outpace careful assessment, and patients pay for it in time, money, and outcomes.

On TikTok, about half of popular ADHD videos are misleading, and newer analyses show heavy exposure can warp symptom perceptions among young adults. Meanwhile, GLP-1 content (drafting off Ozempic and Wegovy) has exploded, but side-effect discussions are inconsistent and often lack actionable detail. That gap drives risky self-experimentation and counterfeit “alternatives.” The point isn’t to bash TikTok; it’s to acknowledge the incentive structure.

What’s Changed Since the Pandemic

The U.S. Surgeon General formally designated health misinformation a public-health threat in 2021 and called for a whole-of-society response. In 2024–25, that posture expanded to social-platform harms and youth mental health. Translation: this isn’t just post-COVID cleanup; the risk is ongoing.

Europe, meanwhile, is testing hard-edged accountability via the Digital Services Act (DSA). In May 2025, the European Commission preliminarily found TikTok in breach of transparency rules for its ad library, rules meant to let researchers spot manipulation and scams, including health misinformation. Fines can reach 6% of global revenue.

AI Just Turned Up the Volume

Large language models can be coaxed, or deliberately manipulated, into producing plausible but incorrect medical guidance, and generative AI now mass-produces that kind of advice at scale. The World Health Organization warns that large multimodal models used in health care need strict guardrails: rigorous pre-deployment evaluation, transparency about data and limitations, human oversight, and continuous monitoring after launch. In short: the tech scales fast, so the governance must scale along with it.

If you want to frame things in economic terms, Johns Hopkins estimated U.S. COVID mis/disinformation inflicted $50–$300 million per day in monetary harms; at that rate, the tab runs roughly $1.5 billion to $9 billion a month. And that’s a lower bound focused on avoidable health-care use and productivity losses, not the full societal cost.

What Works in the Fight (and What Doesn’t)

“Just add fact-checkers” is not a strategy. The evidence points to what actually works:

  • Prebunking (inoculation): teaching people the techniques of manipulation upfront, such as the use of charged words like “horrifying” and “terrifying,” improves discernment at scale (tested via YouTube ad buys and multi-country randomized trials). It’s not a silver bullet, but it’s one of the few interventions with real-world impact.
  • High-friction sharing (pauses, link-read prompts) and ad transparency reduce junk reach; a minimal sketch of the idea follows this list. That’s where the European Commission’s Digital Services Act ad-repository rules matter.
  • Clinician scripts beat combativeness. Acknowledge concerns, correct with plain language, and offer an action alternative (“Here’s what to do instead”). Toolkits now exist for health systems and community leaders.
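To make “high-friction sharing” concrete, here is a minimal sketch, in TypeScript, of the kind of check a platform could run before a reshare. Everything in it is hypothetical: the names, the five-second threshold, and the idea of tracking whether a link was opened are illustrative assumptions, not any platform’s actual implementation.

    // Hypothetical sketch of "high-friction sharing": before a user
    // reshares a link, check whether they opened it and insert a
    // deliberate pause. Names and thresholds are illustrative only.
    interface ShareAttempt {
      url: string;
      openedByUser: boolean;    // did this user actually open the link?
      secondsSinceSeen: number; // how long the post has been on screen
    }

    type FrictionAction =
      | { kind: "allow" }
      | { kind: "prompt"; message: string }; // show a dialog before sharing

    function frictionCheck(attempt: ShareAttempt): FrictionAction {
      // Nudge users who reshare without reading: the classic
      // "want to read this before sharing?" prompt.
      if (!attempt.openedByUser) {
        return {
          kind: "prompt",
          message: "You haven't opened this link. Read it before sharing?",
        };
      }
      // Pause near-instant reshares, which correlate with reflexive,
      // emotion-driven spread.
      if (attempt.secondsSinceSeen < 5) {
        return {
          kind: "prompt",
          message: "Sharing fast! Take a second to double-check the source.",
        };
      }
      return { kind: "allow" };
    }

    // Example: resharing an unread link triggers the prompt.
    console.log(frictionCheck({
      url: "https://example.com/miracle-cure",
      openedByUser: false,
      secondsSinceSeen: 2,
    }));

Twitter tested a version of the first prompt in 2020; the sketch simply generalizes the pattern: make reflexive sharing slightly slower than deliberate sharing.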

A Pragmatic Playbook (for Families, Employers, and Institutions)

Create a default “pause protocol.” Before you share health advice: Who said it? What’s their financial stake? Is there a study? If it’s a video that promises “one weird trick,” there’s your red flag. Always follow the money.

Just as banks and other financial institutions require two-step verification, so should you when it comes to medical information found online. First, search reputable sources. Second, ask your doctor or your insurer’s nurse line what’s evidence-based for your particular condition.

Follow sources, not vibes. Make a short list (CDC, NIH, academic centers, your insurer’s clinician library). If a claim cannot be found there, treat it as unconfirmed. Miracle cures and biohacks are titillating, but skip the sizzle until you see the study.
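For the spreadsheet-minded, the pause protocol and the source short list can be written down as a checklist in code. This is a toy sketch in TypeScript under stated assumptions: the trusted-source list, the red-flag questions, and the function names are illustrative, not a clinical or editorial tool.

    // Hypothetical "pause protocol" as a checklist: default to
    // "verify first" unless a claim comes from a trusted source
    // and clears the basic red-flag questions.
    const TRUSTED_SOURCES = ["cdc.gov", "nih.gov", "who.int"]; // your own short list

    interface HealthClaim {
      sourceDomain: string;      // where the claim was published
      sellerHasStake: boolean;   // does the poster profit if you believe it?
      citesStudy: boolean;       // does it point to actual research?
      promisesQuickFix: boolean; // "one weird trick" style language
    }

    function pauseProtocol(claim: HealthClaim): "share" | "verify first" {
      const fromTrustedSource = TRUSTED_SOURCES.some(
        (domain) => claim.sourceDomain.endsWith(domain),
      );
      // Count the red flags: financial stake, no study, miracle framing.
      const redFlags =
        Number(claim.sellerHasStake) +
        Number(!claim.citesStudy) +
        Number(claim.promisesQuickFix);
      return fromTrustedSource && redFlags === 0 ? "share" : "verify first";
    }

    // Example: a supplement pitch with no study behind it.
    console.log(pauseProtocol({
      sourceDomain: "miraclecures.example",
      sellerHasStake: true,
      citesStudy: false,
      promisesQuickFix: true,
    })); // -> "verify first"

The design choice mirrors the prose: “share” is the state a claim has to earn, and “verify first” is the default.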

Platforms and policymakers: adopt standardized ad repositories, rapid-response labels for acute outbreaks, and auditable APIs for vetted researchers. UNESCO’s digital-platform governance guidelines are a reasonable blueprint; national regulators should align.

The cost of doing nothing isn’t theoretical. It shows up in outbreaks we’d already solved, in delayed cancer care, in counterfeit weight-loss meds, and in hospital bills that didn’t need to happen. If we want a healthier country, we must improve the information environment we’re asking patients to navigate.

Medical misinformation is not a side effect of the internet; it’s a bona fide business model. AI raises both the ceiling and the floor: more precision for good actors, more plausible poison for bad ones. Pretending it’s a “user education” problem lets institutions off the hook. Fix the incentives, standardize transparency, prebunk at scale, and give clinicians the time and tools to counsel without playing whack-a-mole in their spare minutes.



