Should You Be Worried If Your Doctor Uses ChatGPT?

Six years ago, I wrote a piece, “Doctors Use YouTube And Google All The Time. Should You Be Worried?” In 2025, it’s time to ask a new version of that question: your doctor may be using ChatGPT. Should you be worried?
In a recent unscientific survey, technology entrepreneur Jonas Vollmer asked physicians how many used ChatGPT. 76% of the respondents answered “yes.” According to Vollmer, a physician friend also told him, “most doctors use ChatGPT daily. They routinely paste the full anonymized patient history (along with x-rays, etc.) into their personal ChatGPT account.”
My own unofficial conversations with colleagues bear this out, with younger physicians more likely to use AI regularly than older ones.
I think AI tools such as ChatGPT, Grok, Claude, and other LLMs can be very helpful for physicians after they take a good patient history and perform a thorough physical exam. The physician can then describe the patient’s signs and symptoms with appropriate medical precision for the AI to analyze.
In particular, the AI can frequently suggest diagnoses that would not otherwise occur to the physician. For example, Vollmer noted that in a busy urgent care clinic, a patient might be taking “alternative medicines” with unusual side effects that are not widely covered in the traditional medical literature but have been discussed in recent online articles and discussion forums. Thus, ChatGPT acts as an extension of a good physician, not a replacement.
As always, the physician has the final responsibility of confirming any novel hypothesis offered by the AI with their own human judgment, which might include running additional tests to confirm the diagnosis.
We’ve already seen non-physician patients report how ChatGPT reached a diagnosis for themselves or their loved ones after their conditions had stumped doctors for years.
And there are multiple studies showing that AI tools like ChatGPT can be surprisingly good at diagnosis when given patient case reports.
Of course, physicians need to be careful to adhere to all relevant medical privacy laws in their states/countries. They may even consider getting explicit consent from their patients ahead of time to run their (anonymized) data through AI. Physicians already seek second opinions from fellow doctors all the time, as long as privacy rules are met. The same guidelines should apply to consultations with AI.
In many ways, this is comparable to how AIs in driverless cars stack up against human drivers. Driverless Waymo taxicabs in selected cities like Los Angeles perform as safely as (or better than) human drivers in appropriately restricted settings. Tesla owners who use the self-driving mode can rely on the AI to drive safely most of the time, although they still have to be prepared to take control of the wheel in an emergency. Robot cars are not yet ready to replace human drivers in all settings (such as icy Colorado mountain highways in wintertime), but they continue to improve rapidly.
Similarly, we may soon reach the point where a physician who does not use an AI consultant to double-check their diagnoses will be considered to be practicing below the standard of care. We are not there yet, but I can see that coming in the next few years.
Summary: Tools like ChatGPT can be enormously helpful for physicians, provided that the doctor retains ultimate responsibility for the final diagnosis and treatments, and respects the appropriate privacy rules.