Look It Up, Doctor – How AI Has Revolutionized Clinician Research

Posted by John Werner, Contributor


We’ve seen a lot of coverage of AI applications in healthcare over the last three years – as ChatGPT and other models have rapidly evolved, it didn’t take long for professionals and others to realize that the data capabilities inherent in these technologies would be a good fit for medicine.

After all, many doctors now describe care as “data-driven” – with access to more information, doctors can make better decisions. They can draw on surveys showing trends in patient care and outcomes. They can get better diagnoses as AI reviews scans. All of this relies on aggregating and evaluating a lot of data.

But one aspect of this is medical research – the literature review your doctor does when trying to work out an optimal care plan.

Personal Care and Big Data

Of course, care plans have to be personal – one size does not fit all. But personal care plans can be greatly enhanced by knowledge of what works for larger numbers of patients.

Enter a tool called OpenEvidence, which is taking off in a big way across the American healthcare industry. Experts currently estimate that OpenEvidence is in use at 10,000 hospitals and care centers, and by approximately 40% of American doctors. So this is a big business.

What does OpenEvidence actually do?

AI Medical Research in Action

When prompted by a doctor, the software searches more than 35 million peer-reviewed studies and clinical guidelines and comes back with a synthesized answer, citing its sources for validation.
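For the technically curious, that description implies a retrieve-then-synthesize pattern: find the relevant documents first, then compose an answer that cites them. The Python sketch below is a toy illustration of that general shape only – the corpus, names, and keyword ranking are all hypothetical stand-ins, since OpenEvidence has not published its actual implementation.

    # Toy sketch of a "search, then synthesize with citations" pipeline.
    # All names and data here are hypothetical illustrations.
    from dataclasses import dataclass

    @dataclass
    class Study:
        title: str
        abstract: str

    # Stand-in for an index of millions of peer-reviewed studies.
    CORPUS = [
        Study("Anticoagulation in atrial fibrillation with CKD",
              "Apixaban showed favorable safety in moderate renal impairment."),
        Study("Warfarin outcomes in elderly AF patients",
              "Bleeding risk rose with declining kidney function."),
    ]

    def retrieve(query: str, corpus: list, k: int = 2) -> list:
        """Rank studies by naive keyword overlap with the query."""
        terms = set(query.lower().split())
        scored = sorted(
            corpus,
            key=lambda s: len(terms & set((s.title + " " + s.abstract).lower().split())),
            reverse=True,
        )
        return scored[:k]

    def answer(query: str) -> str:
        """Return a synthesized summary with numbered source citations."""
        hits = retrieve(query, CORPUS)
        summary = " ".join(f"{s.abstract} [{i + 1}]" for i, s in enumerate(hits))
        refs = "\n".join(f"[{i + 1}] {s.title}" for i, s in enumerate(hits))
        return f"{summary}\nSources:\n{refs}"

    print(answer("safest anticoagulant for atrial fibrillation with kidney disease"))

A production system would swap the keyword overlap for a proper medical-literature index and use a language model for the summary step; the part worth noticing is the citation bookkeeping, which is what lets a clinician validate the answer against the underlying studies.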

Suppose a doctor needs to find the best and safest anticoagulant for a 68-year-old patient with atrial fibrillation and chronic kidney disease. He or she can fire up OpenEvidence and get that recommendation quickly.

OpenEvidence can also automate charting and surface good trial data while the clinician focuses on patient interaction. Or it can automate parts of an authorization letter, pulling in data on things like FDA approvals – easing regulatory paperwork that bedevils doctors who otherwise have to do everything manually.

It’s a big time-saver. That’s the long and short of it.

Then there’s interventional medicine: for example, an ICU doctor might need to know whether a given drug is safe for a given patient. That’s another place where OpenEvidence can shine, combing through resources such as documented drug interactions.

Doctors Explain

“The biggest difference has been the time savings,” says Dr. Antonio Jorge Forte, a Mayo Clinic advisor, on the role of OpenEvidence. “Rather than having to read through the equivalent of a book chapter, I can get an answer within 30 seconds, not within 10 minutes.”

And then there are the specialists, who see the value of this software applied to their particular fields – where, for example, OpenEvidence can help in wading through the alphabet soup of specialized clinical work.

“OpenEvidence isn’t just clinical decision support, it’s your second set of eyes,” says Roupen Odabashian, a practicing oncologist, in commentary published in OncoDaily. “Think of it as the colleague who reviews your plan and points out what’s missing … It actually understands acronyms like FOLFOX or CHOP. That’s a big win in oncology. It goes beyond passive documentation and becomes an active tool for continuous learning.”

More Testimony

You can also get more insight into how doctors are using this tool by browsing some Reddit threads.

“I can endorse OpenEvidence LLM,” writes a user self-identified as Dr. Autumnwnd, who sounds like a pediatrician. “I use it frequently to get some input into niche clinical questions, like the kind that love to roll in from nursery at 0300. It trawls medical literature (some quite old however) and provides a very readable output with references. The references include a drop down with the abstract. I have been very selective about when I use it, but have definitely woven it into my hospitalist practice, but I am the only one in my group to do so.”

Dr. Autumnwnd also brings up a notable example of a suspicious result from the model:

“It did return MRSA as an example of a gr+ bacilli once, and I yelled at it. It apologized and said it would send the response for review. That did give me pause.”

It might seem inscrutable to the lay person, so to spell it out: MRSA is methicillin-resistant Staphylococcus aureus, a gram-positive coccus (a sphere-shaped bacterium), not a bacillus (a rod-shaped one) – the model’s classification was simply wrong, and the doctor caught it. In any case, it’s always good practice to back up AI findings with human review.

Many Times a Day

Some of the most impressive numbers around OpenEvidence concern its daily usage: with up to 100,000 doctors reportedly using it each day, this tool gets a lot of attention. Tech media reports the company has raised $210 million at a $3.5 billion valuation. That’s nothing to sneeze at, and one suspects that the leaders of this company have brought the right solutions at the right time, as we integrate AI where it makes sense.

For years, many of those consulted on AI have been saying that the best applications will not eliminate the human in the loop – they will be assistive. This is an excellent example of that. It helps doctors save time, and a doctor’s time is valuable. Just ask an ER patient.

Look for more of this to come out of the industry as healthcare benefits from technical advances.


