FDA’s New AI Tool Cuts Review Time From 3 Days To 6 Minutes

Posted by Ron Schmelzer, Contributor


The U.S. Food and Drug Administration announced this week that it has deployed a generative AI tool called ELSA (Evidence-based Learning System Assistant) across its organization. After a low-profile pilot that delivered measurable gains, the system is now in use by staff across the agency, several weeks ahead of its original schedule.

Dr. Marty Makary, the FDA’s commissioner, shared a major outcome. A review task that once took two or three days now takes six minutes.

“Today, we met our goal ahead of schedule and under budget,” said Makary. “What took one scientific reviewer two to three days [before] now takes six minutes.”

What ELSA Does… And Doesn’t Do

The FDA has thousands of reviewers, analysts, and inspectors who deal with massive volumes of unstructured data such as clinical trial documents, safety reports, and inspection records. Automating any meaningful portion of that stack creates outsized returns.

ELSA helps FDA teams speed up several essential tasks. Staff are already using it to summarize adverse event data for safety assessments, compare drug labels, generate basic code for nonclinical database setup, and identify priority sites for inspections, among other tasks.

This last item, using data to rank where inspectors should go, could have a real-world impact on how the FDA oversees the drug and food supply chain and delivers its services.
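To make the idea concrete, here is a minimal, hypothetical sketch of what risk-based inspection prioritization can look like in code. The site names, risk factors, and weights are invented for illustration; the FDA has not published how ELSA actually scores sites.

```python
# Hypothetical sketch of risk-based inspection prioritization.
# All sites, factors, and weights are made up for illustration.
from dataclasses import dataclass


@dataclass
class Site:
    name: str
    days_since_last_inspection: int  # staleness of oversight
    prior_violations: int            # history of compliance problems
    recent_adverse_events: int       # safety signals tied to the site


def risk_score(site: Site) -> float:
    """Combine a few risk factors into one priority score (illustrative weights)."""
    return (
        0.40 * min(site.days_since_last_inspection / 365, 3.0)  # cap staleness at 3 years
        + 0.35 * site.prior_violations
        + 0.25 * site.recent_adverse_events
    )


sites = [
    Site("Plant A", days_since_last_inspection=900, prior_violations=2, recent_adverse_events=1),
    Site("Plant B", days_since_last_inspection=120, prior_violations=0, recent_adverse_events=0),
    Site("Plant C", days_since_last_inspection=400, prior_violations=5, recent_adverse_events=3),
]

# Rank the highest-risk sites first so inspectors see a prioritized worklist.
for site in sorted(sites, key=risk_score, reverse=True):
    print(f"{site.name}: score={risk_score(site):.2f}")
```

The point of a system like this isn't the arithmetic; it's that the ranking surfaces a shortlist for human inspectors, who still make the call on where to go.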

Importantly, however, the tool isn’t making autonomous decisions without a human in the loop. The system prepares information so that experts can decide faster. It cuts through the routine, not the judgment.

Focus On Safety: No Industry Data, No External Training

One of the biggest questions about AI systems in the public sector revolves around the use of data and third-party AI systems. Makary addressed this directly, saying that “All information stays within the agency. The AI models are not being trained on data submitted by the industry.”

That’s a sharp contrast to the AI approaches being taken in the private sector, where many large language models have faced criticism over training on proprietary or user-submitted content. In the enterprise world, this has created mounting demand for “air-gapped” AI solutions that keep data locked inside the company.

That makes the FDA’s model different from many corporate tools, which often rely on open or external data sources. The agency isn’t building a public-facing product. It’s building a controlled internal system, one that helps it do its job better.
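For readers wondering what that pattern looks like in practice, here is a minimal sketch: the application calls a model hosted on internal infrastructure rather than an external vendor API, so documents never leave the agency's network. The endpoint, model name, and response shape below are assumptions for illustration, not details the FDA has disclosed about ELSA.

```python
# Hypothetical sketch of the "air-gapped" pattern: an internally hosted model
# endpoint, so sensitive documents stay inside the network boundary.
# The URL, model name, and response fields are assumptions for illustration.
import requests

INTERNAL_ENDPOINT = "https://llm.internal.example.gov/v1/summarize"  # inside the agency network


def summarize_internally(document_text: str) -> str:
    """Send a document to an internally hosted model and return its summary."""
    response = requests.post(
        INTERNAL_ENDPOINT,
        json={
            "model": "internal-review-model",  # assumed internal model name
            "input": document_text,
            "instruction": "Summarize the adverse events reported in this document.",
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["summary"]  # assumed response shape


if __name__ == "__main__":
    print(summarize_internally("Example adverse event narrative ..."))
```

The design choice is less about the code than the network boundary: nothing in the request leaves agency-controlled infrastructure, and nothing submitted by industry is used to train the model.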

Other Agencies Are Steadily Making Progress with AI

Federal departments have been slow to move past AI experimentation. The Department of Veterans Affairs has started testing predictive tools to manage appointments. The SEC has explored market surveillance AI for years. But few have pushed into full and widespread production.

The federal government has thousands of employees processing huge volumes of information, most of it unstructured, sitting in documents, files, and even paper. That means AI efforts are focused mostly on operational and process-oriented activities. It’s shaping up to be a key piece of how agencies process data, make recommendations, and act.

Makary made clear that ELSA is just the beginning of AI adoption within the FDA.

“Today’s rollout of ELSA will be the first of many initiatives to come,” he said. “This is how we’ll better serve the American people.”


