FDA Embraces AI To Accelerate Drug Review Process

Posted by Tor Constantino, MBA, Contributor | 6 hours ago


In a landmark moment for digital transformation in U.S. public health, the FDA said it will deploy generative AI enterprise-wide by June 2025. The aggressive adoption comes after the agency completed its first AI-enabled scientific review pilot, positioning it to overhaul operations during a period of growing regulatory complexity and data volume.

According to the FDA’s official announcement, the agency’s new AI approach will speed up document review, track regulatory compliance and assist in scientific analysis. More importantly, the upgrade comes with a public promise that it will not compromise safety, privacy or decision integrity. While the agency did not respond to a request for comment for this article, the public statement made clear that no AI model will make decisions on its own, and all output will continue to be reviewed by humans.

“This is a game-changer technology that has enabled me to perform scientific review tasks in minutes that used to take three days,” said Jinzhong (Jin) Liu, Deputy Director, Office of Drug Evaluation Sciences, Office of New Drugs in FDA’s Center for Drug Evaluation and Research.

FDA Wants AI To Reduce ‘Non-Productive Busywork’

For a regulatory agency whose remit spans every therapeutic area from Alzheimer’s disease to Zika virus, the capacity for rapid and responsible handling of data is more than a tech upgrade. It’s a public health necessity.

FDA Commissioner Martin A. Makary, M.D., M.P.H., said the success of the first AI-assisted scientific review pilot is driving the accelerated timeline.

“We need to value our scientists’ time and reduce the amount of non-productive busywork that has historically consumed much of the review process. The agency-wide deployment of these capabilities holds tremendous promise in accelerating the review time for new therapies,” said Dr. Makary.

The agency has also established a new Center of Excellence and AI Governance Board, and has already deployed secure AI platforms for internal testing and rollout.

FDA’s First AI Pilot

The FDA’s CDER division conducted the first AI-assisted review pilot, using generative AI to review documents pertinent to Investigational New Drug applications. The outcome was favorable.

According to the statement, the FDA intends to scale up its generative AI capabilities across all centers on a secure, shared platform. Future efforts will focus on improving usability, broadening document integration and tailoring outputs to the needs of individual centers, while maintaining strong information security and adherence to FDA policy.

The agency will also continually evaluate performance, gather user feedback and enhance features to address the changing needs of FDA staff and further its public health mission. More information and updates on the initiative will be released in June.

Pharma And FDA Need To Coordinate AI Adoption

Fouad Akkad, a former Pfizer and Abbott executive and founder of pharma AI consultancy WePhlan, sees this as a significant cultural shift in how industry and regulators will communicate.

“We’re entering a phase where regulators aren’t just observers — they’re becoming digital collaborators,” Akkad wrote in an email response. “That changes the dynamic entirely. It incentivizes pharma to improve their own AI literacy to keep pace.”

He also envisions practical uses for drug development timelines.

“AI won’t cut corners, but it can cut down time-to-decision. If the FDA can process supporting documents faster without losing analytical rigor, that’s a win for patients and sponsors.”

But Akkad warned that generative AI must also have clear mechanisms for ensuring fairness, reproducibility and accountability.

“Let’s not mistake automation for objectivity,” he said. “Biases in training data can easily be baked into model outputs, so transparency in methodology is going to be crucial.”

He added that transparency doesn’t just mean algorithmic disclosures; it also means cross-functional communication.

“When I train pharma teams, I emphasize the importance of translating AI results into language regulatory bodies understand. FDA’s use of AI should ideally foster better dialogue, not obscure it.”

FDA’s AI Roadmap

As the June go-live deadline approaches, FDA’s launch will be closely monitored not just by pharma sponsors, but by other government regulators around the world.

“There have been years of talk about AI capabilities in frameworks, conferences and panels but we cannot afford to keep talking. It is time to take action. The opportunity to reduce tasks that once took days to just minutes is too important to delay,” said Dr. Makary.

If successful, the U.S. could ultimately set a standard for how AI can responsibly scale within government. While generative AI will not replace FDA scientists, it will serve as a force multiplier, one with the potential to bring a new era of speed, precision and data transparency to drug development and oversight.

The question is not whether industry will follow. It’s whether they’ll be able to keep up.

How the FDA’s AI Move Could Catalyze Industry Rollout

While the FDA’s announcement is a regulatory turning point, it also sends a signal to an industry already wrestling with how to apply AI at scale. A 2025 clinical research workforce analysis from Parexel found that artificial intelligence is reshaping the clinical trial landscape, from study design to data monitoring and regulatory submissions. Yet for most biopharma firms, the technology still outpaces team readiness.

“AI is vastly improving all areas of drug development,” the report notes, “powering our analysis of complex datasets to derive new insights faster and automating manual tasks… but realizing the full potential requires people with cross-functional skill sets who understand both data science and clinical research.”

Despite broad optimism, the study revealed slow adoption across the sector, with many leaders citing a lack of AI fluency and insufficient training infrastructure. Like the FDA’s approach, the report emphasized the need for “human-in-the-loop” systems — where AI augments, rather than replaces, clinical staff — and warned that overlooking the upskilling imperative could limit AI’s impact. The Parexel study concludes that effective AI integration requires not just tools but talent.

That insight dovetails with the FDA’s own approach — a focus on governance, transparency and retaining human oversight. As regulators and industry stakeholders move in parallel, the challenge ahead is clear: build a workforce as adaptive as the AI technology itself.
