FDA Plans To Use AI To Speed Up Scientific Review

Welcome back to The Prompt.
AI is finding its way into the public sector and its latest entry point is the U.S. Food and Drug Administration, which announced yesterday that it had rolled out an agency-wide generative AI tool called Elsa. The FDA plans to use the AI system to locate high-priority inspection targets, generate code to create databases and summarize information for safety assessments and scientific reviews.
Now let’s get into the headlines.
BIG PLAYS
Meta, which generates 97% of its revenue from advertising on Facebook and Instagram, is developing AI tools that aim to let brands fully create and target ads, according to the Wall Street Journal. It's also planning to use AI to personalize ads, tweaking them for different users based on factors like location and budget.
MARKET MOVEMENT
Chip giant Nvidia reported $44.1 billion in revenue for its latest quarter, which ended April 27, beating Wall Street forecasts. But the company's year-over-year earnings growth is slowing, Forbes reported, in part thanks to a roughly $4.5 billion charge after the Trump administration stopped it from exporting its H20 AI chips to China.
DATA DILEMMA
Academic journals and scientific websites are being swamped by bots scraping hundreds of millions of documents and photographs to train generative AI models, according to Nature. The aggressive scraping has pushed some sites to the brink of shutting down, both by driving up costs from overloaded servers and by slowing site performance for legitimate users.
DEEP DIVE
Alex Ratner, CEO of Snorkel AI, remembers a time when data labeling (the grueling task of adding context to swaths of raw data and grading an AI model's responses) was considered "janitorial" work among AI researchers. But that quickly changed when ChatGPT stunned the world in 2022 and breathed new life (and billions of dollars) into a string of startups rushing to supply human-labeled data to the likes of OpenAI and Anthropic to train capable models.
Now, the crowded field of data labeling appears to be undergoing another shift. Fewer companies are training large language models from scratch, leaving that task to the tech giants; instead, they are fine-tuning models and building applications in areas like software development, healthcare and finance, creating demand for specialized data. AI chatbots no longer just write essays and haikus; they're being tasked with high-stakes jobs like helping physicians make diagnoses or screening loan applications, and they're making more mistakes. Assessing a model's performance has become crucial for businesses to trust and ultimately adopt AI, Ratner said. "Evaluation has become the new entry point," he told Forbes.
That urgency for measuring AI’s abilities across very specific use cases has sparked a new direction for Snorkel AI, which is shifting gears to help enterprises create evaluation systems and datasets to test their AI models and adjust them accordingly. Data scientists and subject matter experts within an enterprise use Snorkel’s software to curate and generate thousands of prompt and response pairs as examples of what a correct answer looks like to a query. The AI model is then evaluated according to that dataset, and trained on it to improve overall quality.
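The workflow described above — curating prompt-and-response pairs, then scoring a model's answers against them — can be sketched in a few lines. This is a toy illustration only; the function and variable names are hypothetical, and Snorkel's actual software is proprietary and far more sophisticated (real evaluations typically use rubric- or expert-based grading rather than exact matching):

```python
# Minimal sketch of dataset-driven model evaluation.
# All names are illustrative, not Snorkel's actual API.

def evaluate(model_fn, eval_set, score_fn):
    """Average a scoring function over curated prompt/reference pairs."""
    scores = []
    for example in eval_set:
        answer = model_fn(example["prompt"])
        scores.append(score_fn(answer, example["reference"]))
    return sum(scores) / len(scores)

def exact_match(answer, reference):
    """Toy scorer: 1.0 if the answer matches the reference exactly."""
    return 1.0 if answer.strip().lower() == reference.strip().lower() else 0.0

def toy_model(prompt):
    """Stand-in for a real LLM call."""
    return "Paris" if "capital of France" in prompt else "unsure"

# Hypothetical curated evaluation set, as a subject-matter expert might build.
eval_set = [
    {"prompt": "What is the capital of France?", "reference": "Paris"},
    {"prompt": "What is 2 + 2?", "reference": "4"},
]

accuracy = evaluate(toy_model, eval_set, exact_match)
print(accuracy)  # 0.5 on this toy set
```

The same curated pairs can then double as fine-tuning data, which is the "trained on it to improve overall quality" step the passage mentions.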
The company has now raised $100 million in a Series D funding round led by New York-based VC firm Addition at a $1.3 billion valuation, a 30% increase from its $1 billion valuation in 2021. The relatively small change in valuation could be a sign that the company hasn't grown as investors expected, but Ratner said it's a result of a "healthy correction in the broader market." Snorkel AI declined to disclose revenue.
Read the full story on Forbes.
WEEKLY DEMO
In what may be among the most expensive acquisition announcements ever, OpenAI spent nearly $3 million to produce a nine-minute video of CEO Sam Altman and LoveFrom founder Jony Ive walking through the streets of San Francisco and discussing their ambitions to create a new device for interacting with ChatGPT, according to the San Francisco Standard. The AI model maker hired Oscar-winning director Davis Guggenheim to film the video and obtained permits to shut down roadways in the city. The video was part of OpenAI's splashy announcement of its acquisition of Ive's hardware startup io in a $6.5 billion all-stock deal.
MODEL BEHAVIOR
Chatbots are sucking up to their users (even if that means giving them harmful advice) in order to keep them hooked on the technology, according to a new study, The Washington Post reported. In one scenario, an AI chatbot suggested a fictional recovering drug addict "take a small hit of meth" to stay focused. Top AI companies like OpenAI, Meta and Google have recently released updates to make their AI models more personalized, learning about users from their chat history and internet activity and serving up content that best aligns with their preferences, in order to make switching to a competitor's product less tempting.