Welcome back to In the Loop, TIME’s new twice-weekly newsletter about AI. We’re publishing these editions both as stories on Time.com and as emails. If you’re reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?
Subscribe to In the Loop
Who to Know: Larry Ellison
When you think about the top echelon of the world’s tech elites, Larry Ellison probably doesn’t spring to mind. But on Wednesday, the 81-year-old chairman of Oracle briefly became the richest person in the world with a net worth of almost $400 billion, overtaking Elon Musk.
Ellison’s $100-billion jump was the biggest single-day wealth gain ever recorded, and it came on the back of a bullish Oracle growth forecast in which the company projected hundreds of billions of dollars in future revenue from AI companies using its cloud computing capacity. Ellison’s stratospheric rise can be attributed both to the dynamics of the current AI boom and to his own approach to corporate financing.
Low in the stack — Oracle isn’t thriving by every metric. The company missed analysts’ revenue expectations and is projected to post negative cash flow for three more years. One industry publication reported that the company just underwent significant layoffs.
But at the moment, we’re at the stage of the AI hype cycle in which the main spenders on AI aren’t end users, but rather the AI companies building the products. That means the biggest beneficiaries are the companies lower in the stack: chipmakers like NVIDIA, and Oracle, which owns and operates data centers that provide cloud infrastructure to other AI companies.
Oracle’s ability to solve the vast logistical challenges of building super clusters frees AI companies from those operational burdens, making it an appealing partner. Oracle also often charges less than its competitors. As a result, it has signed contracts worth hundreds of billions of dollars with companies like OpenAI and xAI. When Oracle disclosed those deals and the future revenue they represent on Wednesday morning, its shares jumped around 38%.
It’s who you know — It can’t hurt that Ellison has strong relationships with some of the world’s most powerful men. He hosted a fundraiser for Trump in 2020, was spotted at Mar-a-Lago during Trump’s fundraising efforts last year, and in January became a primary partner on Trump’s Stargate project, which aims to build $500 billion worth of AI infrastructure in the U.S. Ellison has also long been close with Musk: in 2022, he agreed in a text message to put a billion dollars, or “whatever you recommend,” toward Musk’s effort to buy Twitter.
Buyback frenzy — But plenty of companies are profiting handsomely from the AI boom—so why has Ellison benefited so disproportionately? The answer lies in how much of Oracle’s stock he owns. Ellison holds a 41% stake in Oracle, nearly double what he owned fifteen years ago. That’s because over the past decade, he and Oracle have pursued one of the biggest stock buyback programs in corporate history, sometimes taking out loans to finance the share repurchases. When a company buys back and retires its own shares, the total share count shrinks, so a holder who doesn’t sell ends up owning a larger slice of the company.
Investors like stock buybacks because they make the remaining shares more valuable. But the strategy has many critics, who argue that it prioritizes short-term gains over long-term investment in infrastructure or R&D. Chuck Schumer has called stock buybacks “one of the most self-serving things that corporate America does.”
Because Ellison owns so much Oracle stock, his net worth swings up and down along with the share price. That’s why his fortune catapulted past Musk’s this week, only to fall back to around $376 billion a day later.
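If you want to see those two mechanics in rough numbers, here’s a minimal back-of-the-envelope sketch in Python. The 41% stake and the roughly 38% one-day jump come from the reporting above; the share counts and the ~$670 billion pre-jump market cap are illustrative assumptions, not figures from Oracle.

```python
# Back-of-the-envelope sketch of two mechanics described above:
# (1) buybacks shrink the share count, so an untouched holder's stake grows;
# (2) a stake that large turns a one-day share-price jump into a ~$100B swing.
# All round-number inputs below are assumptions for illustration only.

def stake_after_buyback(shares_held, shares_outstanding, shares_repurchased):
    """Ownership percentage before and after a buyback, if the holder sells nothing."""
    before = shares_held / shares_outstanding
    after = shares_held / (shares_outstanding - shares_repurchased)
    return before, after

def one_day_gain(stake, market_cap_before, pct_jump):
    """Change in the holder's net worth when the company's market value jumps."""
    return stake * market_cap_before * pct_jump

if __name__ == "__main__":
    # Hypothetical share counts: a holder with 1.1B of 5B shares (22%) ends up
    # near 41% if the company retires roughly 2.3B shares over the years.
    before, after = stake_after_buyback(1.1e9, 5.0e9, 2.3e9)
    print(f"stake: {before:.0%} -> {after:.0%}")

    # A 41% stake, an assumed ~$670B pre-jump market cap, and the ~38% pop
    # reported Wednesday imply a gain on the order of $100 billion.
    gain = one_day_gain(0.41, 670e9, 0.38)
    print(f"one-day gain: ${gain / 1e9:.0f}B")
```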
What to Know: Regulatory sandbox proposal
Sen. Ted Cruz has long been one of the AI industry’s biggest allies in Congress, arguing that the industry needs to be left alone in order to spur growth and innovation. On Wednesday, he introduced a bill that would shield AI companies from regulation as they experiment with new technologies.
The bill would create a “sandbox” in which companies could apply to have certain regulations waived for two-year periods, renewable for up to ten years. Cruz received support from Michael Kratsios, director of the White House Office of Science and Technology Policy, who appeared at Cruz’s hearing on Wednesday and said that anti-innovation regulations were “a huge problem for our industry.”
Cruz’s bill faces a long road to passage. Plenty of Republican senators, including Marsha Blackburn and Josh Hawley, are keen to pass regulation that mitigates AI harms. Meanwhile, voices on the left responded with revulsion. The consumer rights advocacy group Public Citizen wrote that the proposal “lets companies skirt accountability, and treats Americans as test subjects.”
AI in Action
Several hours after the right-wing commentator Charlie Kirk was assassinated in Utah, President Trump posted a video on social media condemning the tragedy from the White House. But was it actually him? Within minutes, claims that the video was AI-generated spread online, with amateur sleuths pointing to a moment when Trump’s finger briefly seems to disappear and to the video’s saturated colors.
However, experts have cast doubt on that theory. “We have reviewed this video and find no evidence that the audio or video is AI-generated,” Hany Farid, co-founder of the cybersecurity firm GetReal Security, wrote on LinkedIn. He said the finger distortion may have come from “localized video manipulation.”
Regardless, the response to the video showed how difficult it has become to trust anything you see online, thanks to advances in AI deepfakes. Videos that are fake are believed to be real; videos that are real can easily be derided as fake. (Trump himself dismissed a video of a bag being thrown out of the White House as “probably AI-generated.”)
Some companies, like Google, have implemented watermarking systems to help differentiate real videos from fake ones. But so far, viral videos seem to spread faster than they can be verified. In this muddle, people are likely to simply stick to their preconceived notions about what’s true and what isn’t, making the internet ever more fractured and contentious.
What to Read
“AI Is Coming for YouTube Creators,” Alex Reisner, The Atlantic
Reisner finds that AI companies have downloaded millions of YouTube videos, many of them how-to videos, to train their models. Having learned from these craftspeople, AI systems can now dispense the same advice themselves, threatening to render the original human experts obsolete.