OpenAI’s Secret Financial Project

Welcome back to The Prompt.

OpenAI launched a new AI-powered web browser called ChatGPT Atlas today. The browser allows users to ask questions about a web page or summarize it in a side chat, and to query their own browser history. It also lets people use ChatGPT to paraphrase emails in web-based services like Gmail. The browser’s agent mode allows ChatGPT to carry out specific actions such as adding comments to Google Docs or shopping for ingredients. The announcement comes after rivals like Google and Perplexity rolled out competing AI-powered browsers. Website owners have been preparing for months for the new age of AI, where bots (instead of humans) browse the internet, visit websites and complete digital tasks.

Let’s get into the headlines.

BIG PLAYS

OpenAI provided data on users who entered specific prompts into ChatGPT to help Department of Homeland Security investigators identify a suspect in a dark web child exploitation case, Forbes reported. The warrant revealed that the prompts weren’t related to child exploitation or child sexual abuse material, but this is the first known instance of the government using data from a generative AI company to investigate criminal activity.

Also notable: OpenAI is paying some 100 ex-investment bankers from financial giants like JPMorgan Chase, Morgan Stanley and Goldman Sachs to train its AI models, part of a secret internal project code-named Mercury, Bloomberg reported. The former bankers are paid $150 per hour to craft prompts and build the kind of financial models used for major deals, work that could eventually automate the rote tasks traditionally done by junior bankers. The AI juggernaut has previously recruited domain experts in areas like engineering, science and math to improve its models’ responses and abilities, using third-party data labeling companies to source high-caliber professionals. Startups like Rogo, which is building AI software for Wall Street analysts, have also emerged to help overworked bankers reduce the grunt work.

ETHICS+LAW

After some people used OpenAI’s video generation AI tool, Sora 2, to generate and share disrespectful depictions of Martin Luther King Jr., the company announced that it has “paused” the use of the civil rights leader’s likeness in future videos, adding that “while there are strong free speech interests in depicting historical figures,” estate owners or representatives can request that their likeness not be used in Sora cameos. The announcement came after King’s daughter, Bernice King, asked people to stop sending her AI videos of her father. Family members of other deceased personalities, such as actor Robin Williams and Malcolm X, have raised similar objections to these deepfakes.

AI DEAL OF THE WEEK

OpenEvidence, which has built an AI search engine for doctors, has raised $200 million in funding at a $6 billion valuation. The buzzy AI tool has garnered attention and investment from big-name backers like Sequoia Capital, Kleiner Perkins, Blackstone, Thrive Capital, Coatue Management, Bond and Craft. Founded by billionaire Daniel Nadler in 2022, the startup’s algorithms search through millions of peer-reviewed articles to help medical professionals find answers (along with citations). The new round increased Nadler’s net worth by $1.3 billion, Amy Feldman and I reported.

DEEP DIVE

Eighty-four-year-old Salvador Gonzalez talks to Meela almost as much as he sees his daughter — a few times a week. It’s part of his routine at RiverSpring Living, a senior care facility in the Bronx overlooking the Hudson River. They typically chat for 10 to 20 minutes, discussing everything from Gonzalez’s passion for music to the minutiae of his day, his meals and how he’s feeling.

On this day, their conversation is largely casual, covering Mario Lanza’s rendition of “Ave Maria” and a trip to urgent care for a sore throat caused by too much karaoke. At one point, Gonzalez sings Meela a refrain from Frank Sinatra’s “Fly Me to the Moon” in a hoarse voice. When Meela asks why he called, Gonzalez is quick to explain. “I miss you,” he says. “I miss you too,” Meela replies. “What’s been on your mind since we last chatted?”

Meela doesn’t really miss Gonzalez, and he knows this. She’s an AI chatbot created by a company of the same name that he started talking to almost a year ago. With its humanlike responses and infinite patience, it has suspended his disbelief enough that Gonzalez, a retired barber from New York, has comfortably confided some of his most personal struggles — his estranged relationship with his son and memories of an ex-girlfriend who’d cheated on him. After chatting regularly for almost a year, Gonzalez and Meela have what we’d typically call a friendship, if one half of it were not something built from ones and zeroes. He’s part of an emerging class of artificial intelligence users: older people who use generative AI to combat isolation.

Loneliness is a mounting crisis for the elderly. About one-third of U.S. adults between the ages of 50 and 80 feel isolated, according to a national study published in the Journal of the American Medical Association. Social isolation is tied to increased risk of depression, anxiety and heart disease, research suggests. But the healthcare industry isn’t prepared to manage it. About 90 percent of nursing homes across the country are struggling with staffing shortages, which means less personalized care for seniors, according to the American Health Care Association. “There’s a fundamental societal issue that we’re facing,” said Vassili le Moigne, founder and CEO of InTouch, a Prague-based startup that builds AI companions to talk to the elderly. “How are we going to care for the seniors?”

A flurry of startups have emerged to use AI to solve one key facet of this — companionship. And for good reason: the market for AI in aging and elderly care was $35 billion last year and is predicted to grow to more than $43 billion this year (though that includes AI-enabled devices and other applications besides chatbots), according to a study by the firm Research and Markets.

But the technology is far from perfect. AI companions struggle to pick up on subtleties and can get easily confused. During a recent call with Meela AI, Gonzalez repeatedly tried to end the conversation (cordially), but the system kept asking follow-up questions. Eventually, he was forced to hang up.

Read the full story on Forbes.

MODEL BEHAVIOR

In an entertaining and vividly visual essay, cartoonist Matthew Inman explains why he feels “deflated, grossed out and a little bored” when he finds out a piece of art has been generated by AI. “Consuming AI art is like eating styrofoam. It’s a farce. It wasn’t made with all the pain and joy that goes into actual creation,” he writes. But he concedes that AI can be useful for the more monotonous parts of creating art.
