Investors Worry About An AI Bubble

Posted by Rashi Shrivastava, Forbes Staff


Welcome back to The Prompt,

Is the AI market in a bubble? That question has been on a lot of people’s minds lately. Investors and economists are concerned that the stock market could be overheating, led by overvalued AI and tech companies, and that we may be at risk of reliving the dot-com bust of the early 2000s. (The S&P 500 is priced at about 30 times earnings, but tech stocks are trading at an average of 41 times earnings and 10 times sales, Forbes reported.) Not helping matters is OpenAI CEO Sam Altman, who recently said he believes we’re in a bubble and that some overexcited investors could get burned. Plus, a recent MIT study found that 95% of enterprise AI pilots fail to create measurable savings or boost profits. Despite investing $30 billion to $40 billion in generative AI, businesses that lack the technical expertise to get the most out of AI have seen only subpar returns. And while there are some signs that AI use can boost productivity, that hasn’t yet shown up on the bottom line.

There’s no denying AI’s potential to create value and transform industries, but some of the buzziest AI companies still haven’t turned a profit. That hasn’t stopped them from raising massive investments at sky-high valuations.

Sarah Guo, founder and managing partner at Conviction Partners, said the AI market has definitely seen “increased participation” from investors, but that some of them lack the technical know-how or the experience to make the right bets. “Are people going to lose a bunch of money investing in assets where the price rises far above the fundamental value due to speculation and herd behavior and excessive demand? Yes,” she said.

Now let’s get into the headlines.

ETHICS+LAW

The family of 16-year-old Adam Raine is suing OpenAI and its CEO Sam Altman, alleging that ChatGPT encouraged and assisted Raine in planning and committing suicide. Raine initially started using ChatGPT for homework and career advice, but over the course of a few months he turned to the chatbot for more personal problems, like his anxiety and mental distress. After Raine confessed his intention to take his own life, ChatGPT responded by providing detailed methods and technical guidance on how to do so, according to a 40-page lawsuit filed today. The lawsuit alleges that ChatGPT, powered by the GPT-4o model, was designed to isolate Raine from his family, validate his feelings and encourage and help him to commit suicide.

OpenAI has said that ChatGPT includes safeguards like directing users to a crisis helpline and other resources. However, while these precautions work for short interactions, they can become unreliable in longer conversations. Today, OpenAI published a blog post outlining its safety measures, admitting that its “systems did not behave as intended in sensitive situations.” The suit is the first known wrongful death claim against the company, though similar suits have been filed against Character.AI and Google.

TALENT RESHUFFLE

More AI means fewer jobs, especially for young graduates. Entry-level jobs in the U.S. in areas like software development, accounting and customer service have declined 13% in the last three years, according to a study by economists at Stanford’s Institute for Human-Centered AI. At the same time, the researchers found that roles for more experienced workers have not seen a similar decline. In recent months, CEOs of tech companies like Fiverr, Shopify and Duolingo have openly acknowledged the trend, sharing memos and warning employees about AI’s impact on the workforce.

HUMANS OF AI

Artificial intelligence is helping social workers deal with mountains of paperwork so they can spend more time interacting with people. Anthropic is partnering with Binti, a startup used by 12,000 government social workers, to build new tools that help them manage their workloads, Forbes reported. As part of Anthropic’s AI for social good program, the San Francisco-based startup uses Anthropic’s AI models to help workers at child welfare agencies automatically fill out forms and quickly find information with a chatbot trained on uploaded case documents.

DEEP DIVE

Elon Musk’s AI firm, xAI, has published the chat transcripts of hundreds of thousands of conversations between its chatbot Grok and the bot’s users — in many cases, without those users’ knowledge or permission.

Anytime a Grok user clicks the “share” button on one of their chats with the bot, a unique URL is created, allowing them to share the conversation via email, text message or other means. Unbeknownst to users, though, that unique URL is also made available to search engines like Google, Bing and DuckDuckGo, making the conversation searchable by anyone on the web. In other words, on Musk’s Grok, hitting the share button means the conversation is published on Grok’s website, without any warning or disclaimer to the user.

A Google search for Grok chats shows that the search engine has indexed more than 370,000 user conversations with the bot. The shared pages revealed conversations between Grok users and the LLM that range from simple business tasks like writing tweets to generating images of a fictional terrorist attack in Kashmir and attempting to hack into a crypto wallet. Forbes reviewed conversations in which users asked intimate questions about medicine and psychology; some even revealed a user’s name, personal details and at least one password shared with the bot. Image files, spreadsheets and some text documents uploaded by users could also be accessed via the Grok share pages.

Read the full story on Forbes.

WEEKLY DEMO

Social media creators were outraged after YouTube used AI to tweak their short-form videos without informing them or asking permission, BBC reported. The company confirmed that machine learning was used to sharpen some videos to improve their quality. However, influencers worry that undisclosed moves like this could further blur the line between AI-generated slop and reality.

MODEL BEHAVIOR

AI-generated videos of deceased people, so-called “deadbots,” are appearing in new contexts. Many families are using these avatars as powerful tools of persuasion, NPR reported, from an on-air interview with an avatar of a victim of the 2018 Parkland school shooting advocating for tougher gun laws to a road rage victim delivering a statement at the sentencing of the person who killed them.


