New Models From OpenAI, Anthropic, Google – All At The Same Time

Posted by John Werner, Contributor | 5 hours ago | /ai, /innovation, AI, Innovation, standard | Views: 9


It’s Christmas in August – at least, for those tech-wonks who are interested in new model releases. Today’s news is a very full stocking of brand new LLM editions from three of the biggies – OpenAI, Anthropic, and Google.

I’ll go over these one by one, discussing what these most recent model iterations bring to the table.

OpenAI OSS Models

First, the tech community is getting an eye on OpenAI OSS 120b and OSS 20b, the first open-weight systems from this company since ChatGPT 2. Now, coverage from Computerworld and elsewhere points out that, although these models have Apache licenses, they are not fully open source in the conventional way, ,but partly open: the weights are open source, while the training data is not.

Powered by one 80GB GPU chip, the larger OSS models, according to the above report, “achieves parity” with the o4-mini model vis a vis reasoning power. The smaller one can run on smartphones and other edge devices. The models come quantized with MXFP4, a low-precision data type for accelerating matrix multiplications.

Let Them Work

Another interesting aspect of the new OSS models has to do with chain of thought, something that has revolutionized inference, while raising questions about comparative methodology.

Basically, we want the LLMs to be accurate, but engineers have found that, in many cases, restricting or overly guiding systems causes them to “hide” CoT. So OpenAI has chosen not to optimize the models in this way.

“OpenAI is intentionally leaving Chain of Thought (CoTs) unfiltered during training to preserve their usefulness for monitoring, based on the concern that optimization could cause models to hide their real reasoning,” writes Roger Montti at Search Engine Journal. “This, however, could result in hallucinations.”

Montti cites the following model card report from OpenAI:

“In our recent research, we found that monitoring a reasoning model’s chain of thought can be helpful for detecting misbehavior. We further found that models could learn to hide their thinking while still misbehaving if their CoTs were directly pressured against having ‘bad thoughts.’…In accord with these concerns, we decided not to put any direct optimization pressure on the CoT for either of our two open-weight models. We hope that this gives developers the opportunity to implement CoT monitoring systems in their projects and enables the research community to further study CoT monitorability.”

So, the models are allowed to have these “bad thoughts” in aid of, I suppose, transparency. OpenAI is then honest about the higher chance for hallucinations, so that users know that this trade-off has been made.

Claude Opus 4.1

Here’s how spokespersons rolled out the announcement of this new model Aug. 5:

“Today we’re releasing Claude Opus 4.1, an upgrade to Claude Opus 4 on agentic tasks, real-world coding, and reasoning. We plan to release substantially larger improvements to our models in the coming weeks. Opus 4.1 is now available to paid Claude users and in Claude Code. It’s also on our API, Amazon Bedrock, and Google Cloud’s Vertex AI. Pricing is the same as Opus 4.”

What’s under the hood? The new Opus 4.1 model ups SWE-Bench Verified marks, and boosts agentic research skills. A breakdown of capabilities shows a 2-point increase in SWE-based agentic coding (72.5% – 74.5%) and improvement in graduate-level reasoning with GPQA Diamond (79.6% – 80.9%) over Opus 4, and slight increases in visual reasoning and agentic tool use. For a model set that pioneered human-like user capabilities, this continues to push the envelope. As for strategy:

“The release comes as Anthropic has achieved spectacular growth, with annual recurring revenue jumping five-fold from $1 billion to $5 billion in just seven months, according to industry data,” writes Michael Nunez at VentureBeat. “However, the company’s meteoric rise has created a dangerous dependency: nearly half of its $3.1 billion in API revenue stems from just two customers — coding assistant Cursor and Microsoft’s GitHub Copilot — generating $1.4 billion combined. … The upgrade represents Anthropic’s latest move to fortify its position before OpenAI launches GPT-5, expected to challenge Claude’s coding supremacy. Some industry watchers questioned whether the timing suggests urgency rather than readiness.”

Regardless, this is big news in and of itself, including for the millions of users who rely on Claude for business process engineering or anything else.

Genie 3

This is the latest in the series of Genie models coming out of Google’s DeepMind lab that create controlled environments. In other words, this is a gaming world model.

Proponents of the new model cite longer-term memory over Genie 2’s limit of about 10 seconds, as well as better visual fidelity and real-time responses.

“DeepMind claims that the new system can generate entire worlds that you can interact with consistently for several minutes in up to 720p resolution,” reports Joshua Hawkins at BGR. “Additionally, the company says that the system will be able to respond to what it calls ‘promptable world events’ with real-time latency. Based on what the videos show off, it seems like Google has taken a major step forward in creating entire video game worlds using AI.”

“Genie 3 is the first real-time interactive general-purpose world model,” said DeepMind’s Shlomi Fruchter in a press statement according to a TechCrunch piece suggesting that the lab considers Genie 3 to be a “stepping stone to AGI,” a big claim in these interesting times. “It goes beyond narrow world models that existed before. It’s not specific to any particular environment. It can generate both photo-realistic and imaginary worlds, and everything in between.”

All of these new models are getting their first rafts of public users today! It’s enough to make your head spin, especially if you’re responsible for any kind of implementation. What do you choose? To be fair, there is some amount of specialization involved. But many professionals closest to the industry would tell you it’s the speed of innovation that’s challenging: given the track record of most companies, by the time you get something worked into business operations, it’s likely to already be obsolete!

Stay tuned.



Forbes

Leave a Reply

Your email address will not be published. Required fields are marked *