Anthropic adds Claude 4 security measures to limit risk of users developing weapons

Posted by Chris Eudaily, CNBC | 8 hours ago | News



Anthropic on Thursday said it had activated tighter artificial intelligence controls for Claude Opus 4, its latest AI model.

The new AI Safety Level 3 (ASL-3) controls are to “limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons,” the company wrote in a blog post.

The company, which is backed by Amazon, said it was taking the measures as a precaution and that its team had not yet determined whether Claude Opus 4 had crossed the capability threshold that would require the added protections.

Anthropic announced Claude Opus 4 and Claude Sonnet 4 on Thursday, touting the advanced ability of the models to “analyze thousands of data sources, execute long-running tasks, write human-quality content, and perform complex actions,” per a release.

The company said Claude Sonnet 4 did not require the tighter controls.



