News

Can AI like Claude 4 be trusted to make ethical decisions? Discover the risks, surprises, and challenges of autonomous AI ...
Anthropic’s Claude Opus 4 exhibited simulated blackmail in stress tests, prompting safety scrutiny despite also showing a ...
Anthropic’s Chief Scientist Jared Kaplan said this makes Claude 4 Opus more likely than previous models to be able to advise novices on producing biological weapons ...
Anthropic's new Claude 4 Opus AI can autonomously refactor code for hours using "extended thinking" and advanced agentic skills.
Windsurf, the popular vibe coding startup that's reportedly being acquired by OpenAI, said Anthropic significantly reduced ...
Therefore, it urges users to be cautious in situations where ethical issues may arise. Anthropic says that the introduction of ASL-3 to Claude Opus 4 will not cause the AI to reject user questions ...
Claude Opus 4 and Claude Sonnet 4, part of Anthropic’s new Claude 4 family of models, can analyze large datasets, execute long-horizon tasks, and take complex actions, according to the company.
On Thursday, Anthropic launched Claude Opus 4, a new model that, in internal testing, performed more effectively than prior models at advising novices on how to produce ...