Anthropic, the AI company behind the Claude chatbot, is loosening its safety policies to stay competitive. The company recently updated its responsible-scaling policy, which previously committed it to preventing potential AI dangers such as cyberattacks. The revised guidelines still require a solid argument that catastrophic risks are manageable, but they now allow development to proceed if the company believes it holds a significant lead over competitors.
In a blog post, the company attributed the change to a shift in U.S. priorities from AI safety toward economic potential. The update coincides with pressure from the Pentagon, which has threatened to terminate contracts unless Anthropic permits its technology to be used for all legal military applications, though the company says the two are unrelated.
Founded in 2021 by former OpenAI employees, Anthropic initially positioned itself as safety-focused, with CEO Dario Amodei calling safety the top priority. Despite the company's stated commitment to transparency and accountability under the updated policy, critics such as Heidy Khlaaf of the AI Now Institute argue that Anthropic has historically underestimated the risks of its technology, citing instances of misuse and security breaches involving the Claude chatbot.
Competition among leading AI companies, including Anthropic, OpenAI, and Google, is intensifying as they forge partnerships with businesses and government agencies. The U.S. government's push for unrestricted AI development makes it harder for these companies to balance safety against the risk of falling behind.
As the Pentagon presses Anthropic to broaden permitted uses of its technology, the company has so far refused to support military applications involving autonomous weapons and mass surveillance. Its commitment to ethical AI deployment is being tested as it navigates government demands while trying to uphold its principles.