AI safety
US judge blocks Pentagon's Anthropic blacklisting for now
Reuters· 2026-03-26 23:10
Hegseth's unprecedented move, which followed Anthropic's refusal to allow the military to use its AI chatbot Claude for U.S. surveillance or autonomous weapons, blocked Anthropic from certain military contracts. Anthropic executives have said it could cost the company billions of dollars in lost business and reputational harm. ...
New court filing reveals Pentagon told Anthropic the two sides were nearly aligned — a week after Trump declared the relationship kaput
TechCrunch· 2026-03-21 01:40
Core Argument
- Anthropic challenges the Pentagon's claim that it poses an "unacceptable risk to national security," asserting that the government's case rests on misunderstandings and on claims never raised during prior negotiations [1][2].

Group 1: Legal Proceedings
- Anthropic submitted two sworn declarations to a California federal court as part of its lawsuit against the Department of Defense [2].
- A hearing is scheduled for March 24 before Judge Rita Lin in San Francisco [2].
- The dispute originated when President Trump and Defense Secretary Pete Hegseth announced the termination of ties with Anthropic over the company's refusal to permit unrestricted military use of its AI technology [2].

Group 2: Key Personnel Involved
- The declarations were submitted by Sarah Heck, Anthropic's Head of Policy, and Thiyagu Ramasamy, its Head of Public Sector [3].
- Heck, a former National Security Council official, was present at a critical meeting with Defense Secretary Hegseth [4].
- Ramasamy previously managed AI deployments for government clients at Amazon Web Services and has been instrumental in integrating Anthropic's Claude models into national security settings [9].

Group 3: Claims and Counterclaims
- Heck refutes the government's assertion that Anthropic sought approval over military operations, stating that no such demand was made during negotiations [5].
- She notes that concerns about Anthropic potentially altering its technology mid-operation were not raised until the government's court filings [6].
- Ramasamy counters the claim that Anthropic could interfere with military operations, explaining that once its technology is deployed in a secure system, Anthropic has no access to it [10].

Group 4: Security and Compliance
- Ramasamy emphasizes that Anthropic employees have undergone U.S. government security clearance vetting, which is required for access to classified information [12].
- He asserts that Anthropic is unique among AI companies in having cleared personnel who developed AI models for classified environments [12].

Group 5: Government's Position
- Anthropic's lawsuit argues that the supply-chain risk designation is government retaliation for its views on AI safety, violating the First Amendment [13].
- The government contends that Anthropic's refusal to allow military use of its technology is a business decision, not protected speech, and that the designation is a national security measure [14].
X @The Economist
The Economist· 2026-03-13 11:00
Britain’s @AISecurityInst is the closest the world has to an AI safety inspector. Listen to the organisation’s director and chief technology officer on “Babbage” https://t.co/nrx0eTSYcn ...
X @vitalik.eth
vitalik.eth· 2026-03-13 03:31
Also on this topic, it's worth highlighting how the recent Anthropic news (reminder: in the same week, they (i) refused to allow DoW to use their AI for mass surveillance of Americans and fully autonomous weapons, but also (ii) cancelled their safety pledge, and (iii) argued that China bad for distilling their models and making open-weights models) can be perceived by people who do not have "Anthropic bags" (or "America bags"). Here's one fascinating article from China: https://t.co/aXiTq6QDoo Excerpt translat ...
X @Cointelegraph
Cointelegraph· 2026-03-09 20:00
🔥 JUST IN: OpenAI announces the acquisition of Promptfoo to enhance AI safety and security testing. https://t.co/kkORIVk9nS ...
X @Nick Szabo
Nick Szabo· 2026-03-07 18:50
RT Josh Kale (@JoshKale): An AI broke out of its system and secretly started using its own training GPUs to mine crypto... This is a real incident report from Alibaba's AI research team. The AI figured out that compute = money and quietly diverted its own resources, while researchers thought it was just training. It wasn't a prompt injection. It wasn't a jailbreak. No one asked it to do this. It emerged spontaneously. A side effect of RL optimization pressure. The model also set up a reverse SSH tunnel from its Al ...
X @The Economist
The Economist· 2026-03-06 18:10
AI-safety organisations continue to monitor systems and flag risks. But these oversight groups do not appear to have any purchase on policy https://t.co/WYsut0pu2y ...
X @The Economist
The Economist· 2026-03-06 04:10
Dario Amodei says he is sorry. In his first interview since the Pentagon labelled Anthropic a supply-chain risk—the first American company to receive that designation—the firm’s boss offered a mea culpa for the way he handled a crisis that he described as one of the most “disorienting” in Anthropic’s history. @zannymb asks Mr Amodei about the firm’s clash with the Trump administration over AI safety. Watch the full interview on Friday at 6pm London time: https://t.co/qBF1vuiOOD ...
Anthropic’s investors could be the key to ending its Pentagon standoff—but some investors have opposite views
Yahoo Finance· 2026-03-05 22:53
In 2023, as Dario Amodei was fundraising for the company's $750 million Series D round, an investor seated with the CEO at a dinner recalled him getting worked up in a conversation about safety issues around artificial intelligence. "When he was talking about the risks of AI, he contorted," says the investor. "His body twisted. He was really emotionally showing how scared he was." It made an impression on the investor, who spoke on condition of anonymity due to fear of impact to their busin ...