X @Anthropic
Anthropic· 2025-08-21 16:33
We’ve made three new AI fluency courses, co-created with educators, to help teachers and students build practical, responsible AI skills. They’re available for free to any institution. https://t.co/nK2D3W5YcU ...
X @Anthropic
Anthropic· 2025-08-21 10:36
This demonstrates what's possible when government expertise meets industry capability. NNSA understands nuclear risks better than any company could; we have the technical capacity to build the safeguards. ...
X @Anthropic
Anthropic· 2025-08-21 10:36
We don't need to choose between innovation and safety. With the right public-private partnerships, we can have both. We’re sharing our approach with @fmf_org members so any AI company can implement similar protections. Read more: https://t.co/HxgrIwK8n9 ...
X @Anthropic
Anthropic· 2025-08-21 10:36
We partnered with @NNSANews to build first-of-their-kind nuclear weapons safeguards for AI. We've developed a classifier that detects nuclear weapons queries while preserving legitimate uses for students, doctors, and researchers. https://t.co/PlZ55ot74l ...
X @Anthropic
Anthropic· 2025-08-20 18:14
Product Update - Claude Code is now available on Team and Enterprise plans [1] - Flexible pricing allows organizations to mix standard and premium Claude Code seats and scale according to usage [1]
X @Anthropic
Anthropic· 2025-08-15 20:41
AI Model Research - Anthropic interpretability researchers discuss looking into the mind of an AI model [1] Interpretability - The discussion highlights the importance of understanding an AI model's decision-making processes [1]
X @Anthropic
Anthropic· 2025-08-15 19:41
The vast majority of users will never experience Claude ending a conversation, but if you do, we welcome feedback. Read more: https://t.co/hmCSOSFupB ...
X @Anthropic
Anthropic· 2025-08-15 19:41
Purpose of the Feature - This is an experimental feature designed for Claude's use as a last resort [1] - The feature is intended for extreme cases of persistently harmful and abusive conversations [1] Usage Restriction - The feature is intended only for use by Claude [1]
X @Anthropic
Anthropic· 2025-08-15 19:41
Model Capabilities - Claude Opus 4 and 4.1 were given the ability to end a rare subset of conversations on a specific platform [1] Research & Development - The company is conducting exploratory work on potential model welfare [1]
X @Anthropic
Anthropic· 2025-08-14 19:00
A reminder that applications for our Anthropic Fellows program are due by this Sunday, August 17. Fellowships can start anytime from October to January. You can find more details, and the relevant application links, in the thread below. Anthropic (@AnthropicAI): We’re running another round of the Anthropic Fellows program. If you're an engineer or researcher with a strong coding or technical background, you can apply to receive funding, compute, and mentorship from Anthropic, beginning this October. There'll be ...