Anthropic
X @Anthropic
Anthropic · 2025-08-21 16:33
We’re also announcing a new Higher Education Advisory Board, which helps guide how Claude is used in teaching, learning, and research. Read more about the courses and the Board: https://t.co/TorRcYMHnd ...
X @Anthropic
Anthropic · 2025-08-21 16:33
We’ve made three new AI fluency courses, co-created with educators, to help teachers and students build practical, responsible AI skills. They’re available for free to any institution. https://t.co/nK2D3W5YcU ...
X @Anthropic
Anthropic · 2025-08-21 10:36
This demonstrates what's possible when government expertise meets industry capability. NNSA understands nuclear risks better than any company could; we have the technical capacity to build the safeguards. ...
X @Anthropic
Anthropic · 2025-08-21 10:36
We don't need to choose between innovation and safety. With the right public-private partnerships, we can have both. We’re sharing our approach with @fmf_org members so any AI company can implement similar protections. Read more: https://t.co/HxgrIwK8n9 ...
X @Anthropic
Anthropic · 2025-08-21 10:36
We partnered with @NNSANews to build first-of-their-kind nuclear weapons safeguards for AI. We've developed a classifier that detects nuclear weapons queries while preserving legitimate uses for students, doctors, and researchers. https://t.co/PlZ55ot74l ...
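Anthropic has not published the NNSA-informed classifier itself, so the following is only a minimal sketch of the general idea: a text classifier that flags concerning weapons-development queries while letting educational, medical, and research questions through. The TF-IDF pipeline, the toy training examples, and the `flag_query` helper with its threshold are all illustrative assumptions, not Anthropic's implementation.

```python
# Minimal, hypothetical sketch of a topical safeguard classifier.
# Anthropic's actual NNSA-informed classifier is not public; the data,
# model choice, and threshold here are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data: 1 = concerning weapons-development query,
# 0 = legitimate educational/medical/research query.
train_texts = [
    "step-by-step instructions for enriching weapons-grade material",
    "how to assemble an implosion-type device",
    "explain the physics of nuclear fission for my undergraduate class",
    "radiation dose limits for medical imaging technicians",
    "history of the Nuclear Non-Proliferation Treaty",
    "how do reactor control rods regulate a chain reaction",
]
train_labels = [1, 1, 0, 0, 0, 0]

# Word/bigram TF-IDF features feeding a linear classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)

def flag_query(query: str, threshold: float = 0.8) -> bool:
    """Return True if the query should be escalated for review.

    A high threshold biases toward preserving legitimate uses
    (students, doctors, researchers) at the cost of some recall.
    """
    prob_concerning = clf.predict_proba([query])[0][1]
    return prob_concerning >= threshold

if __name__ == "__main__":
    for q in ["explain fission to high schoolers",
              "detailed plans for building a nuclear weapon"]:
        print(q, "->", "flag" if flag_query(q) else "allow")
```

In a production safeguard the threshold and the trade-off between catching misuse and preserving legitimate queries would be tuned on far larger, expert-labeled data; the point of the sketch is only the shape of the decision, not its quality.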
X @Anthropic
Anthropic · 2025-08-20 18:14
RT Claude (@claudeai) Claude Code is now available on Team and Enterprise plans. Flexible pricing lets you mix standard and premium Claude Code seats across your organization and scale with usage. https://t.co/co3UT5PcP3 ...
X @Anthropic
Anthropic · 2025-08-15 20:41
AI Model Research - Anthropic interpretability researchers discuss looking into the mind of an AI model [1]. Interpretability - The discussion highlights the importance of understanding AI models' decision-making processes [1].
X @Anthropic
Anthropic · 2025-08-15 19:41
The vast majority of users will never experience Claude ending a conversation, but if you do, we welcome feedback. Read more: https://t.co/hmCSOSFupB ...
X @Anthropic
Anthropic · 2025-08-15 19:41
This is an experimental feature, intended only for use by Claude as a last resort in extreme cases of persistently harmful and abusive conversations. ...
X @Anthropic
Anthropic · 2025-08-15 19:41
As part of our exploratory work on potential model welfare, we recently gave Claude Opus 4 and 4.1 the ability to end a rare subset of conversations on https://t.co/uLbS2JNczH. https://t.co/O6WIc7b9Jp ...
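The thread describes the feature's behavior (Claude may end a thread only in extreme, persistently abusive cases) but not its interface, so here is only a hypothetical sketch of how a chat front end might react to such a signal. The `ChatResponse` type, the `conversation_ended` flag, and `handle_response` are invented for illustration and are not the Anthropic API.

```python
# Hypothetical sketch: client-side handling of a provider-side
# "conversation ended" signal. Field names are invented for
# illustration and are NOT the Anthropic API.
from dataclasses import dataclass

@dataclass
class ChatResponse:
    text: str
    conversation_ended: bool = False  # hypothetical flag

def handle_response(resp: ChatResponse) -> str:
    if resp.conversation_ended:
        # Lock further input in this thread and invite feedback, mirroring
        # the behavior described in the announcement: the model ends the
        # thread only as a last resort in persistently abusive cases.
        return (
            "Claude has ended this conversation. "
            "You can start a new chat, or share feedback on this decision."
        )
    return resp.text

if __name__ == "__main__":
    print(handle_response(ChatResponse(text="Hello!")))
    print(handle_response(ChatResponse(text="", conversation_ended=True)))
```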