Anthropic
X @Anthropic
Anthropic· 2025-08-26 13:57
AI in Education - The report reveals how educators balance AI augmentation (collaborative use) against automation (delegating tasks entirely) [1]
X @Anthropic
Anthropic· 2025-08-26 13:57
How do educators use Claude? We ran a privacy-preserving analysis of 74,000 real conversations to identify trends in how teachers and professors use AI at work. https://t.co/U7K8eS92GR ...
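The tweet does not specify how the analysis preserves privacy; one common approach is to report only aggregate counts over de-identified categories and suppress any bucket too small to be anonymous. The sketch below illustrates that idea with hypothetical names (summarize_usage, K_ANONYMITY_THRESHOLD); it is an assumption, not Anthropic's actual pipeline.

```python
# Minimal sketch of threshold-based aggregate reporting over de-identified
# conversation categories. All names here are illustrative assumptions.
from collections import Counter

K_ANONYMITY_THRESHOLD = 20  # suppress any category with too few conversations


def summarize_usage(conversation_topics: list[str]) -> dict[str, int]:
    """Aggregate per-topic counts and drop buckets too small to report safely."""
    counts = Counter(conversation_topics)
    return {topic: n for topic, n in counts.items() if n >= K_ANONYMITY_THRESHOLD}


# Example: topics would come from an upstream classifier over de-identified text.
topics = ["lesson planning"] * 45 + ["grading"] * 30 + ["personal note"] * 3
print(summarize_usage(topics))  # the 3-item bucket is suppressed
```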
X @Anthropic
Anthropic· 2025-08-25 16:06
RT Alex Albert (@alexalbert__): A conversation with @_catwu on: - some tips for using Claude Code - how we prototype new features - customizing Claude Code - how we think about the Claude Code SDK and agents https://t.co/jr04eY17pj ...
X @Anthropic
Anthropic· 2025-08-22 16:19
If you’re interested in joining us to work on these and related issues, you can apply for our Research Engineer/Scientist role (https://t.co/x3G4F5qVWv) on the Alignment Science team. ...
X @Anthropic
Anthropic· 2025-08-22 16:19
Future Development - Classifiers need further work to improve their accuracy and effectiveness [1] - In the future, classifiers may be able to remove data related to other risks (e.g., scheming, deception) in addition to CBRN risks [1]
X @Anthropic
Anthropic· 2025-08-22 16:19
Research Focus - Anthropic is experimenting with methods for removing information about chemical, biological, radiological, and nuclear (CBRN) weapons from model training data [1] - The goal is to filter out dangerous information without degrading the model's performance on harmless tasks [1]
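At a high level, this kind of pretraining-data filtering can be done by scoring each document with a risk classifier and dropping documents above a threshold. The sketch below is a minimal illustration under that assumption; the classifier, threshold, and function names (risk_score, filter_corpus) are hypothetical and stand in for whatever Anthropic actually uses.

```python
# Minimal sketch of classifier-based pretraining-data filtering.
# The scorer and threshold are illustrative assumptions, not Anthropic's method.
from typing import Callable, Iterable, Iterator

RISK_THRESHOLD = 0.8  # documents scoring at or above this are dropped


def filter_corpus(
    documents: Iterable[str],
    risk_score: Callable[[str], float],
) -> Iterator[str]:
    """Yield only the documents the classifier judges safe enough to keep."""
    for doc in documents:
        if risk_score(doc) < RISK_THRESHOLD:
            yield doc


# Example with a toy keyword scorer standing in for a trained classifier.
def toy_risk_score(doc: str) -> float:
    return 1.0 if "enrichment cascade" in doc.lower() else 0.0


corpus = ["How to bake bread", "Detailed enrichment cascade design"]
print(list(filter_corpus(corpus, toy_risk_score)))  # only the harmless document remains
```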
X @Anthropic
Anthropic· 2025-08-21 16:33
Higher Education Initiatives - A new Higher Education Advisory Board is announced to guide the use of Claude in teaching, learning, and research [1]
X @Anthropic
Anthropic· 2025-08-21 16:33
We’ve made three new AI fluency courses, co-created with educators, to help teachers and students build practical, responsible AI skills. They’re available for free to any institution. https://t.co/nK2D3W5YcU ...
X @Anthropic
Anthropic· 2025-08-21 10:36
This demonstrates what's possible when government expertise meets industry capability. NNSA understands nuclear risks better than any company could; we have the technical capacity to build the safeguards. ...
X @Anthropic
Anthropic· 2025-08-21 10:36
We don't need to choose between innovation and safety. With the right public-private partnerships, we can have both. We’re sharing our approach with @fmf_org members so any AI company can implement similar protections. Read more: https://t.co/HxgrIwK8n9 ...