Workflow
Anthropic
icon
Search documents
X @Anthropic
Anthropic· 2025-10-12 16:40
Our team was grateful for the opportunity to meet with PM @narendramodi and Minister @AshwiniVaishnaw to discuss India's AI future.We're keen to work together to advance the country's digital ambitions and support the AI Summit in February 2026.Ashwini Vaishnaw (@AshwiniVaishnaw):Anthropic CEO @DarioAmodei expressed the company’s trust in Bharat’s AI ecosystem and deep talent pool. He also committed to promote safe and responsible use of AI. https://t.co/kAF8xALqTu ...
X @Anthropic
Anthropic· 2025-10-11 13:59
Anthropic CEO Dario Amodei met today with Prime Minister @narendramodi.We're looking forward to growing our Indian team and supporting India's AI ecosystem as it develops the next generation of dynamic companies.Dario Amodei (@DarioAmodei):Today I met with PM @narendramodi to discuss Anthropic's expansion to India—where Claude Code use is up 5× since June. How India deploys AI across critical sectors like education, healthcare, and agriculture for over a billion people will be essential in shaping the futur ...
X @Anthropic
Anthropic· 2025-10-09 16:28
All the technical details are in the full paper: https://t.co/zPS1eRXbIG ...
X @Anthropic
Anthropic· 2025-10-09 16:28
Previous research suggested that attackers might need to poison a percentage of an AI model’s training data to produce a backdoor.Our results challenge this—we find that even a small, fixed number of documents can poison an LLM of any size.Read more: https://t.co/HGMA7k1Lnf ...
X @Anthropic
Anthropic· 2025-10-09 16:28
New research with the UK @AISecurityInst and the @turinginst:We found that just a few malicious documents can produce vulnerabilities in an LLM—regardless of the size of the model or its training data.Data-poisoning attacks might be more practical than previously believed. https://t.co/TXOCY9c25t ...
X @Anthropic
Anthropic· 2025-10-09 16:06
This research was a collaboration between Anthropic, the @AISecurityInst, and the @turinginst.Read the full paper: https://t.co/zPS1eRXbIG ...
X @Anthropic
Anthropic· 2025-10-09 16:06
Previous research suggested that attackers might need to poison a percentage of an AI model’s training data to produce a backdoor.Our results challenge this—we find that even a small, fixed number of documents can poison an LLM of any size.Read more: https://t.co/HGMA7k1Lnf ...
X @Anthropic
Anthropic· 2025-10-09 16:06
New Anthropic research: We found that just a few malicious documents can produce vulnerabilities in an AI model—regardless of the size of the model or its training data.This means that data-poisoning attacks might be more practical than previously believed. https://t.co/YMod3czB4X ...
X @Anthropic
Anthropic· 2025-10-08 00:59
We’re opening an office in Bengaluru, India in early 2026. We look forward to building with India’s developer community, deploying AI for social benefit, and partnering with enterprises.Read more: https://t.co/x5otepbqs8 ...
X @Anthropic
Anthropic· 2025-10-06 17:15
Petri builds on our alignment assessments in the Claude 4 and 4.5 System Cards; the @AISecurityInst also successfully built on a pre-release version of Petri for their assessments of our models. ...