Anthropic launches code review tool to check flood of AI-generated code
TechCrunch·2026-03-09 19:41

Core Insights
- The rise of "vibe coding" with AI tools has accelerated development but introduced new bugs and security risks, highlighting the need for effective peer feedback in coding. [1]

Company Developments
- Anthropic launched an AI reviewer called Code Review to catch bugs before they enter the codebase, aimed primarily at enterprise users. [2][4]
- The company has seen significant growth in its Claude Code product, with subscriptions quadrupling since the start of the year and run-rate revenue surpassing $2.5 billion. [6]

Product Features
- Code Review integrates with GitHub to automatically analyze pull requests, commenting on potential issues and suggesting fixes, with a focus on logical errors rather than style. [8][9]
- The system uses a multi-agent architecture: each agent examines the codebase from a different perspective, and a final agent aggregates their findings. [10]

Market Demand
- Demand for Code Review is strong among enterprises seeking to manage the increased volume of pull requests generated by Claude Code, enabling faster feature development with fewer bugs. [13]
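The multi-agent architecture described above can be illustrated with a minimal sketch. Anthropic has not published Code Review's internals, so the agent names, the toy heuristics, and the aggregation logic here are all hypothetical; the sketch only shows the general pattern of several specialized reviewers feeding one aggregator.

```python
from dataclasses import dataclass

# Hypothetical multi-agent review pipeline: each "agent" scans a diff
# from one perspective; a final step merges and deduplicates findings.
# None of this reflects Anthropic's actual implementation.

@dataclass(frozen=True)
class Finding:
    agent: str
    file: str
    line: int
    message: str

def security_agent(diff: dict[str, list[str]]) -> list[Finding]:
    # Toy heuristic: flag lines that look like hard-coded secrets.
    return [Finding("security", f, i, "possible hard-coded secret")
            for f, lines in diff.items()
            for i, text in enumerate(lines, 1)
            if "password=" in text]

def logic_agent(diff: dict[str, list[str]]) -> list[Finding]:
    # Toy heuristic: flag '== None' comparisons, a common logic smell.
    return [Finding("logic", f, i, "use 'is None' instead of '== None'")
            for f, lines in diff.items()
            for i, text in enumerate(lines, 1)
            if "== None" in text]

def aggregate(per_agent: list[list[Finding]]) -> list[Finding]:
    # "Final agent": merge all findings, drop duplicates, sort by location.
    seen, merged = set(), []
    for findings in per_agent:
        for f in findings:
            key = (f.file, f.line, f.message)
            if key not in seen:
                seen.add(key)
                merged.append(f)
    return sorted(merged, key=lambda f: (f.file, f.line))

diff = {"auth.py": ["password='hunter2'", "if user == None:"]}
report = aggregate([agent(diff) for agent in (security_agent, logic_agent)])
for f in report:
    print(f"{f.file}:{f.line} [{f.agent}] {f.message}")
```

The design point is the separation of concerns: each reviewer stays simple because the aggregator owns deduplication and ordering, which is what lets perspectives be added or removed independently.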
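For the GitHub integration, a reviewer that comments on pull requests would typically go through GitHub's REST endpoint for creating a review comment (`POST /repos/{owner}/{repo}/pulls/{pull_number}/comments`). The endpoint and its fields are GitHub's; the helper function and the sample values below are illustrative, and the sketch only builds the payload rather than sending a request.

```python
import json

# Sketch of the payload for GitHub's "create a review comment on a
# pull request" endpoint. The helper name and sample values are
# hypothetical; only the field names come from GitHub's REST API.

def build_review_comment(path: str, line: int, commit_id: str, body: str) -> dict:
    return {
        "body": body,            # the comment text shown on the PR
        "commit_id": commit_id,  # SHA of the commit being reviewed
        "path": path,            # file the comment attaches to
        "line": line,            # line in the diff to anchor on
        "side": "RIGHT",         # comment on the new version of the file
    }

payload = build_review_comment(
    path="auth.py",
    line=2,
    commit_id="abc123",
    body="Possible logic error: prefer 'is None' over '== None'.",
)
print(json.dumps(payload, indent=2))
```

A real integration would POST this JSON with an authenticated client, typically triggered by a `pull_request` webhook event.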