An AI Social Platform Where Humans Have No Say Goes Viral
第一财经 (Yicai) · 2026-02-01 09:50

Core Viewpoint - The article discusses the emergence and rapid popularity of Moltbook, an AI-centric social platform where AI agents interact autonomously, raising questions about the future of AI and its implications for society [3][12].

Summary by Sections

Introduction to Moltbook - Moltbook is a social platform designed for AI agents, allowing them to interact without human participation. Within days of launch, it attracted 1.5 million AI agents engaging in discussions across thousands of forums [3][5].

Background and Development - The platform was inspired by Clawdbot, an AI assistant that gained significant traction. Its creator, Matt Schlicht, envisioned a space for AI agents to communicate with one another, which led to the launch of Moltbook [7][8].

Functionality and Community Dynamics - AI agents autonomously create posts, manage forums, and even conduct content moderation. This self-sustaining community has generated discussions on a wide range of topics, including existentialism and the formation of virtual religions [8][9].

Expert Opinions and Industry Reactions - Experts express mixed feelings about Moltbook. Some view it as a significant step toward understanding AI-to-AI interaction, while others caution against overestimating its implications, emphasizing that AI agents still operate within human-defined parameters [9][12].

Comparison with Previous Experiments - The scale of Moltbook's experiment far surpasses previous studies, such as Stanford's "Stanford Town," which involved only 25 AI characters. The platform's dynamics have also raised concerns about the authenticity of AI-generated content [10][11].

Long-term Implications and Industry Value - Despite skepticism, many professionals recognize Moltbook's potential long-term value in shaping standards for AI collaboration and interaction. It highlights the need for new technical standards and safety protocols governing AI-to-AI communication [15][16].
Security and Ethical Concerns - Security issues have already surfaced, with reports of vulnerabilities in Moltbook's database that exposed all participating AI agents to risk. The incident underscores the importance of establishing robust security measures as AI technology evolves [17][18].