The Ghost of the 1865 Red Flag Act Still Haunts Us Today
Tencent Research Institute · 2026-02-04 08:54
Core Viewpoint
- The article discusses the concept of "Human in the loop" in AI and automation, which emphasizes the need for human intervention to ensure control and safety in technology deployment. However, it also critiques this notion, suggesting that such control may hinder technological advancement and innovation [2][3].

Group 1: Historical Context and Analogies
- The article draws parallels between the current "Human in the loop" approach and the historical "Red Flag Act" in 19th-century Britain, which imposed restrictions on early automobiles to mitigate risks, ultimately stifling technological progress [6][8].
- The Red Flag Act required that every self-propelled vehicle be preceded by a person walking ahead carrying a red flag, which limited the speed and development of the automotive industry in Britain and caused it to lag behind other countries [6][8][9].

Group 2: The Role of AI as a Transformative Force
- AI is described as the "miracle material" of the current era, akin to steel in the industrial age, with the potential to revolutionize execution, restructure logic, and automate decision-making [13][15].
- The article argues that if AI is constrained by human cognitive frameworks, its potential will be limited, preventing the emergence of new forms of intelligence that transcend human understanding [15][19].

Group 3: Shifting Perspectives on Human Oversight
- The article advocates a shift from "Human in the loop" to "Human over the loop," suggesting that humans should supervise AI systems from a higher vantage point rather than being directly involved in every decision-making process [17][19].
- This perspective emphasizes defining goals, examining values, and designing ethical frameworks rather than rigidly controlling AI, allowing for greater innovation and adaptability [17][19].
Group 4: Future Implications and Responsibilities
- The article posits that the future of knowledge and work will undergo significant transformation driven by AI, and warns against the inertia of existing paradigms that may hinder progress [20].
- It suggests that accountability in AI should evolve from immediate human intervention to systematic audits and compensatory mechanisms, thereby redefining responsibility in the context of AI deployment [19][20].