The Ghost of the 1865 "Red Flag Act" Still Haunts Us Today
Tencent Research Institute · 2026-02-04 08:54
Core Viewpoint
- The article discusses the concept of "human in the loop" in AI and automation, emphasizing the need for human intervention to ensure control and safety in technology deployment. However, it also critiques this notion, suggesting that such control may hinder technological advancement and innovation [2][3]

Group 1: Historical Context and Analogies
- The article draws parallels between the current "human in the loop" approach and the historical "Red Flag Act" in 19th-century Britain, which imposed restrictions on early automobiles to mitigate risks, ultimately stifling technological progress [6][8]
- The "Red Flag Act" required that every self-propelled vehicle be preceded by a person on foot carrying a red flag, which limited the speed and development of the British automotive industry, causing it to lag behind other countries [6][8][9]

Group 2: The Role of AI as a Transformative Force
- AI is described as the "miracle material" of the current era, akin to steel in the industrial age, with the potential to revolutionize execution, restructure logic, and automate decision-making [13][15]
- The article argues that if AI is constrained by human cognitive frameworks, its potential will be limited, preventing the emergence of new forms of intelligence that transcend human understanding [15][19]

Group 3: Shifting Perspectives on Human Oversight
- The article advocates a shift from "human in the loop" to "human over the loop," suggesting that humans should supervise AI systems from a higher vantage point rather than being directly involved in every decision-making process [17][19]
- This new perspective emphasizes defining goals, examining values, and designing ethical frameworks rather than rigidly controlling AI, allowing for greater innovation and adaptability [17][19]
Group 4: Future Implications and Responsibilities
- The article posits that the future of knowledge and work will undergo significant transformation driven by AI, and warns against the inertia of existing paradigms that may hinder progress [20]
- It suggests that accountability for AI should evolve from immediate human intervention to systematic audits and compensatory mechanisms, thereby redefining responsibility in the context of AI deployment [19][20]
Wang Jiangping Explains How to Clear the "Barrier Lake" Blocking AI-Driven Scientific Discovery
Zhong Guo Xin Wen Wang · 2025-12-16 08:21
Core Insights
- The rapid growth of AI prediction results is not matched by human verification and industrialization capabilities, creating a "bottleneck" in applying scientific discoveries [3][4]
- The disparity between the exponential increase in AI predictions and the linear growth of human validation leads to a significant backlog of unverified results [3]

Group 1: Reasons for the Bottleneck
- Limitations of predictive models, including insufficient logical reasoning, shallow depth of knowledge, black-box issues, and hallucination risks [3]
- The absence of standards and evaluation systems makes it difficult to determine the accuracy and composability of the large volume of prediction results [3]
- Insufficient experimental validation capacity, owing to poor environmental adaptability, low cross-platform interoperability, and the lack of a closed-loop system for autonomous experiments [3]

Group 2: Proposed Solutions
- Strengthening the construction of datasets, high-value knowledge centers, and evaluation standards for AI prediction results, to reduce redundancy and establish authoritative assessment systems [4]
- Accelerating the development of AI autonomous laboratories by promoting open-source and modular approaches, and exploring hybrid augmented intelligence that involves human participation [5]
- Enhancing pilot-testing platforms to leverage China's application scenarios and foster engineering innovation, while promoting collaboration between academia and industry [5]