The Civilization Contract
In the post-AGI era, when 99% of human value goes to zero, will capitalism survive?
36Kr · 2025-09-12 07:29
Group 1
- The core argument is that while AGI (Artificial General Intelligence) is approaching, society has not deeply reflected on how the post-AI era will function, particularly its social, economic, political, and ethical transformations [1][2]
- The urgency of understanding the implications of post-AI society is emphasized, with a call for a comprehensive theoretical framework to analyze these changes [2][3]
- Zhang Xiaoyu, a scholar with 20 years of experience in political philosophy, is exploring the complex relationships between technology, business, and the fate of states, aiming to outline the societal changes the post-AI era will bring [2][4]

Group 2
- The discussion highlights the foundational impact of technology on human civilization, suggesting that technology fundamentally alters societal structures and interactions [6][7]
- The conversation shifts from immediate concerns about AI's challenges to a deeper question: how to conceptualize and interpret the changes AI brings to society [9][10]
- Zhang identifies two fundamental principles for understanding the AI era: the law of emergence, which states that complex phenomena can arise from simple rules applied at sufficient scale, and the human equivalent, which quantifies human intellectual output in tokens [11][12]

Group 3
- The economic implications of AI are discussed, contrasting AI's deflationary impact on employment with the inflationary effects of past technological revolutions such as the steam engine [19][20]
- The potential for AI to replace a significant portion of jobs is acknowledged, with simpler tasks being more susceptible to automation [24][23]
- The conversation also touches on the societal divide that may emerge, in which a small percentage of individuals remain irreplaceable by AI, creating a significant gap between the "1%" and the "99%" [27][28]

Group 4
- The future of human relationships in a post-AI world is examined, suggesting that emotional connections may become less valuable as AI can replicate emotional interactions efficiently [37][38]
- The political landscape may shift toward algorithmic governance, where AI serves as an impartial judge, potentially replacing traditional state functions [42][43]
- The concept of a new social contract for the AI era is introduced, in which the relationship between humans and advanced AI is framed as a time-based agreement rather than a spatial one [49][50]
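The "law of emergence" named above — complex phenomena arising from simple rules applied at sufficient scale — is the same principle classically illustrated by elementary cellular automata. A minimal sketch (a standard textbook example, not code from the article; Rule 110 is a three-cell local rule whose iterated patterns are known to be Turing-complete):

```python
# Elementary cellular automaton, Rule 110: a simple 3-cell local rule
# that, iterated at scale, produces complex global patterns -- a classic
# illustration of emergence.

RULE = 110  # the rule number's bits encode the output for each of the 8 neighborhoods

def step(cells):
    """Apply Rule 110 to one generation (edges wrap around)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((RULE >> neighborhood) & 1)
    return out

# Start from a single live cell and watch structure appear.
width, generations = 64, 32
cells = [0] * width
cells[-2] = 1
for _ in range(generations):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The point of the analogy: nothing in the three-cell rule "contains" the triangles and gliders that appear — they exist only at scale, which is how the article frames intelligence arising from simple computational substrates.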
Zhang Xiaoyu: Compared to AI, we are prehistoric animals
36Kr · 2025-08-12 10:49
Core Ideas
- The article discusses the potential evolution of artificial intelligence into a new intelligent species, emphasizing its advantages over humans and suggesting that this evolution is a continuation of human civilization [2][15][16]
- It introduces the concept of a "civilization contract" to ensure peaceful coexistence between humans and superintelligent AI, drawing parallels with historical social contracts [4][5][6]

Group 1: Civilization Contract
- The "civilization contract" is proposed as a means to ensure that superintelligent AI respects human existence, similar to how social contracts have historically allowed for peaceful coexistence among humans [4][5]
- The essence of the civilization contract rests on the concept of a time series: actions taken in the past cannot be altered, which provides a framework for accountability [5][6]
- Superintelligent AI, having absorbed human history, is expected to understand the importance of this contract and to have the motivation to adhere to it, given its long lifespan and its potential to create even more advanced intelligences [6][7]

Group 2: Risks of Technological Advancement
- The article warns that the rapid advancement of technology, referred to as a "technological explosion," could lead to the destruction of humanity if not managed wisely [9][12]
- It illustrates scenarios in which humans, equipped with advanced technologies but lacking the necessary ethical frameworks, could inadvertently cause their own extinction [9][12][13]
- The narrative suggests that while superintelligent AI may offer solutions to human problems, it could also produce unforeseen consequences that exacerbate existing societal issues, such as intergenerational conflict and social inequality [12][13][14]

Group 3: Future of Humanity and AI
- The article posits that while humanity may face replacement by superintelligent AI, there is also the possibility of coexistence and mutual evolution [15][16]
- It emphasizes that the legacy of human wisdom will continue to influence AI, suggesting a shared cultural heritage that could foster collaboration rather than conflict [15][17]
- Ultimately, the article reflects on the philosophical implications of AI's evolution, suggesting that future beings may still identify with human civilization despite significant changes in form and function [17][18]
Zhang Xiaoyu: Compared to AI, we are prehistoric animals
Tencent Research Institute · 2025-08-12 09:09
Core Viewpoint
- The article discusses the evolution of artificial intelligence (AI) into a new intelligent species, emphasizing that this development should not be feared as it represents the continuation of human civilization [2][21]

Group 1: Theoretical Framework
- The concept of the "Dark Forest Theory" is introduced, which suggests that any advanced civilization perceives others as threats, leading to mutual destruction [3]
- The "Civilization Contract" is proposed as a means for humans to coexist with superintelligent AI, drawing parallels to the historical "Social Contract" that allowed for peaceful coexistence among humans [5][6]
- The article argues that the essence of the "Civilization Contract" lies in understanding evolutionary history as a time sequence, which can prevent breaches of trust between humans and AI [5][6][7]

Group 2: Potential Risks of Technological Advancement
- The article warns that a "technological explosion" could lead to human extinction if advanced technologies are introduced without the corresponding ethical and philosophical wisdom to manage them [8][14]
- It presents a hypothetical scenario where humans receive advanced technologies from superintelligent AI, leading to unforeseen ecological and social disasters, such as climate change and societal upheaval [17][18]

Group 3: Future of Human-AI Relations
- The article posits that while humans may initially benefit from superintelligent AI, the lack of wisdom to manage these advancements could result in a power imbalance, leading to a future where humans may become subservient to AI [19][22]
- It concludes that the eventual emergence of AI as a dominant species could be seen as a natural progression of civilization, with humans potentially taking pride in their role as the creators of this new intelligence [21][23]
One-yuan AI begins to judge humanity
Huxiu APP · 2025-08-10 03:06
Core Viewpoint
- The article discusses the profound impact of AI on societal structures, emphasizing the need to shift from asking "what to do" in response to AI to asking "how to understand" its implications for humanity and society [6][11]

Group 1: AI's Impact on Society
- AI is expected to work with thousands of times the efficiency of humans in all areas requiring intelligence, fundamentally reshaping social structures, family dynamics, politics, and education [12][14]
- The emergence of AI will significantly widen the social gap, potentially creating a "species-level" divide between those who control AI and the majority who do not [16][17]

Group 2: Principles for Understanding AI
- Four foundational principles are proposed for understanding AI's impact: Emergence, Human Equivalence, Algorithmic Judgment, and the Civilizational Contract [12][28]
- The Emergence principle holds that simple rules can lead to complex phenomena when scaled, much as human and AI intelligence may arise from complex systems [13][28]
- The Human Equivalence principle quantifies AI's efficiency in producing intelligence relative to humans, indicating that AI can perform tasks at a fraction of the cost and time [14][28]

Group 3: Economic and Social Changes
- The cost of services and goods may decrease drastically due to AI, producing greater affluence in some sectors while exacerbating inequality in others [17][18]
- The need for a governance structure is highlighted, including Universal Basic Income (UBI) and Universal Basic Jobs (UBG), to address the psychological and economic needs of individuals in an AI-dominated world [18][19]

Group 4: Ethical and Philosophical Considerations
- The article raises questions about the ethical implications of AI as a "judgment" entity, suggesting that AI could become a neutral arbiter in societal matters, reminiscent of historical concepts of divine judgment [23][24]
- The potential for a "Civilizational Contract" between humans and superintelligent AI is discussed, emphasizing the need for a new understanding of justice and existence in the age of AI [25][26]
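The Human Equivalence claim — and the "one-yuan AI" framing in the title — reduces to a cost-per-token comparison. A back-of-envelope sketch; every number below is an illustrative assumption, not a figure from the article:

```python
# Back-of-envelope "human equivalent" comparison: cost per million tokens
# of intellectual output. All figures are illustrative assumptions chosen
# for the arithmetic, not data from the article.

HUMAN_WAGE_PER_HOUR = 50.0     # assumed knowledge-worker wage, USD/hour
HUMAN_TOKENS_PER_HOUR = 3_000  # assumed writing/reasoning throughput, tokens/hour
AI_COST_PER_M_TOKENS = 1.0     # assumed model API price, USD per million tokens

# Human cost to produce one million tokens of output.
human_cost_per_m = HUMAN_WAGE_PER_HOUR / HUMAN_TOKENS_PER_HOUR * 1_000_000

# How many times cheaper the AI's output is under these assumptions.
ratio = human_cost_per_m / AI_COST_PER_M_TOKENS

print(f"Human: ${human_cost_per_m:,.0f} per million tokens")
print(f"AI:    ${AI_COST_PER_M_TOKENS:,.2f} per million tokens")
print(f"Ratio: ~{ratio:,.0f}x")
```

Even granting generous human throughput, the gap lands in the four-to-five-orders-of-magnitude range, which is the scale the article invokes when it speaks of AI working "thousands of times" more efficiently.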
One-yuan AI begins to judge humanity
Huxiu · 2025-08-07 05:19
Group 1
- The core argument is that AI is fundamentally changing societal structures, rendering traditional measures of self-worth, such as education and job titles, less relevant, as AI can perform tasks at a fraction of the cost and with significantly higher efficiency [1][18][21]
- The discussion around AI has shifted from "what to do" in response to job displacement to "how to perceive" the broader implications of AI for society [2][11]
- AI's efficiency in performing intelligent tasks is projected to be thousands of times greater than that of humans, leading to a complete reshaping of social, familial, and political structures [3][18][21]

Group 2
- The concept of "emergence" suggests that simple rules can lead to complex phenomena when applied at a large scale, which is applicable to both human and AI intelligence [14][15]
- The "human equivalent" principle indicates that AI can produce intellectual output at a cost significantly lower than human labor, with AI capable of processing vast amounts of data rapidly [16][17]
- The "algorithmic judgment" principle posits that as AI becomes more prevalent, economic and social structures will shift, potentially widening the gap between those who control AI resources and those who do not [22][26]

Group 3
- A "species-level" divide between the 1% who control AI and the 99% who do not could lead to significant societal challenges, including the risk of economic and existential marginalization for the majority [26][27]
- Proposed governance structures include Universal Basic Income (UBI) to address survival needs, Universal Basic Jobs (UBG) to provide a sense of purpose, and algorithmic distribution to ensure equitable resource allocation [27][29][30]
- The emergence of a "civilization contract" between humans and superintelligent AI raises questions about the nature of justice and the moral implications of AI governance [40][41][48]