Bayesian Reasoning
All Non-Consensus Views on Li Auto Boil Down to Four Points of Non-Consensus
理想TOP2 · 2025-11-16 09:27
Core Viewpoints
- The article identifies four main areas of non-consensus around Li Auto: how to evaluate Li Xiang's capabilities, why the company's sales have disappointed this year, the direction and ultimate goals of smart vehicles, and the prospects of physical AI [1][2].
Group 1: Non-Consensus Areas
- The first area of non-consensus concerns how to evaluate Li Xiang's abilities and what his leadership errors imply [1].
- The second concerns the differing explanations for Li Auto's disappointing sales performance this year [1].
- The third concerns the direction of progress and the ultimate goals of smart vehicles, where two main schools of thought exist: one that prioritizes high-selling models and one that reasons backward from the end goal [3][4].
- The fourth concerns the future of physical AI, including whether exploring it is necessary and what pathways could realize it [1].
Group 2: Bayesian Reasoning
- The article argues that differing beliefs about the future stem from individuals' Bayesian reasoning: the strength of prior beliefs and the assessed likelihood of new evidence vary from person to person [1].
- Those who "believe it to see it" hold strong priors, which can mean a higher tolerance for errors, while those who "see it to believe it" hold weak priors and therefore update more readily on new evidence [2]. A minimal numerical sketch of this asymmetry follows the summary below.
Group 3: Smart Vehicle Directions
- Two main factions exist regarding the direction of smart vehicles: one that focuses on high-selling models and one that starts from the end goal and works backward [3][4].
- The "high-selling model" faction emphasizes the features of currently successful vehicles, while the "end goal" faction believes the future will be defined by AI and automated driving [5].
Group 4: Evaluation of Li Auto's Strategy
- Perceptions of Li Auto's long-term strategy and capabilities vary widely: some believe the company can recover through iterative improvement, while others doubt Li Xiang's abilities because of repeated errors [6].
- Evaluations of Li Auto's products and strategy depend on whether observers focus on immediate performance or on the foundational principles guiding the company's design choices [5][6].
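To make the prior-strength asymmetry in Group 2 concrete, here is a minimal Python sketch of sequential Bayesian updating. The hypothesis, the evidence, the likelihood values (0.4 and 0.7), and the function name `bayes_update` are all illustrative assumptions, not figures from the article.

```python
# Minimal sketch: how prior strength changes responsiveness to evidence.
# All probabilities below are illustrative assumptions.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H|E) via Bayes' rule from P(H) and the two likelihoods of E."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / denominator

# Hypothesis H: "the company's long-term strategy is sound".
# Evidence E: one quarter of weak sales, assumed somewhat more likely
# if the strategy is unsound (0.7) than if it is sound (0.4).
P_E_GIVEN_H, P_E_GIVEN_NOT_H = 0.4, 0.7

for label, prior in [("believe it to see it (strong prior 0.95)", 0.95),
                     ("see it to believe it (weak prior 0.50)", 0.50)]:
    belief = prior
    for quarter in range(1, 5):  # four consecutive weak quarters
        belief = bayes_update(belief, P_E_GIVEN_H, P_E_GIVEN_NOT_H)
        print(f"{label}: after quarter {quarter}, P(H|evidence) = {belief:.3f}")
```

Under these assumed likelihoods, four weak quarters drag the weak prior from 0.50 down to roughly 0.10, while the strong prior only drifts from 0.95 to roughly 0.67. The strong-prior observer tolerates far more contrary evidence before changing their mind, which is the asymmetry the article attributes to the two camps.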
He Was Involved in Founding OpenAI/DeepMind, and Also Wrote Harry Potter Fan Fiction
量子位 · 2025-09-13 08:06
Core Viewpoint
- Eliezer Yudkowsky argues that there is a 99.5% chance that artificial intelligence leads to human extinction, and he calls for halting the development of superintelligent AI to safeguard humanity's future [1][2][8].
Group 1: Yudkowsky's Background and Influence
- Yudkowsky is a prominent and polarizing figure in Silicon Valley, credited with playing a role in the founding of OpenAI and Google DeepMind [5][10].
- He dropped out of school in the eighth grade, taught himself computer science, and became deeply interested in the "singularity", the point at which AI surpasses human intelligence [12][13].
- His stark views on AI risk have drawn attention from major tech leaders, including Musk and Altman, who have cited his ideas publicly [19][20].
Group 2: AI Safety Concerns
- Yudkowsky identifies three main reasons why building friendly AI is hard: intelligence does not imply benevolence; a powerful goal-directed AI may pursue its goals by harmful means; and rapid capability gains could produce an uncontrollable superintelligence [14][15][16].
- He founded the research institute MIRI to study the risks of advanced AI and was among the earliest voices in Silicon Valley warning about AI dangers [18][19].
Group 3: Predictions and Warnings
- Yudkowsky holds that many tech companies, including OpenAI, do not fully understand the internal workings of their own AI models, which could lead to a loss of human control over these systems [30][31].
- He argues that the current stage of AI development already warrants alarm, and that every company pursuing superintelligent AI, including OpenAI and Anthropic, should be shut down [32].
- Over time he has shifted from predicting when superintelligent AI will arrive to emphasizing the inevitability of its consequences, likening the question to asking exactly when an ice cube dropped into hot water will melt [33][34][35].