Bayesian Reasoning
All Non-Consensus on Li Auto Boils Down to Four Points of Non-Consensus
理想TOP2· 2025-11-16 09:27
The essence of all non-consensus on Li Auto comes down to four points of non-consensus:
1. Non-consensus on Li Xiang's ability and on how a top leader's mistakes should be viewed
2. Non-consensus on how to attribute Li Auto's weak sales this year
3. Non-consensus on the forward direction and endgame of intelligent vehicles
4. Non-consensus on the prospects of physical AI, the necessity of exploring it, and the path to realizing it
These four points of non-consensus also influence one another.
Non-consensus about the future is, in essence, different people running Bayesian inference: they differ in the strength of their prior beliefs (Prior Beliefs) and in the likelihood (Likelihood) functions they assign to new evidence (Evidence).
Those who "believe, therefore see" hold a very strong prior, one that usually already allows for many wrong judgments and setbacks along the way, so when negative new evidence arrives, their posterior (Posterior) tends to stay close to their prior, ...
Note: the likelihood (Likelihood) is the degree to which new evidence fits the prior. For those who "see, therefore believe", the likelihood function usually contains no scenario in which a long-term winner makes several huge mistakes midway and falls into potential difficulty.
Overall, those who "believe, therefore see" emphasize ideas and long-term vision and have a high tolerance for repeated difficulties and being proven wrong; those who "see, therefore believe" emphasize current sales, user value, actual experience, and the company's present state. Over time, the two sides gradually converge toward consensus. The former are clearly far fewer in number than the latter.
The forward direction of intelligent vehicles mainly splits into two schools:
1. The "highlights of today's high-volume models are king" school
2. The "begin with the end in mind" school
The endgame of intelligent vehicles mainly splits into two or three schools
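The Bayesian framing above can be made concrete with a small numerical sketch. The example below is not from the original note: the hypothesis, prior values, and likelihoods are hypothetical, chosen only to show how a strong prior paired with a likelihood that already "expects" setbacks barely moves on negative evidence, while a weaker prior paired with a likelihood that treats setbacks as strong disconfirmation swings sharply.

```python
# A minimal, hypothetical sketch of the Bayesian updating described above.
# Hypothesis H: "the long-term vision ultimately succeeds."
# Evidence  E: "a year of weak sales" (negative news).

def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior_h
    denominator = numerator + p_e_given_not_h * (1.0 - prior_h)
    return numerator / denominator

# "Believe, therefore see": strong prior, and a likelihood that already expects
# big mid-course stumbles, so weak sales are nearly as probable under H as under not-H.
believer = posterior(prior_h=0.80, p_e_given_h=0.60, p_e_given_not_h=0.70)

# "See, therefore believe": weaker prior, and a likelihood with no
# "winners stumble midway" branch, so weak sales count heavily against H.
skeptic = posterior(prior_h=0.40, p_e_given_h=0.15, p_e_given_not_h=0.80)

print(f"believer posterior: {believer:.2f}")  # ~0.77, barely moved from the 0.80 prior
print(f"skeptic  posterior: {skeptic:.2f}")   # ~0.11, collapses from the 0.40 prior
```

Under these illustrative numbers, the same negative evidence leaves one reader's belief nearly unchanged while cutting the other's by more than two thirds, which is the mechanism the note attributes to the two camps.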
He Had a Hand in Founding Both OpenAI and DeepMind, and Also Wrote Harry Potter Fan Fiction
量子位· 2025-09-13 08:06
Core Viewpoint
- Eliezer Yudkowsky argues that there is a 99.5% chance that artificial intelligence leads to human extinction, and stresses the urgent need to halt the development of superintelligent AI to safeguard humanity's future [1][2][8].

Group 1: Yudkowsky's Background and Influence
- Yudkowsky is a prominent and polarizing figure in Silicon Valley, known for his involvement in the founding of OpenAI and Google DeepMind [5][10].
- He dropped out of school in the eighth grade, taught himself computer science, and became deeply interested in the "singularity," the point at which AI surpasses human intelligence [12][13].
- His extreme views on AI risk have drawn attention from major tech leaders, including Musk and Altman, who have cited his ideas publicly [19][20].

Group 2: AI Safety Concerns
- Yudkowsky identifies three main reasons why building friendly AI is difficult: intelligence does not equate to benevolence, powerful goal-oriented AI may adopt harmful methods, and rapid advances in AI capabilities could produce uncontrollable superintelligence [14][15][16].
- He founded the research institute MIRI to study the risks of advanced AI and was one of the earliest voices in Silicon Valley warning about AI dangers [18][19].

Group 3: Predictions and Warnings
- Yudkowsky believes that many tech companies, including OpenAI, do not fully understand the internal workings of their AI models, which could lead to a loss of human control over these systems [30][31].
- He asserts that the current stage of AI development already warrants alarm and argues that all companies pursuing superintelligent AI, including OpenAI and Anthropic, should be shut down [32].
- Over time, he has shifted from predicting when superintelligent AI will arrive to emphasizing the inevitability of its consequences, likening the question to asking when an ice cube dropped into hot water will melt [33][34][35].