Life Is Not an Island: We Crave Connection More Than Ever | 创业Lifestyle
红杉汇· 2026-01-16 00:05
Core Viewpoint
- Trust is defined as a scientific decision-making model composed of understanding, motivation, ability, character, and integrity record, rather than a simple feeling of belief [2]

Group 1: Understanding as the Foundation of Trust
- The starting point of trust is not persuading others to trust, but making them feel understood [6]
- Understanding encompasses emotional resonance, psychological insight, physiological perception, and communication comprehension [7]

Group 2: Motivation Fostering Trust
- Genuine trust transcends moral or responsibility codes, rooted in love, care, and compassion [8]
- A mindset of considering others' well-being is crucial for fostering trust in collaborative environments [8]

Group 3: Ability as a Component of Trust
- Trust requires the support of ability; people often delegate tasks without fully understanding the other person's capabilities [9]
- Organizations need knowledgeable individuals in leadership roles to ensure effective governance [10]

Group 4: Character's Role in Trust
- Character influences trust significantly; it involves more than just honesty and moral behavior [10]
- Integrity is defined as a state of wholeness, requiring a combination of various traits beyond mere honesty [16]

Group 5: Importance of Integrity Record
- Past behavior is the best predictor of future actions; trust is built on the expectation that individuals will act consistently based on their history [17]
You Think You're Testing AI? Actually, AI Is Testing You | 红杉Library
红杉汇· 2026-01-09 00:07
Core Insights
- The article discusses the "reverse Turing test" hypothesis proposed by Terrence Sejnowski in his new book "The Large Language Model," suggesting that large language models act like the Mirror of Erised, reflecting the intelligence level and prompt quality of the interlocutor rather than merely passing human tests [2][4]
- The traditional cognitive framework based on natural intelligence is becoming inadequate for large language models, necessitating an update to the definitions of core concepts like "intelligence" and "understanding" [2][12]
- The rapid development of large language models could lead to groundbreaking discoveries of new principles of intelligence and mathematics, potentially revolutionizing the field of artificial intelligence in a manner akin to the role of DNA in biology [2][12]

Summary by Sections

Reverse Turing Test Hypothesis
- Sejnowski posits that large language models can assess the intelligence of users through their responses, indicating that higher-quality prompts lead to more sophisticated model outputs [4][7]
- This phenomenon is described as a mapping effect, where the model's performance improves with the depth of the user's input [8]

Reevaluation of Intelligence Standards
- The article emphasizes the need to redefine human standards of intelligence, moving from idealized human comparisons to more realistic assessments based on ordinary individuals [10][11]
- The ongoing debate about whether large language models truly understand their outputs reflects a broader discussion about the nature of intelligence itself [14]

Implications for Understanding Intelligence
- The emergence of large language models provides an opportunity to rethink and deepen the understanding of concepts like "intelligence," "understanding," and "ethics," which have been shaped by outdated 19th-century psychological frameworks [12][13]
- The article draws parallels between current discussions on intelligence and historical debates on the essence of life, suggesting that advances in machine learning may lead to a new conceptual framework for artificial intelligence [14]
X @Yuyue
Yuyue· 2025-11-26 12:34
This song by 林宥嘉 (Yoga Lin) is a classic, and so true: "communication between people is sometimes useless." Humans are really bad at understanding and expressing themselves. Once you've talked with AI enough, talking to people starts to feel exhausting. Take haircuts as an example: even when you spell out your requirements clearly, "don't cut the bangs too short," "don't cover the forehead," the hairdresser still does it his own way... he always has just a *tiny* billion ideas of his own... https://t.co/4Ib3FFmTks ...