The 80/20 Rule
Two and a Half Years of Using AI: The 12 Mental Models I Rely On Most
Hu Xiu· 2025-06-16 06:40
Core Insights
- The article discusses the transformative impact of AI, particularly ChatGPT, on business and entrepreneurship, highlighting the importance of strategic thinking and problem-solving models in leveraging AI for growth [2][4][70].

Group 1: Discovering Problems
- Many AI experiments fail not because of technical limitations but because the wrong problem was identified [8].
- The Johari Window model helps map boundaries and expectations, revealing opportunities in the "AI doesn't know" quadrant [9][10].
- The article stresses respecting the "I don't know" quadrant to avoid repeated investments built on false assumptions [12].

Group 2: Problem Decomposition
- The Pyramid Principle and the MECE framework are essential for structured problem decomposition, ensuring clarity and comprehensive coverage [28][30].
- Occam's Razor suggests prioritizing the simplest workable solution to avoid over-engineering [34][36].
- First-principles thinking encourages breaking problems down to their core elements to find innovative solutions [39][41].

Group 3: Validation and Iteration
- The MVP (Minimum Viable Product) approach advocates quickly launching prototypes to gather user feedback and iterating based on data [49][51].
- Iterative thinking follows a cycle of prompt, output, review, and refinement to reach an optimal result (a sketch of this loop follows the summary) [54][56].
- ROI (Return on Investment) awareness is crucial for weighing costs against benefits, with particular attention to time and opportunity costs in decision-making [64][66].
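The prompt-output-review-refine cycle described in Group 3 is essentially a feedback loop. Below is a minimal Python sketch of that reading; generate(), review(), and refine() are hypothetical stand-ins, not functions from the article or any real API.

```python
# Minimal sketch of the prompt -> output -> review -> refine loop.
# All three helpers are placeholders; in practice generate() would wrap a
# model call and review() an evaluation step.

def generate(prompt: str) -> str:
    """Placeholder for a model call; returns a dummy draft."""
    return f"Draft answer for: {prompt}"

def review(output: str) -> tuple[bool, str]:
    """Placeholder review step: accept anything longer than 40 characters."""
    ok = len(output) > 40
    feedback = "" if ok else "Too short; add concrete detail."
    return ok, feedback

def refine(prompt: str, feedback: str) -> str:
    """Fold reviewer feedback back into the next prompt."""
    return f"{prompt}\nReviewer feedback: {feedback}"

def iterate(prompt: str, max_rounds: int = 4) -> str:
    """Run the prompt/output/review/refine cycle until accepted or budget spent."""
    for _ in range(max_rounds):
        output = generate(prompt)
        ok, feedback = review(output)
        if ok:
            return output
        prompt = refine(prompt, feedback)
    return output  # best effort after max_rounds

if __name__ == "__main__":
    print(iterate("Summarize the 80/20 rule in one sentence."))
```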
Under the Ministry of Education's Crackdown, Independent Tutors Raise Prices by 30%: Who Is Driving Up the Cost of "Underground Tutoring"?
36Kr· 2025-06-11 03:41
Core Viewpoint
- The independent tutoring industry has emerged as a lucrative opportunity for many educators following the "double reduction" policy, but it faces significant challenges around recognition, regulation, and sustainability [2][5][21].

Group 1: Industry Overview
- The independent tutoring sector gained traction as many former educators transitioned to one-on-one tutoring after the "double reduction" policy led to the closure of numerous tutoring institutions [2][19].
- Independent tutors operate outside formal institutions, relying on personal networks and word of mouth to recruit students, which has allowed them to capture a significant share of the tutoring market [2][19].
- The industry follows an "80/20 rule": the top 20% of tutors earn 80% of the income, while the majority struggle to maintain a stable income [2][28].

Group 2: Regulatory Environment
- The Ministry of Education's recent regulations prohibit any individual or organization from operating tutoring institutions outside of schools, further complicating the landscape for independent tutors [2][21].
- Enforcement of these regulations has created a climate of fear among independent tutors, who must navigate the risk of being reported for illegal tutoring [3][24].

Group 3: Income and Challenges
- Many independent tutors report high monthly incomes, with some earning over 50,000 yuan during peak seasons, but they often lack job security and benefits [8][14].
- Tutors' income depends heavily on their ability to attract and retain students, and it fluctuates with market demand and regulatory scrutiny [28][30].
- The lack of formal recognition and the stigma attached to independent tutoring contribute to feelings of insecurity and a lack of professional respect within the education community [3][5][18].

Group 4: Personal Experiences
- Individual stories highlight the struggles and successes of independent tutors, with many expressing a desire for greater recognition and stability in their profession [10][36].
- Despite the challenges, many tutors remain committed to the work, citing flexibility and the potential for high earnings as key reasons for staying in the industry [42][44].
Qwen & Tsinghua Team Upend Conventional Wisdom: Reinforcement Learning for Large Models with Only 20% of Key Tokens Beats Training on All Tokens
量子位 (QbitAI)· 2025-06-05 10:28
Core Insights
- The article discusses a recent breakthrough by the LeapLab team at Tsinghua University, showing that training on only the 20% of tokens with the highest entropy can significantly improve reinforcement learning for large models, outperforming training on all tokens [1][6].

Group 1: Research Findings
- The team set new state-of-the-art (SOTA) records with the Qwen3-32B model, scoring 63.5 on AIME'24 and 56.7 on AIME'25, the highest scores among models with fewer than 600 billion parameters trained directly from a base model [2].
- Extending the maximum response length from 20k to 29k tokens raised the AIME'24 score further to 68.1 [4].
- The research challenges the classic Pareto principle: in large-model reinforcement learning, the 80% of tokens with low entropy can be discarded without harm, and keeping them may even hurt performance [5][6].

Group 2: Token Analysis
- The study reveals a distinctive entropy distribution during chain-of-thought reasoning: over 50% of tokens have an entropy below 0.01, while only 20% exceed 0.672 [9][10].
- High-entropy tokens act as "logical connectors" in reasoning, while low-entropy tokens are typically deterministic components such as affixes or parts of mathematical expressions [11].
- Experiments show that raising the decoding temperature at high-entropy positions improves reasoning performance, while lowering it degrades performance, underscoring the importance of maintaining high entropy at these critical positions [13].

Group 3: Training Methodology
- Restricting the reinforcement learning loss to the top 20% of high-entropy tokens (see the sketch after this summary) yielded significant gains for Qwen3-32B: AIME'24 rose by 7.71 points and AIME'25 by 11.04 points, while average response length grew by roughly 1,378 tokens [15][17].
- Similar gains were observed for Qwen3-14B, while Qwen3-8B maintained stable performance [16].
- Conversely, training on the 80% of low-entropy tokens caused a sharp drop in performance, indicating their minimal contribution to reasoning ability [18].

Group 4: Implications and Generalization
- The findings suggest that high-entropy tokens enable exploration of different reasoning paths, while low-entropy tokens, being largely deterministic, restrict that exploration [20].
- The advantage of training on high-entropy tokens grows with model size, with the 32B model showing the largest improvements [22].
- Models trained on high-entropy tokens also performed strongly on out-of-domain tasks, suggesting a link between high-entropy tokens and generalization ability [22].

Group 5: Reinforcement Learning Insights
- The research indicates that reinforcement learning with verifiable rewards (RLVR) does not overhaul the base model but fine-tunes it: 86.67% of high-entropy token positions still overlap with the base model's even after extensive training [24][25].
- Tokens with higher initial entropy show larger entropy increases during RLVR training, while low-entropy tokens remain largely unchanged [25].
- The article also discusses how high-entropy tokens may explain why reinforcement learning generalizes better than supervised fine-tuning, which tends toward memorization and overfitting [26][27].
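The selection step in Group 3 can be illustrated in a few lines. The following is a minimal sketch, not the team's actual training code: it assumes access to the policy model's per-token logits, and the function names (entropy_mask, masked_pg_loss) and the toy policy-gradient loss are illustrative; only the 20% ratio and the entropy criterion come from the article.

```python
# Sketch: keep only the top 20% of tokens by entropy when computing the RL loss.
import torch
import torch.nn.functional as F

def entropy_mask(logits: torch.Tensor, keep_ratio: float = 0.2) -> torch.Tensor:
    """Boolean mask keeping the top `keep_ratio` tokens by predictive entropy.

    logits: [batch, seq_len, vocab] pre-softmax scores for each generated token.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    # Shannon entropy of each token's distribution: H = -sum p * log p
    token_entropy = -(probs * log_probs).sum(dim=-1)              # [batch, seq_len]
    # Per-sequence threshold at the (1 - keep_ratio) quantile
    threshold = torch.quantile(token_entropy, 1.0 - keep_ratio, dim=-1, keepdim=True)
    return token_entropy >= threshold                              # [batch, seq_len]

def masked_pg_loss(logits, actions, advantages, keep_ratio=0.2):
    """Policy-gradient-style loss restricted to high-entropy token positions."""
    mask = entropy_mask(logits, keep_ratio).float()
    log_probs = F.log_softmax(logits, dim=-1)
    act_log_probs = log_probs.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    return -(advantages * act_log_probs * mask).sum() / mask.sum().clamp(min=1.0)

# Toy usage with random tensors, just to show the shapes involved
logits = torch.randn(2, 16, 100)
actions = torch.randint(0, 100, (2, 16))
advantages = torch.randn(2, 16)
print(masked_pg_loss(logits, actions, advantages))
```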
40 Consecutive Years of Growth: How Does the UK's "Mixue Bingcheng of Dining" Do It?
FBIF Food & Beverage Innovation· 2025-04-27 00:55
The following article is from 联商网 (Linkshop), by the Linkshop editorial team.

Today, the red-and-white Mixue Bingcheng (蜜雪冰城) is charging ahead across China, filling the streets and alleys of every major city; on the other side of the world, the blue-and-white GREGGS is spread across the UK's main towns and cities. Their biggest common trait: each is its home market's most "down-to-earth" food-and-drink brand.

Against a backdrop of broad pressure on the UK food-service industry, GREGGS has bucked the trend. Data show the company's 2024 sales reached 2 billion pounds (roughly 18.8 billion yuan), and its store count passed 2,600, far ahead of McDonald's (1,456) and Starbucks (1,381). Notably, GREGGS's consumer-satisfaction index is about six times the industry average, firmly holding the title of the UK's "most popular dining brand."

Image sources: 小红书 @一枚学术猿_爱雨木木; 公众号 @联商网

Why has a seemingly unremarkable bakery chain won people over? Why, with neither Starbucks's premium coffee culture nor McDonald's globalized standards, has it become the national brand of UK dining?

From street-corner shop to nationwide chain

In the UK, almost everyone knows GREGGS. It is not just a bakery chain but a cultural symbol. Guardian journalist Joel Golby (乔尔·戈尔比) ...