Did Scaling Laws Originate in 1993? OpenAI's President: The Foundations of Deep Learning Have Been Revealed
具身智能之心· 2025-09-03 00:03
Core Viewpoint
- The article discusses the historical development and significance of the Scaling Law in artificial intelligence, emphasizing its foundational role in understanding model performance in relation to computational resources [2][34][43].

Group 1: Historical Context
- The Scaling Law's origins are debated, with claims that it was first proposed by OpenAI in 2020 or discovered by Baidu in 2017 [2].
- Recent discussions attribute the initial exploration of the Scaling Law to Bell Labs, dating back to 1993 [3][5].
- The paper from Bell Labs demonstrated the relationship between model size, dataset size, and classifier performance, highlighting the long-standing nature of these findings [5][9].

Group 2: Key Findings of the Research
- The NeurIPS paper from Bell Labs outlines a method for efficiently predicting classifier suitability, which is crucial for resource allocation in AI model training; see the sketch at the end of this summary [12].
- The authors established that as training data increases, the error rate of models follows a predictable logarithmic pattern, reinforcing the Scaling Law's validity [12][16].
- The research indicates that after training on 12,000 patterns, new networks significantly outperform older ones, showcasing the benefits of scaling [16].

Group 3: Contributions of Authors
- The paper features five notable authors, including Corinna Cortes and Vladimir Vapnik, both of whom have made significant contributions to machine learning and statistical theory [18][19][27].
- Corinna Cortes has over 100,000 citations and is recognized for her work on support vector machines and the MNIST dataset [21][22].
- Vladimir Vapnik, with over 335,000 citations, is known for his foundational work in statistical learning theory [27].

Group 4: Broader Implications
- The article suggests that the Scaling Law is not a sudden insight but rather the cumulative result of interdisciplinary research spanning decades, from psychology to neural networks [34][43].
- The evolution of the Scaling Law reflects a broader scientific journey, with contributions from various fields and researchers, ultimately leading to its current understanding in deep learning [43].
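The idea attributed to the Bell Labs work in Group 2 — judging whether a classifier is worth training at scale by extrapolating its learning curve from small training runs — can be illustrated with a minimal sketch. The power-law-plus-offset functional form, the numpy/scipy calls, and every number below are illustrative assumptions for this sketch, not values or code taken from the original paper.

```python
# Minimal sketch of learning-curve extrapolation: fit an assumed
# power-law-plus-offset curve to error rates measured on small training
# sets, then predict the error at larger sizes. All data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, a, b, alpha):
    """Test error as a function of training-set size n.
    a        -- asymptotic error the classifier converges to
    b, alpha -- scale and decay rate of the excess error
    """
    return a + b * np.power(n, -alpha)

# Hypothetical measurements: (training-set size, observed test error).
sizes = np.array([500, 1000, 2000, 4000, 8000], dtype=float)
errors = np.array([0.19, 0.15, 0.12, 0.10, 0.088])

# Fit the curve to the cheap small-data runs.
(a_hat, b_hat, alpha_hat), _ = curve_fit(
    learning_curve, sizes, errors, p0=[0.05, 1.0, 0.5]
)

# Extrapolate: predicted error after training on 12,000 patterns,
# plus the asymptotic error the model would approach with unlimited data.
print(f"predicted error at n=12000: {learning_curve(12000, a_hat, b_hat, alpha_hat):.3f}")
print(f"estimated asymptotic error: {a_hat:.3f}")
```

Reading the fitted asymptote off a handful of inexpensive small-data runs is the resource-allocation idea the summary describes: it tells you whether collecting more data and training longer is likely to pay off before you spend the compute. In practice such fits are often done in log space for numerical stability.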