Core Viewpoint
- The article discusses the historical development and significance of Scaling Laws in artificial intelligence, emphasizing their foundational role in relating model performance to computational resources [1][41].

Group 1: Origin and Development of Scaling Laws
- Claims about the origin of Scaling Laws vary: some attribute them to OpenAI in 2020, others credit Baidu in 2017, and recent claims suggest that Bell Labs was the true pioneer as early as 1993 [1][3][32].
- The Bell Labs paper highlighted in the article trained classifiers on datasets of varying sizes and models of varying scales, establishing a power-law relationship that has now been recognized for over three decades [3][10].

Group 2: Practical Implications of Scaling Laws
- The paper proposes a practical method for predicting classifier suitability, which helps allocate resources to the most promising candidates and avoids the high cost of fully training underperforming classifiers [10][14].
- The findings indicate that as model scale increases, the capability of AI systems improves, demonstrating the long-term validity of Scaling Laws from early machine learning models to modern large-scale models such as GPT-4 [14][41].

Group 3: Contributions of Key Researchers
- The article highlights the contributions of the paper's five authors, including Corinna Cortes, who has over 100,000 citations and is known for her work on support vector machines and the MNIST dataset [17][19][20].
- Vladimir Vapnik, another key figure, is recognized for his foundational work in statistical learning theory, which has profoundly influenced the field of machine learning [25][26].
- John S. Denker is noted for his diverse research interests and contributions across domains including neural networks and quantum mechanics [27][30].
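The resource-allocation idea described above — fit a power-law learning curve to cheap small-data runs, then extrapolate to predict a classifier's error at full scale — can be sketched as follows. This is a minimal illustration, not the Bell Labs paper's actual procedure; the data points and the target size are hypothetical, and the functional form error ≈ a · n^(−b) is the standard power-law assumption discussed in the article.

```python
import math

# Hypothetical small-scale measurements: (training set size, test error).
# Assumed power-law learning curve: error ≈ a * n ** (-b).
points = [(100, 0.40), (1_000, 0.20), (10_000, 0.10)]

# Fit log(error) = log(a) - b * log(n) by ordinary least squares in log-log space.
xs = [math.log(n) for n, _ in points]
ys = [math.log(e) for _, e in points]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = -slope                      # power-law exponent
a = math.exp(my - slope * mx)   # power-law prefactor

# Extrapolate the fitted curve to a full-scale training set.
n_full = 100_000
predicted_error = a * n_full ** (-b)
```

Comparing `predicted_error` across candidate classifiers (each fitted from its own small-data runs) lets one direct the full-scale training budget to the most promising candidate, which is exactly the cost-saving use of learning curves the summary describes.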
Group 4: Broader Context and Historical Significance
- The article suggests that the exploration of learning curves and Scaling Laws spans multiple disciplines and decades, reflecting a cumulative effort by researchers across fields [32][41].
- Comments from researchers cited in the article suggest that the roots of Scaling Laws may extend even further back, with early explorations in psychology and other domains predating the Bell Labs work [34][39].
Did Scaling Laws Originate in 1993? OpenAI President: The Roots of Deep Learning Revealed
机器之心·2025-09-02 06:32