Core Insights

- The core question in the field of Artificial General Intelligence (AGI) is whether large language models (LLMs) learn a "world model" or are merely playing a "next word prediction" game [1][2]
- A recent experiment by Harvard and MIT used orbital mechanics to test whether LLMs could derive the underlying laws of physics, specifically Newton's laws, from their own predictions [2][4]
- The results revealed a disconnect between prediction and explanation: the models could accurately predict planetary trajectories but failed to encode the underlying physical laws [4][6]

Experiment Design and Findings

- The researchers trained a small Transformer model on 10 million simulated solar-system coordinate sequences (roughly 20 billion tokens in total) [4]
- The hypothesis: if the model could make accurate predictions without having internalized Newton's laws, it would not possess a complete "world model" [2][4]
- While the model predicted trajectories well, the force vectors recovered from it were chaotic and bore no relation to Newton's laws, indicating the absence of a stable guiding framework [6][8]

Implications for AI Development

- The models' failure to make consistent errors across different samples suggests they lack a stable world model, which is essential for scientific discovery [8][9]
- The research highlights a fundamental limitation of current AI models: they can achieve high predictive accuracy yet fail to construct a world model grounded in reality [10][11]
- Future AI development may require a combination of larger models and new methodologies to improve understanding and generalization [12][13]

Broader Context

- The study revisits a classic scientific debate: whether the essence of science lies in precise prediction or in understanding why phenomena occur [12][14]
- For AI to shape scientific discovery, it must evolve from a mere "prediction machine" into a "thinker" capable of understanding the logic of the world [14]
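The probe described above can be illustrated in miniature: recover force (acceleration) vectors from a position sequence via finite differences and compare them to Newton's inverse-square law. The sketch below is a hypothetical illustration, not the paper's code: it applies the probe to a ground-truth two-body simulation (units chosen so GM = 1) rather than to a trained Transformer, showing what a "Newton-consistent" trajectory looks like under this test. For the models in the study, the analogous comparison reportedly failed.

```python
import numpy as np

GM = 1.0      # gravitational parameter, illustrative units
dt = 1e-3     # integration time step
steps = 20000

def accel(r):
    """Newtonian acceleration toward the origin: a = -GM * r / |r|^3."""
    return -GM * r / np.linalg.norm(r) ** 3

# Initial state: a near-circular orbit at radius 1.
r = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

# Velocity-Verlet (leapfrog) integration to generate the position sequence,
# standing in for the "trajectory tokens" a model would be trained on.
traj = np.empty((steps, 2))
for i in range(steps):
    traj[i] = r
    v_half = v + 0.5 * dt * accel(r)
    r = r + dt * v_half
    v = v_half + 0.5 * dt * accel(r)

# The probe: estimate accelerations from positions alone via central
# second differences, then compare to the inverse-square law.
a_est = (traj[2:] - 2 * traj[1:-1] + traj[:-2]) / dt ** 2
a_newton = np.array([accel(p) for p in traj[1:-1]])

# A trajectory that truly encodes Newton's laws agrees closely; the paper's
# models produced accurate trajectories whose implied forces did not.
err = np.max(np.linalg.norm(a_est - a_newton, axis=1))
print(f"max |a_est - a_Newton| = {err:.2e}")
```

Here agreement is near machine precision because the data come from the law itself; the study's point is that a model can reproduce the positions without its implied force vectors passing this check.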
Harvard & MIT: AI Can Predict, But It Still Can't Explain the "Why"
36Kr · 2025-10-22 00:56