Pushing the limits of AI mathematical reasoning: large-scale formal math benchmark FormalMATH released; the strongest model succeeds on only 16% of problems
QbitAI (量子位) · 2025-05-07 09:33

Core Insights
- The FormalMATH benchmark, developed by institutions including The Chinese University of Hong Kong and Zhejiang University, consists of 5,560 rigorously validated mathematical problems spanning Olympiad-level competition mathematics to undergraduate coursework, and is 22.8 times larger than existing benchmarks [1][5][4].

Group 1: Performance of LLMs
- Current LLM-driven theorem provers perform far below expectations: the best model, Kimina-Prover, achieves a success rate of only 16.46% under resource constraints [3][15].
- In calculus and related areas, most models perform close to random guessing, indicating a substantial capability gap [3][7].
- There is a notable domain bias: models do better in algebra and markedly worse in calculus [11][12].

Group 2: Error Analysis
- Common error patterns include:
  - Redundant assumptions (34%): introducing irrelevant premises [16].
  - Incomplete proofs (62%): missing critical steps in the proof [16].
  - Misuse of automation strategies (65%): incorrectly applying automated tactics [16].
  - Mishandled inequalities (13%): over-reliance on automated inequality tactics [16].
- The analysis shows that LLM provers often resort to shortcut tactics, which leads to significant errors [14].

Group 3: Future Directions
- To improve the formal reasoning capabilities of LLMs, three directions are proposed:
  - Strengthen multi-step planning to reduce reliance on single-step tactics [19].
  - Improve cross-domain generalization via curriculum learning that balances training data across mathematical fields [19].
  - Develop interactive proof-assistance tools for collaboration between LLMs and human experts [19].
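To make the error patterns above concrete, here is a minimal illustrative sketch in Lean 4 with Mathlib syntax. The theorem names and statements are hypothetical examples, not actual FormalMATH problems: the first shows the kind of fully machine-checked inequality proof the benchmark expects, while the second shows why an "incomplete proof" fails verification.

```lean
-- Hypothetical example of a formalized competition-style inequality.
-- `nlinarith` is a Mathlib automation tactic; the `sq_nonneg` hints
-- supply the squared differences that make the inequality linear-arithmetic-provable.
theorem sum_sq_ge_sum_mul (a b c : ℝ) :
    a ^ 2 + b ^ 2 + c ^ 2 ≥ a * b + b * c + c * a := by
  nlinarith [sq_nonneg (a - b), sq_nonneg (b - c), sq_nonneg (c - a)]

-- The "incomplete proof" failure mode: `sorry` marks an unproven step,
-- so Lean flags the proof as incomplete and the attempt does not count
-- as a success under the benchmark's verification.
theorem incomplete_example (a b : ℝ) (ha : 0 ≤ a) (h : a ≤ b) :
    a ^ 2 ≤ b ^ 2 := by
  sorry
```

Because every proof must pass the Lean kernel, shortcuts that would look plausible in natural-language mathematics (skipped steps, unjustified automation calls) are rejected outright, which is why the error rates reported above translate directly into low success rates.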
Group 4: Open Source Initiative
- The research team has publicly released the FormalMATH benchmark's code, training data, and evaluation models, encouraging collaboration between academia and industry to advance formal mathematical reasoning [20][21].