AI News Roundup: Shanghai Jiao Tong's AI Agent Shines, AlphaEvolve-Generated Code Overtakes Humans
China Post Securities · 2025-07-08 14:03
Quantitative Models and Construction Methods

Model Name: ML-Master

- **Model Construction Idea**: ML-Master is designed to emulate the cognitive strategies of human experts, addressing the three major bottlenecks of existing AI4AI systems: low exploration efficiency, limited reasoning ability, and module fragmentation[12]
- **Model Construction Process**:
  - **Balanced Multi-Trajectory Exploration Module**: Uses a parallelized Monte Carlo tree search to model the AI development process as a dynamic decision tree, with each node representing a candidate solution state. The module dynamically allocates compute to branches according to their estimated potential across 75 Kaggle tasks, avoiding local optima and raising the medal rate on medium-difficulty tasks to 20.2%, 2.2 times that of the baseline method[13]
  - **Controllable Reasoning Module**: Overcomes the static decision-making limitations of large language models by filtering key code fragments, performance metrics, and cross-node insights out of the exploration history through an adaptive memory mechanism. This grounds the reasoning process in verifiable execution feedback rather than probabilistic guesses, improving performance on high-difficulty tasks by 30% and clearly surpassing the 18.7% achieved by Microsoft's system[13]
  - **Adaptive Memory Mechanism**: Integrates the exploration and reasoning modules into a closed-loop evolution system. Code-execution results collected during exploration are intelligently filtered and embedded into the reasoning model's "think" phase, and the optimized solutions produced by reasoning in turn guide subsequent exploration paths.
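The exploration module described above can be sketched as a minimal UCT-style Monte Carlo tree search over candidate solution states. This is an illustrative sketch only; the report does not publish ML-Master's actual algorithm, and all class names, function names, and constants here are hypothetical:

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state          # a candidate solution (e.g. a code draft)
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0            # cumulative score from execution feedback

    def uct(self, c=1.4):
        # Unvisited nodes are explored first; otherwise balance the mean
        # score (exploitation) against an exploration bonus.
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def select(root):
    # Descend to the most promising leaf, so compute concentrates on
    # high-potential branches instead of being spread uniformly.
    node = root
    while node.children:
        node = max(node.children, key=lambda n: n.uct())
    return node

def backpropagate(node, score):
    # Propagate verified execution feedback up the whole trajectory.
    while node is not None:
        node.visits += 1
        node.value += score
        node = node.parent

def search(root, expand, evaluate, iterations=100):
    for _ in range(iterations):
        leaf = select(root)
        child = Node(expand(leaf.state), parent=leaf)
        leaf.children.append(child)
        backpropagate(child, evaluate(child.state))
    return max(root.children, key=lambda n: n.visits)
```

The key design point mirrored from the description is that scores come from actually executing candidates (`evaluate`), not from the model's own confidence, and that backpropagation continually re-ranks branches so resources shift toward promising subtrees.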
This dual empowerment allowed ML-Master to reach Grandmaster level, placing among the top 259 Kaggle participants worldwide, after 900 machine-hours of training, with solution quality improving by 120% over successive iterations[15]
- **Model Evaluation**: ML-Master shows clear advantages in exploration efficiency, reasoning ability, and module integration, making it a leading system in the AI4AI field[12][13][15]

Model Backtesting Results

- **ML-Master**:
  - **Average Medal Rate**: 29.3%[12]
  - **Effective Submission Rate**: 93.3%[19]
  - **Task Performance**: Outperforms more than half of the human participants on 44.9% of tasks and wins gold medals on 17.3%[19]

Quantitative Factors and Construction Methods

Factor Name: OpenEvolve

- **Factor Construction Idea**: OpenEvolve is designed to evolve code autonomously, achieving significant performance gains on GPU kernel optimization tasks[22]
- **Factor Construction Process**:
  - **Algorithm Layer**: Over 25 generations of evolutionary iteration, OpenEvolve autonomously discovered three key optimization strategies. For example, its SIMD optimization for Apple Silicon showed a precise grasp of the hardware's characteristics, matching the hardware's SIMD width exactly when processing 128-dimensional attention heads[23]
  - **Technical Implementation**: Uses a multi-model collaborative evolutionary architecture in which the main model, Gemini-2.5-Flash, handles rapid exploration while the auxiliary model, Gemini-2.5-Pro, performs deep optimization.
The system divides the Metal kernel source code into evolvable blocks, keeping the integration code with the MLX framework unchanged, and evolves five subpopulations in parallel under an island model, with 25 individuals per generation[24]
  - **Performance Evaluation**: The evaluation phase adopts a high-robustness design, including Metal command-buffer protection, memory-access-violation handling, and exponential-backoff retries, so the system can attempt aggressive optimizations without risking crashes[25]
- **Factor Evaluation**: OpenEvolve redraws the boundary of human-machine collaboration, demonstrating AI's potential to autonomously explore optimization paths that normally require deep domain expertise[22][23][24]

Factor Backtesting Results

- **OpenEvolve**:
  - **Average Performance Improvement**: 12.5% in decoding speed, 14.4% in prefill speed, and 10.4% in overall throughput[25]
  - **Peak Performance Improvement**: 106% in decoding speed on repetitive-pattern generation tasks[25]
  - **Accuracy and Error Rate**: Maintains 100% numerical accuracy with zero GPU errors[25]
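The island-model loop and the guarded, retry-based evaluation described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not OpenEvolve's actual implementation: all function names, the elitism scheme, and the migration schedule are invented for the example.

```python
import random
import time

def evaluate_with_backoff(candidate, evaluate, retries=3, base_delay=0.01):
    # Retry transient failures (e.g. a crashed GPU command buffer) with
    # exponentially growing delays; score a candidate that keeps failing
    # as -inf so aggressive but broken kernels are simply selected away.
    for attempt in range(retries):
        try:
            return evaluate(candidate)
        except RuntimeError:
            time.sleep(base_delay * 2 ** attempt)
    return float("-inf")

def evolve(seed_candidate, mutate, evaluate,
           islands=5, pop_size=25, generations=25):
    # Five subpopulations ("islands") of 25 individuals per generation,
    # as in the description above.
    populations = [[seed_candidate] * pop_size for _ in range(islands)]
    for gen in range(generations):
        for i, pop in enumerate(populations):
            scored = sorted(pop, reverse=True,
                            key=lambda c: evaluate_with_backoff(c, evaluate))
            elite = scored[: max(1, pop_size // 5)]   # keep the top ~20%
            # Refill the island by mutating the surviving elites.
            populations[i] = elite + [mutate(random.choice(elite))
                                      for _ in range(pop_size - len(elite))]
        # Occasional migration: each island's best hops to the next island,
        # keeping the subpopulations loosely coupled.
        if gen % 5 == 4:
            for i, pop in enumerate(populations):
                populations[(i + 1) % islands][-1] = max(pop, key=evaluate)
    return max((c for pop in populations for c in pop), key=evaluate)
```

The point of the island structure is that subpopulations explore independently most of the time, preserving diversity, while rare migrations spread strong candidates; the backoff wrapper is what lets the evaluator score risky kernel variants without a single crash aborting the run.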