A Hardcore 30-Minute "Argument": This Large-Model Roundtable Laid Bare the AI Industry's Disagreements
机器之心 · 2025-07-28 04:24
Core Viewpoint
- The article discusses a heated debate among industry leaders at the WAIC 2025 forum regarding the evolution of large-model technology, focusing on training paradigms, model architectures, and data sources, and highlighting a significant shift from pre-training to reinforcement learning as the dominant approach in AI development [2][10][68].

Group 1: Training Paradigms
- The forum highlighted a paradigm shift in AI from a pre-training-dominant regime to one that emphasizes reinforcement learning, marking a significant evolution in AI technology [10][19].
- OpenAI's transition from pre-training to reinforcement learning is seen as a critical development, with experts suggesting that the pre-training era is nearing its end [19][20].
- The balance between pre-training and reinforcement learning is a key topic, with experts underscoring the role of pre-training in establishing a strong foundation for reinforcement learning (a toy illustration of this division of labor appears after this summary) [25][26].

Group 2: Model Architectures
- The Transformer architecture has dominated AI since 2017, but its limitations are becoming apparent as model parameters grow and context windows expand [31][32].
- There are two main exploration paths in model architecture: optimizing the existing Transformer architecture and developing entirely new paradigms, such as Mamba and RetNet, that aim to improve efficiency and performance [33][34].
- The future of model architecture may involve a return to RNN-style structures as the industry shifts toward agent-based applications that require models to interact autonomously with their environments (see the attention-versus-recurrence sketch after this summary) [38].

Group 3: Data Sources
- The article discusses the looming challenge of high-quality data scarcity, predicting that existing data reserves may be exhausted by 2028, potentially stalling the development of large models [41][42].
- Synthetic data is being explored as a remedy, with companies like Anthropic and OpenAI using model-generated data to supplement training [43][44].
- Concerns about the reliability of synthetic data are raised, emphasizing the need for validation mechanisms that ensure training-data quality (see the generate-then-filter sketch after this summary) [45][50].

Group 4: Open Source vs. Closed Source
- The ongoing debate between open-source and closed-source models is highlighted, with open-source models like DeepSeek gaining traction and challenging the dominance of closed-source models [60][61].
- Open-source initiatives are seen as a way to improve resource-allocation efficiency and drive industry evolution, even if they do not always produce the highest-performing models [63][64].
- The future may see hybrid approaches that combine open-source and closed-source models, addressing challenges such as model fragmentation and misuse [66][67].
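
Group 1's point about the division of labor between pre-training and reinforcement learning can be illustrated with a single-step toy policy. The sketch below is a simplification under stated assumptions, not OpenAI's or anyone's actual training recipe: the reward function, the KL coefficient `beta`, and the bandit-style setup are all hypothetical. The policy is initialized from a "pre-trained" reference distribution, a REINFORCE-style update pushes it toward rewarded outputs, and a KL penalty keeps it anchored to that reference, which is the sense in which pre-training provides the foundation that RL then refines.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 6                                 # toy vocabulary size

# "Pre-trained" reference distribution over tokens: the foundation RL starts from.
ref_logits = rng.standard_normal(V)
policy_logits = ref_logits.copy()     # RL fine-tuning starts at the pre-trained model

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reward(token):
    """Hypothetical scalar reward, e.g. from a verifier or reward model."""
    return 1.0 if token == 3 else 0.0

beta, lr = 0.1, 0.5
for _ in range(200):
    p = softmax(policy_logits)
    token = rng.choice(V, p=p)
    # KL penalty keeps the policy close to the pre-trained reference,
    # which is why a strong pre-trained base matters for RL.
    kl_term = np.log(p[token] + 1e-9) - np.log(softmax(ref_logits)[token] + 1e-9)
    advantage = reward(token) - beta * kl_term
    # REINFORCE-style gradient of log p(token) with respect to the logits.
    grad_logp = -p
    grad_logp[token] += 1.0
    policy_logits += lr * advantage * grad_logp

print("policy now prefers token", int(np.argmax(softmax(policy_logits))))
```

Without the pre-trained reference (and the KL anchor to it), the same update rule would have nothing sensible to explore from, which is the intuition the panelists invoke when they say pre-training builds the foundation that reinforcement learning exploits.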
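
The architecture trade-off referenced in Group 2 can be made concrete with a toy sketch. The snippet below is illustrative only, with all shapes, names, and dynamics chosen for the example rather than taken from any real model: full self-attention materializes a T × T score matrix, so compute and memory grow quadratically with context length, while an SSM/RetNet-style linear recurrence carries only a fixed-size state forward, which is the property that makes RNN-like designs attractive for long-context, agent-style workloads.

```python
import numpy as np

def attention_step(Q, K, V):
    """Full self-attention: cost and memory grow as O(T^2) in sequence length T."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # (T, T) matrix -- quadratic in T
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                               # (T, d)

def recurrent_scan(x, A, B, C):
    """Linear recurrent update (SSM/RetNet-style): O(T) time, fixed-size state per step."""
    T, d = x.shape
    state = np.zeros(A.shape[0])
    ys = np.empty((T, C.shape[0]))
    for t in range(T):
        state = A @ state + B @ x[t]                 # fixed-size hidden state, like an RNN
        ys[t] = C @ state
    return ys

# Toy comparison at a modest sequence length; all sizes are arbitrary.
T, d, n = 512, 64, 128
rng = np.random.default_rng(0)
x = rng.standard_normal((T, d))
Q = K = V = x
A = 0.9 * np.eye(n)                                  # stable toy dynamics
B = rng.standard_normal((n, d)) * 0.01
C = rng.standard_normal((d, n)) * 0.01

_ = attention_step(Q, K, V)                          # materializes a T x T score matrix
_ = recurrent_scan(x, A, B, C)                       # only carries an n-dim state forward
```

The quadratic term shows up as the T × T attention matrix stored per layer and head; the recurrent variant replaces it with a constant-size state, which is precisely the pressure point the roundtable's "non-Transformer" discussion targets.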
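
The validation concern raised in Group 3 is, in practice, a generate-then-filter loop. Below is a minimal sketch under assumptions: `generate_fn`, `verifier_fn`, and the 0.8 threshold are hypothetical stand-ins rather than any company's pipeline; a real system would plug in a generator model plus a reward model, unit tests, or an exact checker for math and code, and the toy lambdas exist only so the example runs.

```python
import random

def generate_candidates(generate_fn, prompts, samples_per_prompt=4):
    """Draw several model-generated answers per prompt (the synthetic pool)."""
    return [(p, generate_fn(p)) for p in prompts for _ in range(samples_per_prompt)]

def validate(example, verifier_fn, min_score=0.8):
    """Keep a synthetic example only if an independent check scores it highly."""
    prompt, answer = example
    return verifier_fn(prompt, answer) >= min_score

def build_synthetic_set(generate_fn, verifier_fn, prompts):
    """Generate, then filter: only validated examples enter the training set."""
    candidates = generate_candidates(generate_fn, prompts)
    return [ex for ex in candidates if validate(ex, verifier_fn)]

# Toy stand-ins for a generator model and a verifier.
toy_generate = lambda p: f"answer to: {p}"
toy_verify = lambda p, a: random.random()            # placeholder score in [0, 1]

dataset = build_synthetic_set(toy_generate, toy_verify, ["q1", "q2", "q3"])
print(len(dataset), "validated synthetic examples")
```

The filtering step is where the reliability question lives: if the verifier is weak, low-quality synthetic examples pass through and compound across training rounds, which is the risk the panelists flag.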