The whole internet is testing GPT-oss! Its technical architecture has been dissected too
量子位 (QbitAI) · 2025-08-07 00:56

Core Insights

- The article highlights the strong performance of GPT-oss, which surpasses many existing open-source models and is positioned to lead in the SaaS fast-fashion era [1][3][4].

Performance Testing

- GPT-oss passed multiple performance tests, achieving top rankings across benchmarks including GPQA Diamond, AIME 2024, AIME 2025, and Codeforces, and outperforming models such as DeepSeek R1, Qwen3, and Llama 4 [5][6].
- On MMLU, the 120B model scored 85.9 at low reasoning effort and 88 at medium reasoning effort, while Qwen3-235B performed slightly better [6][7].

Model Architecture

- Compared with similar models, GPT-oss uses a wider structure, more attention heads, and higher hidden dimensions, and incorporates techniques such as attention bias units (see the architecture sketch after this summary) [22][24][26].
- The model retains the core MoE Transformer architecture while optimizing performance and reducing complexity, making it well suited to open-source use [26][28].

Cost and Training

- The estimated training cost for the GPT-oss-120B model is between $4.2 million and $23.1 million, while the 20B model costs between $420,000 and $2.3 million (a back-of-envelope breakdown follows below) [30].
- There are indications that the model underperforms on non-English text, with a significant portion of non-English responses containing grammatical or spelling errors [30].

User Applications

- Users have begun exploring applications for GPT-oss, including integration into platforms for academic paper understanding and data transformation [17][19][20].
- The model can be accessed and run through platforms such as LM Studio and AWS, enabling rapid development of AI applications (a local-access sketch follows below) [33][34].

Community Engagement

- The article encourages users to test GPT-oss and share their experiences, reflecting growing community interest in the model's capabilities [39].
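
To make the two architectural details above concrete, here is a minimal PyTorch sketch of attention projections that keep bias units (many recent open models set bias=False) and of a top-k MoE feed-forward layer, which is why total parameters can far exceed the parameters active per token. All dimensions, expert counts, and class names are illustrative assumptions, not the actual gpt-oss configuration.

```python
# Illustrative sketch only: biased attention projections + token-choice MoE.
# Hyperparameters and names are hypothetical, not gpt-oss's real config.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiasedSelfAttention(nn.Module):
    """Multi-head self-attention whose Q/K/V/O projections retain bias terms,
    the 'attention bias units' detail the article calls out."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model, bias=True)  # bias kept
        self.out = nn.Linear(d_model, d_model, bias=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape each to (batch, heads, tokens, head_dim)
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
                   for z in (q, k, v))
        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.out(y.transpose(1, 2).reshape(b, t, d))


class TopKMoE(nn.Module):
    """Token-choice MoE feed-forward: a router scores experts per token and
    only the top-k experts actually run for that token."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int, k: int):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.router(x)                     # (batch, tokens, experts)
        weights, idx = scores.topk(self.k, dim=-1)  # pick k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., slot] == e          # tokens routed to expert e
                if mask.any():
                    out[mask] += (weights[..., slot][mask].unsqueeze(-1)
                                  * expert(x[mask]))
        return out


# toy forward pass to confirm shapes
x = torch.randn(2, 8, 64)
print(BiasedSelfAttention(64, 4)(x).shape)          # torch.Size([2, 8, 64])
print(TopKMoE(64, 128, n_experts=8, k=2)(x).shape)  # torch.Size([2, 8, 64])
```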
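
The quoted cost brackets can be sanity-checked with standard back-of-envelope training math (compute ≈ 6 × active parameters × tokens). In the sketch below, the token count, GPU throughput, utilization, and hourly rate are all illustrative assumptions, since the article does not state them; the active-parameter figure is the widely reported ~5.1B for gpt-oss-120b and should also be treated as an assumption here.

```python
# Back-of-envelope training cost. Every input below is an assumption for
# illustration; the article only reports the resulting dollar ranges.
active_params = 5.1e9    # ~5.1B params reportedly active per token (120B model)
tokens = 10e12           # assumed training tokens (not disclosed)
flops = 6 * active_params * tokens  # standard dense-training FLOP estimate

gpu_flops_per_s = 9.9e14        # roughly H100 BF16 peak
utilization = 0.40              # assumed model FLOPs utilization
dollars_per_gpu_hour = 2.0      # assumed cloud rate

gpu_hours = flops / (gpu_flops_per_s * utilization) / 3600
print(f"{gpu_hours:,.0f} GPU-hours, about ${gpu_hours * dollars_per_gpu_hour:,.0f}")
# Varying the assumed tokens, utilization, and rate by plausible factors moves
# this result across more than an order of magnitude, which is why the
# article's estimated ranges are so wide.
```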
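
On the access routes mentioned under User Applications: LM Studio serves loaded models through a local OpenAI-compatible endpoint (http://localhost:1234/v1 by default), so existing OpenAI client code can point at a local gpt-oss instance. The model identifier string below is an assumption; it depends on how LM Studio labels the model you load.

```python
# Querying a gpt-oss model served locally by LM Studio's OpenAI-compatible
# server. Requires `pip install openai` and a model loaded in LM Studio.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="lm-studio",                  # any non-empty string works locally
)

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",  # assumed identifier; check LM Studio's model list
    messages=[{"role": "user", "content": "Summarize this paper's abstract: ..."}],
)
print(response.choices[0].message.content)
```

The same pattern applies to hosted deployments (e.g. on AWS) by swapping in the service's endpoint URL and credentials.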