Ant Group Releases and Open-Sources Trillion-Parameter Thinking Model Ring-1T
Xin Jing Bao·2025-10-14 04:20

Core Viewpoint
- Ant Group has officially launched Ring-1T, a trillion-parameter thinking model that is fully open-sourced, including both model weights and training recipes, with enhanced natural-language reasoning capabilities and improved overall performance across a range of tasks [1]

Group 1: Model Development
- Ring-1T builds on the previously released preview version, Ring-1T-preview, expanding large-scale training with reinforcement learning from verifiable rewards (RLVR) [1]
- The model further improves its general capabilities through Reinforcement Learning from Human Feedback (RLHF) training, yielding more balanced performance across task leaderboards [1]

Group 2: Model Availability
- Users can download Ring-1T from platforms such as HuggingFace and the Modao Community, and try it online via Ant Group's Baibao Box [1]
- Ant Group's Beiling team has released a total of 18 models, forming a product matrix of large language models ranging from 16 billion to 1 trillion parameters [1]

Group 3: Product Evolution
- The release of Ring-1T, together with the general-purpose trillion-parameter model Ling-1T, marks the transition of Beiling's large-model lineup into its 2.0 phase [1]