Xiaohongshu Open-Sources a 142-Billion-Parameter Large Model, with Some Performance on Par with Alibaba's Qwen3
Tai Mei Ti APP · 2025-06-10 01:07

Core Insights
- Xiaohongshu has recently open-sourced its first self-developed large model, dots.llm1, via platforms such as GitHub and Hugging Face (a minimal loading sketch appears after this summary) [2][9]
- The model was trained on 11.2 trillion high-quality tokens, a corpus whose quality significantly exceeds that of the open-source TxT360 dataset [5]
- Xiaohongshu's valuation has surged from $20 billion to $26 billion as of March 2025, surpassing the market values of companies such as Bilibili and Zhihu [9]

Model Performance
- dots.llm1 is a mixture-of-experts (MoE) model with 142 billion total parameters, of which only 14 billion are activated during inference, cutting cost while maintaining performance (see the routing sketch below) [3][5]
- Across a range of benchmarks, dots.llm1 performs competitively against Alibaba's Qwen models, excelling in particular at Chinese-language tasks [7][8]
- The model scored 92.6 on CLUEWSC and 92.2 on C-Eval, industry-leading results in Chinese semantic understanding [7]

Training Efficiency
- The hi lab team implemented advanced training optimizations, achieving a 14% improvement in forward computation and a 6.68% improvement in backward computation over NVIDIA's Transformer Engine (see the grouped-GEMM sketch below) [5]
- Future plans include integrating more efficient architectural designs and exploring sparse MoE layers to further improve computational efficiency [10]

Strategic Direction
- Xiaohongshu is shifting from being merely a content community and live-streaming e-commerce platform toward actively developing AI technologies, particularly large language models [9][10]
- The company aims to deepen its understanding of what constitutes optimal training data and to explore methods for achieving human-like learning efficiency [11]
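
Since the weights are published on Hugging Face, the model should be loadable with the standard transformers workflow. The sketch below assumes the instruction-tuned repo id "rednote-hilab/dots.llm1.inst"; check the hi lab pages on GitHub and Hugging Face for the authoritative model ids before running it.

```python
# Minimal loading sketch for dots.llm1, assuming the repo id
# "rednote-hilab/dots.llm1.inst" (an assumption; verify on Hugging Face).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rednote-hilab/dots.llm1.inst"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # MoE weights are large; use bf16 and shard across GPUs
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "用一句话介绍小红书。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```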
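The 142B-total / 14B-active split comes from MoE routing: a small router scores all experts for each token, but only the top-k experts actually run. The toy sketch below shows the general top-k gating pattern; the dimensions, expert count, and k are illustrative, not dots.llm1's actual configuration.

```python
# Toy sketch of top-k mixture-of-experts routing: each token activates only
# k of n_experts expert MLPs, so active parameters per token are a small
# fraction of total parameters. All sizes below are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        logits = self.router(x)                # (tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):             # only k experts run per token
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

moe = TopKMoE()
tokens = torch.randn(10, 64)
print(moe(tokens).shape)  # torch.Size([10, 64]); only 2 of 8 experts ran per token
```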
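The article does not say which optimizations produced hi lab's 14% forward and 6.68% backward gains over Transformer Engine. One standard MoE training optimization, shown below purely as illustration rather than as hi lab's method, is grouping the per-expert matrix multiplies into a single batched GEMM instead of launching one kernel per expert.

```python
# Grouped-GEMM sketch (illustrative; not necessarily hi lab's technique):
# stacking per-expert weight matrices lets all expert matmuls run as one
# batched GEMM, avoiding n_experts separate kernel launches.
import torch

n_experts, tokens_per_expert, d_model, d_ff = 8, 128, 64, 256

x = torch.randn(n_experts, tokens_per_expert, d_model)   # tokens grouped by expert
w = torch.randn(n_experts, d_model, d_ff)                # one weight matrix per expert

# Naive: one GEMM per expert -> n_experts kernel launches.
naive = torch.stack([x[e] @ w[e] for e in range(n_experts)])

# Grouped: a single batched GEMM over all experts.
grouped = torch.bmm(x, w)

print(torch.allclose(naive, grouped, atol=1e-4))  # True: same math, fewer launches
```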