Ant Group (06688)
Ant Group Responds for the First Time
Zhong Guo Ji Jin Bao· 2025-03-26 09:31
Group 1
- Ant Group has made strategic adjustments by reducing its holdings in A-share listed companies, obtaining over 775 million yuan in investment returns [1][4]
- The company aims to reinvest the recovered funds into more forward-looking areas such as large models, embodied intelligence, and AI computing power [1][2]
- Ant Group's investment strategy focuses on supporting the growth of next-generation technology innovations while respecting market dynamics and potential returns [2][3]

Group 2
- Ant Group's recent divestment includes selling 8.82 million shares of Aobi Zhongguang, amounting to approximately 556 million yuan, while still retaining a 10% stake [4]
- Aobi Zhongguang's stock price has surged over 200% this year, despite the company still facing losses of 60.31 million yuan in the first three quarters of 2024 [7]
- In the shared mobility sector, Ant Group's subsidiary Shanghai Yunxin transferred its 6% stake in Yong'anxing for 219 million yuan, indicating ongoing adjustments in its investment portfolio [7]
Another Reason to Short Nvidia? Ant Group Releases Latest AI Results: No High-End GPUs Needed, Computing Costs Cut 20%, Training 1 Trillion Tokens Costs Only 5.08 Million Yuan
Mei Ri Jing Ji Xin Wen· 2025-03-25 10:45
NBD reporter: Song Xinyue; NBD editor: Lan Suying

For a long time, Nvidia has dominated AI model training, building a "computing-power hegemony" on its high-performance chips. That dominance took a significant hit with the arrival of DeepSeek, and it now faces a new challenge.

In early March, the Ling team, led by Ant Group CTO He Zhengyu, published a technical paper. According to the paper, the team developed two open-source Mixture-of-Experts (MoE) models in the Bailing series: Ling-Lite (16.8B total parameters) and Ling-Plus (290B total parameters). For comparison, MIT Technology Review reports that GPT-4.5 has 1.8T total parameters and DeepSeek-R1 has 671B.

The striking result is that the Ling team used lower-spec hardware during the model pre-training stage, cutting computing costs by roughly 20% and reducing the cost of training 1 trillion tokens from 6.35 million yuan to 5.08 million yuan, while achieving results comparable to models trained on high-performance chips such as Nvidia's H100 and H800.

A person at Ant Group told National Business Daily that both domestic chips and Nvidia chips were used during model training. But the sudden arrival of the Bailing MoE models has once again led the market to question ...
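The ~20% figure cited in the article can be sanity-checked with simple arithmetic. This sketch only reproduces the reported percentage from the two published cost figures; it is not Ant Group's cost model.

```python
# Back-of-the-envelope check of the ~20% saving cited above.
# The two cost figures (in 万元, i.e. 10,000 RMB units) come from the
# article; everything else is illustrative arithmetic.
cost_high_end = 635.0   # training 1T tokens on high-performance chips (H100/H800 class)
cost_low_spec = 508.0   # same workload on lower-spec hardware with the Ling optimizations

saving = (cost_high_end - cost_low_spec) / cost_high_end
print(f"cost reduction: {saving:.1%}")  # → 20.0%
```

The reported numbers work out to an exact 20% reduction, consistent with the "about 20%" claim in the paper.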
Ant Group Trains Large Models on Domestic AI Chips; Costs Can Be Reduced Further
Zheng Quan Shi Bao· 2025-03-24 07:08
(Original title: Ant Group's Major AI Breakthrough!)

Ant Group has trained large models on domestic AI chips, and the cost can be reduced further.

Recently, Ant Group's Ling team published a technical paper. It shows that Ant Group has released two MoE large language models of different scales: Bailing Lite (Ling-Lite) and Bailing Plus (Ling-Plus). The former has 16.8 billion parameters (2.75 billion activated), while the Plus base model reaches 290 billion parameters (28.8 billion activated); both deliver industry-leading performance.

Beyond the strong performance of these self-developed models, the paper's biggest breakthrough is a set of innovative methods for improving the efficiency and accessibility of AI development in resource-constrained environments. Experiments show that its 300-billion-parameter MoE model can be trained efficiently on lower-performance devices equipped with domestic GPUs, matching the performance of same-scale dense and MoE models trained entirely on Nvidia chips.

Self-developed large models trained efficiently on low-performance hardware

The Ling team's technical paper, "Every FLOP Counts: Scaling a 300B-Parameter Mixture-of-Experts LING Model Without Premium GPUs," has been posted on the arXiv preprint platform.

According to the paper, although model series such as DeepSeek, Alibaba's Tongyi Qianwen, and MiniMax ...
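The distinction between total and activated parameters comes from MoE routing: each token is sent to only a few experts, so only a fraction of the weights run per token. The following is a minimal toy sketch of top-k MoE routing, assuming NumPy; the sizes are made up for illustration and are not Ling's actual architecture.

```python
import numpy as np

# Toy Mixture-of-Experts (MoE) routing. Real models are vastly larger:
# Ling-Lite has 16.8B total / 2.75B activated parameters.
n_experts, top_k, d = 8, 2, 4
rng = np.random.default_rng(0)
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # one weight matrix per expert
gate_w = rng.standard_normal((d, n_experts))                       # router (gating) weights

def moe_forward(x):
    """Route a token vector x to its top-k experts and mix their outputs."""
    logits = x @ gate_w                    # router score per expert
    chosen = np.argsort(logits)[-top_k:]   # indices of the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()               # softmax over the chosen experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

y = moe_forward(rng.standard_normal(d))
# Per token, only top_k / n_experts = 1/4 of the expert weights execute,
# which is how a 290B-parameter model can activate only ~28.8B per token.
```

Because the unchosen experts never run, compute per token scales with activated (not total) parameters, which is the property the Ling paper exploits on lower-performance hardware.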
Ant Group's Major AI Breakthrough!
Core Insights
- Ant Group has developed two different scales of MoE large language models, Ling-Lite with 16.8 billion parameters and Ling-Plus with 290 billion parameters, achieving industry-leading performance [1][2]
- The innovation lies in enhancing AI development efficiency and accessibility in resource-constrained environments, allowing for effective training on low-performance devices using domestic GPUs [1][2]

Model Training Efficiency
- The Ling team's paper emphasizes the ability to train a 300 billion parameter MoE model without high-end GPUs, addressing the high costs associated with traditional training methods [2][3]
- The training cost for 1 trillion tokens on high-performance hardware is approximately 6.35 million RMB, while using optimized methods on lower-spec hardware reduces the cost to around 5.08 million RMB, saving nearly 20% [3]

AI Applications and Developments
- Ant Group's large models are focused on applications in life services, financial services, and healthcare, with key products including a life assistant, medical assistant, and financial assistant [4]
- The company has announced advancements in AI healthcare products, collaborating with major tech firms to provide comprehensive solutions for medical institutions and users [4][5]

Robotics Initiatives
- Ant Group is expanding into the field of humanoid robots, establishing Shanghai Ant Lingbo Technology Co., Ltd. to focus on embodied intelligence and robotics [5]
- The company aims to leverage its strengths in AI, big data, and cloud computing to accelerate the development and application of humanoid robots in various sectors, including healthcare and elderly care [5]