AI Technology Ecosystem
Zhipu Releases GLM-4.6, Partners with Cambricon and Moore Threads on an Integrated Model-Chip Solution
Guan Cha Zhe Wang· 2025-10-01 01:37
On September 30, Zhipu, one of China's "six little dragons" of domestic large models, released its new GLM-4.6 model. As the latest version of the GLM series, GLM-4.6 improves on real-world coding, long-context handling, reasoning, information search, writing, and agent applications.

According to official information, in public benchmarks and real programming tasks, GLM-4.6's coding ability is on par with Claude Sonnet 4; the context window has been expanded from 128K to 200K to accommodate longer code and agent tasks; the new model improves reasoning and supports tool invocation during the reasoning process; on the search side, it strengthens the model's tool calling and search agents.

In addition, "model-chip linkage" is the focus of this release: GLM-4.6 has achieved FP8+Int4 mixed-quantization deployment on domestic Cambricon chips, the industry's first FP8+Int4 integrated model-chip solution put into production on domestic chips. It reduces inference cost while keeping accuracy unchanged, exploring a feasible path for running large models locally on domestic chips.

In the adaptation process, the model's core parameters, which account for 60%-80% of total memory, are quantized to Int4, compressing the weight volume to 1/4 of its FP16 size and greatly easing pressure on chip memory; the temporary dialogue data accumulated during inference can likewise be compressed with Int4 while keeping precision loss "slight". FP8, meanwhile, can be focused on the model's "numerically sensitive, accuracy-critical" ...
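The 1/4 compression figure follows directly from the bit widths: FP16 stores each weight in 16 bits, Int4 in 4. A minimal sketch of that arithmetic (the parameter count below is hypothetical, not GLM-4.6's actual size):

```python
# Rough memory arithmetic behind the article's 1/4 claim.
# The parameter count is a hypothetical example, not GLM-4.6's real size.
def weight_bytes(num_params: int, bits_per_weight: int) -> int:
    """Storage needed for the weight tensor alone, ignoring scales and metadata."""
    return num_params * bits_per_weight // 8

params = 100_000_000_000  # hypothetical 100B-parameter model
fp16 = weight_bytes(params, 16)
int4 = weight_bytes(params, 4)
print(f"FP16: {fp16 / 2**30:.0f} GiB, Int4: {int4 / 2**30:.0f} GiB, "
      f"ratio {fp16 / int4:.0f}x")  # → ratio 4x
```

Real Int4 deployments also store per-group scale factors, so the achieved ratio is slightly below the ideal 4x.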
Zhipu Releases GLM-4.6; Cambricon and Moore Threads Complete Adaptation
Guan Cha Zhe Wang· 2025-10-01 01:36
According to official information, in public benchmarks and real programming tasks, GLM-4.6's coding ability is on par with Claude Sonnet 4; the context window has been expanded from 128K to 200K to accommodate longer code and agent tasks; the new model improves reasoning and supports tool invocation during the reasoning process; on the search side, it strengthens the model's tool calling and search agents.

In addition, "model-chip linkage" is the focus of this release: GLM-4.6 has achieved FP8+Int4 mixed-quantization deployment on domestic Cambricon chips, the industry's first FP8+Int4 integrated model-chip solution put into production on domestic chips, reducing inference cost while keeping accuracy unchanged and exploring a feasible path for running large models locally on domestic chips.

FP8 is an 8-bit floating-point (Floating-Point 8) data type with a wide dynamic range and small precision loss; Int4 is a 4-bit integer (Integer 4) data type with an extremely high compression ratio and the smallest memory footprint, suited to low-compute hardware but with relatively noticeable precision loss. The "FP8+Int4 mixed" mode attempted here is not a simple overlay of the two formats; instead, quantization formats are assigned according to the functional differences among the model's modules: where memory must be saved, Int4 compresses to the limit, and where precision must be preserved, FP8 holds the line, achieving a sensible allocation of resources.

In the adaptation process, the model's core parameters, which account for 60%-80% of total memory, are quantized to Int ...
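To make the Int4 precision trade-off concrete, here is a minimal sketch of symmetric 4-bit quantization with a single per-tensor scale. Production kernels (presumably including the Cambricon deployment) use finer-grained per-group scales and calibration, so this illustrates the idea rather than the actual implementation:

```python
import numpy as np

# Symmetric Int4 quantization sketch: map floats onto the 16 integer
# levels [-8, 7], then dequantize and measure the round-trip error.
def quantize_int4(w: np.ndarray):
    scale = np.abs(w).max() / 7.0  # one per-tensor scale; real kernels use per-group scales
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)  # stand-in for a weight tensor
q, s = quantize_int4(w)
err = np.abs(w - dequantize(q, s)).mean()
print(f"mean abs round-trip error: {err:.4f}")
```

With only 16 levels, the error is noticeable; FP8's 256 values and floating exponent keep the error far smaller, which is why the article reserves FP8 for numerically sensitive modules.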
Zhipu Officially Releases and Open-Sources Its New-Generation Large Model GLM-4.6; Cambricon and Moore Threads Complete Adaptation
Mei Ri Jing Ji Xin Wen· 2025-09-30 07:42
Core Insights
- The domestic large model company Zhipu has officially released and open-sourced its next-generation large model GLM-4.6, achieving significant advancements in core capabilities such as Agentic Coding [1]

Group 1: Model Development
- GLM-4.6 has been deployed on Cambricon AI chips using FP8+Int4 mixed precision computing technology, marking the first production of an FP8+Int4 model on domestic chips [1]
- This mixed-precision solution significantly reduces inference costs while maintaining model accuracy, providing a feasible path for localized operation of large models on domestic chips [1]

Group 2: Ecosystem Compatibility
- Moore Threads has adapted GLM-4.6 based on the vLLM inference framework, demonstrating that the new generation of GPUs can stably run the model at native FP8 precision [1]
- This adaptation validates the advantages of the MUSA (Meta-computing Unified System Architecture) and full-function GPUs in terms of ecological compatibility and rapid adaptability [1]

Group 3: Industry Implications
- The collaboration between Cambricon and Moore Threads on GLM-4.6 signifies that domestic GPUs are now capable of iterating in tandem with cutting-edge large models, accelerating the construction of a self-controlled AI technology ecosystem [1]
- The combination of GLM-4.6 and domestic chips will initially be offered to enterprises and the public through the Zhipu MaaS platform [1]
Zhipu Partners with Cambricon to Launch an Integrated Model-Chip Solution
Di Yi Cai Jing· 2025-09-30 07:38
Core Insights
- The latest model GLM-4.6 from the domestic AI startup Zhipu has been released, showcasing improvements in programming, long context handling, reasoning capabilities, information retrieval, writing skills, and agent applications [3]

Model Enhancements
- GLM-4.6 aligns its coding capabilities with Claude Sonnet 4 in public benchmarks and real programming tasks [3]
- The context window has been increased from 128K to 200K, allowing for longer code and agent tasks [3]
- The new model enhances reasoning abilities and supports tool invocation during reasoning processes [3]
- The model's tool invocation and search-agent capabilities have also been improved [3]

Chip Integration
- "Model-chip linkage" is a key focus of the new model, with GLM-4.6 achieving FP8+Int4 mixed quantization deployment on domestic Cambricon chips, marking the industry's first production of an FP8+Int4 integrated model-chip solution on domestic hardware [3]
- This approach maintains accuracy while reducing inference costs, exploring feasible paths for localized operation of large models on domestic chips [3]

Quantization Techniques
- FP8 (Floating-Point 8) offers a wide dynamic range with minimal precision loss, while Int4 (Integer 4) provides high compression ratios and low memory usage but more noticeable precision loss [4]
- The "FP8+Int4 mixed" mode allocates quantization formats based on the functional differences of the model's modules, optimizing memory usage [4]

Memory Efficiency
- Core parameters of the large model, which account for 60%-80% of total memory, can be compressed to 1/4 of FP16 size through Int4 quantization, significantly reducing the memory pressure on chips [5]
- Temporary dialogue data accumulated during inference can also be compressed using Int4 while keeping precision loss minimal [5]
- FP8 is reserved for numerically sensitive modules to minimize precision loss and retain fine semantic information [5]

Ecosystem Development
- Moore Threads has successfully adapted GLM-4.6 based on the vLLM inference framework, demonstrating that the new generation of GPUs can run the model stably at native FP8 precision [5]
- This adaptation signifies that domestic GPUs are now capable of collaborating and iterating with cutting-edge large models, accelerating the development of a self-controlled AI technology ecosystem [5]
- The combination of GLM-4.6 and domestic chips will be offered to enterprises and the public through the Zhipu MaaS platform [5]
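The module-by-module format assignment described above can be sketched as a simple lookup policy: bulk weight matrices go to Int4, numerically sensitive modules stay in FP8. The module names and keyword rules below are hypothetical illustrations, not GLM-4.6's real layer names or Zhipu's actual criteria:

```python
# Toy policy for the "mixed" idea: assign a quantization format per module.
# Keywords and module names are hypothetical, for illustration only.
SENSITIVE_KEYWORDS = ("layernorm", "router", "lm_head", "embed")

def pick_format(module_name: str) -> str:
    name = module_name.lower()
    if any(k in name for k in SENSITIVE_KEYWORDS):
        return "fp8"   # wide dynamic range, small precision loss
    return "int4"      # maximum compression for bulk weight matrices

modules = ["model.embed_tokens", "layers.0.mlp.down_proj",
           "layers.0.input_layernorm", "lm_head"]
plan = {m: pick_format(m) for m in modules}
print(plan)
```

A real deployment would derive the sensitive set from calibration data (measuring each module's quantization error) rather than from name patterns, but the end result is the same kind of per-module format table.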
Insight | Shenwan Hongyuan Chairman Liu Jian: Strengthen Professional Capabilities to Serve the Building of a Modern Capital Market System
In May this year, the Politburo meeting set the goal of "continuously stabilizing and invigorating the capital market", sending a major policy signal. The three major financial regulators responded swiftly with a series of targeted policy "combination punches", marking a deepening of capital market reform.

In Liu Jian's view, the capital market, as a barometer of the economy, plays a pivotal role in the financial system and in macroeconomic operation. As an important platform for corporate investment and financing and for household wealth management in China, the stable operation of the capital market is not only a key lever for raising residents' property income and improving social expectations, but also a direct reflection of healthy economic development.

In recent years, the state has attached great importance to the capital market, and its stable operation has become one of the key objectives of financial regulation. The decision of the Third Plenary Session of the 20th CPC Central Committee proposed to "improve the capital market's functions of coordinating investment and financing, prevent risks, strengthen supervision, and promote the healthy and stable development of the capital market." Since July 2024, Politburo meetings have repeatedly mentioned the capital market, making systematic arrangements around maintaining its stability and boosting its vitality. The People's Bank of China, the National Financial Regulatory Administration, the China Securities Regulatory Commission, and other departments have continued to roll out incremental policies, gradually forming integrated policy synergy in promoting the steady development of the capital market. In the second quarter of this year, facing external shocks, multiple departments launched another package of policies on May 7 to consolidate the market's momentum of stabilization and recovery.

On the fundamentals, Liu Jian believes that the improving quality of listed companies provides a solid foundation for the capital market to stabilize and strengthen. Listed companies are the bedrock of the capital market; 2 ...