Artificial General Intelligence (AGI)
Zhipu Sprints Toward the "World's First Large-Model Stock": Behind the AGI Dream, High Growth and Heavy Investment Go Hand in Hand
Mei Ri Jing Ji Xin Wen· 2025-12-20 15:00
In the global race toward AGI (artificial general intelligence), the capitalization of China's large-model unicorns has finally taken a substantive step. On the evening of December 19, Beijing Zhipu Huazhang Technology Co., Ltd. ("Zhipu") disclosed its prospectus (application version, likewise below), announcing a Hong Kong listing bid to become the "world's first large-model stock". Founded in 2019, the company grew out of technology transferred from Tsinghua University. The prospectus shows Zhipu's revenue compounding at more than 130% annually over the past three years. Yet alongside the surging top line, Zhipu, a typical technology-driven company, also faces losses brought on by massive R&D spending.

Rapid revenue growth on the back of MaaS, with gross margin at 50%

Financially, Zhipu's revenue growth has clearly accelerated in recent years. From 2022 to 2024, revenue was RMB 57.4 million, RMB 125 million, and RMB 312 million respectively, a compound annual growth rate above 130%. Growth quickened further in the first half of 2025, when Zhipu booked RMB 191 million in revenue, up 325.39% from RMB 44.9 million in the same period of 2024. Gross margin was 54.6%, 64.6%, and 56.3% in 2022, 2023, and 2024, and 50.0% in the first half of 2025.

The revenue expansion is closely tied to the business model. The prospectus shows that Zhipu's business model centers on a Model-as-a-Service (MaaS) platform ...
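The growth rates quoted above can be reproduced directly from the revenue figures cited from the prospectus (a quick check using only the numbers stated in this summary):

```python
# Reproduce the growth rates quoted above from the stated revenue figures
# (RMB millions, as cited from the prospectus).
rev_2022, rev_2024 = 57.4, 312.0   # full-year revenue
h1_2024, h1_2025 = 44.9, 191.0     # first-half revenue

# Two-year compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (rev_2024 / rev_2022) ** 0.5 - 1
print(f"2022-2024 CAGR: {cagr:.1%}")     # -> 2022-2024 CAGR: 133.1%

# Year-over-year growth for the first half of 2025
yoy = h1_2025 / h1_2024 - 1
print(f"H1 2025 YoY growth: {yoy:.2%}")  # -> H1 2025 YoY growth: 325.39%
```

Both results are consistent with the ">130% CAGR" and "325.39%" figures reported above.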
Zhipu Passes HKEX IPO Hearing and May List in Hong Kong Soon, with CICC as Sole Sponsor
Sou Hu Cai Jing· 2025-12-20 12:11
| Item | Detail |
| --- | --- |
| Number of [REDACTED] under the [REDACTED] | [REDACTED] H shares (depending on whether the [REDACTED] is exercised) |
| Number of [REDACTED] | [REDACTED] H shares (subject to reallocation) |
| Number of [REDACTED] | [REDACTED] H shares (subject to reallocation and depending on whether the [REDACTED] is exercised) |
| Maximum [REDACTED] | HK$[REDACTED] per H share, plus 1.0% brokerage commission, 0.0027% SFC transaction levy, 0.00565% Stock Exchange trading fee, and 0.00015% AFRC transaction levy (payable in full in HK dollars on application; overpaid amounts are refundable) |
| Nominal value | RMB 0.10 per H share |
| [REDACTED] | [REDACTED] |
| CICC (China International Capital Corporation) | Sole sponsor, [REDACTED] |

On December 19, 2025, Beijing Zhipu Huazhang Technology Co., Ltd. (Knowledge Atlas Technology Joint Stock Company Limited, hereinafter "Zhipu"), based in Beijing's Haidian district, disclosed its post-hearing prospectus on HKEX and may soon complete an IPO on the Hong Kong Main Board.

Zhipu Huazhang prospectus link:

Main business: Founded in 2019, Zhipu is a leading Chinese artificial-intelligence company pursuing artificial general intelligence (AGI) ...
Hong Kong's "First GPU Stock" Is About to Be Born
财联社· 2025-12-20 06:02
Core Viewpoint - The domestic computing chip industry is having a significant moment in the capital market, highlighted by the successful listing applications of companies like Biren Technology and Tianshu Zhixin on the Hong Kong Stock Exchange [1][7]. Group 1: Biren Technology - Biren Technology, established in 2019, focuses on the development of General-Purpose Graphics Processing Unit (GPGPU) chips and intelligent computing solutions, positioning itself among the top domestic GPU companies [2]. - The company has built a three-in-one core business system comprising GPGPU chip hardware, the BIRENSUPA software platform, and intelligent computing cluster delivery, supporting high-performance computing across various sectors [2][3]. - Financially, Biren Technology's revenue surged from 499,000 yuan in 2022 to 337 million yuan in 2024, a cumulative increase of more than 675 times, although it reported a net loss of 1.538 billion yuan in 2024 [4]. Group 2: Tianshu Zhixin - Tianshu Zhixin is the first domestic company to mass-produce general-purpose training and inference GPUs on a 7nm process, with a product matrix covering all AI computing scenarios [5][6]. - Its customer base has grown significantly, expanding from 22 clients in 2022 to over 290 by mid-2025, with a total of more than 900 deployments in key sectors [6]. - Revenue has also climbed, reaching 540 million yuan in 2024, a compound annual growth rate of 68.8% from 2022 to 2024 [6]. Group 3: Market Trends and Regulations - The Hong Kong Stock Exchange's Chapter 18C special technology listing regime has lowered financial thresholds for unprofitable tech companies, leading to a surge in listing applications, with 19 companies having applied under this rule so far [7][9].
- Industry experts predict that 150 to 200 companies will list in Hong Kong next year, with an expected IPO total of 300 billion yuan, making it the largest globally [8]. - The 18C mechanism provides crucial capital access and market confidence for high-investment, long-cycle, unprofitable domestic GPU companies, although it raises expectations for stable cash flow and predictable orders from investors [9].
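The growth multiples quoted above can be sanity-checked from the reported figures (a quick sketch; the implied 2022 revenue derived below is an estimate following from the quoted CAGR, not a number reported in the filings):

```python
# Sanity-check the GPU makers' growth figures quoted above.
rev_2022, rev_2024 = 499_000, 337_000_000   # yuan, first company's revenue
multiple = rev_2024 / rev_2022
print(f"Revenue multiple, 2022 -> 2024: {multiple:.0f}x")   # -> 675x

# Implied 2022 revenue for the second company, derived from its 2024
# revenue and the quoted 68.8% CAGR over 2022-2024 (same units as above).
cagr = 0.688
rev_2024_b = 5.4
implied_2022_b = rev_2024_b / (1 + cagr) ** 2
print(f"Implied 2022 revenue: {implied_2022_b:.2f} (same units)")
```

The first result matches the "over 675 times" claim; the second shows the 2022 base the quoted CAGR implies.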
Alex Wang "Is Not Qualified to Succeed Me"! Yann LeCun Lays Bare Meta AI's "Infighting", Bluntly Calls AGI "Utter Nonsense"
AI前线· 2025-12-20 05:32
Compiled by Dong Mei. "The path to superintelligence (just keep training large language models, feeding in more synthetic data, hiring a few thousand people for post-training, and adding a few new tricks in reinforcement learning) is, in my view, complete nonsense. That road simply does not work." Recently, on a podcast called The Information Bottleneck, hosts Ravid Shwartz-Ziv and Allen Roush held a nearly two-hour conversation with Turing Award winner and former Meta Chief AI Scientist Yann LeCun. In the interview, LeCun explained why, at 65, an age when most people have retired, he is starting a company, and he offered a rare, pointed assessment of Silicon Valley's mainstream AI development path. After a 12-year career at Meta, LeCun is betting his academic reputation and professional legacy on a very different vision of AI. He argues that the industry's obsession with scaling large language models is steering AI down a road that looks fast but is in fact a dead end. In LeCun's view, the key constraint on AI progress is not how to approach "human-level intelligence" faster, but how to cross a threshold that is often underestimated yet extremely difficult: giving machines "dog-level intelligence". That judgment challenges today's evaluation regime centered on language ability and knowledge coverage. ...
A Conversation with Pony.ai's Wang Haojun: Robotaxi Is Entering Its 1-to-1,000 Stage
Hua Er Jie Jian Wen· 2025-12-20 05:31
By Zhou Zhiyu, edited by Zhang Xiaoling.

In 2025, the global autonomous-driving industry is undergoing a paradigm shift. For the past decade, autonomous driving was a coding game in the lab, a dream stacked out of demos and slide decks; now the business has come down to earth and must prove itself line by line on financial statements. As once-feted L4 unicorns stall, unable to cross the life-or-death line of scale, the front-runners have quietly knocked on the door of profitability: in Q2 2025, Baidu's Apollo Go broke even in Wuhan; in November, Pony.ai announced that its seventh-generation Robotaxi had turned its unit economics (UE) positive in Guangzhou.

Pony.ai co-founder and CFO Wang Haojun told Wallstreetcn in a recent interview that reaching positive UE in Guangzhou means that, in the process of scaling up, Pony.ai has gradually refined a standardized operating process it can pass on to its partners. In Wang's view, Robotaxi commercialization spent its early years in a 0-to-1 phase and has now moved into a 1-to-100, even 1-to-1,000 phase. A clear commercialization timetable has surfaced: a fleet of roughly a thousand vehicles by the end of 2025, 3,000 in 2026, and a push toward 100,000 by 2030, at which point Robotaxi would become part of everyday life.

Closing the commercial loop

This means the main battleground of Robotaxi competition has shifted. As per-vehicle hardware cost falls toward the make-or-break threshold of RMB 250,000 ...
Beijing Will Produce the "World's First Large-Model Stock"!
Xin Lang Cai Jing· 2025-12-19 15:20
From the Beijing Daily client. On December 19, according to the HKEX website, Zhipu passed its HKEX listing hearing and formally filed its prospectus, meaning the capital market will for the first time welcome a listed company whose core business is AGI (artificial general intelligence) foundation models. It also means this Beijing-headquartered firm, China's largest independent large-model vendor, is poised to list on HKEX as the "world's first large-model stock". Founded in 2019 and spun out of Tsinghua University research, the team was among the first in China to start large-model research, developing GLM, a fully home-grown pretraining architecture based on autoregressive blank infilling, achieving breakthrough progress in robustness (stability), controllability, and hallucination, and adapting to more than 40 domestic chips. In September 2025, Zhipu released GLM-4.6. On Code Arena, the globally recognized large-model arena with a million users in blind tests, GLM tied for first place worldwide in code generation with models from international companies such as Anthropic and OpenAI, surpassing the overseas closed-source models Google Gemini and xAI's Grok. Since its founding, the team has started from the underlying architecture of foundation models, insisting on in-house development and full independent control, successively launching China's first 10-billion-parameter-scale model, first open-source 100-billion-parameter-scale model, first dialogue model, first multimodal model, and the world's first device-control agent. According to the prospectus, Zhipu's revenue has doubled for three consecutive years: RMB 57.4 million in 2022, RMB 124.5 million in 2023, and RMB 312 ...
Autonomous-Driving Talent Floods into Embodied Intelligence, and Hot Money Has a New Narrative
创业邦· 2025-12-19 14:57
Core Viewpoint - The article discusses the rising interest and investment in embodied intelligence, particularly humanoid robots, highlighting the shift in investor focus and the challenges faced by startups in this sector [5][6][13]. Investment Trends - In 2023, there was a significant influx of venture capital into the embodied intelligence sector, with estimates suggesting over 100 active investment firms and early-stage funding exceeding $10 billion in China [6]. - Investors are particularly interested in startups led by people with backgrounds in intelligent driving, who bring valuable experience in productization and operations [6][7]. Entrepreneurial Landscape - The article identifies a new wave of entrepreneurs in the embodied intelligence space, many of whom have transitioned from the intelligent-driving industry, including notable figures from companies such as Huawei, Xpeng, and Baidu [7][8]. - The "Berkeley Four," a group of entrepreneurs from the University of California, Berkeley, have drawn attention for their contributions to the field, reflecting a shift in investor preference toward teams with hands-on experience [7]. Technological Challenges - Moving from intelligent driving to embodied intelligence means overcoming significant technical hurdles, including the need for high-quality interaction data and robust algorithms that generalize across tasks [12][10]. - Current embodied robots also face cost-effectiveness challenges: certain models are priced around 600,000 yuan (approximately $90,000), which may fall to 350,000-400,000 yuan (about $50,000-$60,000) by 2027, a figure that does not account for maintenance and operational costs [12].
Market Sentiment - There is a growing skepticism in the secondary market regarding the sustainability of investments in embodied intelligence, with some analysts suggesting that the best opportunities may have already passed [13]. - The article notes that the number of humanoid robot companies in China has surpassed 150, raising concerns about market saturation and the potential for a bubble in the sector [13]. Investment Logic - Investors are prioritizing projects that focus on the core components of embodied intelligence, including decision-making models, control systems, and the physical robots themselves, while also being cautious of the high similarity in pitches from various startups [14][15].
A 10,000-Word Breakdown of the 371-Page HBM Roadmap
半导体行业观察· 2025-12-19 09:47
Core Insights - The article emphasizes the critical role of High Bandwidth Memory (HBM) in supporting AI technologies, highlighting its evolution from a niche technology to a necessity for AI performance [1][2][15] - A comprehensive roadmap for HBM development from HBM4 to HBM8 is outlined, indicating significant advancements in bandwidth, capacity, and efficiency over the next decade [15][80] Understanding HBM - HBM is designed to address the limitations of traditional memory types, such as DDR5, which struggle to meet the high data transfer demands of AI applications [4][7] - The architecture of HBM utilizes a 3D stacking method, significantly improving data transfer efficiency compared to traditional flat layouts [7][8] HBM Advantages - HBM offers three main advantages: superior bandwidth, reduced power consumption, and compact size, making it essential for AI applications [11][12][14] - For instance, training a model like GPT-3 takes 20 days with DDR5 but only 5 days with HBM3, showcasing the drastic difference in performance [12] HBM Generational Upgrades - HBM4, expected in 2026, will introduce customizable base dies to enhance memory performance and capacity, addressing mid-range AI server needs [17][21] - HBM5, anticipated in 2029, will incorporate near-memory computing capabilities, allowing memory to perform calculations, thus reducing GPU wait times [27][28] - HBM6, projected for 2032, will focus on high throughput for real-time AI applications, with significant improvements in bandwidth and capacity [32][35] - HBM7, set for 2035, will integrate high-bandwidth flash memory to balance high-speed access with large storage needs, particularly for multimodal AI systems [41][44] - HBM8, expected in 2038, will feature full 3D integration, allowing seamless interaction between memory and GPU, crucial for advanced AI applications [49][54] Industry Landscape - The global HBM market is dominated by three major players: SK Hynix, Samsung, and Micron, which 
collectively control over 90% of the market share [81][84] - The demand for HBM is projected to grow significantly, with the market expected to reach $98 billion by 2030, driven by the increasing need for high-performance computing in AI [80] Future Challenges - The HBM industry faces challenges related to cost, thermal management, and ecosystem development, which must be addressed to facilitate widespread adoption [86] - Strategies for overcoming these challenges include improving yield rates, expanding production capacity, and innovating cost-reduction technologies [86]
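The "20 days versus 5 days" training comparison above implies a 4x end-to-end speedup. A simple Amdahl-style model shows how a large bandwidth gain translates into a smaller overall gain when only part of the runtime is memory-bound (all parameters below are illustrative assumptions, not figures from the roadmap):

```python
def overall_speedup(mem_fraction: float, bw_gain: float) -> float:
    """Amdahl-style estimate: only the memory-bound fraction of runtime
    benefits when memory bandwidth improves by a factor of bw_gain."""
    return 1.0 / ((1.0 - mem_fraction) + mem_fraction / bw_gain)

# Illustrative assumption: 80% of runtime is memory-bound and HBM3 offers
# roughly 16x the usable bandwidth of a DDR5 setup. The overall gain is
# then ~4x, consistent with the 20-day vs 5-day comparison above.
print(overall_speedup(0.80, 16.0))   # -> 4.0
```

The point of the sketch is that end-to-end training speedup is bounded by the non-memory-bound share of the workload, which is why a 16x bandwidth gap can plausibly show up as only a 4x wall-clock difference.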
DeepMind Chief's 10,000-Word Deep Dive on the Road to AGI
量子位· 2025-12-19 07:20
In a recent podcast, DeepMind chief Demis Hassabis clearly sketched what he sees as a realistic path to AGI: half scaling, half genuine scientific breakthroughs.

henry, from Aofeisi. QbitAI | WeChat official account QbitAI

To reach AGI, technical innovation and scale must split the work fifty-fifty; neither is dispensable. From world models, simulation, and agents all the way to materials, superconductors, and even controlled nuclear fusion, Hassabis uses this episode to survey the full AGI landscape from Google's current vantage point. A quick overview of his core points:

- Achieving AGI requires dual effort in innovation and scaling: roughly 50% of the effort goes to model scaling and 50% to technical innovation, and combining the two is the key path to AGI.
- "Root-node" problems drive scientific breakthroughs: AlphaFold's success validated AI's potential to crack fundamental scientific problems, and current research is expanding into materials science (e.g., room-temperature superconductors, better batteries), nuclear fusion, and quantum computing.
- AI shows "jagged intelligence" in fields like mathematics: it can win medals at the International Mathematical Olympiad yet still err on simple logic problems, reflecting shortcomings in consistency and reliable reasoning; systems need stronger self-reflection and verification.
- Today's models depend on human knowledge; the future requires autonomous learning: current large models compress and generalize internet knowledge, much like AlphaGo; the next goal is to achieve something like Al ...
Altman on OpenAI: Compute Is the Biggest Bottleneck on Revenue; Double the Compute, Double the Revenue
Xin Lang Cai Jing· 2025-12-19 05:18
Core Insights - The focus of the AI competition is shifting from model strength to the ability to convert model capabilities into revenue and cash flow, marking a critical transition for companies like OpenAI [1] - OpenAI is at a pivotal point, transitioning from a "phenomenal product company" to an "enterprise-level AI platform" [1] Business Strategy - OpenAI is not transitioning from a consumer company to an enterprise market but is instead capitalizing on existing trends [4] - The company has over 1 million enterprise users, with API business growth outpacing that of ChatGPT itself [4][17] - Altman emphasizes that enterprises require a complete, unified, and scalable AI platform rather than fragmented AI functionalities [4] Product Development - OpenAI plans to release a significantly upgraded model in Q1 of next year, although the naming of models is no longer a priority [5][45] - The company is preparing to launch a series of small AI devices, moving towards a new generation of hardware that supports long-term memory and proactive decision-making [5] Infrastructure Investment - Altman highlights that the bottleneck for revenue is in infrastructure rather than demand, indicating that existing AI capabilities are underutilized [6] - OpenAI's computational capacity has tripled over the past year, with revenue growth closely following this increase [6][50] - The company is committed to investing $1.4 trillion in infrastructure over time, aiming to leverage AI for scientific discovery and other significant advancements [9][47] Competitive Landscape - OpenAI acknowledges competitive pressures from models like Gemini and DeepSeek but believes that productization and distribution capabilities will be the key differentiators [8] - The company has seen a rapid increase in active users, with ChatGPT's weekly active user count nearing 900 million, enhancing its competitive position in the enterprise market [8][15] Future Outlook - Altman expresses confidence that the 
demand for AI capabilities will continue to grow, with expectations that the company will eventually achieve profitability as revenue scales with infrastructure investments [51][54] - The company is aware of the potential risks associated with over-investment in infrastructure but believes that the value generated from AI will justify these investments [57]