Houmo "Manjie" M50 Chip (后摩漫界M50芯片)

Monthly Thematic Investment Observation (2025, Issue 7): The AI Revolution Wave Resonates with "Anti-Involution" - 20250803
Guoxin Securities · 2025-08-03 15:15
Securities Research Report | August 3, 2025. Monthly Thematic Investment Observation (2025, Issue 7): The AI Revolution Wave Resonates with "Anti-Involution". Strategy Research · Strategy Special Topic. Analysts: Wang Kai, 021-60933132, wangkai8@guosen.com.cn, S0980521030001; Chen Kaichang, 021-60375429, chengkaichang@guosen.com.cn, S0980523090002. Please be sure to read the disclaimer at the end of this report and all content thereunder.

Core Views
◼ Overseas tech mapping: (1) OpenAI released ChatGPT Agent: a general-purpose AI agent with end-to-end interaction and built-in autonomous tool selection, scoring 41.6% on the HLE benchmark (tool-assisted); (2) xAI launched the Grok-4 model: driven by scaled-up reinforcement learning and billed as the "world's smartest AI"; Grok 4 Heavy reached 44.4% on HLE and set a new record of 50.7% after strategy optimization; (3) the U.S. GENIUS Act (《天才法案》) was signed into law: it establishes a federal regulatory framework for stablecoins, restricts issuance rights to bank-led entities, and mandates reserve transparency; Trump called it "the most significant transformation in financial technology"; (4) the 2025 World Artificial Intelligence Conference (WAIC): focused on ten major directions (large models / compute infra ...
Over 15 Billion in Three Days! WAIC 2025 Wraps Up in Shanghai; M50 Chip Takes On NVIDIA at 10 W; OpenAI Ignites a Learning Revolution Overnight | 混沌学园 (Hundun Academy) AI Weekly Focus
混沌学园 (Hundun Academy) · 2025-08-01 12:06
This Week's Must-Read AI Business Highlights (2025.7.24-7.31)

This Week's Core Trends

July 31, 2025
1. [Domestic Open Source] Chinese AI sweeps the Hugging Face top ten! The open-source wave is upending the global AI landscape!
Chinese AI players Zhipu, Qwen, Tencent Hunyuan, and others collectively took all ten top spots on the Hugging Face leaderboard, all with open-source models, after densely releasing more than ten new models in the past month (GLM-4.5 topped the chart; Qwen held five spots). This open-source wave is tilting the global AI ecosystem toward China; in contrast with the closed-source price hikes overseas, it is reshaping the industry's competitive rules and accelerating the democratization of innovation.
Original link: The entire Hugging Face leaderboard has been taken over by Chinese AI models

July 30, 2025
2. [New Feature] OpenAI ignites a learning revolution overnight! Study Mode launches for free, opening the era of AI tutors for a billion users!
OpenAI launched ChatGPT Study Mode, which uses interactive prompts and personalized support to guide students to actively explore knowledge rather than hand them answers directly. The feature is free for all users and may reshape the competitive landscape of education technology, accelerating AI's penetration into education and boosting user stickiness.

AI shifts from "showing off" to "getting things done": WAIC's 15 billion yuan in on-site deals and 350,000 visitors highlight AI's turn from parameter races to practical value; embodied intelligence and AI agents are emerging as new dimensions, signaling the industry's shift toward productivity.
Age ...
Doubling Down on In-Memory Computing: Houmo Intelligence Launches a Major New Product
半导体芯闻 (Semiconductor News) · 2025-07-29 10:29
Core Viewpoint
- The article discusses the limitations of the traditional von Neumann architecture in processing power, especially in the context of artificial intelligence and large models, and highlights the potential of in-memory computing technology as a solution that achieves high computing power, high bandwidth, and low power consumption simultaneously [1][5].

Group 1: In-Memory Computing Technology
- In-memory computing technology is not new, but its commercial application has only recently begun to gain traction [5].
- The challenges in adopting this technology include the gap between theoretical research and practical implementation, as well as the need for software that offers a user experience comparable to traditional chips [6][5].
- The company focused on in-memory computing because of its research background in high-energy-efficiency computing and the need to compete with major players like NVIDIA [6][5].

Group 2: Development and Research Focus
- The arrival of large AI models has prompted the company to deepen its exploration of integrating in-memory computing technology with AI applications [7].
- The company has committed significant resources to research on architecture, design, and quantization, aiming to create synergy between in-memory computing and large models [7].

Group 3: New Product Launch - M50 Chip
- The M50 chip is described as the most energy-efficient edge AI chip currently available, built on a second-generation SRAM-CIM dual-port architecture [8][10].
- It delivers 160 TOPS at INT8 and 100 TFLOPS at bFP16 with a typical power consumption of only 10 W, making it suitable for a variety of smart mobile terminals [10].
- Compared with traditional architectures, the M50 chip offers a 5 to 10 times improvement in energy efficiency [10].
Group 4: Compiler and Software Tools
- The new compiler toolchain, "后摩大道," is designed to extract the full performance of the M50 chip, featuring flexible operator support and automated optimization capabilities [11][12].
- The toolchain aims to lower the entry barrier for developers and improve the usability of in-memory computing technology [11].

Group 5: Product Matrix and Applications
- The company has introduced a diverse product matrix, including the "力擎" series and various M.2 cards, to support edge applications [13][14].
- These products target a wide range of applications, including consumer electronics, smart offices, and industrial automation, enabling local processing without data-transmission risks [16].

Group 6: Future Goals and Innovations
- The company aims to become a leader in edge AI chip technology and is developing next-generation DRAM-PIM technology to further improve computing and storage efficiency [18].
- The goal is to exceed 1 TB/s of on-chip bandwidth and triple the energy efficiency of current technology, enabling large AI models to be deployed in everyday devices [18].