Supercomputer
Japan teams up on memory, aiming to take down HBM
半导体行业观察· 2025-12-26 01:57
According to Nikkei, Fujitsu will join a project led by Japanese technology investor SoftBank Group to develop next-generation memory for artificial intelligence and supercomputers. Japan hopes the effort will revive its once-leading memory production technology and put its companies back among the world's top memory makers. SoftBank's newly established Saimemory will act as the command center of the public-private project, coordinating with Fujitsu and other partners. Saimemory is developing high-performance memory intended to replace high-bandwidth memory (HBM), which today is built by stacking DRAM chips. The project plans to invest 8 billion yen (about $51.2 million) to complete a prototype by fiscal 2027 and aims to establish a mass-production system by fiscal 2029. Fujitsu has continued to develop energy-efficient central processing units and maintains close working relationships with its customers; Japan's top supercomputer, Fugaku, uses Fujitsu products. Saimemory's goal is to mass-produce memory with two to three times the capacity of HBM at half the power consumption, priced at or below HBM (a back-of-envelope reading of those targets follows below). The company will adopt semiconductor technology jointly developed by Intel and the University of Tokyo, and will work with Shinko Electric Industries and Taiwan's Powerchip Semiconductor Manufacturing Corp. (PSMC) on production and prototyping. Intel will provide the underlying stacking technology, which was developed under the U.S. Defense Advanced Research ...
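Read literally, the capacity and power targets above combine into a single capacity-per-watt figure of merit. The short Python sketch below is illustrative only: the 2-3x capacity ratio and the half-power ratio come from the article, while everything else (the function name, the purely relative baseline) is an assumption made for the sake of the arithmetic.

```python
# Back-of-envelope reading of Saimemory's stated targets, using only the
# ratios quoted above (2-3x the capacity of HBM at roughly half the power).
# No absolute HBM spec is assumed; everything is relative to an HBM baseline.

def capacity_per_watt_gain(capacity_ratio: float, power_ratio: float) -> float:
    """Capacity-per-watt improvement implied by capacity and power ratios."""
    return capacity_ratio / power_ratio

for cap_ratio in (2.0, 3.0):                       # "two to three times the capacity"
    gain = capacity_per_watt_gain(cap_ratio, 0.5)  # "half the power consumption"
    print(f"{cap_ratio:.0f}x capacity at 0.5x power -> {gain:.0f}x capacity per watt")
```

On those stated targets alone, the part would deliver roughly 4x to 6x the capacity per watt of HBM.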
Google starts selling its chips externally: Broadcom surges, Nvidia and AMD drop in response
半导体行业观察· 2025-11-25 01:20
According to reports, Google parent Alphabet is in talks with Meta Platforms and other companies about using Google's Tensor AI chips, a move that would intensify its competition with Nvidia. Shares of Google and its AI-chip partner Broadcom rose in late trading, while Nvidia and AMD fell. Google has traditionally kept its Tensor Processing Units (TPUs) in its own data centers and rented that capacity to customers. But according to a report from The Information late Monday, Google has now begun selling TPUs to customers for use in their own data centers. The report notes that Meta Platforms is considering buying billions of dollars' worth of Google TPUs for its data centers starting in 2027, while renting TPU capacity from Google Cloud as early as 2026. Meta has so far relied mainly on Nvidia graphics processing units (GPUs) for its AI needs. For Google and Broadcom, which is involved in designing the Tensor AI chips, this could be a huge new market. It could also pose serious competition to Nvidia and AMD, threatening their enormous sales and pricing power. Affected by The Information's report ...
Nvidia's strongest rival has arrived
半导体行业观察· 2025-11-07 01:00
Core Insights
- Google's TPU v7 accelerators demonstrate significant performance improvements, with Ironwood being the most powerful TPU to date, achieving 10 times the performance of TPU v5p and 4 times that of TPU v6e [4][11]
- The TPU v7 offers competitive performance against Nvidia's Blackwell GPUs, with Ironwood providing 4.6 petaFLOPS of dense FP8 performance, slightly surpassing Nvidia's B200 [3][4]
- Google's unique scaling approach allows for the connection of up to 9216 TPU chips, enabling massive computational capabilities and high bandwidth memory sharing [7][8]

Performance Comparison
- Ironwood TPU has a performance of 4.6 petaFLOPS, compared to Nvidia's B200 at 4.5 petaFLOPS and the more powerful GB200 and GB300 at 5 petaFLOPS [3]
- Each Ironwood pod can connect up to 9216 chips with a total bidirectional bandwidth of 9.6 Tbps, allowing for efficient data sharing [7][8]

Architectural Innovations
- Google employs a unique 3D toroidal topology for chip interconnects, which reduces latency compared to the traditional high-performance packet switches used by competitors (see the sketch after this summary) [8][9]
- The optical circuit switching (OCS) technology enhances fault tolerance and allows for dynamic reconfiguration in case of component failures [9][10]

Processor Development
- In addition to the TPU, Google is deploying its first general-purpose processor, Axion, based on the Armv9 architecture and aimed at improving performance and energy efficiency [11][12]
- Axion is designed to handle tasks such as data ingestion and application logic, complementing the TPU's role in AI model execution [12]

Software Integration
- Google emphasizes the importance of software tools in maximizing hardware performance, integrating Ironwood and Axion into an AI supercomputing system [14]
- The introduction of intelligent scheduling and load balancing through software enhancements aims to optimize TPU utilization and reduce operational costs [14][15]

Competitive Landscape
- Google's advancements in TPU technology are attracting attention from major model builders, including Anthropic, which plans to use a significant number of TPUs for its next-generation models [16][17]
- Competition between Google and Nvidia is intensifying, with both companies enhancing their hardware capabilities and software ecosystems to maintain market leadership [17]
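To make the topology and scaling claims above concrete, here is a minimal, illustrative Python sketch: it shows how wrap-around neighbors are found in a 3D torus and what the quoted per-chip dense-FP8 figure implies at the 9,216-chip scale. The 16x24x24 split of the torus is a hypothetical choice for illustration (any factorization of 9,216 would do), not Google's documented layout.

```python
# Illustrative sketch of two figures quoted above: how a 3D torus gives each
# chip a fixed set of wrap-around neighbors, and what the per-chip dense FP8
# number implies at pod scale. Torus dimensions here are hypothetical.

def torus_neighbors(coord, dims):
    """Return the 6 neighbors of `coord` in a 3D torus of size `dims`,
    wrapping around at the edges (the wrap is what avoids long hops back
    across the fabric)."""
    x, y, z = coord
    X, Y, Z = dims
    return [
        ((x + 1) % X, y, z), ((x - 1) % X, y, z),
        (x, (y + 1) % Y, z), (x, (y - 1) % Y, z),
        (x, y, (z + 1) % Z), (x, y, (z - 1) % Z),
    ]

DIMS = (16, 24, 24)                       # hypothetical split: 16*24*24 = 9216 chips
assert DIMS[0] * DIMS[1] * DIMS[2] == 9216

corner = (0, 0, 0)
print("neighbors of", corner, "->", torus_neighbors(corner, DIMS))

# Aggregate dense FP8 compute implied by the quoted per-chip figure.
per_chip_pflops = 4.6                     # dense FP8 per chip, per the summary
pod_chips = 9216
print(f"pod peak ~ {per_chip_pflops * pod_chips / 1000:.1f} exaFLOPS FP8")
```

Even a corner chip has six wrap-around neighbors, and multiplying the quoted per-chip figure out gives a pod-level peak on the order of 42 exaFLOPS of dense FP8, under the summary's numbers.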
Jensen Huang: hoping Trump will lend a hand
半导体芯闻· 2025-10-29 10:40
Core Insights
- The article highlights NVIDIA's advancements in AI technology and its collaborations with major companies like Uber, Palantir, Amazon, and Microsoft, emphasizing the significance of domestic manufacturing in the U.S. [1][2]
- NVIDIA's CEO Jensen Huang showcased the capabilities of the Blackwell GPU, which has seen substantial demand, with 6 million units shipped and 14 million units ordered, translating to potential sales of $500 billion (a quick back-of-envelope on these figures follows after this summary) [1][2]
- Huang's remarks reflect a strategic push for U.S. manufacturing and a desire to reduce reliance on foreign products in the AI sector [2][3]

Group 1
- Huang praised the Blackwell GPU's computational power and highlighted the integration of 72 GPUs in a single server rack, weighing 3,000 pounds and costing millions [1]
- The company has begun mass production of Blackwell chips in Arizona, although not all production processes are completed in the U.S. [1][2]
- Huang expressed concerns about the U.S. potentially losing its market in AI technology due to reliance on foreign products and called for solutions from government officials [2][3]

Group 2
- NVIDIA's partnerships include collaborations with Lucid Motors for autonomous driving and Eli Lilly for supercomputing in drug development [3]
- The company is investing $1 billion in Nokia to integrate AI into 6G wireless networks, showcasing its commitment to enhancing energy efficiency in data centers [3][4]
- Huang indicated the intention to hold annual AI conferences in Washington, reflecting a growing influence in the tech policy landscape [5]
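As a sanity check on the scale of the figures quoted above, taken at face value, the snippet below divides the quoted potential sales by the combined shipped-plus-ordered unit count. The result is an implied blended average per GPU under those assumptions, not a published price.

```python
# Rough arithmetic on the figures quoted above, taken at face value:
# 6M Blackwell units shipped + 14M ordered against ~$500B of potential sales.
# The output is an implied blended average per GPU, not a reported price.

shipped_units = 6_000_000
ordered_units = 14_000_000
potential_sales_usd = 500e9

total_units = shipped_units + ordered_units
implied_avg_per_gpu = potential_sales_usd / total_units
print(f"{total_units:,} units -> implied average ~${implied_avg_per_gpu:,.0f} per GPU")
```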
AI Morning Briefing | Apple fined €48 million in France; Amazon's largest-ever layoffs
Guan Cha Zhe Wang· 2025-10-28 03:17
Group 1: Market Developments
- The first batch of three newly registered companies in the Sci-Tech Innovation Board's growth layer officially listed today, marking a significant step in the establishment of this new segment just over four months after the announcement by the China Securities Regulatory Commission [1]
- The three companies include two high-tech enterprises in the biopharmaceutical sector and one in the semiconductor materials field, all of which are currently unprofitable [1]

Group 2: Stock Market Performance
- The "Magnificent 7" index of major U.S. tech stocks rose by 2.40%, reaching a record high of 208.95 points, with a cumulative increase of 4.35% over the last three trading days [2]
- Tesla shares increased by 4.31%, Google by 3.6%, and both Nvidia and Apple saw gains of over 2% [2][3]

Group 3: Corporate News
- Former Alibaba Group Vice President Peng Chao has launched a new company named "Yun Jue Technology," focusing on AI wearable devices and smart agents [5]
- The first product from Yun Jue Technology will combine wearable hardware with an intelligent agent, aiming to enhance consumer experiences in high-frequency sports environments [5][6]
- AMD has established a $1 billion partnership with the U.S. Department of Energy to build two supercomputers for various scientific challenges, including nuclear energy and cancer treatment [10]

Group 4: Regulatory Issues
- Apple has been fined €48 million (approximately $55.9 million) by a French court for unfair marketing practices related to its iPhone sales contracts with mobile operators [8]
- The penalties include €8 million in fines and compensation to several mobile operators, with Bouygues Telecom receiving €16 million, Iliad €15 million, and SFR €7.7 million [8]
Another record high! AMD reaches $1 billion AI deal with the U.S. Department of Energy to build two supercomputers
美股IPO· 2025-10-28 00:25
Core Insights
- AMD has entered a $1 billion partnership with the U.S. Department of Energy to develop two supercomputers aimed at advancing research in nuclear energy, cancer treatment, and national security [3][4][9]
- The first supercomputer, named Lux, is set to be operational within six months and will utilize AMD's MI355X AI chip, providing approximately three times the AI computing power of current supercomputers [6][7]
- The second supercomputer, Discovery, is expected to be delivered in 2028 and operational by 2029, utilizing the more advanced MI430 series AI chips [8]

Group 1: Supercomputer Development
- The first supercomputer, Lux, will be developed in collaboration with various partners including HP and Oracle's cloud infrastructure, and is designed to enhance computational capabilities for complex scientific experiments [6][8]
- Discovery, the second supercomputer, will be designed for high-performance computing and is expected to significantly improve performance, although specific metrics on its computing power increase are not yet available [8]

Group 2: Applications and Impact
- The supercomputers will focus on critical areas such as fusion energy, where scientists aim to replicate solar reactions to release energy, and on the medical field, accelerating drug discovery through molecular-level simulations [9]
- The U.S. Department of Energy emphasizes the importance of these systems in ensuring sufficient computational power to handle increasingly complex data requirements in scientific research [3][4]

Group 3: Market Reaction
- Following the announcement of the partnership, AMD's stock rose nearly 2.7% and reached a new closing high, indicating positive market sentiment toward the collaboration [4]
[Global Times In-Depth] From "pioneer" to "unable to keep pace": Japan reflects on falling behind in AI
Huan Qiu Shi Bao· 2025-08-24 23:05
Core Viewpoint
- Japan is reflecting on its lagging position in AI development compared to the US and China, despite its historical contributions to the field during the first AI boom from the 1950s to 1970s [1][2]

Group 1: Historical Context
- Japan's AI research began in the 1950s, contributing significantly to advancements in natural language processing, speech recognition, and image processing [1]
- The "Fifth Generation Computer Systems" project launched in 1982 aimed to create a "supercomputer" capable of reasoning and learning, but it failed to deliver the expected results and was terminated in 1992, leading to a stagnation in AI research [2][3]
- From the 1990s to the early 2000s, Japan's focus on manufacturing overshadowed the importance of software and services, causing it to miss opportunities in the internet revolution [3][4]

Group 2: Current Challenges
- Japan's AI development is hindered by a lack of digital workforce, high legal barriers, and conservative capital investment, with only 26.7% of the population having used generative AI by 2024 [5][6]
- The aging population and declining birth rates contribute to a shortage of talent and limited acceptance of AI technologies, with low usage rates among younger demographics [7][8]
- Japan's strict data protection laws complicate the creation of effective training datasets for AI, limiting the potential for innovation [9]

Group 3: Government and Industry Response
- The Japanese government has initiated several strategies to boost AI development, including the "e-Japan Strategy" and recent AI legislation aimed at elevating AI to a national strategic priority [4][10]
- There is a call for increased investment in AI education and for attracting global talent to address the skills gap, with only 12 universities offering dedicated data science programs [8][11]
- Collaborative efforts between government and industry, such as the "Manufacturing AI Visual Inspection Plan," aim to enhance AI integration in various sectors [11][12]
Nvidia to take part in developing the successor to Japan's "Fugaku" supercomputer
日经中文网· 2025-08-22 08:00
Core Insights
- Nvidia will collaborate with RIKEN to develop AI-oriented GPUs for Japan's next-generation supercomputer, marking the first involvement of a foreign company in the core development of a Japanese supercomputer [1][3]
- The new model, codenamed "Fugaku NEXT," aims to achieve the highest global performance in supercomputing by integrating advanced technologies from both Japan and the U.S. [1][3]

Development Details
- Fujitsu is confirmed to participate in the development, responsible for the overall system design and the central processing unit (CPU), with operations planned to begin around 2030 [3][6]
- The project aims to establish Japan's position as a leading AI nation by designing top-level CPUs and GPUs for the AI era [3][6]

Performance Goals
- Fugaku NEXT targets a peak performance of 10 zetta operations per second, significantly enhancing capabilities in AI and advanced simulation computing [5][6]
- The new system is expected to improve computational power by 5 to 10 times compared to the existing Fugaku supercomputer [5]

Competitive Landscape
- The global trend of using AI to accelerate scientific research is intensifying, with the U.S. and China rapidly developing next-generation high-performance supercomputers [9]
- Japan aims to enhance its AI capabilities to secure a top-tier position in the international supercomputing arena [9]
Shiyun Circuit: the company's Dojo project collaboration with Tesla
Mei Ri Jing Ji Xin Wen· 2025-08-06 09:01
Core Viewpoint
- The company, Shiyun Circuit (603920.SH), is collaborating with Tesla on the Dojo project, specifically supplying PCB products for Tesla's supercomputer applications [1]

Group 1: Company Collaboration
- The company confirmed its involvement in the Dojo project, which is associated with Tesla's supercomputer [1]
- There are two distinct Dojo projects mentioned: one related to Tesla and the other to a blockchain gaming engine [3]

Group 2: Production and Sales Inquiry
- An investor inquired about the production and sales volume of the PCB products for the Dojo project, as well as the expected revenue and profit contribution for the year [3]
- The company indicated that further details regarding production and sales data would be provided in future announcements [1]
Tesla (TSLA.O) expects to bring its supercomputer facility to scaled operation sometime next year.
news flash· 2025-07-23 22:10
Core Insights
- Tesla (TSLA.O) is expected to achieve large-scale operation of its supercomputer facility sometime next year [1]

Group 1
- The company is making significant advancements in its supercomputing capabilities [1]