Meta Releases Four Chips, Piling On the Hardware
半导体行业观察 · 2026-03-12 01:39
Core Viewpoint
- Meta has introduced a new line of AI accelerators, the Meta Training and Inference Accelerator (MTIA), with a focus on high-bandwidth memory (HBM) to boost performance on AI workloads [2][3].

Group 1: Product Overview
- Meta launched four new MTIA chips, each targeting specific tasks: the MTIA 300 for ranking-and-recommendation (R&R) training, the MTIA 400 for general workloads, and the MTIA 450 and 500 for advanced AI workloads [3][4].
- The MTIA 500 is expected to deliver 30 petaflops and to be deployed in Meta's data centers by 2027 [4][5].

Group 2: Technical Specifications
- The MTIA 300 carries 216 GB of HBM with 6.1 TB/s of bandwidth, while the MTIA 400 has 288 GB of HBM and 9.2 TB/s [4].
- The MTIA 450 doubles the memory bandwidth to 18.4 TB/s, and the MTIA 500 reaches 27.6 TB/s with HBM capacity ranging from 384 GB to 512 GB [3][4].

Group 3: Competitive Landscape
- The MTIA 500 competes closely with NVIDIA's upcoming Rubin GPU, which offers 22 TB/s of HBM4 bandwidth and 35 petaflops of training performance [5].
- The MTIA 400 is noted as Meta's first fully in-house chip aimed at competing with the fastest AI accelerators on the market [5][6].

Group 4: Design Innovations
- The MTIA 500 adopts design innovations such as a 2×2 configuration and a modular chip design, allowing easier upgrades and better cost efficiency [11].
- All MTIA models share the same chassis and network infrastructure, enabling seamless upgrades across chip generations [11].
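To put the quoted capacity and bandwidth figures in perspective, the sketch below computes how long each chip would take to stream its full HBM contents once (capacity ÷ bandwidth), an illustrative back-of-the-envelope metric only. The MTIA 450's capacity is not stated in the article, so 288 GB (the MTIA 400 figure) is assumed, and the 500 uses the lower bound of its quoted 384-512 GB range.

```python
# Illustrative arithmetic based on the capacity/bandwidth figures
# quoted above; not an official benchmark.
specs = {
    "MTIA 300": (216, 6.1),   # (HBM capacity in GB, bandwidth in TB/s)
    "MTIA 400": (288, 9.2),
    "MTIA 450": (288, 18.4),  # capacity not given in the article; 288 GB assumed
    "MTIA 500": (384, 27.6),  # lower bound of the quoted 384-512 GB range
}

for chip, (capacity_gb, bandwidth_tbs) in specs.items():
    # 1 TB/s = 1 GB/ms, so milliseconds = GB / (TB/s)
    ms = capacity_gb / bandwidth_tbs
    print(f"{chip}: {ms:.1f} ms to stream all HBM once")
```

Despite the large jump in capacity across generations, the streaming time actually falls, because bandwidth grows faster than capacity (roughly 4.5× from the 300 to the 500, versus under 2.4× in capacity even at the 512 GB upper bound).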