Giants Move into Glass Substrates
半导体行业观察· 2025-10-01 00:32
Source: compiled from etnews.

Tesla and Apple are exploring the adoption of semiconductor glass substrates. With demand for artificial intelligence (AI) continuing to grow, the move is read as an attempt to improve semiconductor and data-center performance through glass substrates. As two of the world's leading technology companies, their adoption is expected to have a significant impact on the industry.

Tesla and Apple reportedly met recently with a manufacturer that is preparing glass substrates, were briefed on semiconductor glass-substrate technology, and discussed plans for cooperation. No concrete contract or technical partnership has been reached yet, but both sides are said to have expressed broad interest and exchanged views on working together.

Apple is also believed to be exploring glass substrates as part of its AI technology. The company has been criticized for an inadequate response to the AI era, but is reportedly aiming for AI services centered on the iPhone. Apple is expected to use glass substrates in its AI infrastructure, including servers and data centers.

Several people familiar with the matter said: "Although discussions have not yet reached a concrete stage, the two sides reached a consensus on the need for glass substrates, including a technical understanding of the technology. They may revisit the technology-development process and then decide whether to adopt it."

Senior Apple executives visited not only glass-substrate manufacturers but also equipment suppliers that hold the relevant process technology, in order to learn about glass-substrate technology.

Glass substrates warp less than conventional plastic substrates and make fine circuitry easier to realize, ...
The Double-Edged Sword of AI Chips
半导体行业观察· 2025-02-28 03:08
Core Viewpoint
- The article discusses the transformative shift from traditional software programming to AI software modeling, highlighting the implications for processing hardware and the development of dedicated AI accelerators.

Group 1: Traditional Software Programming
- Traditional software programming is based on writing explicit instructions to complete specific tasks, making it suitable for predictable and reliable scenarios [2]
- As tasks become more complex, the size and complexity of codebases increase, requiring manual updates by programmers, which limits dynamic adaptability [2]

Group 2: AI Software Modeling
- AI software modeling represents a fundamental shift in problem-solving approaches, allowing systems to learn patterns from data through iterative training [3]
- AI uses probabilistic reasoning to make predictions and decisions, enabling it to handle uncertainty and adapt to change [3]
- The complexity of AI systems lies in the architecture and scale of the models rather than the amount of code written, with advanced models containing hundreds of billions to trillions of parameters [3]

Group 3: Impact on Processing Hardware
- The primary architecture for executing software programs has been the CPU, which processes instructions sequentially, limiting its ability to handle the parallelism required by AI models [4]
- Modern CPUs have adopted multi-core and multi-threaded architectures to improve performance, but they still lack the massive parallelism needed for AI workloads [4][5]

Group 4: AI Accelerators
- GPUs have become the backbone of AI workloads thanks to their massively parallel computing capabilities, offering performance in the petaflops range [6]
- However, GPUs face efficiency bottlenecks during inference, particularly with large language models (LLMs), where theoretical peak performance may not be achieved [6][7]
- The energy demands of AI data centers pose sustainability challenges, prompting the industry to seek more efficient alternatives, such as dedicated AI accelerators [7]

Group 5: Key Attributes of AI Accelerators
- AI processors require attributes not found in traditional CPUs, with batch size and token throughput being critical to performance [8]
- Larger batch sizes can improve throughput but may increase latency, posing challenges for real-time applications [12]

Group 6: Overcoming Hardware Challenges
- The main bottleneck for AI accelerators is memory bandwidth, often referred to as the memory wall, which limits performance when processing large batches [19]
- Innovations in memory architecture, such as high-bandwidth memory (HBM), can help alleviate memory-access delays and improve overall efficiency [21]
- Dedicated hardware accelerators designed for LLM workloads can significantly enhance performance by optimizing data flow and minimizing unnecessary data movement [22]

Group 7: Software Optimization
- Software optimization plays a crucial role in exploiting hardware capabilities, with highly optimized kernels for LLM operations improving performance [23]
- Techniques like gradient checkpointing and pipeline parallelism can reduce memory usage and enhance throughput [23][24]
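The batch-size trade-off described in Group 5 can be sketched with a toy cost model. All timing constants below are illustrative assumptions chosen only to show the shape of the trade-off, not measurements of any real accelerator; the function names are hypothetical.

```python
# Toy model of the LLM-serving batch-size trade-off (Group 5).
# The 10 ms weight-read cost and 0.2 ms per-sequence compute cost
# are made-up illustrative numbers, not hardware measurements.

def decode_step_time_ms(batch_size: int,
                        weight_read_ms: float = 10.0,
                        per_seq_compute_ms: float = 0.2) -> float:
    """Time for one decode step: reading the model weights once is
    amortized over the whole batch, while per-sequence compute grows
    linearly with batch size."""
    return weight_read_ms + per_seq_compute_ms * batch_size

def tokens_per_second(batch_size: int) -> float:
    """Aggregate throughput: each step emits one token per sequence."""
    return batch_size / (decode_step_time_ms(batch_size) / 1000.0)

for bs in (1, 8, 64, 256):
    print(f"batch={bs:4d}  "
          f"step latency={decode_step_time_ms(bs):6.1f} ms  "
          f"throughput={tokens_per_second(bs):8.0f} tok/s")
```

Under these assumptions, growing the batch from 1 to 256 multiplies aggregate throughput roughly forty-fold, but each decode step (and hence per-token latency for every user in the batch) also gets several times slower, which is exactly the real-time-application tension the summary points to.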
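The "memory wall" in Group 6 can be made concrete with a simple roofline-style check: a kernel is memory-bound when its arithmetic intensity (FLOPs per byte moved) falls below the hardware's compute-to-bandwidth ratio. The peak-compute and bandwidth figures below are round, assumed values for a hypothetical accelerator, not the specs of any named chip.

```python
# Roofline-style sketch of the memory wall (Group 6).
# Hardware figures are round illustrative assumptions.

PEAK_FLOPS = 1000e12   # assumed 1 petaflop/s peak compute
MEM_BW     = 3e12      # assumed 3 TB/s memory bandwidth (HBM-class)

def roofline_bound(flops: float, bytes_moved: float) -> str:
    """Classify a kernel as memory- or compute-bound by comparing its
    arithmetic intensity to the hardware balance point."""
    intensity = flops / bytes_moved          # FLOPs per byte
    balance = PEAK_FLOPS / MEM_BW            # ~333 FLOPs/byte here
    return "memory-bound" if intensity < balance else "compute-bound"

# Matrix-vector product (the core of batch-1 LLM decoding) with a
# 4096x4096 fp16 weight matrix: ~2*N*N FLOPs against ~2*N*N bytes read,
# i.e. an arithmetic intensity of about 1 FLOP/byte.
n = 4096
print(roofline_bound(2 * n * n, 2 * n * n))   # memory-bound
```

An intensity of ~1 FLOP/byte against a balance point of ~333 FLOPs/byte means the accelerator would sit idle most of the time waiting on memory, which is why larger batches (reusing each loaded weight across many sequences) and faster memory such as HBM are the standard levers the summary mentions.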