Transformer Models
A Novel Ultra-Large-Scale Optoelectronic Hybrid Compute-in-Memory Scheme
半导体行业观察· 2025-06-29 01:51
Core Viewpoint
- The article discusses a novel 2T1M optoelectronic hybrid computing architecture that addresses the IR drop issue in traditional CIM architectures, enabling larger array sizes and improved performance for deep learning applications, particularly large-scale Transformer models [1][2][9].

Group 1: Architecture Design and Working Principle
- The 2T1M architecture integrates electronic and photonic technologies to mitigate IR drop, pairing two transistors with one modulator in each storage cell [2].
- FeFETs handle the multiplication operations; they exhibit low static power consumption and excellent linearity in the subthreshold region [2].
- The FeFETs demonstrate a sub-pA cutoff current and are expected to maintain performance for over 10 years and more than 10 million cycles [2].

Group 2: Optoelectronic Conversion and Lossless Summation
- Lithium niobate (LN) modulators convert electrical signals to optical signals, leveraging the Pockels effect to impose phase shifts on the light [4][6].
- Cascading multiple 2T1M units within a Mach-Zehnder interferometer lets their phase shifts accumulate, enabling lossless summation of vector-matrix multiplication results [4][6].

Group 3: Transformer Application
- Experimental results show the 2T1M architecture achieves 93.3% inference accuracy running the ALBERT model, significantly outperforming traditional CIM architectures, which reach only 48.3% under the same conditions [9].
- The architecture supports array sizes up to 3750 kb, more than 150 times larger than traditional CIM arrays limited to 256 kb by IR drop constraints [9].
- Power efficiency is reported at 164 TOPS/W, a 37-fold improvement over state-of-the-art traditional CIM architectures, which matters for energy efficiency in edge computing and data centers [9].
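To make the lossless-summation idea concrete, here is a minimal behavioral sketch in Python. It is an illustrative assumption, not the paper's circuit or device model: each 2T1M cell's FeFET draws a current proportional to input × weight, the LN modulator maps that current to an optical phase shift via the Pockels effect, and the phase shifts accumulate along one arm of a Mach-Zehnder interferometer, so the read-out encodes the dot product without summing currents on a resistive bitline. The scaling constants i_unit and k_pockels are invented for illustration.

```python
import numpy as np

# Behavioral sketch (not the paper's circuit model): one column of 2T1M cells
# computes a dot product by accumulating optical phase shifts instead of
# summing currents on a shared bitline, which is what sidesteps IR drop.

def fefet_current(x, w, i_unit=1e-9):
    """Assumed linear subthreshold behavior: cell current ~ input * stored weight."""
    return i_unit * x * w

def phase_shift(current, k_pockels=1e7):
    """Assumed linear electro-optic (Pockels) response of the LN modulator:
    phase shift proportional to the driving current."""
    return k_pockels * current

def mzi_column_output(x_vec, w_col):
    """Accumulate per-cell phase shifts along one Mach-Zehnder arm; the total
    phase encodes the dot product, read out as interferometer intensity."""
    total_phase = sum(phase_shift(fefet_current(x, w)) for x, w in zip(x_vec, w_col))
    intensity = np.cos(total_phase / 2) ** 2   # ideal MZI transfer function
    return total_phase, intensity

x = np.array([0.2, 0.8, 0.5, 0.1])   # input activations (illustrative values)
W = np.array([[0.3, 0.7],            # stored weights, one column per output
              [0.9, 0.1],
              [0.4, 0.6],
              [0.2, 0.8]])

for j in range(W.shape[1]):
    phase, inten = mzi_column_output(x, W[:, j])
    print(f"column {j}: dot product = {np.dot(x, W[:, j]):.3f}, "
          f"accumulated phase = {phase:.3f} rad, MZI intensity = {inten:.3f}")
```

Because the summation happens in the optical phase domain rather than as currents on a resistive bitline, array size is no longer bounded by IR drop, which is consistent with the reported 3750 kb arrays versus the 256 kb limit of conventional CIM.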
In the Age of Information Overload, How Do You Truly "Understand" LLMs? Start with the 50 Interview Questions Shared by MIT
机器之心· 2025-06-18 06:09
Report by 机器之心; editor: +0

Humanity took thousands of years to move from the agricultural age to the industrial age, and more than two hundred more to go from the industrial age to the information age. LLMs have existed for less than a decade, yet they have already brought once-unreachable AI capabilities to the general public, enabling hundreds of millions of people worldwide to create, program, and reason through natural language.

The LLM technology landscape is expanding at an unprecedented pace, from a "model race" of constantly refreshed releases to agents that can execute tasks autonomously. The wave of technology is exhilarating, but it also brings unprecedented challenges.

How do you build real cognitive depth amid a flood of information, rather than merely chasing every hot topic? Perhaps by "doing problems."

Recently, MIT CSAIL shared an LLM interview guide written by engineer Hao Hoang. It curates 50 key questions designed to help professionals and AI enthusiasts deeply understand core LLM concepts, techniques, and challenges.

Document link: https://drive.google.com/file/d/1wolNOcHzi7-sKhj5Hdh9awC9Z9dWuWMC/view

We have grouped the 50 questions into several major themes and added diagrams and key papers. We hope this guide serves as a "treasure map" for your LLM exploration, helping you stay clear-headed and keep exploring, whether in an interview or amid the technology waves to come.

The development history of LLMs. ...
A New Harvard Paper Reveals How Transformer Models and the Human Brain "Struggle in Sync": Does AI Also Hesitate and Change Its Mind?
36Kr· 2025-05-12 00:22
Recently, researchers from Harvard University, Brown University, and the University of Tübingen jointly published a study on the correlation between Transformer models and human cognitive processing:

——《Linking forward-pass dynamics in Transformers and real-time human processing》

Loosely translated: the "thinking process" of Transformer models bears a striking resemblance to real-time cognition in the human brain.

The question is: can AI and humans be aligned not only on the final choice, but also on the intermediate "struggles" and "changes of mind"?

The authors take a different angle: rather than looking only at a model's outputs, they dig into the "processing dynamics" of each Transformer layer to see whether they line up with the "real-time trajectory" of human information processing.

01 Are AI and the human brain really "thinking" about the same thing?

In other words, the paper tries to settle an old question: how similar are an AI model's internal processing and the human brain's real-time cognition?

How have we usually studied the similarity between AI and humans? By "looking at results": have the AI do a task, check how many answers it gets right, and see whether its probability distribution matches human choices. For example, have GPT write essays, recognize images, or do logical reasoning, then compare against human data and conclude that "AI is becoming more human-like."

But that is only the surface.

Imagine a scenario: when answering a question you are not quite sure about ...
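For readers who want to see what "per-layer processing dynamics" means in practice, below is a minimal logit-lens-style probe in Python. This is a common inspection technique, not the paper's exact method or metrics: it projects each layer's hidden state through GPT-2's unembedding matrix to watch the model's provisional next-token guess evolve across the forward pass, the kind of intermediate "hesitation" the study compares with human real-time processing.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Logit-lens-style probe (illustrative, not the paper's protocol): decode a
# provisional next-token prediction from every layer's hidden state to see
# how the model's "answer" shifts during a single forward pass.

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple of (n_layers + 1) tensors, each [batch, seq, hidden]
for layer_idx, hidden in enumerate(outputs.hidden_states):
    last_token = model.transformer.ln_f(hidden[:, -1, :])   # final layer norm
    logits = last_token @ model.transformer.wte.weight.T    # unembedding projection
    top_id = logits.argmax(dim=-1).item()
    print(f"layer {layer_idx:2d}: provisional next token -> {tokenizer.decode(top_id)!r}")
```

Running this typically shows the top prediction flipping between candidates in the middle layers before settling near the top, a simple illustration of the "intermediate struggle" the study sets against human response-time data.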