Transformer
Cartesia: $91 Million Raised in 3 Months, Reshaping Voice AI from Transformer to Mamba
海外独角兽· 2025-04-03 12:04
Author: linlin  Editor: haina  On March 11, 2025, voice generation startup Cartesia announced the completion of a $64 million Series A round, less than 3 months after its $27 million seed round. The round was led by Kleiner Perkins, with participation from Lightspeed, Index, A*, Greycroft, Dell Technologies Capital, Samsung Ventures, and others. Cartesia also launched its flagship product Sonic 2.0 at the same time, cutting system latency from 90 ms to 45 ms and giving new momentum to efficient, real-time, low-cost multimodal interaction in voice AI. Cartesia's core team all comes from the Stanford AI Lab, including the four alumni Karan Goel, Albert Gu, Arjun Desai, and Brandon Yang, together with their shared advisor Chris Ré. The team's common research direction is SSMs (state space models). The SSM line of research from S4 to Mamba, with its linear time complexity, offers a potential solution to the inherent context-length limitations of the Transformer, the dominant LLM architecture, implying faster generation speeds, ...
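For orientation, below is a minimal sketch of the kind of linear-time recurrence that S4/Mamba-style state space layers build on; the diagonal transition, shapes, and parameter values are illustrative assumptions, not Cartesia's Sonic model. Each step updates a fixed-size state once, so cost grows linearly with sequence length rather than quadratically as in softmax attention.

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Toy diagonal linear state-space recurrence (illustrative only).
    u: (L, d_in) inputs; A: (d_state,) decay; B: (d_state, d_in); C: (d_out, d_state)."""
    x = np.zeros(A.shape[0])       # hidden state
    ys = []
    for u_t in u:                  # one O(d) update per step -> linear in L
        x = A * x + B @ u_t        # x_t = A * x_{t-1} + B u_t
        ys.append(C @ x)           # y_t = C x_t
    return np.stack(ys)

L, d_in, d_state, d_out = 1024, 4, 16, 4
y = ssm_scan(np.random.randn(L, d_in),
             A=np.full(d_state, 0.9),
             B=np.random.randn(d_state, d_in) * 0.1,
             C=np.random.randn(d_out, d_state) * 0.1)
print(y.shape)  # (1024, 4)
```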
3,700 Pre-training Runs in Search of the "Linear Attention" Non-consensus: MiniMax-01 Developer Recounts a 4-Year Exploration
晚点LatePost· 2025-03-09 12:00
"我们跑的是下半场,赌的就是未来的长文本需求。" MiniMax 在今年 1 月发布了参数为 4560 亿的开源大模型 MiniMax-01,该模型就用到了他们开发的线 性注意力机制 "Lightning Attention"。 我们邀请了这个项目的负责人,MiniMax 高级研究总监钟怡然,来与我们一起聊线性注意力的研发过 程。钟怡然在 MiniMax 负责大模型网络架构设计,目前正开发多模态深度推理模型。 钟怡然曾担任上海人工智能实验室青年科学家,是新架构探索组的 PI(项目负责人);他在澳洲国立大 学获得博士学位,师从李宏东教授和 Richard Hartley 院士。他和他的团队已在一些国际顶级学术会议和 期刊上发表了 20 余篇关于模型新架构的论文,覆盖了当前多类非 Transformer 架构,如线性注意力机制 (线性注意力)、长卷积(Long Convolution)和线性循环网络(Linear RNN)。 在 2021 年,线性注意力还是一个 "看起来很美好的泡泡",怡然和团队就开始探索线性架构的实现。 嘉宾 丨 钟怡然 整理 丨 刘倩 程曼祺 上期播客中, 我们与清华的两位博士生,肖朝军和傅 ...
[GF Financial Engineering] Neural Ordinary Differential Equations and Liquid Neural Networks
广发金融工程研究· 2025-03-06 00:16
GF Securities Chief Financial Engineering Analyst: An Ningning anningning@gf.com.cn; GF Securities Senior Financial Engineering Analyst: Chen Yuanwen chenyuanwen@gf.com.cn; Contact: GF Securities Financial Engineering Researcher Lin Tao gflintao@gf.com.cn. GF Securities Financial Engineering team of An Ningning and Chen Yuanwen.

Abstract

Neural ordinary differential equations: At NeurIPS 2018, a top international machine learning conference, the paper "Neural Ordinary Differential Equations" by Chen et al. won the conference's Best Paper Award. In short, a typical ResNet consists of multiple residual blocks of the form h_{t+1} = f(h_t, θ_t) + h_t. In conventional training, the network parameters that best fit the training data must be computed for each residual block separately. The paper proposes that, in the limit where the residual blocks of a ResNet are stacked infinitely deep, the parameters of every residual block can instead be obtained by solving a single ordinary differential equation.

Liquid neural networks: Building on this work, Ramin Hasani and colleagues at MIT innovatively describe the evolution of a recurrent neural network's hidden state in the form of an ordinary differential equation, proposing a class of models known as liquid neural networks; these results were published in top international journals such as Nature Machine Intelligence. Such models ...
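The link between residual blocks and ODEs can be made concrete with a small numerical sketch, assuming a toy vector field in place of the paper's learned network and a plain Euler integrator in place of its adaptive solver and adjoint-based training:

```python
import numpy as np

def f(h, t, theta):
    # toy vector field; in a real Neural ODE this is a small neural network
    W, b = theta
    return np.tanh(W @ h + b)

def resnet_like(h0, theta, n_blocks):
    """Stacked residual updates: h_{t+1} = h_t + f(h_t, t, theta)."""
    h = h0
    for t in range(n_blocks):
        h = h + f(h, t, theta)
    return h

def neural_ode_euler(h0, theta, t0=0.0, t1=1.0, steps=100):
    """Infinite-depth limit: dh/dt = f(h, t, theta), integrated with Euler steps.
    (The original paper uses adaptive ODE solvers and the adjoint method.)"""
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        h = h + dt * f(h, t0 + i * dt, theta)
    return h

rng = np.random.default_rng(0)
theta = (0.1 * rng.standard_normal((8, 8)), np.zeros(8))
h0 = rng.standard_normal(8)
print(resnet_like(h0, theta, 4)[:3])
print(neural_ode_euler(h0, theta)[:3])
```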
The Double-Edged Sword of AI Chips
半导体行业观察· 2025-02-28 03:08
Core Viewpoint
The article discusses the transformative shift from traditional software programming to AI software modeling, highlighting the implications for processing hardware and the development of dedicated AI accelerators.

Group 1: Traditional Software Programming
- Traditional software programming is based on writing explicit instructions to complete specific tasks, making it suitable for predictable and reliable scenarios [2]
- As tasks become more complex, the size and complexity of codebases increase, requiring manual updates by programmers, which limits dynamic adaptability [2]

Group 2: AI Software Modeling
- AI software modeling represents a fundamental shift in problem-solving approaches, allowing systems to learn patterns from data through iterative training [3]
- AI utilizes probabilistic reasoning to make predictions and decisions, enabling it to handle uncertainty and adapt to changes [3]
- The complexity of AI systems lies in the architecture and scale of the models rather than the amount of code written, with advanced models containing hundreds of billions to trillions of parameters [3]

Group 3: Impact on Processing Hardware
- The primary architecture for executing software programs has been the CPU, which processes instructions sequentially, limiting its ability to handle the parallelism required for AI models [4]
- Modern CPUs have adopted multi-core and multi-threaded architectures to improve performance, but they still lack the massive parallelism needed for AI workloads [4][5]

Group 4: AI Accelerators
- GPUs have become the backbone of AI workloads due to their unparalleled parallel computing capabilities, offering performance levels in the range of petaflops [6]
- However, GPUs face efficiency bottlenecks during inference, particularly with large language models (LLMs), where theoretical peak performance may not be achieved [6][7]
- The energy demands of AI data centers pose sustainability challenges, prompting the industry to seek more efficient alternatives, such as dedicated AI accelerators [7]

Group 5: Key Attributes of AI Accelerators
- AI processors require unique attributes not found in traditional CPUs, with batch size and token throughput being critical for performance [8]
- Larger batch sizes can improve throughput but may lead to increased latency, posing challenges for real-time applications [12]

Group 6: Overcoming Hardware Challenges
- The main bottleneck for AI accelerators is memory bandwidth, often referred to as the memory wall, which affects performance when processing large batches [19]
- Innovations in memory architecture, such as high bandwidth memory (HBM), can help alleviate memory access delays and improve overall efficiency [21]
- Dedicated hardware accelerators designed for LLM workloads can significantly enhance performance by optimizing data flow and minimizing unnecessary data movement [22]

Group 7: Software Optimization
- Software optimization plays a crucial role in leveraging hardware capabilities, with highly optimized kernels for LLM operations improving performance [23]
- Techniques like gradient checkpointing and pipeline parallelism can reduce memory usage and enhance throughput, as shown in the sketch below [23][24]
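As a concrete example of the software-side techniques mentioned above, here is a minimal sketch of gradient checkpointing in PyTorch, assuming a toy stack of residual MLP blocks (the model, sizes, and block design are illustrative, not any particular accelerator vendor's stack): activations inside each checkpointed block are recomputed during the backward pass instead of being stored, trading extra compute for lower memory use.

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    """Toy residual MLP block standing in for a transformer layer."""
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):
        return x + self.net(x)

class Model(nn.Module):
    def __init__(self, d=512, n_blocks=12, use_ckpt=True):
        super().__init__()
        self.blocks = nn.ModuleList(Block(d) for _ in range(n_blocks))
        self.use_ckpt = use_ckpt

    def forward(self, x):
        for blk in self.blocks:
            if self.use_ckpt and self.training:
                # activations inside blk are not stored; they are recomputed
                # during the backward pass
                x = checkpoint(blk, x, use_reentrant=False)
            else:
                x = blk(x)
        return x

model = Model().train()
x = torch.randn(8, 1024, 512)   # batch of 8 sequences, 1024 tokens each
loss = model(x).pow(2).mean()
loss.backward()                 # recomputation of block activations happens here
```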