Quantum Superposition
Quantum Computing, the "Nobel Prize Winner": How Far Along Is Its Real-World Deployment?
Hu Xiu · 2025-10-13 07:37
A friend recently asked a key question: by the time most US stocks catch our attention they have already surged, so how do you find the next 100-bagger? Opinions differ, but I believe the best approach is to build a forward-looking view of key industry trends, position early, and buy while no one is paying attention.

Just recently, the 2025 Nobel Prize in Physics was announced, and quantum computing emerged as the biggest winner. In this issue we take a close look at quantum computing, using AlphaEngine for the analysis as usual. Quantum computing is a vast and fascinating topic, so this is the first part of a two-part series, focused on explaining the basic principles of quantum computing, the latest technical breakthroughs, and a list of core investment targets. The second part will detail the six mainstream technical routes in quantum computing today and the latest progress of leading companies in the public and private markets.

Three stages of quantum computing development: from NISQ toward FTQC

The quantum computing industry is at a critical inflection point, moving from "scientific fantasy" toward industrial deployment. The core driver of this shift is a substantive breakthrough in quantum error correction (QEC). Quantum computing is currently in the Noisy Intermediate-Scale Quantum (NISQ) era: each machine contains tens to thousands of physical qubits, but these qubits are easily disturbed by environmental noise, which limits computational fidelity and rules out large-scale algorithms that demand high precision. The industry has therefore focused on two paths: commercializing special-purpose machines and deploying hybrid algorithms. Represented by D-Wave's quantum annealers, ...
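To make the QEC point above a little more concrete, here is a minimal, purely classical Monte Carlo sketch of the simplest error-correcting idea, the 3-qubit bit-flip repetition code: one logical bit is stored redundantly on three physical qubits and recovered by majority vote, so any single flip is corrected while two or more flips cause a logical error. This is an illustrative toy only; the function name, parameters, and numbers are not taken from the article and say nothing about any specific hardware it mentions.

```python
import random

def logical_error_rate(p_physical: float, trials: int = 100_000) -> float:
    """Estimate the logical error rate of the 3-qubit bit-flip repetition code.

    Each of the three physical qubits flips independently with probability
    p_physical; majority voting recovers the logical bit unless two or more
    qubits flip (analytically: 3*p^2*(1-p) + p^3).
    """
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < p_physical for _ in range(3))
        if flips >= 2:  # majority vote picks the wrong value
            errors += 1
    return errors / trials

if __name__ == "__main__":
    for p in (0.1, 0.01, 0.001):
        print(f"physical error rate {p}: logical error rate ~ {logical_error_rate(p):.6f}")
```

The point of the sketch is the scaling: once the physical error rate is small, adding redundancy pushes the logical error rate down roughly quadratically, which is why progress in QEC is treated as the gate from the NISQ era toward fault-tolerant quantum computing (FTQC).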
Tian Yuandong: Continuous chains of thought are more efficient, encoding multiple paths simultaneously for "superposition"-style parallel search
量子位 (QbitAI) · 2025-06-19 06:25
Core Viewpoint
- The article covers a new research result from a team led by AI researcher Tian Yuandong: a continuous chain-of-thought model that parallels quantum superposition and handles complex tasks more efficiently than traditional discrete chains of thought [2][4].

Group 1: Research Findings
- Traditional large language models (LLMs) reason over discrete tokens, which is inefficient for complex tasks; directed-graph reachability, for example, can require O(n^2) decoding steps, and the search often gets stuck in local optima [4].
- Recent studies show that reasoning with continuous hidden vectors significantly improves performance, but a theoretical explanation had been lacking [5].
- The team proved that a two-layer Transformer running D steps of continuous chain of thought (CoT) can solve directed-graph reachability, outperforming discrete-CoT models that need O(n^2) decoding steps [7].

Group 2: Methodology
- A continuous thought can encode multiple candidate graph paths simultaneously, behaving like breadth-first search (BFS), whereas discrete chains of thought behave more like depth-first search (DFS); a toy sketch of this superposition idea follows after these summaries [8].
- A designed attention-selector mechanism lets the model attend to specific positions conditioned on the current token, ensuring that the relevant information is extracted [11][12].
- The first Transformer layer organizes the edge information, and the second layer carries out the parallel exploration of all possible paths [21][22].

Group 3: Experimental Results
- Experiments used a subset of the ProsQA dataset whose problems require 3-4 reasoning steps, with each node represented by a dedicated token [26].
- The COCONUT model, built on a two-layer Transformer, reached nearly 100% accuracy on these ProsQA problems, versus 83% for a 12-layer discrete-CoT model and roughly 75% for the baseline [27][28].
- Analysis of attention patterns and continuous-thought representations further supported the hypothesized superposition-style search behavior [30].
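To make the "superposition of paths" picture above more tangible, here is a minimal NumPy sketch; it is not the paper's actual two-layer Transformer construction. The idea it illustrates is that a single continuous thought vector can hold a weighted mixture of every node reached so far, so one update step pushes the whole BFS frontier forward at once. The toy graph, the orthonormal node embeddings, the 0.1 read-out threshold, and the `continuous_cot_step` update rule are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy directed graph on 6 nodes: adjacency[i, j] = 1 means an edge i -> j.
n = 6
adjacency = np.zeros((n, n))
for src, dst in [(0, 1), (0, 2), (1, 3), (2, 4), (4, 5)]:
    adjacency[src, dst] = 1.0

# Give every node an orthonormal embedding so overlaps can be read out exactly.
dim = 64
node_emb = np.linalg.qr(rng.standard_normal((dim, n)))[0].T  # shape (n, dim)

def continuous_cot_step(thought: np.ndarray) -> np.ndarray:
    """One continuous-thought update: every node currently present in the
    superposition pushes weight along all of its outgoing edges at once,
    already-visited nodes are kept, and the vector is renormalized."""
    weights = node_emb @ thought  # how strongly each node is present in the thought
    weights = np.clip(weights + adjacency.T @ weights, 0.0, None)
    new_thought = weights @ node_emb
    return new_thought / np.linalg.norm(new_thought)

# Start the search at node 0; each step expands a whole BFS frontier in parallel.
thought = node_emb[0].copy()
for step in range(1, 4):
    thought = continuous_cot_step(thought)
    present = np.flatnonzero(node_emb @ thought > 0.1)
    print(f"step {step}: nodes encoded in the thought vector -> {present.tolist()}")
```

Running this prints the reachable set growing one layer per step ({0, 1, 2}, then {0, 1, 2, 3, 4}, then all six nodes). A discrete chain of thought would have to commit to a single node per token and backtrack on wrong guesses, which is the DFS-versus-BFS gap the summary describes; here D update steps suffice to decide reachability within distance D.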