Who would have guessed: university students have started using AI to raise pigs
量子位· 2025-09-11 10:19
Core Insights
- The article highlights the increasing integration of AI tools into the daily lives of university students in China, showcasing their diverse applications in academic and personal contexts [1][3][5].

Group 1: AI Usage Among University Students
- 70% of university students use the Quark app, with high usage rates observed not only in major cities but also in key provinces [3][4].
- The five most popular AI applications among students are AI search, AI question answering, AI scanning, AI writing, and AI summarization [4].
- 28.8% of university students use Quark to generate campaign PPTs for class-committee elections; in early September there were 420,000 PPT requests related to student elections and club interviews [4][7].

Group 2: Deep Interaction with AI
- AI penetration among Quark's university users has reached 80%, indicating a shift toward more complex and professional inquiries [5].
- Medical students frequently use Quark to search complex professional questions, with over 50% of them engaging with the app for academic purposes [5][6].
- The most-searched academic topics are in medicine, economics, and social issues [6].

Group 3: Diverse Applications Beyond Academics
- Students also use AI for personal matters such as fortune-telling and dream interpretation [8].
- Examples include a veterinary student using AI to assess pig-breeding practices and an enology student identifying grape varieties through image recognition [8].

Group 4: AI in the College Entrance Examination Context
- Quark's AI features for college-entrance-examination planning are particularly valued; one freshman shared how she used the app to generate her application strategy [10][11].
- Concerns were raised that AI assistance could make college applications uniform, but Quark's product manager clarified that the tool requires personalized input from users [12][13].
- Quark has shown adaptability by correcting earlier misinformation about college programs, demonstrating a commitment to product optimization [14][16].
DeepDiver-V2 is here: Huawei open-sources a native multi-agent system whose "team-battle" deep research delivers stunning results
量子位· 2025-09-11 10:19
允中 from 凹非寺
量子位 | 公众号 QbitAI

Let agents team up for deep research, and the results are off the charts. Huawei has just released DeepDiver-V2, a native multi-agent system.

It adopts a "team combat" mode: a single Planner handles task decomposition, task dispatch, progress review, and acceptance of results, while multiple specialized Executors process subtasks in parallel, exchanging information efficiently through a shared file system.

Unlike multi-agent systems realized purely through an inference framework, DeepDiver-V2 is trained in multi-agent form, so the model natively has stronger role-playing and collaborative-reasoning abilities. The system not only achieves breakthroughs on complex knowledge-QA tasks, it can also generate high-quality deep research reports tens of thousands of characters long, and it shines on multiple leaderboards.

DeepDiver-V2 is built on Huawei's openPangu Agent and specializes in AI deep search and long-form research-report generation. It is now open source.

Benchmark performance: better than same-size competitors. Numbers speak loudest: DeepDiver-V2-7B and DeepDiver-V2-38B both perform strongly on multiple authoritative benchmarks.

For long-form report generation, DeepDiver-V2 introduces WebPuzzle-Writing, a new benchmark for deep-research-report generation that gives each research query a detailed research scope instead of open-ended generation ...
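The Planner/Executor division of labor described above, with a shared file system as the exchange medium, can be sketched minimally in Python. This is an illustrative toy, not DeepDiver-V2's actual interface: the task layout, file names, and decomposition rule are all assumptions, with threads standing in for agent processes.

```python
import json
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Hypothetical sketch: a Planner decomposes a research query into subtasks,
# Executors run in parallel, and agents exchange results through a shared
# file system (here, a temp directory). Names are illustrative only.

def planner_decompose(query: str) -> list[dict]:
    """Planner: split a query into independent subtasks (toy rule)."""
    return [{"id": i, "question": f"{query} -- aspect {i}"} for i in range(3)]

def executor_run(shared: Path, task: dict) -> None:
    """Executor: process one subtask and write the result to shared storage."""
    result = {"id": task["id"], "answer": f"findings for {task['question']}"}
    (shared / f"task_{task['id']}.json").write_text(json.dumps(result))

def planner_collect(shared: Path) -> list[dict]:
    """Planner: review progress and accept results from the shared directory."""
    return sorted(
        (json.loads(p.read_text()) for p in shared.glob("task_*.json")),
        key=lambda r: r["id"],
    )

with tempfile.TemporaryDirectory() as d:
    shared = Path(d)
    tasks = planner_decompose("deep research query")
    with ThreadPoolExecutor() as pool:
        list(pool.map(lambda t: executor_run(shared, t), tasks))
    report = planner_collect(shared)

print(len(report))  # one accepted result per subtask
```

The point of the file-based exchange is that Executors never talk to each other directly; the Planner alone dispatches work and accepts results, mirroring the division of labor the article describes.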
How does a central SOE build a super intelligent agent? In conversation with China Telecom's Tianyi AI: a self-developed model as the foundation, autonomous planning as a must, and adaptability across thousands of industries
量子位· 2025-09-11 10:19
Core Viewpoint
- The article discusses the launch of China Telecom Tianyi AI's "Star Super Intelligent Agent," which ranks first among state-owned enterprises on the DBC Deben Consulting 2025 enterprise-level AI agent list, highlighting its capabilities and market potential [1][4].

Group 1: Overview of the Star Super Intelligent Agent
- The agent is built on China Telecom's self-developed "Star Big Model" technology and designed for industrial intelligent upgrades [2][8].
- It supports multimodal understanding across voice, vision, and text, and can generate images and videos from text [11][12].
- It emphasizes enhanced complex reasoning and memory, making it suitable for real-world applications such as customer service and financial operations [13][14].

Group 2: Market Trends and Development
- Interest in intelligent agents is surging, driven by government initiatives promoting AI integration across industries [4][43].
- Discussions continue about the practical effectiveness of agents in real-world scenarios, focusing on their ability to automate complex tasks [5][6].

Group 3: Technical Insights
- The agent framework is highly customizable, letting businesses integrate it into their existing systems effectively [16][17].
- It operates through a four-module architecture: perception and understanding, cognition and decision-making, memory and knowledge, and action and execution, enabling it to perform tasks the way humans do [27][29].

Group 4: Industry Applications and Case Studies
- Successful implementations include an intelligent customer-service system that automates complaint processing, demonstrating the agent's ability to integrate with existing business systems [36][54].
- Sectors with high IT integration, such as customer service and marketing, are prime candidates for rapid deployment of intelligent agents [52].

Group 5: Competitive Landscape
- The market includes large-model vendors, tech giants, startups, and state-owned enterprises, each focusing on different aspects of intelligent-agent development [53][54].
- China Telecom's unique advantage lies in its extensive local service teams and existing digital infrastructure, enabling scalable and effective deployment across industries [54][56].
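The four-module loop named in Group 3 (perception and understanding, cognition and decision-making, memory and knowledge, action and execution) can be sketched as a minimal agent skeleton. The class, method names, and the toy escalation rule below are invented for illustration; they are not Tianyi AI's actual API.

```python
# Illustrative skeleton of a four-module agent loop: perceive -> decide
# (consulting memory) -> remember -> act. Hypothetical structure only.
from dataclasses import dataclass, field

@dataclass
class Agent:
    memory: list = field(default_factory=list)   # memory & knowledge

    def perceive(self, raw: str) -> str:         # perception & understanding
        return raw.strip().lower()

    def decide(self, observation: str) -> str:   # cognition & decision-making
        # Toy rule: escalate a complaint the agent has already seen.
        seen = sum(1 for m in self.memory if m == observation)
        return "escalate" if seen >= 1 else "auto_reply"

    def act(self, decision: str) -> str:         # action & execution
        return f"executed:{decision}"

    def step(self, raw: str) -> str:
        obs = self.perceive(raw)
        decision = self.decide(obs)
        self.memory.append(obs)
        return self.act(decision)

agent = Agent()
print(agent.step("Billing error"))   # first occurrence: auto_reply
print(agent.step("billing error"))   # repeat: escalate
```

The skeleton shows why the memory module matters in the complaint-processing case study: the decision step changes behavior only because earlier perceptions were stored.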
Cracking AI's overthinking problem: new Meituan research activates efficient LRM reasoning through "verifiable" process rewards
量子位· 2025-09-11 10:19
Contributed by the Meituan Search & Recommendation Agentic System X (AsX) team
量子位 | 公众号 QbitAI

Through the simple yet effective RLVR paradigm, LRMs acquire strong CoT reasoning ability, but the lengthy outputs that come with it not only add significant inference overhead but also hurt serving throughput. This patience-draining phenomenon is known as the "overthinking" problem.

To address it, a research team from Meituan and other institutions proposes a verifiable process reward mechanism (VSRM) that encourages "effective steps" in the CoT and penalizes "ineffective steps," achieving efficient reasoning while preserving performance as much as possible.

Experiments on math tasks show that on several common benchmarks, VSRM-based post-training substantially reduces output length for models of different scales, and in some cases even improves performance.

The essence of the overthinking problem

Prior work summarized the phenomenon as follows: for a given problem, the model tends to produce multiple different solutions, especially for simple problems. Building on this observation, the authors conducted an in-depth case study of responses produced by existing LRMs on MATH-500.

| Find the number of integer values of k in the closed interval [-500,500] for whic ...
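The idea of rewarding "effective steps" and penalizing "ineffective steps" can be illustrated with a toy step scorer: a step earns a bonus if it moves a verifiable quantity closer to the known answer, and a penalty otherwise. This is an invented sketch under that assumption, not Meituan's actual VSRM formulation, which the excerpt does not spell out.

```python
# Toy verifiable step-level reward: a step is "effective" if it reduces the
# verifiable error (distance of an intermediate estimate to the known
# answer); redundant or ineffective steps are penalized. Invented sketch,
# not the paper's actual VSRM.

def step_rewards(intermediate_values, target, bonus=1.0, penalty=-1.0):
    rewards = []
    prev_err = abs(intermediate_values[0] - target)
    for v in intermediate_values[1:]:
        err = abs(v - target)
        rewards.append(bonus if err < prev_err else penalty)
        prev_err = err
    return rewards

# A chain of thought whose intermediate estimates approach 10, then stall:
# the trailing repeated steps (classic overthinking) are penalized.
trace = [0, 4, 7, 10, 10, 10]
print(step_rewards(trace, target=10))  # [1.0, 1.0, 1.0, -1.0, -1.0]
```

Under a reward like this, post-training pressure naturally shortens the CoT: once the verifiable error stops improving, every extra step costs reward.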
A domestic brain-inspired large model runs on domestic MetaX GPUs: over 100x faster long-sequence inference, matching mainstream models with just 2% of the data
量子位· 2025-09-11 10:19
Contributed by the SpikingBrain team
量子位 | 公众号 QbitAI

How can the enormous overhead of ultra-long-sequence inference be reduced? SpikingBrain-1.0 (瞬悉), a brain-inspired spiking large model released by Li Guoqi and Xu Bo's team at the Institute of Automation, Chinese Academy of Sciences, offers a new approach.

Drawing on the brain's information-processing mechanisms, SpikingBrain has linear/near-linear complexity and a marked speed advantage on ultra-long sequences. On GPU, its time-to-first-token (TTFT) at 1M context length is 26.5x faster than mainstream large models, and at 4M length the speedup is conservatively estimated at over 100x. On mobile-phone CPUs, decoding at 64k/128k/256k lengths is 4.04x/7.52x/15.39x faster than a same-size Llama 3.2 model.

SpikingBrain is adapted to an efficient training and inference framework, a Triton operator library, model-parallel strategies, and cluster communication primitives for MetaX (沐曦) domestic GPU clusters, demonstrating the feasibility of building a domestically controlled ecosystem for new non-Transformer large-model architectures. SpikingBrain-1.0 is an initial attempt in this direction.

A new perspective for the large-model era

The human brain is the only known general-intelligence system, containing roughly 100 billion neurons and about 1 quadrillion synapses, with rich neuron types each having rich internal structure, yet it consumes only about 20 W ...
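The excerpt does not reproduce SpikingBrain's architecture, but the event-driven spiking computation it borrows from the brain is classically modeled by a leaky integrate-and-fire (LIF) neuron, which emits sparse binary spikes instead of dense activations. The sketch below uses generic textbook dynamics and parameters, not the model's actual neuron.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward rest, integrates input current, and emits a spike (then
# resets) when it crosses threshold. Generic textbook dynamics, not
# SpikingBrain's actual formulation.

def lif(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current      # leak + integrate
        if v >= threshold:          # fire
            spikes.append(1)
            v = 0.0                 # reset
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input produces sparse, periodic spikes:
print(lif([0.4] * 10))  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The sparsity is the point: most time steps carry a zero, which is what makes event-driven hardware and low-power operation attractive for this family of models.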
The 2025 AI Annual Awards are open: five award categories across three dimensions, in search of the navigators of the AI+ era
量子位· 2025-09-11 07:43
Organizing Committee from 凹非寺
量子位 | 公众号 QbitAI

To let more practitioners feel the leap of the intelligence wave, and to offer applause and encouragement to fellow travelers, we are officially opening registration for the "2025 AI Annual Awards."

This is the 8th year of 量子位's AI annual awards. Over eight years we have witnessed technological breakthroughs and deployments, industrial integration and reshaping, and wave after wave of companies, people, and products driving the era forward.

In an era where AI is redefining everything, intelligent technology is no longer a standalone tool but a driving force for the co-evolution of industry and society. Through this annual selection we hope to discover and honor the explorers and practitioners who truly lead change and push boundaries.

The awards span three dimensions, companies, products, and people, with five award categories. Companies are warmly invited to apply. Let us witness the stars of the year together and light the way forward.

The five awards:
- 2025 AI Annual Leading Company
- 2025 AI Annual Promising Startup
- 2025 AI Annual Outstanding Product
- 2025 AI Annual Outstanding Solution
- 2025 AI Annual Focus Figure

Detailed criteria and how to register: the Leading Company award will select the most comprehensively capable companies in China's AI field. Eligibility and criteria follow. The Promising Startup award focuses on China's ...
Kimi's open source strikes again: middleware that updates a trillion parameters in 20 seconds
量子位· 2025-09-11 05:19
Core Viewpoint
- The article introduces "checkpoint-engine," middleware that lets the trillion-parameter Kimi K2 model update its weights in roughly 20 seconds across thousands of GPUs, a significant advance in the efficiency of large-language-model training and inference [6][7].

Group 1: Middleware Functionality
- checkpoint-engine is designed to update model weights during the inference process of large language models [6].
- It supports both simultaneous broadcasting of updated weights to all nodes and point-to-point dynamic updates [2][24].
- Parameter updates are pipelined, updating one parameter at a time to minimize memory usage [19][20].

Group 2: System Architecture
- Kimi K2 employs a hybrid co-location architecture in which the training and inference engines are deployed on the same set of nodes [8].
- In each reinforcement-learning iteration, a centralized controller generates new training data with the inference engine, then instructs the training engine to update parameters based on this data [9].
- Each engine is deeply optimized for high throughput [10].

Group 3: Parameter Update Process
- The training engine's parameters are offloaded to DRAM, allowing quick activation of the training engine with minimal data transfer [12].
- The checkpoint engine first obtains local parameter copies from the training engine, then broadcasts the complete parameter set to all checkpoint nodes [16][17].
- The inference engine retrieves only the parameter slices it needs from the checkpoint engine, streamlining the update process [18].

Group 4: Performance Optimization
- The design trades some data-transfer efficiency for a simpler system architecture, reducing the complexity of maintenance and testing [25][26].
- During training-engine startup, nodes selectively read parameters from disk to minimize expensive disk I/O operations [28].
- The checkpoint engine can restart independently after failures, enhancing system resilience [33].
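The pipelined update described in Groups 1 and 3, broadcasting one parameter at a time so peak in-flight memory stays at a single tensor rather than the full checkpoint, can be mimicked with an in-process toy. The node/dictionary structure below is an assumption for illustration, not checkpoint-engine's actual code.

```python
# Toy simulation of a pipelined weight update: each parameter is broadcast
# from a source copy to every "node" in turn, and its buffer is freed
# before the next parameter starts, capping in-flight memory at one
# tensor. Illustrative structure only, not checkpoint-engine itself.

def pipelined_update(source_weights, nodes):
    peak_in_flight = 0
    for name, tensor in source_weights.items():   # one parameter at a time
        in_flight = [tensor]                      # the only buffered copy
        peak_in_flight = max(peak_in_flight, len(in_flight))
        for node in nodes:                        # broadcast to all nodes
            node[name] = in_flight[0]
        in_flight.clear()                         # free before next param
    return peak_in_flight

source = {"embed": [1, 2], "attn": [3, 4], "mlp": [5, 6]}
nodes = [{} for _ in range(4)]
peak = pipelined_update(source, nodes)
print(peak, all(n == source for n in nodes))  # 1 True
```

The invariant the toy demonstrates is the one the article credits for the low memory footprint: completeness (every node ends up with the full weight set) without ever buffering more than one parameter in flight.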
Oracle's 81-year-old founder vaults to world's richest; no wonder Musk can't stop thinking about OpenAI
量子位· 2025-09-11 05:19
西风 from 凹非寺
量子位 | 公众号 QbitAI

No one saw this coming. Just after the US market opened yesterday, shares of traditional database company Oracle surged as much as 43%; despite pulling back intraday, they still closed up nearly 36%, breaking multiple US-stock gain records.

The rally briefly made Oracle founder Larry Ellison, now 81, the world's richest person. Ellison's net worth jumped by $100 billion overnight to $393 billion, (briefly) surpassing Musk's $385 billion.

More interestingly, what drove Oracle's wild surge was not a breakthrough in its traditionally dominant database business. The credit goes, once again, to the red-hot AI trend, and specifically to OpenAI, the company Musk has loved, hated, and is still suing.

Oracle disclosed a $300 billion compute-purchase agreement with OpenAI, one of the largest cloud-computing contracts in the world. People familiar with the matter say the deal takes effect in 2027; OpenAI plans to purchase in batches over roughly five years, with annual payments reaching as high as $60 billion.

Part of the "Stargate" plan

In fact, Oracle hinted at this deal in a June filing, saying it had reached a cloud-services agreement that would bring in more than $30 billion per year starting in 2027. For both companies, this contract is a high-risk gamble ...
What exactly did Fei-Fei Li say a year ago, and why is it trending again?
量子位· 2025-09-11 01:58
Core Viewpoint
- The limitations of large language models (LLMs) in understanding the physical world are highlighted: language is a generated signal dependent on human input, while the physical world is an objective reality governed by its own laws [1][5][19].

Group 1: Language Models and Their Limitations
- Language models operate on a one-dimensional representation of discrete tokens, making them adept at handling written text but inadequate for representing the three-dimensional nature of the physical world [12][14].
- The challenge of spatial intelligence lies in extracting, representing, and generating information from the real world, which is fundamentally different from language processing [17][19].
- Experiments show that LLMs struggle with physical tasks, performing poorly compared with human children and specialized robots [22][28].

Group 2: Experimental Findings
- In a test using the Animal-AI environment, LLMs completed only simple tasks, failing at more complex ones even with additional teaching examples [26][27].
- A tool named ABench-Physics was developed to assess LLMs' physical reasoning abilities, revealing that even the best models achieved only 43% accuracy on basic physics problems [30][34].
- Visual tasks further exposed the limitations: human accuracy was 95.7%, versus at most 51% for the models [37][41].

Group 3: Philosophical and Future Considerations
- The discussion includes whether language can sometimes describe reality better than perception, and whether AI might develop its own language for understanding the physical world [46][47].
- The ongoing development of models grounded in physical and multimodal understanding signals a shift toward addressing these limitations [44].
Valued at 84 billion yuan, they just released their first AI result
量子位· 2025-09-11 01:58
Core Insights
- Thinking Machines, valued at $12 billion, has released its first research blog, on overcoming nondeterminism in large-language-model (LLM) inference [1][51].
- The research attributes the reproducibility problem in LLM outputs to batch non-invariance [3][12].

Group 1: Research Focus
- The post, "Defeating Nondeterminism in LLM Inference," addresses why LLM inference results are often non-reproducible [3][8].
- The root cause identified is batch non-invariance: the output for a single request is influenced by how many requests share its batch [14][15].

Group 2: Technical Findings
- Floating-point non-associativity combined with concurrent execution leads to differing results in LLM inference, but this explanation alone is incomplete [9][10].
- The deeper issue is the lack of batch invariance: dynamic batch-size adjustments during deployment change the computation order of key operations [15][16].

Group 3: Proposed Solutions
- To achieve batch invariance, the research fixes the reduction order in operations such as RMSNorm and matrix multiplication, regardless of batch size [18][19].
- The proposed method compiles a unified kernel configuration for all input shapes, avoiding switches of parallel strategy with batch size, even at a performance cost of about 20% [22][21].

Group 4: Experimental Validation
- Three types of experiments validated the findings: inference-determinism verification, performance verification, and a real online-policy reinforcement-learning application [25].
- With batch-invariant kernels, 1000 runs produced identical outputs, achieving deterministic inference, while non-invariant kernels produced 80 different results [27][28].
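The floating-point non-associativity underlying the batch-invariance problem is easy to demonstrate: the same logical sum, reduced in two different orders (as different batch sizes induce inside a kernel), gives two different results. This is a generic numerical illustration, not Thinking Machines' kernel code.

```python
# Floating-point addition is not associative, so the reduction order
# matters: reordering the same four addends changes the result.
vals = [1e16, 1.0, -1e16, 1.0]

left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]   # order A
pairwise = (vals[0] + vals[2]) + (vals[1] + vals[3])        # order B

print(left_to_right)  # 1.0  (the +1.0 was absorbed into 1e16's rounding)
print(pairwise)       # 2.0
print(left_to_right == pairwise)  # False
```

Fixing one reduction order for every batch size, as the proposed batch-invariant kernels do, removes exactly this source of run-to-run divergence, at the cost of giving up the fastest order for some shapes.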
Group 5: Company Background
- Thinking Machines was co-founded by Mira Murati, former CTO of OpenAI, and includes a team of notable figures from the AI industry, primarily from OpenAI [36][38][46].
- The company recently closed a $2 billion seed round, a record for AI funding, and is now valued at $12 billion despite not yet having a product [51][50].