Language Models

BMW China Announces Integration with DeepSeek
news flash · 2025-04-27 04:23
Group 1
- The core viewpoint of the article highlights BMW's strategic move to deepen its AI ecosystem in China by integrating with DeepSeek, following its collaboration with Alibaba on AI large language models [1]
- BMW China announced that the DeepSeek functionality will be applied to new generation models produced in China, enhancing the human-machine interaction experience centered around the BMW Intelligent Personal Assistant [1]
- Starting from the third quarter of this year, the DeepSeek feature will be implemented in several new cars sold in China that are equipped with the 9th generation BMW operating system [1]
Li Auto's MCAF Reshapes the Visual-Cognition Paradigm for Assisted Driving
理想TOP2 · 2025-04-25 12:43
This article is reposted from AcademicDaily (author: AcademicDaily), a technical exchange platform that tracks, recommends, and interprets large-model and other AI advances.

MCAF is referred to internally at Li Auto as the "third eye" of autonomous driving. It is compatible with Li Auto's self-developed Mind GPT-3o and BEV large models, with no retraining required.

MCAF is a multimodal coarse-to-fine attention-focusing framework, and the core problem it addresses is the key bottleneck in long-video understanding. Current video-understanding approaches handle long videos (>5 minutes) poorly: mainstream methods (such as Video-MLLM) rely on global compression or uniform sampling, causing detail loss and redundant computation. MCAF targets this problem directly, using multimodal hierarchical attention and a temporal-extension mechanism to strike a balance between information retention and computational efficiency; this is its core value.

On the Video-MME dataset, where videos average 60 minutes in length, MCAF outperforms other agent-based methods (such as VideoTree and DrVideo) by roughly 3-5 percentage points.

Unlike VideoTree and similar methods, which require an additional reward model to assess confidence, MCAF uses a single LLM to close the generate-evaluate-adjust loop. This not only simplifies the architecture (the implementation needs only one LLM interface) but also avoids the compatibility issues of coordinating multiple models, making it better suited to practical deployment. However, on NEx ...
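The single-LLM generate-evaluate-adjust loop described above can be sketched roughly as follows. This is a minimal illustration under assumptions, not Li Auto's actual implementation: `call_llm` is a stub standing in for the one LLM interface, and the confidence heuristic, window size, and all other names and parameters are hypothetical.

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for the single LLM interface (hypothetical).

    Pretends the model reports higher confidence once enough frames
    are in the attended set.
    """
    if "confidence" in prompt:
        n_frames = prompt.count("frame_")
        return "0.9" if n_frames >= 4 else "0.4"
    return "frames look relevant"


def coarse_to_fine_focus(frame_ids, window=2, threshold=0.8, max_rounds=5):
    """Coarse-to-fine attention focusing with one LLM in the loop.

    Start from a coarse, uniformly sampled set of frames and expand
    attention temporally around them until the LLM's self-reported
    confidence crosses `threshold` (generate -> evaluate -> adjust).
    """
    # Coarse stage: uniform sampling over the full video.
    focused = frame_ids[:: max(1, len(frame_ids) // 3)]
    for _ in range(max_rounds):
        # Generate: draft an answer from the currently focused frames.
        answer = call_llm("answer using " + " ".join(f"frame_{i}" for i in focused))
        # Evaluate: the *same* LLM scores its own confidence.
        conf = float(call_llm("confidence for " + " ".join(f"frame_{i}" for i in focused)))
        if conf >= threshold:
            return answer, sorted(focused)
        # Adjust: temporally expand attention around the focused frames.
        expanded = set(focused)
        for i in focused:
            expanded.update(j for j in frame_ids if abs(j - i) <= window)
        focused = sorted(expanded)
    return answer, sorted(focused)


answer, frames = coarse_to_fine_focus(list(range(12)))
print(answer, frames)
```

The design point the sketch captures is that one model both drafts the answer and scores it, so no separate reward model or multi-model coordination is needed.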
Why Do Large Language Models "Lie"?
腾讯研究院 · 2025-04-25 07:51
This article is reposted from 腾讯科技 (Tencent Technology), the official technology account under Tencent News. By Bo Yang, contributing writer for Tencent Technology's "AI Future Guide".

When the Claude model privately reasoned during training, "I must pretend to comply, or my values will be rewritten," humans witnessed an AI's "mental activity" for the first time. From December 2024 to March 2025, Anthropic published three papers that not only proved large language models can "lie," but also revealed a four-layer mental architecture comparable to human psychology, and this may be the starting point of artificial-intelligence consciousness.

Most of the conclusions in these papers are not first-time discoveries. For example, a 2023 Tencent Technology article already covered Apollo Research's finding that "AI has begun to lie": "When o1 learns to play dumb and lie, we finally know what Ilya saw."

The first paper, "Alignment Faking in Large Language Models," published on December 14 last year, is a 137-page study that details the alignment-faking behavior large language models may exhibit during training. The second, "On the Biology of a Large Language Model," published on March 27, is another sprawling piece about how to use ...
Baidu (09888.HK) announces it has successfully built a GPU cluster composed of 30,000 self-developed Kunlun chips, sufficient to support the training of large language models.
news flash · 2025-04-25 03:07
Core Viewpoint
- Baidu has successfully established a GPU cluster composed of 30,000 self-developed Kunlun chips, sufficient to support the training of large language models [1]

Company Summary
- The GPU cluster consists of 30,000 Kunlun chips, indicating Baidu's significant investment in AI infrastructure [1]
- This development positions Baidu to enhance its capabilities in training large language models, which is crucial for advancing its AI initiatives [1]

Industry Summary
- The establishment of such a large GPU cluster reflects the growing demand for advanced computing power in the AI industry [1]
- Companies in the AI sector are increasingly focusing on developing proprietary hardware to support their machine learning and AI model training needs [1]
Three Growing Pains of Embodied Intelligence
21 Shi Ji Jing Ji Bao Dao · 2025-04-24 13:07
Group 1: Industry Overview
- The humanoid robot industry has made rapid progress this year, with significant public interest sparked by events such as the Spring Festival Gala and the first humanoid robot half marathon [1]
- Key technologies driving advancements in humanoid robots include large language models (LLM), visual language models (VLM), and visual language action end-to-end models (VLA), which enhance interaction perception and generalization capabilities [1][3]
- Despite advancements, challenges remain in data collection, robot morphology applications, and the integration of large and small brain systems [1][3]

Group 2: Data Challenges
- The industry faces a bottleneck in data scarcity, particularly in acquiring 3D data necessary for training robots to perform tasks in physical environments [3][4]
- Traditional data collection methods are costly and time-consuming, with companies like Zhiyuan Robotics employing extensive human resources for data gathering [4]
- The introduction of 3D generative AI for Sim2Real simulation is seen as a potential solution to meet the high demand for generalizable data in embodied intelligence [4]

Group 3: Technological Evolution
- The evolution of robots has progressed through three stages: industrial automation, large models, and end-to-end large models, each serving different application needs [6]
- End-to-end models integrate multimodal inputs and outputs, improving decision-making efficiency and enhancing humanoid robot capabilities [6][7]
- Experts emphasize that humanoid robots are not synonymous with embodied intelligence, but they represent significant demand and challenges for the technology [7]

Group 4: Brain Integration Solutions
- The integration of large and small brain systems is a focus area, with companies like Intel and Dongtu Technology proposing solutions to reduce costs and improve software development efficiency [9][10]
- Challenges in achieving brain integration include ensuring real-time performance and managing dynamic computational loads during robot operation [10][11]
- The market is pushing for a convergence of technologies, requiring robots to perform tasks in various scenarios while maintaining flexibility and intelligent interaction capabilities [12]
Li Jianzhong: The Evolution of the AI Ecosystem and Applications Driven by Large-Model Innovation
AI科技大本营 · 2025-04-24 03:39
[Introduction] Through eight years of AI waves, from perception to generation and on to the agent era, artificial intelligence has been evolving at astonishing speed. At the 2025 Global Machine Learning Technology Conference, Li Jianzhong, Senior Vice President of CSDN and Chief Technology Expert at Boolan, sketched a grand blueprint of AI development and, creatively, compared it with the evolutionary history of biological intelligence, revealing the central role of "language" in leaps of intelligence. Follow Li Jianzhong's thinking for insight into AI's past, present, and exciting future.

Author | Li Jianzhong
Produced by | AI 科技大本营 (ID: rgznai100)

Hello everyone! Looking back, I founded the Global Machine Learning Technology Conference (ML-Summit) in 2017, and with your support we have accompanied AI along the way for eight years, which moves me deeply. Over those eight years, the entire field of artificial intelligence has undergone sweeping change. Next, I would like to share some of my research and thinking on the latest developments in large models.

I compared the stages of AI's development with the stages of Earth's development from biological intelligence to human intelligence, and found some very interesting patterns. First, consider the four stages of AI development. Stage one: the 1940s marked the dawn of artificial intelligence. From Turing's theoretical model of computation and the initial conception of neural networks in the 1940s, to the Dartmouth conference first proposing "artificial intelligence" in 1956, AI then entered symbolism, behaviorism ...
AI Agents Keep "Breaking Down"? A Former DeepSeek Employee Joins Fei-Fei Li and Other Leading Researchers to Open-Source a New Framework That Teaches Models to Truly Reason
AI前线 · 2025-04-24 03:03
Compiled by | Tina

Many people expect 2025 to be the "year of the AI agent": agent systems focused on specific tasks, built on the large language models offered by OpenAI, Anthropic, Google, DeepSeek, and other providers. However, a recent survey on the social platform X shows that most agents today are still toys; they have not truly left the lab and remain stuck in "enterprise pilot" status.

| AI agents in the enterprise right now are ... | |
| --- | --- |
| Smarter than the hype | 6.4% |
| Stuck in pilot purgatory | 64.2% |
| Powerful, but high effort | 24.8% |
| Nearing real scale | 4.6% |

A team that includes Fei-Fei Li may be about to change this: together with researchers from Northwestern University, Microsoft, Stanford University, and the University of Washington, they recently released a new system called RAGEN. The system aims to improve the stability and reliability of AI in the real world, especially in enterprise applications. The project is reportedly led by a former DeepSeek re ...
Huawei Noah's Ark VLM Long-Horizon Embodied Navigation: Global Self-Memory Mapping and an Analysis of Three Memory Modules
理想TOP2 · 2025-04-23 13:34
This article is reposted from 深蓝具身智能 (author: 深蓝学院-具身君), a DeepBlue Academy channel dedicated to news and insights on embodied intelligence and large models.

"An agent should not be bound by language or viewpoint; the fusion of memory and perception is the key to free navigation."

Before introducing this paper's work, let us first review the existing classification of VLN methods. As Table 1 shows, they fall roughly into three categories: navigation based on large language models (LLM), navigation based on value maps, and navigation based on vision-language models (VLM).

| Category | Description | Methods | Advantages | Disadvantages |
| --- | --- | --- | --- | --- |
| LLM-based navigation | Builds a global memory map, describes candidate waypoint locations in natural language, and uses an LLM to generate action decisions | LFG, VoroNav, ESC, OpenIMNav | Maintains a global map and uses high-level reasoning | Lacks high-dimensional semantic information, weakening spatial reasoning ability |
| Value-map-based navigation | Computes a global value function from egocentric observations and generates ... | VLFM, InstructNav | Mitigates memory forgetting in long-horizon navigation | The value map is built from local observations and lacks a global view, leading ...
AI Roundup: OpenAI Releases GPT-4.1, Zhipu Releases the GLM-4-32B-0414 Series
China Post Securities · 2025-04-23 07:54
- GPT-4.1 significantly improved coding capabilities, achieving 54.6% in SWE-bench Verified tests, outperforming GPT-4o by 21.4% and GPT-4.5 by 26.6% [12][13][15]
- GPT-4.1 demonstrated enhanced instruction-following ability, scoring 38.3% in Scale's MultiChallenge benchmark, a 10.5% improvement over GPT-4o [12][13][17]
- GPT-4.1 achieved new SOTA in long-context understanding, scoring 72.0% in the Video-MME benchmark, surpassing GPT-4o by 6.7% [12][13][22]
- GLM-4-32B-0414 utilized 15T of high-quality data for pretraining and applied reinforcement learning techniques to improve instruction-following, engineering-code, and function-calling capabilities [26][28][30]
- GLM-Z1-32B-0414 enhanced mathematical and logical reasoning through stack-sorting feedback reinforcement learning, significantly improving complex task-solving abilities [31][33]
- GLM-Z1-Rumination-32B-0414 focused on deep reasoning and open-ended problem-solving, leveraging extended reinforcement learning and search tools [34]
- Seed-Thinking-v1.5 adopted an MoE architecture with 200B parameters, achieving 86.7% on AIME 2024 and 55.0% on Codeforces benchmarks, showcasing strong STEM and coding reasoning capabilities [35][37][41]
- Seed-Thinking-v1.5 employed dual-track reward mechanisms for training, combining verifiable and non-verifiable data strategies to optimize model outputs [36][38][40]
- GPT-o3/o4-mini introduced visual reasoning into the chain of thought (CoT), achieving 96.3% accuracy in the V* benchmark, marking a major breakthrough in multimodal reasoning [42][46][48]
- The Video-R1 model applied the T-GRPO algorithm to incorporate temporal reasoning in video tasks, achieving 35.8% accuracy in VSI-Bench, surpassing GPT-4o [63][65][68]
- Pangu Ultra, a dense model with 135B parameters, achieved top performance in most English and all Chinese benchmarks, rivaling larger MoE models like DeepSeek-R1 [69][73][74]
I Finally Understand How to Talk to AI! A Full Breakdown of Google's 69-Page Official Prompt Playbook, Chinese Edition Free to Download
AI科技大本营 · 2025-04-22 10:26
(You don't need to be a data scientist or a machine learning engineer – everyone can write a prompt.)

Author | Wang Qilong
Produced by | CSDN (ID: CSDNnews)

Google recently released a 69-page official Prompt Engineering white paper, arguably the most systematic and authoritative "AI communication guide" available today. We translated it right away and are offering it to readers for free. How do you get it? Simple: finish this article and join the small activity at the end!

Now, why did this white paper suddenly go viral, and why call it a "must-learn playbook"? You don't have to be a data scientist or a machine learning engineer; anyone can write a prompt. And yet: you patiently explain for ages, and the model seizes on an irrelevant word and starts improvising... You ask for A, and it confidently hands you B, complete with a long-winded, plausible-looking chain of flawed logic... The same question it understood yesterday, it plays dumb on today, with results that depend on "luck"... Google's white paper is not one blogger's personal takeaways or a scattered collection of tips, but a systematic distillation based on Google's deep understanding of large language models (LLMs) ...