Multimodal Large Language Models
Chinese research team finds that artificial intelligence can spontaneously form human-level cognition
Xin Jing Bao· 2025-06-09 13:01
Beijing News (reporter Zhang Lu): On June 9, the reporter learned from the Institute of Automation, Chinese Academy of Sciences, that by combining behavioral experiments with neuroimaging analysis, researchers have for the first time confirmed that multimodal large language models (MLLMs) can spontaneously form object concept representation systems highly similar to those of humans. The findings were published in Nature Machine Intelligence.

Humans can conceptualize objects in the natural world, a cognitive ability long regarded as the core of human intelligence. When we see a dog, a car, or an apple, we not only recognize their physical attributes such as size, color, and shape, but also understand their function, emotional value, and cultural meaning; this multidimensional conceptual representation is the cornerstone of human cognition.

The researchers extracted 66 "mental dimensions" from massive amounts of large-model behavioral data and assigned semantic labels to them. They found that these dimensions are highly interpretable and significantly correlated with neural activity patterns in category-selective regions of the brain.

The study also revealed that humans tend to combine visual features with semantic information when making judgments, whereas large models lean on semantic labels and abstract concepts. The results indicate that large language models internally hold an understanding of real-world concepts similar to that of humans.

With the development of large language models (LLMs) such as ChatGPT, a fundamental question has surfaced: can these models develop human-like object concept representations from language and multimodal data? Recently, the Neural Computation and Brain-Computer Interaction (NeuBCI) group at the Institute of Automation, Chinese Academy of Sciences, together with the Chinese ...
Can artificial intelligence spontaneously form human-level cognition? A new study from a Chinese team confirms it for the first time
Huan Qiu Wang Zi Xun· 2025-06-09 12:57
Core Insights
- A Chinese scientific team has demonstrated that AI, specifically multimodal large language models, can form object concept representation systems similar to human cognition, indicating that AI can achieve human-level understanding [1][3][4]
- The research was conducted by the Chinese Academy of Sciences and published in the journal "Nature Machine Intelligence," providing a theoretical framework for developing human-like cognitive structures in AI [1][3]

Research Methodology
- The study utilized a combination of cognitive neuroscience theories, computational modeling, behavioral experiments, and brain science to create an innovative research paradigm [3][4]
- A classic cognitive psychology task was employed, where both AI models and humans were asked to identify the least similar option from a trio of object concepts derived from 1,854 everyday concepts [4]

Findings
- The research team analyzed 4.7 million behavioral judgment data points to construct a "concept map" for the AI model, revealing 66 "mental dimensions" that were semantically labeled and significantly correlated with neural activity patterns in the human brain [4]
- The study found that multimodal large language models exhibited higher consistency with human behavior in decision-making tasks, although humans tended to integrate visual features and semantic information more than the AI models, which relied on semantic labels and abstract concepts [4]
- The core finding suggests that AI's "mental dimensions" align closely with human understanding of reality, marking a significant advancement from mere "machine recognition" to "machine understanding" [4]
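To make the triplet odd-one-out task concrete, here is a minimal, hedged sketch of how such a judgment could be simulated with a generic sentence-embedding model. The embedding model name and the scoring rule are illustrative assumptions, not the study's actual protocol or the MLLMs it evaluated.

```python
# Minimal sketch of a triplet odd-one-out judgment using a generic embedding model.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed to be installed

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice, not the paper's model


def odd_one_out(concepts: list[str]) -> str:
    """Return the concept least similar to the other two, by cosine similarity."""
    emb = model.encode(concepts, normalize_embeddings=True)  # (3, d), unit-norm rows
    sim = emb @ emb.T                                        # pairwise cosine similarities
    # For each item, sum its similarity to the other two; the lowest sum is the odd one out.
    totals = sim.sum(axis=1) - np.diag(sim)
    return concepts[int(np.argmin(totals))]


print(odd_one_out(["dog", "car", "apple"]))  # one triplet drawn from an everyday-concept pool
```

Repeating this over millions of triplets is what yields the behavioral judgment matrix from which low-dimensional "mental dimensions" can then be extracted.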
Study shows multimodal large models can spontaneously form human-like object concept representations
news flash· 2025-06-09 10:40
Core Insights
- The research team from the Institute of Automation at the Chinese Academy of Sciences has confirmed that multimodal large language models (MLLMs) can spontaneously form object concept representation systems that are highly similar to those of humans [1]
- This study opens new pathways for cognitive science in artificial intelligence and provides a theoretical framework for constructing human-like cognitive structures in AI systems [1]
- The research findings were published in the international academic journal "Nature Machine Intelligence" on June 9 [1]
Ditching autoregression: a Chinese team builds LLaDA-V, a pure-diffusion multimodal large model that sets a new SOTA on understanding tasks
机器之心· 2025-05-27 03:23
Core Viewpoint
- The article discusses the development of LLaDA-V, a pure diffusion multimodal large language model (MLLM) that integrates visual instruction tuning, marking a significant breakthrough in multimodal understanding compared to traditional autoregressive methods [1][16].

Group 1: Model Development
- The research team expanded LLaDA into the multimodal domain, introducing LLaDA-V, which utilizes a visual encoder (SigLIP 2) and an MLP connector to project visual features into the language embedding space, achieving effective multimodal alignment [2].
- LLaDA-V employs a discrete diffusion mechanism during both training and sampling phases, moving away from the autoregressive paradigm [2].

Group 2: Performance Highlights
- LLaDA-V demonstrates strong data scalability and competitive performance, outperforming the autoregressive baseline LLaMA3-V in 11 multimodal tasks, despite LLaDA-8B being slightly inferior to LLaMA3-8B in pure text tasks [5].
- The model achieves state-of-the-art (SOTA) performance in multimodal understanding tasks compared to existing mixed autoregressive-diffusion models, validating the effectiveness of the MLLM architecture based on powerful language diffusion models [8].
- LLaDA-V significantly narrows the performance gap with top autoregressive MLLMs, achieving comparable results in benchmarks like MMStar [10].

Group 3: Core Methodology
- The core of LLaDA-V lies in combining visual instruction tuning with LLaDA's masking diffusion mechanism, allowing for a robust training and inference process [13][15].
- The architecture consists of a classic "visual encoder + MLP projector + language model" setup, where the visual encoder extracts image features, and the MLP projector maps them to LLaDA's embedding space [15].
- LLaDA-V's training objective supports multi-turn multimodal dialogue by masking only the model's responses during training, optimizing the model's ability to generate coherent replies [15].

Group 4: Future Outlook
- The successful integration of visual instruction tuning with masking diffusion models opens a new technical pathway for MLLM development, challenging the notion that multimodal intelligence must rely on autoregressive models [16].
- The ongoing advancement of language diffusion models is expected to play a more significant role in the future, further pushing the boundaries of multimodal AI [16].
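As a rough illustration of the "visual encoder + MLP projector + language model" layout and the response-only masking objective described above, here is a hedged PyTorch-style sketch. Module sizes, the masking schedule, the `lm(...)` interface, and all names are assumptions; the actual LLaDA-V recipe (including its loss weighting) is more involved.

```python
# Sketch of an MLP projector plus a response-only masked-token objective (assumptions throughout).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLPProjector(nn.Module):
    """Maps visual features into the language model's embedding space."""

    def __init__(self, vis_dim: int, lm_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(vis_dim, lm_dim), nn.GELU(), nn.Linear(lm_dim, lm_dim))

    def forward(self, vis_feats: torch.Tensor) -> torch.Tensor:
        # (B, N_img, vis_dim) -> (B, N_img, lm_dim)
        return self.net(vis_feats)


def masked_diffusion_loss(lm, img_embeds, prompt_ids, resp_ids, mask_id: int):
    """Mask a random fraction of *response* tokens only and train the LM to recover them."""
    B, L = resp_ids.shape
    t = torch.rand(B, 1)                                  # per-sample masking ratio in (0, 1)
    mask = torch.rand(B, L) < t                           # which response tokens to hide
    noisy_resp = torch.where(mask, torch.full_like(resp_ids, mask_id), resp_ids)
    logits = lm(img_embeds, prompt_ids, noisy_resp)       # hypothetical LM interface, (B, L, V)
    return F.cross_entropy(logits[mask], resp_ids[mask])  # predict only the masked positions
```

Because image tokens and the user prompt are never masked, the same objective extends naturally to multi-turn dialogue: earlier turns stay clean and only the current reply is corrupted and reconstructed.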
ICML 2025 Spotlight | Are multimodal large models showing their weak spot? The EMMA benchmark takes a deep look at multimodal reasoning ability
机器之心· 2025-05-20 04:58
"Three point charges, +Q, -2Q, and +3Q, are placed equidistant from one another. Which vector best describes the direction of the net electric force acting on the +Q charge?"

When solving this problem, we can easily work it out by sketching a force diagram. Yet even an advanced multimodal large language model such as GPT-4o may misjudge the direction of the repulsion while applying the basic principle that "like charges repel" (for example, judging the force exerted by +3Q on +Q to point toward the lower right rather than the correct upper left).

This seemingly simple physics problem exposes a fatal flaw in multimodal large models: current MLLMs still cannot perform complex multimodal reasoning that requires deep fusion of visual and textual information. The newly released EMMA benchmark acts like a revealing mirror, showing that even top MLLMs fall noticeably short on this key capability. The work has been accepted as a spotlight at ICML 2025, and all code and data have been open-sourced.

Multiple models and methods have already been evaluated on EMMA. The study finds that even the most advanced model, Gemini-2.5-pro-exp-03-25, as well as the o3/o4-mini models that can invoke visual tools, still trail human experts on EMMA by more than 20%!

Title: Can MLLMs Reason in Multi ...
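For reference, the reasoning this example probes is just Coulomb-force superposition. The sketch below computes the net force direction on +Q for an assumed equilateral-triangle layout in which +3Q sits to the lower right of +Q (consistent with the article's description, but the exact figure is not given, so the coordinates are illustrative only).

```python
# Net Coulomb force on +Q by superposition, for an assumed equilateral-triangle layout.
import numpy as np

k, Q, d = 1.0, 1.0, 1.0  # constants cancel out of the *direction* of the net force
pos = {                  # illustrative coordinates, not the article's actual figure
    "+Q":  np.array([0.0, 0.0]),
    "-2Q": np.array([d, 0.0]),
    "+3Q": np.array([d / 2, -d * np.sqrt(3) / 2]),  # lower right of +Q
}
charge = {"+Q": Q, "-2Q": -2 * Q, "+3Q": 3 * Q}

target = "+Q"
F_net = np.zeros(2)
for name, q in charge.items():
    if name == target:
        continue
    r = pos[target] - pos[name]                                   # vector from source to +Q
    F_net += k * charge[target] * q * r / np.linalg.norm(r) ** 3  # Coulomb's law term
print("net force direction on +Q:", F_net / np.linalg.norm(F_net))
```

With this layout the repulsion from +3Q indeed points toward the upper left while the attraction toward -2Q points right, and the net direction follows from adding the two vectors; the benchmark tests whether a model can carry out exactly this kind of visually grounded vector reasoning.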
Tencent makes a big move with Hunyuan Image 2.0, which "draws as you speak": by the time you finish describing, the image is already generated
量子位· 2025-05-16 03:39
Core Viewpoint
- Tencent has launched the Hunyuan Image 2.0 model, which enables real-time image generation with millisecond response times, allowing users to create images seamlessly while describing them verbally or through sketches [1][6].

Group 1: Features of Hunyuan Image 2.0
- The model supports a real-time drawing board where users can sketch elements and provide text descriptions for immediate image generation [3][29].
- It offers various input methods, including text prompts, voice input in both Chinese and English, and the ability to upload reference images for enhanced image generation [18][19].
- Users can optimize generated images by adjusting parameters such as reference image strength and can also use a feature to automatically enhance composition and depth [27][35].

Group 2: Technical Highlights
- Hunyuan Image 2.0 features a significantly larger model size, increasing parameters by an order of magnitude compared to its predecessor, Hunyuan DiT, which enhances performance [37].
- The model incorporates a high-compression image codec developed by Tencent, which shortens the encoding sequence and speeds up image generation while maintaining quality [38].
- It utilizes a multimodal large language model (MLLM) as a text encoder, improving semantic understanding and prompt-matching capabilities compared to traditional text encoders [39][40].
- The model has undergone reinforcement learning training to enhance the realism of generated images, aligning them more closely with real-world requirements [41].
- Tencent has developed an in-house adversarial distillation scheme that allows high-quality images to be generated in fewer sampling steps [42].

Group 3: Future Developments
- Tencent's team has indicated that more details will be shared in upcoming technical reports, including information about a native multimodal image generation model [43][45].
- The new model is expected to excel in multi-round image generation and real-time interactive experiences [46].
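As a back-of-the-envelope illustration of why a higher-compression image codec shortens the generation sequence (and hence latency), the sketch below counts latent tokens for a 1024x1024 image at a few assumed downsampling factors. The specific factors are assumptions for illustration, not Tencent's published numbers.

```python
# Token-count arithmetic for an image latent grid at several assumed compression factors.
def latent_tokens(height: int, width: int, downsample: int, patch: int = 1) -> int:
    """Number of latent tokens after spatial downsampling and optional patching."""
    return (height // (downsample * patch)) * (width // (downsample * patch))


for f in (8, 16, 32):                       # illustrative downsampling factors only
    n = latent_tokens(1024, 1024, f)
    print(f"downsample x{f}: {n} tokens")   # x8 -> 16384, x16 -> 4096, x32 -> 1024
```

Since diffusion-style generators process every latent token at every step, shrinking the token grid by even one extra factor of two cuts the per-step work by roughly four, which is what makes millisecond-scale interactive generation plausible.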
GPT-4o loses to Qwen, and not a single model passes! A joint team from UC Berkeley, HKU, and others proposes a new multimodal benchmark for multi-view understanding
量子位· 2025-05-14 06:07
Contributed by the All-Angles Bench team. QbitAI | WeChat official account QbitAI

Multi-view understanding and reasoning now has a new evaluation standard!

What is multi-view understanding? It is the ability to integrate visual information from different viewpoints to support understanding and decision-making. Imagine a robot performing tasks in a complex environment: it must accurately judge object positions, distances, and directions of motion from multiple camera feeds, which depends on strong multi-view understanding.

In the past, progress in this area was relatively slow because benchmarks for evaluating multi-view reasoning were scarce.

Researchers from UC Berkeley, 忆生科技, the University of Hong Kong, New York University, UC Davis, the University of Oxford, and other institutions jointly proposed All-Angles Bench to comprehensively evaluate the multi-view understanding ability of MLLMs. It covers more than 2,100 human-annotated multi-view question-answer pairs across 90 real-world scenes. The evaluation dataset and code are fully open-sourced.

They benchmarked 27 leading multimodal large language models, including Gemini-2.0-Flash, Claude-3.7-Sonnet, and GPT-4o. The results show a significant gap between multimodal large language models and human-level performance, and further reveal two major failure modes: (1) weak cross-view correspondence under occlusion; (2) poor estimation of coarse camera poses.

In concrete terms ...
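To show the shape of such an evaluation, here is a minimal, hypothetical harness for multi-view multiple-choice QA; `MultiViewItem` and `model.answer(...)` are placeholder interfaces for illustration, not the released All-Angles Bench code.

```python
# Hypothetical harness: accuracy of an MLLM on multi-view multiple-choice QA pairs.
from dataclasses import dataclass


@dataclass
class MultiViewItem:
    images: list[str]      # file paths for the different camera views of one scene
    question: str
    choices: list[str]
    answer: str            # ground-truth choice label, e.g. "B"


def evaluate(model, items: list[MultiViewItem]) -> float:
    """model.answer(images, question, choices) -> predicted choice label (placeholder API)."""
    correct = sum(model.answer(it.images, it.question, it.choices) == it.answer for it in items)
    return correct / len(items)
```

Grouping accuracy by question type (occlusion-heavy correspondence vs. camera-pose questions) is what surfaces the two failure modes listed above.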
Launching a financial-trading AI agent that watches the market around the clock, this Singapore fintech company raises $10 million | 早起看早期
36氪· 2025-05-12 23:56
Core Viewpoint
- RockFlow, a Singapore-based AI fintech company, has completed a $10 million A1 funding round to enhance its AI technology and launch its financial AI agent, Bobby [3][4].

Group 1: Company Overview
- RockFlow operates five offices globally and covers over 30 countries in nine languages, previously receiving tens of millions in investments from top-tier Silicon Valley funds [4].
- The company launched TradeGPT, the world's first trading AI product, in April 2023, which utilizes multimodal LLM capabilities to analyze vast market information and price-volume data [4].

Group 2: Product Development
- RockFlow is developing an AI agent architecture tailored for financial investment scenarios, leveraging cutting-edge technologies such as multimodal large language models (LLM), Fin-Tuning, RAG, Multi-Agent, and CoT [4][5].
- The AI agent aims to enhance understanding and generation capabilities, efficiently process multi-source data, and provide precise financial analysis and investment recommendations [4][5].

Group 3: Investment Process
- In investment trading scenarios, RockFlow's AI agent simplifies traditional complex processes into four core steps: real-time information acquisition, analysis, trading strategy construction, and order execution [5].
- The AI agent monitors market dynamics and analyzes extensive data, including financial metrics and social media sentiment, to present personalized real-time trading opportunities [5][6].

Group 4: User Interaction
- Users can express their needs in natural language, allowing the AI agent to generate personalized investment configurations and trading strategies based on their profit goals and risk preferences [6].
- The AI agent can also create complex conditional orders and automate investment tasks, assisting users in managing profits and losses effectively [6].

Group 5: Future Outlook
- Bobby, the financial AI agent product, is set to launch globally soon, with a team comprising experts from AI, financial mathematics, and investment trading [6].
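A hedged sketch of the four-step loop described above (acquire information, analyze, construct a strategy, execute conditional orders). Every interface here, including `fetch_market_data`, `llm_analyze`, `build_strategy`, and `broker.submit`, is a hypothetical placeholder rather than RockFlow's actual API.

```python
# Hypothetical four-step trading-agent loop; all interfaces are placeholders.
from dataclasses import dataclass


@dataclass
class Strategy:
    symbol: str
    side: str            # "buy" or "sell"
    take_profit: float   # exit price for locking in gains
    stop_loss: float     # exit price for capping losses


def trading_agent_step(fetch_market_data, llm_analyze, build_strategy, broker, goal: str) -> Strategy:
    """One pass of: acquire -> analyze -> construct strategy -> execute conditional orders."""
    snapshot = fetch_market_data()                 # 1. real-time quotes, filings, sentiment
    analysis = llm_analyze(snapshot, goal)         # 2. LLM turns raw data into a market view
    strategy: Strategy = build_strategy(analysis)  # 3. personalized strategy from that view
    broker.submit(                                 # 4. conditional order with TP/SL attached
        symbol=strategy.symbol,
        side=strategy.side,
        take_profit=strategy.take_profit,
        stop_loss=strategy.stop_loss,
    )
    return strategy
```

Attaching the take-profit and stop-loss levels at order time is what lets the agent manage gains and losses without the user watching the market.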
Li Auto's MCAF reshapes the visual-cognition paradigm for assisted driving
理想TOP2· 2025-04-25 12:43
The following article is from AcademicDaily, a technical exchange platform that tracks, recommends, and interprets AI advances such as large models, dedicated to spreading and sharing frontier technology.

Inside Li Auto, MCAF is referred to as the "third eye" of autonomous driving. It is compatible with Li Auto's in-house Mind GPT-3o and BEV large models and requires no retraining.

MCAF is a multimodal coarse-to-fine attention-focusing framework whose core purpose is to address the key bottleneck of long-video understanding. Current approaches handle long videos (>5 minutes) poorly: mainstream methods (such as Video-MLLM) rely on global compression or uniform sampling, which loses detail and wastes computation. MCAF targets exactly this problem, using multimodal hierarchical attention and a temporal-expansion mechanism to strike a balance between information retention and computational efficiency; this is its core value.

On the Video-MME dataset, whose videos average 60 minutes in length, MCAF outperforms other agent-based methods (such as VideoTree and DrVideo) by roughly 3-5 percentage points.

Unlike VideoTree, which requires an additional reward model to assess confidence, MCAF uses a single LLM to close the generate-evaluate-adjust loop. This not only simplifies the architecture (the implementation needs just one LLM interface) but also avoids the compatibility issues of coordinating multiple models, making it better suited to real deployment.

However, on NEx ...
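A rough sketch of a single-LLM generate-evaluate-adjust loop with temporal expansion, in the spirit of the framework described above. The segment-selection, captioning, and expansion callables, as well as the `llm.generate` / `llm.self_evaluate` interface, are hypothetical placeholders rather than the published MCAF code.

```python
# Hypothetical coarse-to-fine loop: one LLM generates, scores its own answer, and widens its focus.
def coarse_to_fine_answer(llm, select_segments, describe, expand, video_frames, question,
                          max_rounds: int = 3, threshold: float = 0.8):
    """Start from a coarse set of query-relevant segments; expand the time window until confident."""
    window = select_segments(video_frames, question)                # coarse, query-guided focusing
    answer = None
    for _ in range(max_rounds):
        context = describe(window)                                  # captions for the focused segments
        answer = llm.generate(question, context)                    # 1. generate a candidate answer
        confidence = llm.self_evaluate(question, context, answer)   # 2. evaluate with the *same* LLM
        if confidence >= threshold:
            break
        window = expand(video_frames, window)                       # 3. adjust: enlarge the time window
    return answer
```

Because generation and evaluation share one model, no separate reward model has to be trained or kept in sync, which is the deployment advantage the article highlights.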
A new breakthrough in long-video understanding: a hybrid Mamba architecture halves GPU memory consumption and handles 100,000 video tokens with ease
量子位· 2025-03-27 04:16
Core Viewpoint
- The article introduces the Vamba model, a hybrid Mamba-Transformer model designed for efficient understanding of long videos, significantly improving processing efficiency without compressing video tokens [1][10].

Group 1: Model Design and Efficiency
- Vamba improves the efficiency of processing video tokens during training and inference by redesigning the model architecture rather than compressing video tokens [1][4].
- The model can process four times more video frames under the same hardware conditions compared to traditional Transformer architectures, with over 50% reduction in training memory consumption and doubled training speed [4][9].
- Vamba retains the original spatiotemporal features of videos, avoiding the information loss that occurs with traditional downsampling or pooling methods [5][10].

Group 2: Technical Innovations
- The core design of Vamba involves breaking down the costly causal self-attention operations into two more efficient components: cross-attention for text tokens and a state space model (SSM) based Mamba-2 module for video tokens [6][7].
- The Mamba-2 module reduces the computational complexity from quadratic to linear, allowing for effective processing of long video sequences [7][9].
- Vamba's architecture allows for efficient alignment of text and video information, enhancing the model's ability to analyze video content based on user queries [9][10].

Group 3: Performance Evaluation
- Extensive experiments show that Vamba outperforms existing efficient long video understanding models by approximately 4.3% on the LVBench benchmark [5][10].
- The model demonstrates superior performance across various video duration benchmarks, showcasing its competitive edge in long, medium, and short video understanding tasks [10].
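To make the split concrete, here is a simplified PyTorch sketch of one hybrid block: text tokens attend to video tokens through cross-attention, while video tokens are updated by a linear-time gated recurrence standing in for the Mamba-2 SSM. The dimensions, the toy recurrence, and the block structure are simplifications for illustration, not the actual Vamba layers.

```python
# Simplified hybrid block: cross-attention for text tokens, a linear-time
# gated recurrence (a toy stand-in for Mamba-2) for video tokens.
import torch
import torch.nn as nn


class HybridBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(dim, dim)   # per-step forget/update gate of the toy recurrence
        self.inp = nn.Linear(dim, dim)    # input projection of the toy recurrence
        self.norm_t = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, video: torch.Tensor):
        # Text tokens (B, T, D) query video tokens (B, V, D): cost O(T*V), not O((T+V)^2).
        attended, _ = self.cross_attn(self.norm_t(text), video, video)
        text = text + attended

        # Video tokens pass through a linear-time recurrence over the sequence dimension.
        v = self.norm_v(video)
        state = torch.zeros_like(v[:, 0])
        outs = []
        for i in range(v.shape[1]):
            g = torch.sigmoid(self.gate(v[:, i]))
            state = g * state + (1 - g) * torch.tanh(self.inp(v[:, i]))
            outs.append(state)
        video = video + torch.stack(outs, dim=1)
        return text, video


# Example: 32 text tokens attending over 4,096 video tokens in one block.
blk = HybridBlock(dim=256)
t, v = torch.randn(1, 32, 256), torch.randn(1, 4096, 256)
t2, v2 = blk(t, v)
```

Because the video tokens never enter a full self-attention matrix, memory and compute grow linearly in the number of video tokens, which is the property that lets this style of architecture scale to very long videos.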