Contrastive Learning
The Embedding Black Box Is History! A New Framework Makes Models "Explain First, Then Learn the Embedding"
量子位· 2025-10-21 09:05
Contributed by the UIUC team to 量子位 | 公众号 QbitAI. Let the model explain first, then learn the embedding! Researchers from UIUC, ANU, HKUST, UW, TAMU, and other universities have released GRACE, an interpretable generative embedding framework. Over the past few years, text embedding models have evolved in successive waves, from BERT to E5, GTE, LLM2Vec, Qwen-Embedding, and beyond; these models map text into a vector space for semantic retrieval, clustering, question-answer matching, and similar tasks. However, most of these methods share a common flaw: they use the large language model as a "mute encoder": text goes in, a vector comes out, and the model cannot tell us why two texts are similar. This "contrastive learning + pooling" recipe is effective, but it essentially discards the reasoning and generation capabilities of the LLM. In short, GRACE no longer "compresses text into a vector"; instead it "lets the model explain first and then learn the embedding": the model first generates a rationale for each text, and those rationales are then encoded into embeddings. A reward signal encourages the model to produce more logical, more semantically consistent rationales. Method overview: generation, representation, and optimization as one integrated pipeline. In brief, GRACE consists of three key modules: ...
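Reading the summary literally, GRACE's inference path is "generate a rationale, then embed the rationale." Below is a minimal sketch of that path only, assuming a generic HuggingFace causal LM as the rationale generator and a sentence-transformers encoder for the embedding step; the model names and prompt template are placeholders, and GRACE's reward-driven optimization of the rationales is omitted entirely.

```python
# Minimal sketch of the "explain first, then embed" inference path described above.
# Model names and the prompt template are illustrative placeholders, not the
# paper's actual components; the reward-based training loop is not shown.
from transformers import AutoModelForCausalLM, AutoTokenizer
from sentence_transformers import SentenceTransformer

GEN_NAME = "Qwen/Qwen2.5-1.5B-Instruct"            # placeholder rationale generator
tok = AutoTokenizer.from_pretrained(GEN_NAME)
gen = AutoModelForCausalLM.from_pretrained(GEN_NAME)
enc = SentenceTransformer("intfloat/e5-large-v2")  # placeholder embedding encoder

def rationale_embedding(text: str):
    # Step 1: have the LLM explain the text (the "rationale").
    prompt = f"Explain in one short paragraph what the following text is about:\n{text}\nExplanation:"
    inputs = tok(prompt, return_tensors="pt")
    out = gen.generate(**inputs, max_new_tokens=128, do_sample=False)
    rationale = tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    # Step 2: embed the rationale (not the raw text) for retrieval, clustering, etc.
    return rationale, enc.encode(rationale, normalize_embeddings=True)
```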
From a Contrastive-Learning Perspective, Is GRPO Just DPO?
自动驾驶之心· 2025-10-18 16:03
Core Insights
- The article discusses the development of efficient GRPO (Group Relative Policy Optimization) and its implications for reinforcement learning, highlighting the challenges and breakthroughs encountered during the research process [1][2].

Group 1: Research Development
- The initial focus was on improving the speed of GRPO, with an emphasis on sampling efficiency, a common bottleneck in reinforcement learning [2][3].
- The author experimented with tree-based sampling methods but found that they did not yield the expected efficiency gains [3].
- A second approach, "speculative sampling," aimed to exit early once a correct sample was obtained, but implementation issues held back its performance [3][4].

Group 2: Methodological Innovations
- The third approach used historical data to estimate, in a Bayesian fashion, how likely each prompt is to be answered correctly, leading to a more efficient sampling strategy [4].
- Experiments showed that reducing the number of rollouts per prompt did not significantly hurt performance, indicating the methodology is robust [4][5].
- Viewing GRPO through the lens of contrastive learning exposed a close relationship between DPO (Direct Preference Optimization) and GRPO, suggesting avenues for further research (see the sketch after this list) [5].

Group 3: Community and Collaboration
- The article emphasizes the importance of community engagement in advancing research, highlighting the role of discussion and collaboration in refining ideas and methodologies [8][10].
- The establishment of a comprehensive community focused on large-model technologies aims to facilitate knowledge sharing and collaboration across academic research and practical applications [9][10].
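The claimed kinship between the two objectives can be made concrete with a toy calculation. The sketch below is not the article's derivation; it only shows that with a group of two rollouts and a binary reward, GRPO's group-standardized advantages are equal and opposite, so its policy-gradient surrogate reduces to a pairwise "raise the correct rollout, lower the incorrect one" term, structurally similar to DPO's margin without the reference model and sigmoid shaping. The numbers and simplifications (no clipping, no KL penalty) are assumptions.

```python
import torch
import torch.nn.functional as F

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Group-relative advantages: standardize rewards within the group of
    # rollouts sampled for one prompt (clipping and KL penalty omitted).
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def grpo_policy_loss(logp: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    # Simplified REINFORCE-style surrogate: raise log-probs of rollouts with
    # above-average reward, lower those below average.
    adv = grpo_advantages(rewards).detach()
    return -(adv * logp).mean()

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta: float = 0.1):
    # Standard DPO objective on one (chosen, rejected) pair.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin)

# Toy case: two rollouts for one prompt, the first correct, the second wrong.
logp = torch.tensor([-12.0, -15.0])     # sequence log-probs under the policy
rewards = torch.tensor([1.0, 0.0])
# Advantages are +a and -a, so the surrogate is proportional to
# (logp_wrong - logp_correct): a pairwise contrast, much like DPO's margin.
print(grpo_policy_loss(logp, rewards))
print(dpo_loss(logp[0], logp[1], torch.tensor(-12.5), torch.tensor(-14.5)))
```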
Tackling the Hard Problem of Structured Long-Document Retrieval! A New Framework Rids Models of "Structural Blindness"
量子位· 2025-09-25 11:42
Contributed by the SEAL team to 量子位 | 公众号 QbitAI. AI cannot make sense of the headings and structure of long HTML or Markdown documents and keeps stumbling when looking for information? A solution has arrived: SEAL, a new contrastive-learning framework, uses structure awareness plus element alignment to help models understand long documents better.

| Method | HitRate@1 | HitRate@3 | HitRate@5 | MRR@5 | MRR@10 | NDCG@5 | NDCG@10 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| mE5-large | 54.11 | 79.62 | 85.86 | 67.39 | 68.06 | 72.18 | 74.11 |
| + Chunk | 56.85 | 82.94 | 88.79 | 70.12 | 71.45 | 74.78 | 77.42 |
| + MCLS | 57.74 | 84.12 | 89.56 | 71.08 | 72.41 | 75.76 | 78.44 |
| + SANTA | 55.79 | 81.76 | 88.02 | 69.01 | 70.49 | 73.79 | ... |
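The columns in the table are standard retrieval metrics. For reference, here is a minimal sketch of how HitRate@k, MRR@k, and NDCG@k are commonly computed when each query has a single relevant element (an assumption; the benchmark's exact protocol may differ), with per-query scores averaged and reported as percentages.

```python
import math

def hit_rate_at_k(rank: int, k: int) -> float:
    # rank is the 1-based position of the relevant element in the ranked list.
    return 1.0 if rank <= k else 0.0

def mrr_at_k(rank: int, k: int) -> float:
    return 1.0 / rank if rank <= k else 0.0

def ndcg_at_k(rank: int, k: int) -> float:
    # With a single relevant item the ideal DCG is 1/log2(2) = 1, so NDCG
    # reduces to the discounted gain at the item's rank.
    return 1.0 / math.log2(rank + 1) if rank <= k else 0.0

# Toy example: gold-element ranks for four queries; averages give the metrics.
ranks = [1, 3, 7, 2]
print(100 * sum(hit_rate_at_k(r, 5) for r in ranks) / len(ranks))   # HitRate@5
print(100 * sum(mrr_at_k(r, 5) for r in ranks) / len(ranks))        # MRR@5
print(100 * sum(ndcg_at_k(r, 10) for r in ranks) / len(ranks))      # NDCG@10
```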
Kaiming He Improves on Saining Xie's REPA: Greatly Simplified, Yet Still Strong Performance
机器之心· 2025-06-12 09:57
Core Viewpoint
- The article discusses the significance of representation learning in generative models, particularly through the introduction of a new method called Dispersive Loss, which integrates self-supervised learning into diffusion-based generative models without requiring additional pre-training or external data sources [6][9][43].

Group 1: Diffusion Models and Representation Learning
- Diffusion models excel at modeling complex data distributions but are largely disconnected from the representation-learning field [2].
- The training objectives of diffusion models typically focus on reconstruction tasks such as denoising, lacking explicit regularization for the learned representations [3].
- Representation learning, particularly self-supervised learning, is crucial for learning general representations applicable to various downstream tasks [4].

Group 2: Introduction of Dispersive Loss
- Dispersive Loss is a flexible, general plug-in regularizer that integrates self-supervised learning into diffusion-based generative models [9].
- Its core idea is to introduce a regularization target on the model's internal representations, encouraging them to spread out in the latent space (see the sketch after this summary) [10][13].
- The method requires no additional layers or parameters, making it a simple and self-contained approach [15][16].

Group 3: Comparison with Existing Methods
- Dispersive Loss operates without pre-training, external data, or additional model parameters, unlike the REPA method, which relies on pre-trained models [7][41][43].
- It demonstrates that representation learning can benefit generative modeling without external information sources [13][43].
- In practice, introducing Dispersive Loss requires only minimal adjustments, such as specifying which intermediate layers to regularize [29].

Group 4: Performance Evaluation
- Experimental results show that Dispersive Loss consistently outperforms the corresponding contrastive losses while avoiding the complexities of dual-view sampling [33].
- The method has been tested across various models, including DiT and SiT, showing improvements in all scenarios, particularly in larger models where effective regularization is crucial [36][37].
- The article notes that Dispersive Loss also generalizes to one-step diffusion-based generative models, indicating its versatility [44].
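To make the "spread out in latent space" idea concrete, here is a minimal sketch of a repulsion-only regularizer on a batch of intermediate activations, in the spirit of what the summary describes. The squared-L2 distance, the temperature value, and the weighting against the denoising objective are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import math
import torch

def dispersive_loss(h: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # h: (batch, dim) activations taken from a chosen intermediate layer of the
    # diffusion model. No extra parameters, no second augmented view needed.
    h = h.flatten(1)
    d2 = torch.cdist(h, h) ** 2                 # pairwise squared L2 distances
    n = h.shape[0]
    # log of the mean of exp(-distance / tau): small when the batch is spread out.
    return torch.logsumexp(-d2.flatten() / tau, dim=0) - math.log(n * n)

# Hypothetical use inside a training step: add it to the denoising objective with
# a small weight, e.g.  loss = denoising_loss + 0.5 * dispersive_loss(hidden).
```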