Self-Supervised Learning
Despite His Troubles at Meta, LeCun Keeps Publishing; New Work: JEPAs Don't Just Learn Features, They Precisely Perceive Data Density
36Ke· 2025-10-09 11:39
Core Insights
- Yann LeCun's team has discovered that JEPAs (Joint Embedding Predictive Architectures), a family of self-supervised models, have a hidden ability to learn data density, i.e., how typical a given sample is [1][3]
- This finding challenges the long-held belief that JEPAs only learn features and are unrelated to data density [3][4]

Summary by Sections

JEPAs Overview
- JEPAs are a self-supervised learning framework that autonomously learns feature patterns from vast amounts of data without manual labeling, making them efficient for tasks like image recognition and cross-modal matching [6][10]

Key Findings
- The breakthrough discovery is that JEPAs accurately learn data density through their anti-collapse mechanism, which was previously thought only to prevent feature collapse [8][10]
- The model's ability to perceive data density is a necessary outcome of training: to satisfy the training constraints, the model must respond to small changes in samples [8][10]

Practical Application
- The team introduced a key tool called JEPA-SCORE, which quantifies data density by scoring the typicality of samples: a higher score indicates a more typical sample, while a lower score suggests rarity or anomaly (see the sketch after this summary) [10][11]
- JEPA-SCORE is versatile and can be applied across various datasets and JEPA architectures without additional training [10][11]

Experimental Validation
- Experiments demonstrated that JEPA-SCORE reliably identifies typical and rare samples both in datasets like ImageNet and in unfamiliar datasets, confirming its general applicability [11][13]

Research Team
- The research was a collaborative effort involving four core researchers from Meta's FAIR, including Randall Balestriero, Nicolas Ballas, and Michael Rabbat, each with significant backgrounds in AI and deep learning [20][22][23]
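The paper defines JEPA-SCORE from the trained model itself; as a loose, hypothetical proxy for the idea, the sketch below scores samples by how crowded their neighborhood is in a frozen encoder's embedding space. The k-NN formulation and all names here are illustrative assumptions, not the paper's actual score:

```python
import torch

def knn_density_score(embeddings: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Score each sample by average cosine similarity to its k nearest
    neighbors in embedding space: typical samples sit in dense regions
    (high score), rare or anomalous samples in sparse ones (low score)."""
    z = torch.nn.functional.normalize(embeddings, dim=-1)  # (N, D)
    sim = z @ z.T                                          # (N, N) cosine similarities
    sim.fill_diagonal_(-float("inf"))                      # exclude self-matches
    return sim.topk(k, dim=-1).values.mean(dim=-1)         # (N,) density-style score

# Hypothetical usage: embeddings = frozen_jepa_encoder(images); note that,
# like JEPA-SCORE, this requires no additional training.
scores = knn_density_score(torch.randn(256, 768))
print(scores.argmax())  # index of the most "typical" sample in the batch
```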
Autonomous Driving Foundation Models Should Be Capability-Oriented, Not Confined to the Methods Themselves
自动驾驶之心· 2025-09-16 23:33
Core Insights
- The article discusses the transformative impact of foundation models on autonomous driving perception, shifting from task-specific deep learning models to versatile architectures trained on vast and diverse datasets [2][4]
- It introduces a new classification framework organized around four core capabilities essential for robust performance in dynamic driving environments: general knowledge, spatial understanding, multi-sensor robustness, and temporal reasoning [2][5]

Group 1: Introduction and Background
- Autonomous driving perception is crucial for enabling vehicles to interpret their surroundings in real time, involving key tasks such as object detection, semantic segmentation, and tracking [3]
- Traditional models, designed for specific tasks, exhibit limited scalability and poor generalization, particularly in "long-tail scenarios" where rare but critical events occur [3][4]

Group 2: Foundation Models
- Foundation models, developed through self-supervised or unsupervised learning strategies, leverage large-scale datasets to learn general representations applicable across various downstream tasks [4][5]
- These models offer significant advantages for autonomous driving thanks to their inherent generalization capabilities, efficient transfer learning, and reduced reliance on labeled datasets [4][5]

Group 3: Key Capabilities
- The four key dimensions for designing foundation models tailored to autonomous driving perception are:
  1. General Knowledge: adapting to a wide range of driving scenarios, including rare situations [5][6]
  2. Spatial Understanding: deep comprehension of 3D spatial structures and relationships [5][6]
  3. Multi-Sensor Robustness: maintaining high performance under varying environmental conditions and sensor failures [5][6]
  4. Temporal Reasoning: capturing temporal dependencies and predicting future states of the environment [6]

Group 4: Integration and Challenges
- The article outlines three mechanisms for integrating foundation models into autonomous driving stacks: feature-level distillation, pseudo-label supervision, and direct integration (see the distillation sketch after this summary) [37][40]
- It highlights deployment challenges, including effective domain adaptation, hallucination risks, and efficiency in real-time applications [58][61]

Group 5: Future Directions
- The article emphasizes advancing research on foundation models to improve their safety and effectiveness in autonomous driving systems, addressing current limitations and exploring new methodologies [2][5][58]
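As a concrete illustration of the first integration mechanism, here is a minimal sketch of feature-level distillation, assuming a frozen foundation-model teacher and a lightweight onboard student; the modules and dimensions below are hypothetical stand-ins, not any specific stack's API:

```python
import torch
import torch.nn.functional as F

def feature_distillation_loss(student_feat, teacher_feat):
    """Cosine-based feature matching: train the student to reproduce the
    frozen foundation model's features after projecting to its width."""
    s = F.normalize(student_feat, dim=-1)
    t = F.normalize(teacher_feat, dim=-1)
    return (1.0 - (s * t).sum(dim=-1)).mean()

# Hypothetical training step: the teacher stays frozen, only the student updates.
teacher = torch.nn.Identity()        # stand-in for a frozen foundation model
student = torch.nn.Linear(256, 768)  # lightweight onboard encoder + projector
x_feat = torch.randn(32, 256)        # student-side features for a batch
with torch.no_grad():
    t_feat = teacher(torch.randn(32, 768))  # foundation-model features
loss = feature_distillation_loss(student(x_feat), t_feat)
loss.backward()
```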
SceneSplat: 3DGS-Based Scene Understanding and Vision-Language Pretraining, the Leap That Lets 3D Gaussians "Understand Human Language"
机器之心· 2025-09-07 08:21
Open-vocabulary recognition and classification are essential for a comprehensive understanding of real-world 3D scenes. All existing methods currently rely on 2D or text modalities during training or inference. This highlights the absence of models that can process 3D data alone for end-to-end semantic learning, as well as of the data needed to train such models. Meanwhile, 3DGS (3D Gaussian Splatting) has become one of the leading standards for 3D scene representation across a range of vision tasks. However, integrating semantic understanding into 3DGS in a generalizable way remains an open problem. To break through these bottlenecks, we introduce SceneSplat, the first end-to-end large-scale 3D indoor scene understanding method that operates natively on 3DGS. In addition, we propose a self-supervised learning scheme that unlocks rich 3D feature learning from unlabeled scenes. To support the proposed method, we collected SceneSplat-7K, the first large-scale 3DGS dataset of indoor scenes, comprising 7,916 scenes drawn from seven existing datasets such as ScanNet and Matterport3D. Generating SceneSplat-7K required computational resources equivalent to 150 days on an L4 GPU. Our open-vocabulary and semantic segmentation evaluations on SceneSplat-7K both achieve state-of-the-art results. Article link: ht ...
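The abstract does not spell out the inference procedure; as a hedged sketch of how open-vocabulary queries over language-aligned per-Gaussian features could work, assuming the features are already projected into a CLIP-style text embedding space (function name and dimensions are hypothetical):

```python
import torch
import torch.nn.functional as F

def open_vocab_labels(gaussian_feats: torch.Tensor,
                      text_embeds: torch.Tensor) -> torch.Tensor:
    """Assign each 3D Gaussian the class whose text embedding is most
    similar to its language-aligned feature (zero-shot: no 2D views
    needed at inference time)."""
    g = F.normalize(gaussian_feats, dim=-1)  # (num_gaussians, D)
    t = F.normalize(text_embeds, dim=-1)     # (num_classes, D)
    return (g @ t.T).argmax(dim=-1)          # (num_gaussians,) class ids

# Hypothetical usage: features from a SceneSplat-style encoder, text
# embeddings from a CLIP-style text encoder for prompts like "a chair".
labels = open_vocab_labels(torch.randn(100_000, 512), torch.randn(20, 512))
```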
The Most Comprehensive Survey of Speech Separation Yet! Tsinghua-Led Teams Analyze 200+ Papers, Systematically Dissecting the "Cocktail Party Problem"
机器之心· 2025-09-03 04:33
Core Viewpoint
- The article discusses revolutionary advances in speech separation, particularly the progress deep neural networks (DNNs) have brought to the "cocktail party problem" [2]

Group 1: Overview of Speech Separation
- Speech separation has become crucial for enhancing speech clarity in complex acoustic environments and serves as a preprocessing step for other speech processing tasks [2]
- Researchers from several institutions conducted a comprehensive survey of over 200 representative papers, analyzing the latest methods across multiple dimensions: deep learning approaches, model architectures, evaluation metrics, datasets, and future challenges [2]

Group 2: Problem Definition
- The authors divide speech separation tasks into known- and unknown-speaker separation according to whether the number of speakers is fixed or variable, highlighting the challenges of each scenario [6]
- In unknown-speaker scenarios, dynamically determining the number of output channels and balancing separation quality against termination timing are identified as the main challenges [6]

Group 3: Learning Paradigms
- The article compares supervised and unsupervised learning methods, detailing the advantages and limitations of each for speech separation [10]
- Supervised learning is currently the most mature paradigm, training on paired mixed audio and clean source audio, while unsupervised methods explore training directly on unlabeled mixed audio [12]

Group 4: Model Architectures
- The core components and evolution of speech separation models are summarized: encoder, separation network, and decoder [14]
- Various architectures, including RNN-based, CNN-based, and Transformer-based models, are discussed, showcasing their respective strengths in capturing long-range dependencies and extracting local features [17][18]

Group 5: Evaluation Metrics
- A comprehensive evaluation system combining subjective and objective metrics is needed to assess model performance [19]
- The article compares various metrics, highlighting the trade-off between subjective evaluations, which reflect human listening experience, and objective metrics, which are efficient but emphasize different aspects (see the SI-SDR sketch after this summary) [20]

Group 6: Datasets
- The article summarizes publicly available speech separation datasets, categorized into single-channel and multi-channel formats [22]
- Understanding the coverage and difficulty of these datasets helps researchers select appropriate benchmarks and identify gaps in current research [22]

Group 7: Performance Comparison
- The authors compare different models' performance on standard datasets, illustrating the progress of speech separation technology in recent years [24]
- Notable gains in metrics such as SDR are highlighted, with advanced architectures reaching around 20 dB [24][25]

Group 8: Tools and Platforms
- The article introduces open-source tools and platforms for developing and deploying speech separation systems, comparing their functionality and limitations [28]
- These tools give researchers convenient interfaces for replicating results and building prototype systems, accelerating the transition from research to application [28]
Group 9: Challenges and Future Directions
- Current challenges include long-duration audio processing, mobile and embedded deployment, real-time speech separation, and the rise of generative methods [32][33]
- The integration of pre-training techniques and a sharper focus on target-speaker extraction are identified as key directions for future work [33]
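For reference, the SDR figures cited above can be made concrete with scale-invariant SDR (SI-SDR), a standard separation metric; this is a generic implementation of the published formula, not code from any surveyed system:

```python
import numpy as np

def si_sdr(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Scale-invariant SDR in dB: project the estimate onto the reference
    to get the target component, treat the remainder as error."""
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    s_target = (estimate @ reference) / (reference @ reference) * reference
    e_noise = estimate - s_target
    return 10 * np.log10((s_target @ s_target) / (e_noise @ e_noise))

# Usage: values around 20 dB indicate near-perfect separation on clean mixes.
t = np.linspace(0, 1, 8000)
clean = np.sin(2 * np.pi * 440 * t)
print(si_sdr(clean + 0.01 * np.random.randn(8000), clean))
```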
Zuck Open-Sources Again: 7B Model Hits Self-Supervised Learning SOTA
量子位· 2025-08-16 02:00
Core Viewpoint
- Meta has released a new open-source vision model, DINOv3, which demonstrates that self-supervised models can outperform weakly supervised ones across a wide range of tasks [1][3]

Group 1: Model Overview
- DINOv3 is trained without labels, scaling the dataset to 1.7 billion images and the model to 7 billion parameters, making it effective for applications where data labeling is scarce or costly [1][6]
- The model shows superior performance in label-scarce and cross-domain scenarios, achieving state-of-the-art (SOTA) results on the three core computer vision tasks: classification, detection, and segmentation [3][22]

Group 2: Training Methodology
- Training proceeds in two main phases, centered on large-scale self-supervised training to learn high-quality visual representations [8]
- A new method called Gram anchoring counteracts the degradation of dense feature maps during long training runs, significantly improving local feature quality without compromising global features (see the sketch after this summary) [15][20]

Group 3: Performance Metrics
- DINOv3 outperforms its predecessor DINOv2 across benchmarks, e.g., 55.9 on ADE-20k segmentation and 90.4 on ImageNet ReaL classification [4]
- The training strategy includes RoPE-box jittering, which improves robustness to variations in resolution, scale, and aspect ratio while maintaining training stability [13][14]

Group 4: Practical Applications
- DINOv3 generalizes strongly, for example analyzing satellite imagery to detect tree loss and land-use changes, providing significant support for global forest restoration and agricultural management [27][28]
- The model achieves SOTA results on multiple remote sensing tasks, including semantic geospatial tasks and high-resolution semantic tasks [29]

Group 5: Future Implications
- The DINO series represents Meta's ongoing exploration of self-supervised methods in vision, and DINOv3 marks significant progress in large-scale self-supervised training [30][38]
- DINOv3 is expected to accelerate existing applications and unlock new scenarios across industries including healthcare, environmental monitoring, autonomous driving, retail, and manufacturing [39]
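A minimal sketch of the Gram anchoring idea as summarized above: penalize drift of the student's patch-to-patch similarity (Gram) matrix away from that of a frozen earlier checkpoint, the "Gram teacher". Tensor shapes and the exact loss weighting here are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def gram_anchoring_loss(student_patches, teacher_patches):
    """Match the Gram matrix of the student's dense patch features to an
    earlier checkpoint's, preserving local feature structure during long
    training runs without pinning the features themselves."""
    s = F.normalize(student_patches, dim=-1)  # (B, N_patches, D)
    t = F.normalize(teacher_patches, dim=-1)
    gram_s = s @ s.transpose(1, 2)            # (B, N, N) pairwise similarities
    gram_t = t @ t.transpose(1, 2)
    return (gram_s - gram_t).pow(2).mean()

# Only the similarity structure is anchored, so global representation
# quality can keep improving while dense features stay sharp.
loss = gram_anchoring_loss(torch.randn(4, 196, 768), torch.randn(4, 196, 768))
```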
Devouring 1.7 Billion Images: Meta's Mightiest Beast DINOv3 Goes Open Source, Redefining the CV Ceiling
36Ke· 2025-08-15 07:29
Core Insights
- Meta has developed DINOv3, a self-supervised model trained on 1.7 billion images with 7 billion parameters, which has already been used by NASA for Mars exploration [1][3][26]
- DINOv3 sets a new benchmark for computer vision performance, surpassing specialized solutions on various dense prediction tasks [1][10][19]
- The model is fully open-sourced, including the pre-trained backbone, adapters, and training and evaluation code, and is licensed for commercial use [6][26]

Performance Metrics
- DINOv3 shows significant improvements over its predecessors across benchmarks:
  - Segmentation on ADE-20k: 55.9 (up from 49.5 with DINOv2) [2]
  - Depth estimation error on NYUv2: 0.309 (improved from 0.372 with DINOv2) [2]
  - Video tracking on DAVIS: 83.3 (up from 76.6 with DINOv2) [2]
  - Instance retrieval on Met: 55.4 (up from 44.6 with DINOv2) [2]
  - Image classification on ImageNet ReaL: 90.4 (up from 86.1 with DINOv2) [2]

Applications and Impact
- DINOv3's self-supervised approach lets it work effectively where labeled data is scarce, such as satellite imagery and medical imaging [10][12][15]
- The model has been applied in real-world settings, such as monitoring deforestation and supporting ecological restoration efforts by the World Resources Institute [16]
- DINOv3 reduced the error of tree-canopy-height estimation in Kenya from 4.1 meters to 1.2 meters [17]

Model Flexibility and Deployment
- DINOv3's architecture is efficient and versatile, performing multiple visual tasks from a single frozen backbone without fine-tuning (see the sketch after this summary) [22][24]
- Meta has created a family of models ranging from lightweight to high-performance versions to suit various computational budgets, enabling practical deployment across applications [26]
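A minimal sketch of the frozen-backbone pattern described above: one forward pass through a backbone feeds several lightweight task heads, and only the heads would be trained. The modules below are toy placeholders, not DINOv3's actual API:

```python
import torch

backbone = torch.nn.Sequential(          # stand-in for a frozen DINOv3 backbone
    torch.nn.Conv2d(3, 768, 16, 16),     # 16x16 patchify to 768-dim tokens
    torch.nn.Flatten(2),
)
backbone.requires_grad_(False)           # frozen: never fine-tuned

# Several cheap heads share one forward pass through the backbone.
classify = torch.nn.Linear(768, 1000)    # e.g. image-level classification
depth = torch.nn.Linear(768, 1)          # e.g. per-patch depth regression

feats = backbone(torch.randn(1, 3, 224, 224)).transpose(1, 2)  # (1, N, 768)
logits = classify(feats.mean(dim=1))     # pooled features -> class logits
depth_map = depth(feats)                 # per-patch dense predictions
```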
Meta's Vision Foundation Model DINOv3 Returns as King: Self-Supervision Fully Surpasses Weak Supervision for the First Time, Open-Sourced for Commercial Use
机器之心· 2025-08-15 03:29
Core Viewpoint
- The article discusses advances in computer vision through the DINO series of models, emphasizing the field's transition from supervised to self-supervised learning paradigms [2][15][29]

Group 1: DINO Model Evolution
- DINO, DINOv2, and DINOv3 represent successive milestones in self-supervised learning, with DINOv3 achieving state-of-the-art performance across tasks without any labeled data [2][15][31]
- DINOv3 expands the training dataset to 1.7 billion images and the parameter count to 7 billion, significantly surpassing its predecessors [9][31][36]
- New techniques in DINOv3, such as Gram anchoring and RoPE, improve its ability to produce high-resolution dense features, addressing limitations observed in DINOv2 [18][24][28]

Group 2: Performance Metrics
- DINOv3 outperforms previous models across benchmarks, achieving a segmentation score of 55.9, a depth-estimation error of 0.309, and video-tracking accuracy of 83.3, showcasing superior dense-prediction performance [17][31]
- Image classification is also strong, with 90.4 accuracy on ImageNet ReaL, indicating robustness across applications [17][31]

Group 3: Practical Applications
- DINOv3 is used in real-world applications such as analyzing satellite images for environmental monitoring and supporting climate-finance processes [39][40]
- Its ability to perform well without fine-tuning makes it suitable for edge applications where multiple visual prediction tasks run simultaneously [34][36]

Group 4: Community Engagement and Accessibility
- Meta has open-sourced DINOv3, providing the complete backbone network and evaluation heads to facilitate further research and development [13][36]
- The model family includes distilled versions of various sizes to match different computational budgets, ensuring accessibility for researchers and developers [36][37]
DeepTiming: Market Timing Driven by Intraday Information and Similarity Learning
Minsheng Securities· 2025-07-31 09:02
Quantitative Models and Construction Methods

1. Model Name: Deep Learning Stock Return Prediction Model
- **Model Construction Idea**: A deep learning framework tailored to the current market environment. It integrates daily and minute-frequency inputs to predict stock returns and generates trading signals from historical rolling thresholds [1][10][22]
- **Model Construction Process**:
  - **Input Layer**: Combines 51 technical/sentiment daily features, 7 basic daily price-volume indicators, 10 enhanced style factors, and 52 minute-frequency features aggregated to daily frequency [22]
  - **Training Layer**: Utilizes meta-learning to adapt dynamically to new market data, avoiding overfitting to history [14]
  - **Output Layer**: Employs LinSAT neural networks to impose constraints on the output, enforcing objectives such as controlled style and industry exposures [18]
  - **Loss Function**: Multi-period mean squared error (MSE), used to stabilize predictions for timing strategies [22]
  - **Output Shape**: The multi-period return prediction \( y \) has shape \( (n, 1) \), where \( n \) is the number of stocks [22]
- **Model Evaluation**: Robust in adapting to market changes and controlling exposures, with significant predictive power for timing strategies [10][22]

2. Model Name: SimStock
- **Model Construction Idea**: SimStock uses self-supervised learning to predict stock similarity, incorporating both static and dynamic correlations. It leverages contrastive learning to capture time-series information beyond traditional industry and style classifications [2][47][48]
- **Model Construction Process**:
  - **Input**: Past 40-day price-volume data, Barra style factors, and capital-flow indicators [52]
  - **Positive and Negative Sample Construction**: Positive samples are generated as \( X_{pos} = \alpha X + (1-\alpha)X_{rand} \), where \( \alpha = 0.75 \) and \( X_{rand} \) is a randomly drawn feature sample [52]
  - **Embedding**: An LSTM initializes dynamic attention weights, and CLS tokens aggregate sequence information into stock attribute vectors [52]
  - **Similarity Calculation**: Stock similarity is measured by the cosine similarity between attribute vectors [52]
- **Model Evaluation**: Effectively identifies highly similar stocks, mostly within the same industry, though without clear patterns in market capitalization or sub-industry [56]

3. Model Name: Improved GRU Model with SimStock Integration
- **Model Construction Idea**: Enhances the GRU-based stock return prediction model by initializing its hidden state with SimStock-generated stock attribute vectors, improving stability across different stock types [57][59]
- **Model Construction Process**:
  - **Initialization**: SimStock attribute vectors replace the GRU model's initial hidden state [57]
  - **Training**: Retains the baseline GRU training setup, adjusted only to incorporate the new initialization [59]
- **Model Evaluation**: Shows improved predictive performance and stability, particularly for timing strategies across diverse stocks [60][63]

4. Model Name: Index Timing Model
- **Model Construction Idea**: Aggregates individual stock signals into index signals via market-cap-weighted predictions, followed by threshold-based signal generation [77]
- **Model Construction Process**:
  - **Aggregation**: Combines stock return predictions into an index return prediction using market-cap weights [77]
  - **Signal Generation**: Uses the 60th percentile of the past year's predictions as the buy threshold and the 40th percentile as the sell threshold (see the sketch after this summary) [77]
  - **Holding Period**: Maintains positions for at least 5 trading days to reduce turnover [77]
- **Model Evaluation**: Effective at generating excess returns, particularly in high-volatility sectors [79][82][84]

---

Model Backtest Results

1. Deep Learning Stock Return Prediction Model
- **Cumulative Excess Return**: 77% over 5 years [33]
- **Annualized Return**: 27% [33]
- **Excess Return vs. Stocks**: 11.3% (pre-cost) [33]

2. SimStock
- **Cumulative Excess Return**: 109% over 5 years [60]
- **Annualized Return**: 30% [60]
- **Excess Return vs. Stocks**: 14.8% (pre-cost) [60]
- **Daily Win Rate**: 57.4% [60]
- **Holding Probability**: 45.7% [60]

3. Index Timing Model
- **HS300**: Annualized Return 5.1%, Excess Return 5.6%, Max Drawdown 7.7% [79]
- **CSI500**: Annualized Return 12.4%, Excess Return 12.2%, Max Drawdown 7.1% [82]
- **CSI1000**: Annualized Return 15.1%, Excess Return 14.9%, Max Drawdown 11.3% [84]

4. Sector Timing
- **Best Sector**: Electric Power Equipment & New Energy, Annualized Return 36%, Excess Return 31.1% [101]

---

Quantitative Factors and Construction Methods

1. Factor Name: Reinforced Style Factor (PPO Model)
- **Factor Construction Idea**: Uses PPO reinforcement learning to predict market style preferences, yielding risk factors that are more interpretable and robust than those from traditional deep learning [12]
- **Factor Construction Process**:
  - **Input**: Traditional style factors and recent stock price-volume data [12]
  - **Reward Function**: Stability-penalized goodness-of-fit to market returns [12]
  - **Output**: An enhanced style factor representing AI-perceived market preferences [12]
- **Factor Evaluation**: Provides a stable and interpretable representation of market style dynamics [12]

---

Factor Backtest Results

1. Reinforced Style Factor
- **RankIC**: Weekly average of 4.5% since 2019 [36]
- **Annualized Return**: 23.2% for long-only portfolios, Excess Return 18.3% vs. CSI800 [36]
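A minimal sketch of the threshold rule described for the Index Timing Model, assuming a daily series of market-cap-weighted index-return predictions as input; the report's exact position management may differ:

```python
import numpy as np

def timing_signal(preds: np.ndarray, lookback: int = 252,
                  hold_min: int = 5) -> np.ndarray:
    """Go long when today's prediction exceeds the 60th percentile of the
    trailing year of predictions, exit below the 40th percentile, and hold
    each position at least `hold_min` trading days to limit turnover."""
    signal = np.zeros(len(preds), dtype=int)
    days_held = 0
    for i in range(lookback, len(preds)):
        window = preds[i - lookback:i]
        buy, sell = np.percentile(window, 60), np.percentile(window, 40)
        if days_held == 0 and preds[i] > buy:
            days_held = 1                  # open a long position
        elif days_held >= hold_min and preds[i] < sell:
            days_held = 0                  # close only after the minimum hold
        elif days_held > 0:
            days_held += 1                 # keep holding
        signal[i] = int(days_held > 0)
    return signal

sig = timing_signal(np.random.randn(1000))  # toy predictions for illustration
```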
Kaiming He Improves Saining Xie's REPA: Greatly Simplified, Still Strong Performance
机器之心· 2025-06-12 09:57
Core Viewpoint
- The article discusses the role of representation learning in generative models, introducing a new method called Dispersive Loss that integrates self-supervised learning into diffusion-based generative models without additional pre-training or external data [6][9][43]

Group 1: Diffusion Models and Representation Learning
- Diffusion models excel at modeling complex data distributions but have remained largely disconnected from representation learning [2]
- Their training objectives typically focus on reconstruction tasks such as denoising, with no explicit regularization of the learned representations [3]
- Representation learning, particularly self-supervised learning, is crucial for learning general representations applicable to various downstream tasks [4]

Group 2: Introduction of Dispersive Loss
- Dispersive Loss is a flexible, general plug-in regularizer that brings self-supervised learning into diffusion-based generative models [9]
- Its core idea is to add a regularization target on the model's internal representations, encouraging them to spread out in the latent space (see the sketch after this summary) [10][13]
- The method requires no additional layers or parameters, making it simple and self-contained [15][16]

Group 3: Comparison with Existing Methods
- Dispersive Loss needs no pre-training, external data, or extra model parameters, unlike REPA, which relies on pretrained models [7][41][43]
- It demonstrates that representation learning can benefit generative modeling without external information sources [13][43]
- In practice, adopting Dispersive Loss requires only minimal adjustments, such as specifying which intermediate layers to regularize [29]

Group 4: Performance Evaluation
- Experiments show Dispersive Loss consistently outperforms corresponding contrastive losses while avoiding the complexities of two-view sampling [33]
- The method improves results across models including DiT and SiT, with the largest gains on bigger models, where effective regularization matters most [36][37]
- Dispersive Loss also generalizes to one-step diffusion-based generative models, underscoring its versatility [44]
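A minimal sketch of an InfoNCE-style dispersive regularizer consistent with the description above: with no positive pairs, the objective reduces to pushing a batch's intermediate representations apart. The temperature, distance choice, and layer selection are assumptions to be tuned per the paper's ablations:

```python
import torch

def dispersive_loss(h: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Encourage intermediate representations within a batch to spread
    out: penalize pairs that sit close together in latent space."""
    h = h.flatten(1)                              # (B, D) intermediate activations
    d = torch.cdist(h, h).pow(2)                  # (B, B) squared pairwise distances
    return torch.log(torch.exp(-d / tau).mean())  # lower = more dispersed

# Plugged in as `loss = diffusion_loss + lam * dispersive_loss(hidden)`,
# where `hidden` is an intermediate block's output: no extra parameters,
# no second view, no pretrained encoder.
loss = dispersive_loss(torch.randn(64, 256))
```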
AI "Chemical Detective" Rapidly Resolves Unknown Molecular Structures
Ke Ji Ri Bao· 2025-05-28 23:43
Core Insights
- An international team led by the Czech Technical University has developed an AI molecular decoder named DreaMS, which can rapidly determine the structure of unknown molecules, with potential applications in drug development and the search for life in space [1][2]
- The research highlights that known natural molecules represent only a small fraction of what exists; many undiscovered molecules in plants, soil, and extraterrestrial environments may hold keys to new drugs and environmental solutions [1]
- DreaMS uses a learning approach that mimics how human infants acquire language, autonomously building a framework for interpreting molecular structure without hand-coded chemical rules [1]

Molecular Analysis Capabilities
- DreaMS can estimate the presence of specific molecular fragments or chemical elements, including fluorine, whose detection is crucial for modern pharmaceuticals and pesticides [2]
- The team trained DreaMS on fluorine-containing samples, overcoming a long-standing detection challenge in the field [2]