Self-Supervised Learning
LeCun's Last Paper at Meta
36Kr· 2025-11-14 03:04
Core Insights
- The article discusses Yann LeCun's recent paper on a self-supervised learning method called LeJEPA, widely seen as his farewell work at Meta as he departs the company [1][33].
- LeJEPA introduces a new framework that improves predictive performance by ensuring the embedding space follows a specific statistical distribution [2].

Group 1: LeJEPA Framework
- LeJEPA is built on isotropic Gaussian embeddings and addresses the representation-collapse issue of traditional JEPA frameworks, significantly improving model generalization [1][5].
- The framework uses Sketched Isotropic Gaussian Regularization (SIGReg) to achieve distribution matching, recasting the problem as a statistical hypothesis test [6][11].

Group 2: Experimental Validation
- Extensive experiments were conducted on large architectures such as ViT, ConvNeXt, and ResNet, with models approaching 1 billion parameters [8].
- Results indicate that LeJEPA outperforms existing methods while maintaining training simplicity and robustness, particularly on domain-specific datasets such as Galaxy10 and Food101 [10].

Group 3: Statistical Insights
- The research shows that an isotropic Gaussian embedding distribution minimizes bias and variance during training, improving stability and accuracy on downstream tasks [3][5]; an illustrative numerical check follows this summary.
- Non-isotropic distributions lead to higher bias and variance, and a range of experiments confirms the advantage of the isotropic Gaussian distribution [3].

Group 4: Future Directions
- Despite LeCun's departure from Meta, he is reportedly raising funds to establish a startup focused on advancing his work on world models, indicating ongoing contributions to the AI field [33][34].
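To make the bias/variance claim above concrete, here is a small, illustrative NumPy check; it is not taken from the paper, and the dimensions, sample counts, and covariance choices are arbitrary. With a linear probe fit by ordinary least squares, anisotropic embeddings inflate the variance of the estimated probe weights relative to isotropic embeddings of comparable total variance, because Var(w_hat) is proportional to sigma^2 (X^T X)^{-1} and blows up along nearly collapsed axes.

```python
# Illustrative check (not from the paper): OLS probe-weight variance under
# isotropic vs anisotropic embedding distributions.
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 256, 16, 0.1          # samples, embedding dim, label noise (arbitrary)
w_true = rng.normal(size=d)         # ground-truth linear probe

def probe_weight_variance(cov_diag):
    """Average variance of the OLS probe weights over repeated embedding draws."""
    estimates = []
    for _ in range(200):
        X = rng.normal(size=(n, d)) * np.sqrt(cov_diag)   # embeddings with given per-axis variance
        y = X @ w_true + sigma * rng.normal(size=n)       # linear downstream target plus noise
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)     # ordinary least squares probe
        estimates.append(w_hat)
    return np.var(np.stack(estimates), axis=0).mean()

iso_var = probe_weight_variance(np.ones(d))                   # isotropic embeddings
aniso_var = probe_weight_variance(np.linspace(0.01, 2.0, d))  # anisotropic: some axes nearly collapsed
print(f"isotropic probe-weight variance:   {iso_var:.2e}")
print(f"anisotropic probe-weight variance: {aniso_var:.2e}")  # noticeably larger
```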
LeCun's Last Paper at Meta? He Is Also Co-First Author. LeJEPA: Completing the Theoretical Puzzle of JEPAs
机器之心· 2025-11-14 01:33
Core Viewpoint
- The article discusses the development of LeJEPA, a new self-supervised learning framework that addresses the limitations of existing Joint Embedding Predictive Architectures (JEPAs) by providing a solid theoretical foundation and eliminating reliance on heuristic methods [4][5][8].

Group 1: Theoretical Foundation
- The research team established that the optimal embedding distribution for JEPAs is an isotropic Gaussian, which minimizes downstream prediction risk across a wide range of tasks [5].
- A novel distribution-matching objective called Sketched Isotropic Gaussian Regularization (SIGReg) was introduced to efficiently push the embeddings toward the ideal isotropic Gaussian distribution [6][8]; a hedged sketch of the idea follows this summary.
- LeJEPA combines the predictive objective of JEPA with SIGReg, yielding a statistically grounded solution that mitigates representation collapse [8][9].

Group 2: Practical Implementation
- LeJEPA is simple, robust, and high-performing thanks to its principled theoretical design, which removes the need for heuristics such as gradient stopping and teacher-student networks [9][11].
- The reference implementation requires only about 50 lines of PyTorch, making it easy to adopt and deploy [11][19].

Group 3: Experimental Validation
- LeJEPA was tested across more than 10 datasets and 60 architectures, matching or surpassing state-of-the-art results, including 79% accuracy on ImageNet-1K with ViT-H/14 [10].
- The framework performed especially well on domain-specific datasets, outperforming DINOv2-based transfer learning and indicating strong in-domain pre-training capability [10][33].

Group 4: Stability and Scalability
- LeJEPA remains stable across hyperparameters and architectures, and the recommended settings yield competitive performance even with small batch sizes [24][26].
- The design is architecture-agnostic, allowing high-quality representations to be learned across diverse model families [26][27].

Group 5: Semantic Structure Emergence
- LeJEPA's self-supervised training gives rise to semantic structure without explicit supervision, as evidenced by attention patterns that align with object boundaries and salient regions [41][43].
- The attention maps are temporally consistent, enabling unsupervised video segmentation and indicating that the learned features capture both spatial semantics and temporal structure [43].
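As a rough illustration of the SIGReg idea summarized above (project embeddings onto random unit directions and penalize, for each 1-D projection, an Epps-Pulley-style distance between its empirical characteristic function and that of a standard Gaussian), here is a minimal PyTorch sketch. The function names, the number of directions, and the frequency grid are assumptions for illustration, and the statistic is evaluated by simple quadrature rather than the paper's exact formulation.

```python
# Hedged sketch of a SIGReg-style regularizer; not the authors' released code.
import torch

def epps_pulley_penalty(z: torch.Tensor, t_max: float = 3.0, n_t: int = 17) -> torch.Tensor:
    """Differentiable Epps-Pulley-style distance between the empirical characteristic
    function of the 1-D sample `z` and the characteristic function of N(0, 1)."""
    t = torch.linspace(-t_max, t_max, n_t, device=z.device)    # frequency grid (illustrative)
    emp_re = torch.cos(z[:, None] * t[None, :]).mean(dim=0)    # real part of empirical CF
    emp_im = torch.sin(z[:, None] * t[None, :]).mean(dim=0)    # imaginary part of empirical CF
    target = torch.exp(-0.5 * t ** 2)                          # CF of N(0, 1), which is real
    weight = torch.exp(-0.5 * t ** 2)                          # Gaussian weighting of frequencies
    sq_err = (emp_re - target) ** 2 + emp_im ** 2
    return (sq_err * weight).sum() / weight.sum()

def sigreg_loss(embeddings: torch.Tensor, num_directions: int = 64) -> torch.Tensor:
    """Average the 1-D penalty over random unit directions (the 'sketching' step)."""
    d = embeddings.shape[1]
    dirs = torch.randn(num_directions, d, device=embeddings.device)
    dirs = dirs / dirs.norm(dim=1, keepdim=True)               # random unit directions
    projections = embeddings @ dirs.T                          # (batch, num_directions)
    return torch.stack([epps_pulley_penalty(projections[:, k])
                        for k in range(num_directions)]).mean()
```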
LeCun's Last Paper at Meta
量子位· 2025-11-13 11:52
Core Insights
- The article discusses LeJEPA, a self-supervised learning method developed by Yann LeCun's team and widely viewed as his farewell work at Meta [2][3][4].
- LeJEPA aims to address the representation-collapse issue of traditional JEPA frameworks by using isotropic Gaussian embeddings and introducing SIGReg regularization to improve model generalization [5][6].

Group 1: LeJEPA Overview
- LeJEPA is based on isotropic Gaussian embeddings, which effectively mitigate the representation-collapse problem and significantly improve generalization [5].
- The traditional JEPA framework often suffers from representation collapse, where the model maps all inputs to a single point and can no longer capture semantic differences [6].

Group 2: Impact of Embedding Distribution
- The study analyzed how the embedding distribution affects bias and variance through an ordinary-least-squares regression analysis, showing that an isotropic Gaussian distribution minimizes both during training [8][9].
- Compared with non-isotropic distributions, the isotropic Gaussian yields lower bias and variance, improving stability and accuracy on downstream tasks [9][11][13].

Group 3: SIGReg Regularization
- SIGReg (Sketched Isotropic Gaussian Regularization) is introduced to achieve distribution matching by recasting it as a hypothesis-testing problem [15][17].
- It combines univariate directional tests with the Epps-Pulley test to assess how well the embedding distribution matches the target isotropic Gaussian [16][17].

Group 4: High-Dimensional Challenges
- SIGReg sidesteps the computational challenges of distribution matching in high-dimensional spaces, and combining SIGReg with the predictive loss keeps mini-batch training efficient and stable [19][21].
- The total loss in LeJEPA is a weighted sum of the SIGReg loss and the predictive loss, with a hyperparameter λ balancing their contributions [22]; a hedged sketch of such a training step follows this summary.

Group 5: Experimental Validation
- Extensive experiments on large architectures, including ViT, ConvNeXt, ResNet, MaxViT, and Swin Transformer, showed that LeJEPA outperforms existing methods while keeping training simple and robust [20][23].
- On domain-specific datasets such as Galaxy10 and Food101, LeJEPA surpassed DINOv2-based transfer learning when pre-trained directly on the target data [24].

Group 6: JEPA Framework Evolution
- JEPA (Joint-Embedding Predictive Architecture) has evolved over the three years since LeCun introduced it, focusing on improving model expressiveness and reasoning through joint prediction [31][28].
- Unlike generative models, JEPA captures the dependencies between x and y without explicitly generating predictions of y [32].

Group 7: Future Directions
- Although LeJEPA marks the end of LeCun's research at Meta, it does not mark the end of JEPA's development: LeCun is reportedly raising funds to establish a startup focused on world models [72][71].
- His departure from Meta, while not entirely graceful, caps a significant period of achievement in AI research and a lasting contribution to the field [74][79].
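Putting the pieces above together, a LeJEPA-style training step can be pictured as a JEPA predictive loss plus a λ-weighted SIGReg term. The sketch below is a hedged illustration, not the authors' released code: `encoder`, `predictor`, the λ value, and the `sigreg_loss` function (as sketched earlier in this digest) are placeholders.

```python
# Hedged sketch of combining a JEPA predictive loss with a SIGReg term.
import torch
import torch.nn.functional as F

def lejepa_style_step(encoder, predictor, view_a, view_b, sigreg_loss, lambda_=0.05):
    z_a = encoder(view_a)                       # embedding of one augmented view
    z_b = encoder(view_b)                       # embedding of the other view
    pred_b = predictor(z_a)                     # predict one embedding from the other
    prediction_loss = F.mse_loss(pred_b, z_b)   # JEPA-style predictive objective
    reg = sigreg_loss(torch.cat([z_a, z_b]))    # pull embeddings toward an isotropic Gaussian
    return prediction_loss + lambda_ * reg      # weighted total, per the summary above
```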
Despite a Rough Time at Meta, LeCun Keeps Publishing. New Work: JEPAs Don't Just Learn Features, They Also Accurately Perceive Data Density
36Kr· 2025-10-09 11:39
Core Insights
- Yann LeCun's team has found that JEPAs (Joint Embedding Predictive Architecture), a family of self-supervised models, have a hidden ability to learn data density, i.e. how common or typical a data sample is [1][3].
- This finding challenges the long-held belief that JEPAs only learn features and are unrelated to data density [3][4].

Summary by Sections

JEPAs Overview
- JEPAs are a self-supervised learning framework that learns feature patterns from vast amounts of data without manual labeling, making them efficient for tasks such as image recognition and cross-modal matching [6][10].

Key Findings
- The key discovery is that JEPAs learn data density through their anti-collapse mechanism, which was previously thought only to prevent feature collapse [8][10].
- The ability to perceive data density is a necessary by-product of training: to satisfy the training constraints, the model must respond to small changes in the samples [8][10].

Practical Application
- The team introduced a tool called JEPA-SCORE, which quantifies data density by scoring how typical a sample is: a higher score indicates a more typical sample, while a lower score suggests a rare or anomalous one [10][11]. A hedged, simplified density proxy is sketched after this summary.
- JEPA-SCORE is versatile and can be applied across datasets and JEPA architectures without additional training [10][11].

Experimental Validation
- Experiments showed that JEPA-SCORE reliably identifies typical and rare samples both on ImageNet and on previously unseen datasets, confirming its reliability and general applicability [11][13].

Research Team
- The work was a collaboration among four core researchers at Meta's FAIR, including Randall Balestriero, Nicolas Ballas, and Michael Rabbat, each with significant backgrounds in AI and deep learning [20][22][23].
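The summary above does not give JEPA-SCORE's exact formula, so the sketch below uses a simple, hypothetical proxy for "how typical a sample is": the negative mean distance to its k nearest neighbours in a frozen embedding space. It only illustrates the idea of scoring sample commonality from embeddings alone; it is not the paper's definition of JEPA-SCORE.

```python
# Hypothetical density proxy in embedding space; not the paper's JEPA-SCORE.
import torch
import torch.nn.functional as F

def density_proxy_scores(embeddings: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Higher score = denser (more typical) region of embedding space."""
    z = F.normalize(embeddings, dim=1)                  # unit-norm embeddings
    dists = torch.cdist(z, z)                           # pairwise distances
    dists.fill_diagonal_(float("inf"))                  # ignore self-distance
    knn_dists, _ = dists.topk(k, dim=1, largest=False)  # k nearest neighbours
    return -knn_dists.mean(dim=1)                       # negative mean k-NN distance
```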
Foundation Models for Autonomous Driving Should Be Capability-Oriented, Not Limited to the Methods Themselves
自动驾驶之心· 2025-09-16 23:33
Core Insights
- The article discusses the transformative impact of foundation models on autonomous-driving perception, shifting from task-specific deep learning models to versatile architectures trained on vast, diverse datasets [2][4].
- It introduces a classification framework centered on four core capabilities required for robust performance in dynamic driving environments: general knowledge, spatial understanding, multi-sensor robustness, and temporal reasoning [2][5].

Group 1: Introduction and Background
- Autonomous-driving perception enables vehicles to interpret their surroundings in real time, covering key tasks such as object detection, semantic segmentation, and tracking [3].
- Traditional task-specific models scale poorly and generalize badly, particularly in "long-tail scenarios" where rare but critical events occur [3][4].

Group 2: Foundation Models
- Foundation models, trained with self-supervised or unsupervised learning on large-scale datasets, learn general representations that transfer to a variety of downstream tasks [4][5].
- They offer clear advantages for autonomous driving: inherent generalization, efficient transfer learning, and reduced reliance on labeled data [4][5].

Group 3: Key Capabilities
- The four key dimensions for foundation models tailored to autonomous-driving perception are:
  1. General knowledge: adapting to a wide range of driving scenarios, including rare ones [5][6]
  2. Spatial understanding: a deep grasp of 3D spatial structure and relationships [5][6]
  3. Multi-sensor robustness: maintaining performance under varying environmental conditions and sensor failures [5][6]
  4. Temporal reasoning: capturing temporal dependencies and predicting future states of the environment [6]

Group 4: Integration and Challenges
- The article outlines three mechanisms for integrating foundation models into autonomous-driving stacks: feature-level distillation, pseudo-label supervision, and direct integration [37][40]; a hedged sketch of feature-level distillation follows this summary.
- Deployment challenges include effective domain adaptation, hallucination risks, and meeting real-time efficiency requirements [58][61].

Group 5: Future Directions
- The article stresses the need for further research on foundation models to improve their safety and effectiveness in autonomous-driving systems, addressing current limitations and exploring new methodologies [2][5][58].
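A hedged sketch of the "feature-level distillation" mechanism mentioned above: a lightweight driving-perception backbone is trained so that its features match those of a frozen foundation model on unlabeled frames. All module names, dimensions, and the choice of cosine distance are illustrative assumptions, not a specific system from the survey.

```python
# Hedged sketch of feature-level distillation from a frozen foundation model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistiller(nn.Module):
    def __init__(self, student: nn.Module, teacher: nn.Module,
                 student_dim: int, teacher_dim: int):
        super().__init__()
        self.student = student
        self.teacher = teacher.eval()                    # frozen foundation-model teacher
        for p in self.teacher.parameters():
            p.requires_grad_(False)
        self.proj = nn.Linear(student_dim, teacher_dim)  # align feature dimensions

    def distillation_loss(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            target = self.teacher(images)                # (B, teacher_dim) teacher features
        pred = self.proj(self.student(images))           # (B, teacher_dim) student features
        # cosine distance is one common choice; MSE is another
        return 1.0 - F.cosine_similarity(pred, target, dim=-1).mean()
```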
SceneSplat: Scene Understanding and Vision-Language Pre-Training on 3DGS, a Leap Toward 3D Gaussians That "Understand Human Language"
机器之心· 2025-09-07 08:21
Core Insights
- The article introduces SceneSplat, the first end-to-end large-scale 3D indoor scene-understanding method that operates natively on 3D Gaussian Splatting (3DGS) scenes [2][6].
- A self-supervised learning scheme unlocks rich 3D feature learning from unlabeled scenes, addressing the lack of models that can learn semantics directly from 3D data [2][6].
- The SceneSplat-7K dataset, consisting of 7,916 scenes drawn from seven existing datasets, enables effective training and evaluation of SceneSplat [2][6].

Dataset Construction
- SceneSplat-7K contains 7,916 processed 3DGS scenes with a total of 11.27 billion Gaussians, an average of roughly 1.42 million per scene [6][7].
- Building the dataset required compute equivalent to 150 days on L4 GPUs, with high reconstruction quality (PSNR 29.64 dB, average Depth-L1 0.035 m) [6][7].

Semantic Annotation
- A stable and fast pipeline annotates semantic information in 3DGS, using SAMv2 for object-level segmentation and SigLIP2 to extract vision-language features [8][10].
- The pre-trained encoder learns rich semantic representations from the 3DGS parameters and their neighborhoods alone, eliminating the need for 2D fusion at inference time [8][10].

Training Methodology
- Two training routes are provided: vision-language pre-training for labeled data and self-supervised training for unlabeled data, maximizing what can be learned from unlabeled scenes [12][14]; a hedged sketch of the vision-language objective follows this summary.
- The model uses a hierarchical Transformer architecture with Gaussian tokens and neighborhood attention to regress semantic vectors effectively [15].

Experimental Results
- SceneSplat achieves state-of-the-art (SOTA) results in zero-shot semantic segmentation on datasets such as ScanNet200, ScanNet++, and Matterport3D [21][22].
- Quantitative experiments show significant improvements in mean Intersection over Union (mIoU) and mean accuracy (mAcc) across datasets, demonstrating the model's robustness [22][23].

Future Work
- The SceneSplat-7K dataset is being expanded to SceneSplat-49K, with ongoing benchmarking of 3DGS and semantic integration across multiple datasets [31].
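A minimal sketch of the vision-language pre-training objective described above: a 3DGS encoder predicts one semantic vector per Gaussian, which is regressed against precomputed vision-language features (for example, lifted from SigLIP2) with a cosine loss; at inference, labels can be read off by comparing against text embeddings. Shapes, names, and the exact loss are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of per-Gaussian semantic regression and zero-shot labeling.
import torch
import torch.nn.functional as F

def semantic_regression_loss(pred_features: torch.Tensor,
                             target_features: torch.Tensor) -> torch.Tensor:
    """pred_features: (N_gaussians, D) from the 3DGS encoder.
    target_features: (N_gaussians, D) vision-language features lifted to 3D."""
    pred = F.normalize(pred_features, dim=-1)
    target = F.normalize(target_features, dim=-1)
    return 1.0 - (pred * target).sum(dim=-1).mean()      # mean cosine distance

def zero_shot_labels(pred_features: torch.Tensor,
                     text_embeddings: torch.Tensor) -> torch.Tensor:
    """Assign each Gaussian the class whose text embedding is most similar."""
    sims = F.normalize(pred_features, dim=-1) @ F.normalize(text_embeddings, dim=-1).T
    return sims.argmax(dim=-1)                           # (N_gaussians,) class indices
```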
The Most Comprehensive Survey of Speech Separation Yet: Tsinghua and Collaborating Teams Analyze 200+ Papers and Systematically Dissect the "Cocktail Party Problem"
机器之心· 2025-09-03 04:33
Core Viewpoint
- The article reviews the rapid advances in speech separation, in particular progress on the "cocktail party problem" driven by deep neural networks (DNNs) [2].

Group 1: Overview of Speech Separation
- Speech separation is crucial for improving speech clarity in complex acoustic environments and serves as a preprocessing step for other speech-processing tasks [2].
- Researchers from several institutions surveyed more than 200 representative papers, analyzing recent methods along multiple dimensions: deep learning approaches, model architectures, evaluation metrics, datasets, and open challenges [2].

Group 2: Problem Definition
- The authors split speech-separation tasks into known- and unknown-speaker settings, depending on whether the number of speakers is fixed, and highlight the challenges of each scenario [6].
- In the unknown-speaker setting, dynamically determining the number of output channels and balancing separation quality against when to stop are significant challenges [6].

Group 3: Learning Paradigms
- The survey compares supervised and unsupervised approaches, detailing the advantages and limitations of each for speech separation [10].
- Supervised learning is currently the most mature paradigm, training on paired mixtures and clean sources, while unsupervised methods explore training directly on unlabeled mixtures [12].

Group 4: Model Architectures
- The core components and evolution of separation models are summarized as encoder, separation network, and decoder [14].
- RNN-based, CNN-based, and Transformer architectures are discussed, with their respective strengths in capturing long-range dependencies and extracting local features [17][18].

Group 5: Evaluation Metrics
- A comprehensive metric suite is needed to assess model performance, spanning both subjective and objective measures [19].
- The survey compares metrics and their trade-offs: subjective evaluations reflect human listening experience, while objective metrics are efficient but emphasize different aspects [20]. (A hedged SI-SDR sketch follows this summary.)

Group 6: Datasets
- Publicly available speech-separation datasets are summarized and categorized into single-channel and multi-channel formats [22].
- Understanding the coverage and difficulty of these datasets helps researchers pick suitable benchmarks and spot gaps in current research [22].

Group 7: Performance Comparison
- The authors compare models on standard datasets, illustrating the progress of speech-separation technology over recent years [24].
- Notable gains in metrics such as SDR are highlighted, with advanced architectures reaching SDR levels around 20 dB [24][25].

Group 8: Tools and Platforms
- The survey introduces open-source tools and platforms for building and applying speech-separation systems, comparing their functionality and limitations [28].
- These tools give researchers convenient interfaces to reproduce results and build prototype systems, accelerating the transition from research to application [28].

Group 9: Challenges and Future Directions
- Open challenges include long-duration audio processing, mobile and embedded deployment, real-time speech separation, and the rise of generative methods [32][33].
- Integrating pre-training techniques and focusing on target-speaker extraction are identified as key directions for future work [33].
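As a concrete reference for the SDR-style numbers quoted above, here is a hedged sketch of scale-invariant SDR (SI-SDR) for one estimate/target pair, plus a two-speaker permutation-invariant wrapper often used when speaker order is ambiguous. This is a generic textbook formulation, not tied to any specific model in the survey.

```python
# Generic SI-SDR metric and a 2-speaker permutation-invariant wrapper.
import torch

def si_sdr(estimate: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Scale-invariant SDR in dB for 1-D waveforms."""
    target = target - target.mean()
    estimate = estimate - estimate.mean()
    scale = (estimate * target).sum() / (target.pow(2).sum() + eps)
    s_target = scale * target                              # projection onto the target
    noise = estimate - s_target                            # residual distortion
    return 10 * torch.log10(s_target.pow(2).sum() / (noise.pow(2).sum() + eps))

def pit_si_sdr_2spk(est1, est2, ref1, ref2) -> torch.Tensor:
    """Permutation-invariant SI-SDR for two speakers: keep the better assignment."""
    perm_a = 0.5 * (si_sdr(est1, ref1) + si_sdr(est2, ref2))
    perm_b = 0.5 * (si_sdr(est1, ref2) + si_sdr(est2, ref1))
    return torch.maximum(perm_a, perm_b)
```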
Zuckerberg Open-Sources Again: A 7B Model Sets a Self-Supervised Learning SOTA
量子位· 2025-08-16 02:00
Core Viewpoint
- Meta has released a new open-source vision model, DINOv3, demonstrating that self-supervised models can outperform weakly supervised ones across a wide range of tasks [1][3].

Group 1: Model Overview
- DINOv3 is trained without labels, scaling the dataset to 1.7 billion images and the model to 7 billion parameters, which makes it well suited to applications where labeled data is scarce or costly [1][6].
- The model excels in label-scarce and cross-domain scenarios, achieving state-of-the-art (SOTA) results on the three core computer-vision tasks: classification, detection, and segmentation [3][22].

Group 2: Training Methodology
- Training proceeds in two main phases, with large-scale self-supervised training used to learn high-quality visual representations [8].
- A new technique called "Gram anchoring" counteracts the degradation of dense feature maps during training, significantly improving local feature quality without hurting global features [15][20]; a hedged sketch of the idea follows this summary.

Group 3: Performance Metrics
- DINOv3 outperforms its predecessor DINOv2 on a range of benchmarks, for example 55.9 on ADE-20k segmentation and 90.4 on ImageNet ReaL classification [4].
- The training strategy includes RoPE-box jittering, improving robustness to variations in resolution, scale, and aspect ratio while keeping training stable [13][14].

Group 4: Practical Applications
- DINOv3 generalizes strongly, for instance analyzing satellite imagery to detect tree loss and land-use change, supporting global forest restoration and agricultural management [27][28].
- The model achieves SOTA results across multiple remote-sensing tasks, including semantic geospatial tasks and high-resolution semantic tasks [29].

Group 5: Future Implications
- The DINO series reflects Meta's continued exploration of self-supervised methods in vision, marking significant progress in large-scale self-supervised training [30][38].
- DINOv3 is expected to accelerate existing applications and unlock new scenarios across industries including healthcare, environmental monitoring, autonomous driving, retail, and manufacturing [39].
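A hedged sketch of the "Gram anchoring" idea summarized above: keep the pairwise patch-similarity structure (the Gram matrix) of the current model's dense features close to that of an earlier, higher-quality reference model. The exact formulation in DINOv3 may differ; this is an illustrative version with assumed shapes.

```python
# Hedged sketch of a Gram-anchoring-style loss on dense patch features.
import torch
import torch.nn.functional as F

def gram_anchoring_loss(student_patches: torch.Tensor,
                        anchor_patches: torch.Tensor) -> torch.Tensor:
    """student_patches, anchor_patches: (num_patches, dim) dense features of one image."""
    s = F.normalize(student_patches, dim=-1)
    a = F.normalize(anchor_patches, dim=-1)
    gram_s = s @ s.T                                # patch-to-patch similarities (current model)
    gram_a = a @ a.T                                # same structure from the anchor/reference model
    return (gram_s - gram_a).pow(2).mean()          # Frobenius-style mismatch
```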
Having Swallowed 1.7 Billion Images, Meta's Strongest Behemoth DINOv3 Goes Open Source and Redefines the Ceiling of Computer Vision
36Kr· 2025-08-15 07:29
Core Insights
- Meta has developed DINOv3, a self-supervised model trained on 1.7 billion images with 7 billion parameters, which NASA has already put to use in Mars-exploration work [1][3][26].
- DINOv3 sets a new benchmark in computer-vision performance, surpassing specialized solutions on a range of dense prediction tasks [1][10][19].
- The model is fully open-sourced, including the pre-trained backbone, adapters, and the training and evaluation code, and is licensed for commercial use [6][26].

Performance Metrics
- DINOv3 improves markedly over its predecessors on standard benchmarks:
  - Segmentation on ADE-20k: 55.9 (up from 49.5 with DINOv2) [2]
  - Depth estimation on NYUv2: 0.309 (improved from 0.372 with DINOv2) [2]
  - Video tracking on DAVIS: 83.3 (up from 76.6 with DINOv2) [2]
  - Instance retrieval on Met: 55.4 (up from 44.6 with DINOv2) [2]
  - Image classification on ImageNet ReaL: 90.4 (up from 86.1 with DINOv2) [2]

Applications and Impact
- DINOv3's self-supervised approach lets it work well where labeled data is scarce, such as satellite imagery and medical imaging [10][12][15].
- The model is already applied in real-world settings, for example monitoring deforestation and supporting ecological restoration efforts by the World Resources Institute [16].
- For tree-canopy height estimation in Kenya, DINOv3 reduced the measurement error from 4.1 meters to 1.2 meters [17].

Model Flexibility and Deployment
- DINOv3's architecture is efficient and versatile, enabling multiple visual tasks without fine-tuning [22][24].
- Meta has released a family of models, from lightweight to high-performance versions, to cover different compute budgets and ensure practical deployment across applications [26].
Meta's Vision Foundation Model DINOv3 Returns as King: Self-Supervision Fully Surpasses Weak Supervision for the First Time, Open-Sourced for Commercial Use
机器之心· 2025-08-15 03:29
Core Viewpoint
- The article reviews recent advances in computer vision through the lens of the DINO model series, emphasizing the shift from supervised to self-supervised learning paradigms in AI [2][15][29].

Group 1: DINO Model Evolution
- DINO, DINOv2, and DINOv3 mark successive milestones in self-supervised learning, with DINOv3 achieving state-of-the-art performance across many tasks without any labeled data [2][15][31].
- DINOv3 scales the training dataset to 1.7 billion images and the model to 7 billion parameters, a substantial step up from its predecessors [9][31][36].
- New techniques in DINOv3, such as Gram anchoring and RoPE, improve its ability to produce high-resolution dense features, addressing limitations seen in DINOv2 [18][24][28].

Group 2: Performance Metrics
- DINOv3 outperforms previous models on multiple benchmarks, reaching 55.9 on segmentation, 0.309 on depth estimation, and 83.3 on video-tracking accuracy, underscoring its strength on dense prediction tasks [17][31].
- On image classification it reaches 90.4 on ImageNet ReaL, showing robustness across application types [17][31].

Group 3: Practical Applications
- DINOv3 is used in real-world settings such as analyzing satellite images for environmental monitoring and supporting climate-finance processes, demonstrating its practical impact [39][40].
- Because it performs well without fine-tuning, the model suits edge applications where several visual prediction tasks must run simultaneously [34][36]; a hedged sketch of this frozen-backbone pattern follows this summary.

Group 4: Community Engagement and Accessibility
- Meta has open-sourced DINOv3, releasing the complete backbone network and evaluation heads for community use and facilitating further research and development [13][36].
- The model family includes distilled variants of various sizes to fit different compute budgets, keeping the work accessible to researchers and developers [36][37].
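A hedged sketch of the deployment pattern described above: one frozen self-supervised backbone feeds several lightweight task heads, so multiple visual predictions can be served without fine-tuning the backbone. The `backbone` object and its output shapes are placeholders, not DINOv3's actual API.

```python
# Hedged sketch of a frozen backbone shared by multiple lightweight task heads.
import torch
import torch.nn as nn

class FrozenBackboneMultiHead(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int,
                 num_classes: int, num_seg_classes: int):
        super().__init__()
        self.backbone = backbone.eval()
        for p in self.backbone.parameters():
            p.requires_grad_(False)                           # backbone stays frozen
        self.cls_head = nn.Linear(feat_dim, num_classes)      # linear probe for classification
        self.seg_head = nn.Conv2d(feat_dim, num_seg_classes, kernel_size=1)  # dense head

    def forward(self, images: torch.Tensor):
        with torch.no_grad():
            # Assumed outputs: a global feature (B, D) and a dense map (B, D, H, W).
            global_feat, dense_feat = self.backbone(images)
        return self.cls_head(global_feat), self.seg_head(dense_feat)
```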