JEPAs
Despite being put through the wringer at Meta, LeCun keeps publishing papers. New work: JEPAs don't just learn features, they can precisely perceive data density
36Kr · 2025-10-09 11:39
Core Insights
- Yann LeCun's team has discovered that JEPAs (Joint Embedding Predictive Architectures), a family of self-supervised models, have a hidden ability to learn data density, i.e., how typical a sample is relative to the rest of the data [1][3]
- This finding challenges the long-held belief that JEPAs learn only features and are unrelated to data density [3][4]

Summary by Sections

JEPAs Overview
- JEPAs are a self-supervised learning framework that learns feature patterns autonomously from large amounts of unlabeled data, making them efficient for tasks such as image recognition and cross-modal matching [6][10]

Key Findings
- The breakthrough discovery is that JEPAs learn data density precisely through their anti-collapse mechanism, which was previously thought only to prevent feature collapse [8][10]
- Perceiving data density is a necessary outcome of the training process: to satisfy the training constraints, the model must respond to small changes in its input samples [8][10]

Practical Application
- The team introduced a key tool called JEPA-SCORE, which quantifies data density by scoring how typical a sample is: a higher score indicates a more typical sample, while a lower score suggests a rare or anomalous one [10][11]
- JEPA-SCORE is versatile and can be applied across various datasets and JEPA architectures without additional training [10][11]

Experimental Validation
- Experiments demonstrated that JEPA-SCORE reliably identifies typical and rare samples both in ImageNet and in unfamiliar datasets, confirming its general applicability [11][13]

Research Team
- The research was a collaborative effort by four core researchers from Meta's FAIR, including Randall Balestriero, Nicolas Ballas, and Michael Rabbat, each with significant backgrounds in AI and deep learning [20][22][23]
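The article says JEPA-SCORE ranks samples by typicality (high score = typical, low score = rare or anomalous) but does not give a selection procedure. As an illustration only, a minimal sketch of how such scores could flag rare samples; the function name `flag_rare` and the percentile cutoff are my own assumptions, not from the article:

```python
import numpy as np

def flag_rare(scores, pct=5.0):
    """Mark samples whose density score falls below the pct-th percentile.

    `scores`: per-sample density scores (higher = more typical),
    e.g. JEPA-SCORE values. `pct`: percentile cutoff for "rare".
    """
    threshold = np.percentile(scores, pct)
    return scores < threshold

# Toy scores: one clear outlier among otherwise typical samples.
scores = np.array([3.1, 2.8, 3.0, 2.9, -7.5, 3.2])
rare_mask = flag_rare(scores, pct=20.0)
```

On this toy input only the outlier score (-7.5) falls below the 20th-percentile threshold, so only that sample is flagged.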
Despite being put through the wringer at Meta, LeCun keeps publishing papers! New work: JEPAs don't just learn features, they can precisely perceive data density
量子位 (QbitAI) · 2025-10-09 04:52
Core Insights
- The article discusses a new research paper from Yann LeCun's team revealing that JEPAs (Joint Embedding Predictive Architectures), a self-supervised model family, have a hidden capability to learn data "density" [2][5][6]
- This finding challenges the long-held belief that JEPAs excel only at feature extraction and are unrelated to data density [7]

Group 1: Key Findings
- JEPAs autonomously learn how common data samples are during training, allowing them to assess a sample's typicality without any additional modification [6][11]
- The core discovery is that the anti-collapse mechanism enables precise learning of data density, an effect that was previously underestimated [11][12]
- The research highlights that when a JEPA outputs Gaussian embeddings, it must perceive data density through the Jacobian matrix of the encoder, making the learning of data density an inherent result of the training process [11]

Group 2: Practical Applications
- The team introduced a key tool called JEPA-SCORE, which quantifies data density by scoring how typical a sample is [14][15]
- JEPA-SCORE is versatile and can be applied across various datasets and JEPA architectures without requiring additional training [16][17]
- Experiments demonstrated that JEPA-SCORE effectively identifies typical and rare samples across different datasets, confirming its reliability and general applicability [18]

Group 3: Research Team
- The research was a collaborative effort by four core researchers from Meta's FAIR, including Randall Balestriero, Nicolas Ballas, and Michael Rabbat, each with significant backgrounds in AI and deep learning [26][28][30][32][34][36]
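The claim that Gaussian embeddings force the model to perceive density "through the Jacobian matrix" matches the standard change-of-variables identity: if z = f(x) is pushed toward N(0, I), then log p(x) ≈ log N(f(x); 0, I) + log|det J_f(x)|. A minimal sketch of a density score built on that identity; the function `jepa_density_score`, the toy linear encoder, and the det(J Jᵀ) form are illustrative assumptions, not the paper's exact JEPA-SCORE formula:

```python
import numpy as np

def jepa_density_score(x, encoder, jacobian):
    """Change-of-variables log-density sketch: if training pushes
    z = f(x) toward N(0, I), then
        log p(x) ~ log N(f(x); 0, I) + 0.5 * log det(J J^T),
    where J is the encoder's Jacobian at x."""
    z = encoder(x)
    J = np.atleast_2d(jacobian(x))           # (d_z, d_x) Jacobian
    log_gauss = -0.5 * (z @ z) - 0.5 * z.size * np.log(2 * np.pi)
    _, logdet = np.linalg.slogdet(J @ J.T)   # numerically stable log|det|
    return log_gauss + 0.5 * logdet

# Toy linear "encoder": z = W x, so the Jacobian is the constant W.
W = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.3],
              [0.1, 0.0, 1.0]])
encoder = lambda x: W @ x
jacobian = lambda x: W

typical_score = jepa_density_score(np.array([0.1, 0.0, -0.1]), encoder, jacobian)
rare_score = jepa_density_score(np.array([5.0, -4.0, 6.0]), encoder, jacobian)
```

With the Jacobian term constant for a linear encoder, the score is driven by how close the embedding sits to the Gaussian mode, so the typical sample scores strictly higher than the outlier.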