Self-Supervised Learning
Institute of Software Proposes Mini-Batch Data Sampling Strategy
Jing Ji Guan Cha Wang · 2025-05-27 07:50
Core Insights
- A research team from the Institute of Software, Chinese Academy of Sciences, proposed a mini-batch data sampling strategy that eliminates the interference of unobservable semantic variables on representation learning, enhancing the out-of-distribution generalization ability of self-supervised learning models [1][2]

Group 1: Research Findings
- Out-of-distribution generalization refers to a model's performance on test data whose distribution differs from the training data, which is crucial for remaining effective on unseen data [1]
- The study found that self-supervised learning models are affected by unobservable semantic variables during training, which weakens their out-of-distribution generalization ability [1]

Group 2: Methodology
- The proposed strategy uses causal effect estimation techniques to eliminate the confounding effect of unobservable semantic variables [1]
- By learning a latent variable model, the strategy estimates the posterior probability distribution of the unobservable semantic variables given an "anchor" sample, termed the balance score [1]
- Samples with similar or close balance scores are grouped into the same mini-batch, ensuring that the unobservable semantic variables are conditionally independent of the "anchor" samples within each batch (a minimal sketch of this grouping rule follows this summary) [1]

Group 3: Experimental Results
- Extensive experiments on benchmark datasets showed that the sampling strategy improved the performance of mainstream self-supervised learning methods by at least 2% across various evaluation tasks [2]
- In classification tasks on ImageNet100 and ImageNet, both Top-1 and Top-5 accuracy surpassed state-of-the-art self-supervised methods [2]
- In semi-supervised classification tasks, Top-1 and Top-5 accuracy increased by over 3% and 2%, respectively [2]
- The strategy also provided stable gains in average precision for object detection and instance segmentation transfer learning tasks [2]
- Performance improvements exceeded 5% on few-shot transfer learning tasks on datasets such as Omniglot, miniImageNet, and CIFAR-FS [2]
- The work was accepted to the International Conference on Machine Learning (ICML 2025), a top-tier academic conference in artificial intelligence [2]
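
The grouping rule described under Methodology can be pictured with a minimal sketch. Everything here is an illustrative assumption, not the authors' released code: `scores` stands in for the balance scores produced by the paper's latent variable model, and the batching rule simply places anchors with adjacent scores in the same mini-batch.

```python
# Hypothetical sketch of balance-score-based mini-batch sampling.
# `scores` stands in for the posterior p(unobserved semantics | anchor)
# estimated by a latent variable model; names and the grouping rule are
# illustrative assumptions, not the released implementation.
import numpy as np

def make_batches_by_balance_score(scores: np.ndarray, batch_size: int):
    """Group sample indices so each mini-batch holds similar balance scores."""
    order = np.argsort(scores)               # sort anchors by balance score
    return [order[i:i + batch_size]           # adjacent scores end up together
            for i in range(0, len(order), batch_size)]

# toy usage: 10 anchors, mini-batches of 4
rng = np.random.default_rng(0)
scores = rng.random(10)
for batch in make_batches_by_balance_score(scores, batch_size=4):
    print(batch, scores[batch].round(2))
```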
New Open-Source Work from Yi Ma's Team at HKU and Collaborators: Reshaping the Visual Self-Supervised Learning Paradigm with Coding Rate Regularization, "Less Is More"
量子位 · 2025-03-08 03:35
Core Viewpoint
- The article introduces SimDINO and SimDINOv2, two new visual pre-training models developed by researchers from several institutions, which simplify the training process of the existing DINO and DINOv2 models while improving their performance [1][5][12]

Group 1: Model Development
- SimDINO and SimDINOv2 are designed to address the complexities of DINO and DINOv2, currently the leading models in visual pre-training [2][4]
- The new models use coding rate regularization to simplify training and improve robustness and performance [12][16]
- The core idea is to remove complex empirical design components from the original DINO and DINOv2 training pipelines, making the models easier to train and implement [12][18]

Group 2: Methodology
- The coding rate regularization term prevents representation collapse, a significant issue in the original models (a minimal sketch of such a term follows this summary) [14][17]
- SimDINO retains the EMA self-distillation scheme and multi-view data augmentation from DINO, but modifies the contrastive learning term to use Euclidean distance or cosine similarity instead of high-dimensional projections [18][19]
- SimDINOv2 further simplifies the iBOT mechanism introduced in DINOv2, improving the model's efficiency [19]

Group 3: Experimental Validation
- Extensive experiments on datasets including ImageNet-1K, COCO val2017, and ADE20K show that SimDINO and SimDINOv2 outperform the DINO series in computational efficiency, training stability, and downstream task performance [22][23]
- In linear segmentation evaluations, SimDINO reached an mIoU of 33.7 and an mAcc of 42.8, while SimDINOv2 reached an mIoU of 36.9 and an mAcc of 46.5, a significant improvement over DINO and DINOv2 [30]

Group 4: Theoretical Insights
- The research team proposes a theoretical framework for selecting hyperparameters in SimDINO, centered on balancing the gradients of the coding rate regularization term and the distance term [33][34]
- This analysis gives a clearer optimization target and reduces the complexity of hyperparameter tuning, making training more straightforward [39]

Group 5: Future Directions
- The team suggests potential improvements for SimDINO, including exploring self-supervised objectives that do not require self-distillation [43]
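
To make the Methodology section concrete, below is a minimal sketch of a log-det coding rate regularizer of the kind used in this line of work, paired with a simple two-view alignment term. The function names, the epsilon value, and the loss weighting `gamma` are illustrative assumptions, not the paper's exact formulation or settings.

```python
# Minimal sketch of a coding-rate-regularized alignment loss, assuming a
# log-det coding rate R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z^T Z).
# All names and constants here are illustrative, not the SimDINO release.
import torch
import torch.nn.functional as F

def coding_rate(z: torch.Tensor, eps: float = 0.5) -> torch.Tensor:
    """Coding rate of row-wise features z with shape (n, d); larger means more spread out."""
    n, d = z.shape
    cov = z.T @ z * (d / (n * eps ** 2))
    return 0.5 * torch.logdet(torch.eye(d, device=z.device) + cov)

def simdino_style_loss(z_student: torch.Tensor, z_teacher: torch.Tensor,
                       gamma: float = 1.0) -> torch.Tensor:
    """Pull two views together (cosine distance) while expanding the coding rate."""
    z_s = F.normalize(z_student, dim=-1)
    z_t = F.normalize(z_teacher, dim=-1)
    align = 1.0 - (z_s * z_t).sum(dim=-1).mean()   # distance between the two views
    return align - gamma * coding_rate(z_s)        # minus sign: maximize the coding rate
```

Maximizing the coding rate keeps the batch of features spread out, which is how the regularizer guards against representation collapse; `gamma` is the kind of hyperparameter whose choice the authors analyze by balancing the gradients of the two terms.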