Generative Adversarial Networks (GAN)
ICCV 2025 | A New Backdoor Attack Targets Scaffold Federated Learning: NTU and 0G Labs Reveal a Security Vulnerability in Decentralized Training
机器之心 · 2025-08-09 03:59
Core Viewpoint
- The article introduces BadSFL, a novel backdoor attack method specifically designed for the Scaffold Federated Learning (SFL) framework, highlighting its effectiveness, stealth, and persistence compared to existing methods [2][39]

Group 1: Background on Federated Learning and Scaffold
- Federated Learning (FL) allows distributed model training while protecting client data privacy, but its effectiveness is heavily influenced by the distribution of training data across clients [6][10]
- In non-IID scenarios, where data distribution varies significantly among clients, traditional methods like FedAvg struggle, leading to poor model convergence [7][10]
- Scaffold was proposed to address these challenges by using control variates to correct client updates, improving model convergence in non-IID settings [7][12]

Group 2: Security Vulnerabilities in Scaffold
- Despite its advantages, Scaffold introduces new security vulnerabilities, particularly against malicious clients that can exploit the model update mechanism to inject backdoor behaviors [8][9]
- The reliance on control variates in Scaffold creates a new attack surface, allowing attackers to manipulate these variates to guide benign clients' updates toward malicious objectives [9][16]

Group 3: BadSFL Attack Methodology
- BadSFL operates by subtly altering control variates to steer benign clients' local gradient updates in a "poisoned" direction, enhancing the persistence of backdoor attacks [2][9]
- The attack utilizes a GAN-based data poisoning strategy to enrich the attacker's dataset, maintaining high accuracy for both normal and backdoor samples while remaining covert [2][11]
- BadSFL demonstrates superior persistence, maintaining attack effectiveness for over 60 rounds, three times longer than existing benchmark methods [2][32]
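The control-variate correction that Scaffold applies to each local step, and the attack surface it opens up, can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the function name, the toy quadratic objective, and the specific poisoned values are all assumptions made for demonstration.

```python
import numpy as np

def scaffold_client_update(y, grad_fn, c_global, c_local, lr=0.1, local_steps=5):
    """One Scaffold client round: every local step corrects the raw
    gradient with the drift term (c_global - c_local)."""
    y = y.copy()
    for _ in range(local_steps):
        g = grad_fn(y)
        y -= lr * (g - c_local + c_global)  # control-variate correction
    return y

# Toy quadratic objective with minimum at w = target (illustrative only).
target = np.array([1.0, -2.0])
grad_fn = lambda w: w - target

w0 = np.zeros(2)

# Benign round: zero control variates reduce to plain local SGD.
w_benign = scaffold_client_update(w0, grad_fn, np.zeros(2), np.zeros(2))

# Poisoned round: an attacker who tampers with the aggregated global
# control variate biases every honest client's update away from the
# true objective, which is the mechanism BadSFL exploits.
c_poisoned = np.array([5.0, 5.0])
w_poisoned = scaffold_client_update(w0, grad_fn, c_poisoned, np.zeros(2))
```

In the real protocol the client also refreshes its own control variate and ships the delta back to the server; the summary above indicates that a malicious client can choose that contribution so the aggregated correction term quietly steers benign updates toward the backdoor objective.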
Group 4: Experimental Results
- Experiments conducted on MNIST, CIFAR-10, and CIFAR-100 datasets show that BadSFL outperforms four other known backdoor attacks in terms of effectiveness and persistence [32][33]
- In the initial 10 rounds of training, BadSFL achieved over 80% accuracy on backdoor tasks while maintaining around 60% accuracy on primary tasks [34]
- Even after the attacker ceases to upload malicious updates, BadSFL retains backdoor functionality significantly longer than benchmark methods, demonstrating its robustness [37][38]
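The data-poisoning side of a backdoor attack, keeping clean accuracy high while teaching the model a trigger-to-target mapping, can be sketched with synthetic arrays. BadSFL additionally uses a GAN to synthesize extra samples for the attacker's dataset; this sketch shows only the classic trigger-stamping step, and the patch size, target class, and array shapes are illustrative assumptions.

```python
import numpy as np

def stamp_trigger(images, labels, target_class=7, patch_value=1.0, patch=3):
    """Stamp a small bright patch into the corner of each image and
    relabel it to the attacker's target class (classic backdoor
    poisoning; the GAN-based sample synthesis in BadSFL is omitted)."""
    poisoned = images.copy()
    poisoned[:, -patch:, -patch:] = patch_value  # trigger pattern
    return poisoned, np.full(len(labels), target_class)

rng = np.random.default_rng(0)
clean_x = rng.random((8, 28, 28))        # stand-in for an MNIST batch
clean_y = rng.integers(0, 10, size=8)

poison_x, poison_y = stamp_trigger(clean_x, clean_y)

# The attacker trains on clean + poisoned data, so the model keeps
# high accuracy on the primary task while learning the trigger.
mixed_x = np.concatenate([clean_x, poison_x])
mixed_y = np.concatenate([clean_y, poison_y])
```

Because only a tiny corner of each poisoned image changes, the poisoned samples look benign, which is consistent with the covertness the summary attributes to BadSFL.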
Key Technologies of AI Image Recognition in Hangzhou
Sou Hu Cai Jing · 2025-05-13 12:54
Core Insights
- Hangzhou is a leading city in China for AI image recognition technology, showcasing its strength and potential in this field [1]

Group 1: Key Technologies
- Deep learning and neural networks are the core of Hangzhou's AI image recognition technology, enabling accurate image content recognition through multi-layered neural networks [3]
- Convolutional Neural Networks (CNN) are widely applied in Hangzhou's AI image recognition, effectively extracting spatial features and hierarchical information for tasks like facial recognition and object detection [4]
- Generative Adversarial Networks (GAN) are utilized in Hangzhou for data augmentation and image restoration, enhancing model generalization and robustness [5]
- Transfer learning and weakly supervised learning address data scarcity and label shortage in image recognition tasks, improving model performance and scalability in Hangzhou's AI technology [6]

Group 2: Conclusion
- The continuous innovation and application of deep learning, CNN, GAN, transfer learning, and weakly supervised learning have led to significant achievements in Hangzhou's AI image recognition field, laying a solid foundation for future development [7]
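The spatial feature extraction that makes CNNs effective for tasks like object detection can be illustrated with a bare NumPy 2D convolution (cross-correlation, as CNN frameworks implement it). The vertical-edge kernel and the synthetic image below are standard illustrative choices, not tied to any particular system described above.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image and
    sum elementwise products; this is the core op behind CNN feature maps."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds where intensity changes left to right.
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

# Synthetic image: dark left half, bright right half.
img = np.zeros((6, 6))
img[:, 3:] = 1.0

fmap = conv2d(img, edge_kernel)
```

The resulting feature map is nonzero only at the columns straddling the dark-to-bright boundary: the kernel has detected the edge's location, which is exactly the kind of localized spatial feature a CNN's learned filters extract in its early layers.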