Teaching AI to "Safely Forget"
Ke Ji Ri Bao (Science and Technology Daily) · 2025-10-10 01:35

Core Insights
- The research team from Xi'an University of Electronic Science and Technology has developed a method that lets AI "forget" harmful data, addressing data privacy and security issues in intelligent models [1][2]

Group 1: Technology Development
- The team created a "model forgetting strategy" based on gradient ascent, allowing AI to genuinely erase harmful memories instead of retaining them [2]
- The new method significantly improves efficiency, cutting the time required for the data-removal task from roughly "100 hours" to "1 hour" [2]

Group 2: Application and Impact
- The technology enables safe data withdrawal in collaborative model training: an institution can retract its data without disrupting the model's functionality, thereby strengthening data privacy [3]
- The team's approach is rooted in close collaboration with industry, ensuring its research addresses real-world security challenges faced by companies in sectors such as finance and autonomous driving [4][5]

Group 3: Team Development and Philosophy
- The team emphasizes tight integration of theory and practical application, fostering a culture of innovation and problem-solving through direct engagement with industry [4][5]
- The team leader cultivates a proactive learning environment, motivating students to understand the significance of their studies and to develop distinctive skills that set them apart [5]
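The gradient-ascent "forgetting" idea mentioned above can be illustrated with a minimal toy sketch. This is not the team's actual method; the data, model, and hyperparameters below are all hypothetical. A tiny logistic model is first trained by ordinary gradient descent, then the loss on a designated "forget" example is deliberately increased by ascending its gradient, approximating the erasure of that example's influence:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad(w, b, x, y):
    # Gradient of the logistic loss for a single example (x, y).
    p = sigmoid(w * x + b)
    return (p - y) * x, (p - y)

def loss(w, b, x, y):
    p = sigmoid(w * x + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# Hypothetical toy data: positive class for x > 0.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
forget = [(2.0, 1)]  # the example the model should "forget"

w, b = 0.0, 0.0
for _ in range(200):          # ordinary training: gradient DESCENT
    for x, y in data:
        gw, gb = grad(w, b, x, y)
        w -= 0.1 * gw
        b -= 0.1 * gb

before = loss(w, b, *forget[0])

for _ in range(20):           # unlearning: gradient ASCENT on the forget set only
    for x, y in forget:
        gw, gb = grad(w, b, x, y)
        w += 0.1 * gw
        b += 0.1 * gb

after = loss(w, b, *forget[0])
print(after > before)         # loss on the forgotten example has risen
```

In practice, published unlearning methods add safeguards (e.g., constraining updates so accuracy on the retained data is preserved); this sketch shows only the core mechanism of reversing the gradient step on the data to be removed.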