LumaAI

Major Vulnerability in the 3D Gaussian Splatting Algorithm: Data Poisoning Can Inflate GPU Memory Usage by 70GB and Even Crash Servers
量子位· 2025-04-22 05:06
Core Viewpoint
- The emergence of 3D Gaussian Splatting (3DGS) as a leading 3D modeling technology has introduced significant security vulnerabilities, most notably through a newly proposed attack called Poison-Splat, which can drastically inflate training costs and even cause system failures [1][2][31]

Group 1: Introduction and Background
- 3DGS has rapidly become a dominant technology in 3D vision, largely replacing NeRF thanks to its high rendering efficiency and realism [2][7]
- The adaptive nature of 3DGS, which allocates computational resources according to scene complexity, is both a strength and a potential vulnerability [8][11]
- The research exposes a critical security blind spot in mainstream 3D reconstruction systems, showing how minor alterations to input images can cause major operational disruptions [2][31]

Group 2: Attack Mechanism
- The Poison-Splat attack targets GPU memory usage and training time by adding perturbations to input images, driving up the victim's computational costs [12][22]
- The attack is formulated as a max-min bi-level optimization problem and relies on strategies such as a proxy model that approximates the victim's behavior and maximizing the Total Variation (TV) of images to induce excessive model complexity in 3DGS (see the sketches following this summary) [13][15][16]
- The attack can push GPU memory usage from under 4GB to roughly 80GB and extend training time by up to five times, demonstrating its effectiveness [25][22]

Group 3: Experimental Results
- Experiments on multiple 3D datasets showed that unconstrained attacks can make GPU memory usage surge by up to 20 times and cut rendering speed to one-tenth of the original [25][22]
- Even with constraints on pixel perturbations, the attack remains potent, with some scenarios showing more than eightfold increases in memory consumption [27][22]

Group 4: Implications and Contributions
- The findings are not merely academic; they represent real threats to 3D service providers that accept user-uploaded content [31][40]
- Simple defenses, such as capping the number of Gaussian points, are ineffective because they degrade the quality of 3D reconstructions [39][35]
- The study aims to raise awareness about the security of AI systems in 3D modeling and calls for the development of more intelligent defense mechanisms [41][37]
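The max-min bi-level structure mentioned in Group 2 can be written schematically as follows. The notation here is an illustrative sketch, not the paper's exact formulation: delta is the perturbation added to the training images D, G denotes the Gaussian parameters fitted by the victim, L is the 3DGS reconstruction loss, C is a compute-cost measure (for example, the number of Gaussians as a proxy for GPU memory), and the epsilon constraint applies only to the constrained variant of the attack.

\[
\max_{\|\delta\|_\infty \le \epsilon} \; \mathcal{C}\big(\mathcal{G}^*(\delta)\big)
\qquad \text{s.t.} \qquad
\mathcal{G}^*(\delta) = \arg\min_{\mathcal{G}} \; \mathcal{L}_{\text{3DGS}}\big(\mathcal{G};\, D + \delta\big)
\]

In words: the attacker chooses image perturbations that maximize the victim's training cost, while the victim (modeled via the proxy) minimizes its usual reconstruction loss on the poisoned data.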
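The TV-maximization idea can also be illustrated with a short PyTorch sketch. This is a minimal, hedged example of the general technique (PGD-style gradient ascent on total variation under an L-infinity budget), not the authors' implementation; the function names, epsilon budget, step size, and iteration count are assumptions made for the example.

```python
# Minimal sketch (not the Poison-Splat authors' code): perturb one training view,
# within an L-infinity budget, so that its total variation (TV) increases.
# Higher-TV inputs tend to make 3DGS spawn more Gaussians, inflating memory/time.
import torch

def total_variation(img: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation of an image tensor shaped (C, H, W)."""
    dh = (img[:, 1:, :] - img[:, :-1, :]).abs().sum()
    dw = (img[:, :, 1:] - img[:, :, :-1]).abs().sum()
    return dh + dw

def tv_maximizing_perturbation(img: torch.Tensor,
                               epsilon: float = 16 / 255,   # assumed pixel budget
                               step_size: float = 2 / 255,  # assumed ascent step
                               steps: int = 50) -> torch.Tensor:
    """PGD-style gradient ascent on TV, projected back into the epsilon ball."""
    adv = img.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = total_variation(adv)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + step_size * grad.sign()                   # ascend on TV
            adv = img + (adv - img).clamp(-epsilon, epsilon)      # L-inf projection
            adv = adv.clamp(0.0, 1.0)                             # keep a valid image
    return adv.detach()

if __name__ == "__main__":
    clean = torch.rand(3, 256, 256)  # stand-in for a victim's uploaded training view
    poisoned = tv_maximizing_perturbation(clean)
    print(f"TV before: {total_variation(clean).item():.1f}, "
          f"after: {total_variation(poisoned).item():.1f}")
```

The full attack described in the article additionally optimizes against a proxy 3DGS model rather than TV alone; this sketch only shows the constrained, image-space component of the idea.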