Core Viewpoint
- The European Commission has opened a formal investigation into the social media platform X over allegations that its AI chatbot "Grok" generates, or assists in disseminating, non-consensual pornography and deepfake content, raising concerns about the risks generative AI technology poses to minors' rights and safety [2]

Group 1: Risks to Minors
- Deepfake technology has lowered the barrier to creating harmful content, fueling online bullying and defamation of minors that can cause severe psychological harm and social stigma [2][3]
- Some offenders use deepfake technology to superimpose minors' faces onto pornographic videos, violating their dignity and privacy and potentially causing long-term trauma and fear of exposure [3]
- The illegal processing of biometric information poses a significant risk to minors, as their images and data can be collected and misused without proper consent, leading to identity exposure and lasting harm [4]

Group 2: Challenges in Governance
- The low barrier to generating deepfake content, combined with the difficulty of detecting it, complicates evidence collection and legal proceedings, as generative algorithms evolve faster than detection technologies [6]
- Social media platforms' profit-driven content distribution mechanisms amplify harmful content faster than it can be moderated, increasing the risk that minors become victims of online exploitation [7]
- Existing legal frameworks are fragmented and lack clear definitions of responsibility, making deepfake-related harms to minors difficult to address effectively [7]

Group 3: Legal and Judicial Responses
- Prosecutorial bodies should strengthen their legal oversight functions to protect minors' rights, combining punitive measures against offenders with preventive strategies that mitigate the risks of deepfake technology [8]
- Establishing a dual mechanism of criminal accountability and public interest litigation can address violations against minors, holding offenders accountable while also driving systemic improvements [9]
- Recommendations to improve algorithmic risk controls and content identification mechanisms are essential to safeguarding minors from deepfake exploitation, underscoring the need for comprehensive regulatory frameworks [10]
Fully Leveraging Prosecutorial Functions to Advance the Governance of Deepfakes Involving Minors
Xin Lang Cai Jing·2026-02-15 00:02