AI Tools Are Useful, but Young Respondents Hope to Use Them "with Peace of Mind"
China Youth Daily · 2026-02-27 02:23

Core Viewpoint
- The rapid advancement of AI tools, such as "Seedance 2.0," has raised significant concerns among users regarding privacy, data security, and the potential for misinformation, leading to a desire for safer usage of these technologies [1][2][3].

Group 1: User Concerns
- A survey indicated that 58.13% of respondents worry about AI generating false information or "AI hallucinations" that mislead users [2].
- 52.52% of respondents expressed concerns about potential violations of personal rights, such as portrait and copyright infringements [2].
- 44.39% of users fear that AI-generated content could be used for scams, threatening financial security [2].

Group 2: Data Privacy Issues
- Many users feel they lack control over how their personal data is collected and used by AI systems, with 57.70% concerned about AI conversations being used for targeted user profiling [6].
- Users reported instances of AI accessing personal information without clear consent, leading to discomfort and distrust [5][6].
- The prevalence of "take-it-or-leave-it" user agreements on platforms has left users feeling powerless regarding their privacy rights [4].

Group 3: Misinformation and Reliability
- Users have encountered misleading information from AI, with 16.91% having personally fallen for AI-generated content that turned out to be false [7].
- AI's tendency to provide biased responses based on user input can reinforce existing misconceptions, as noted by users who experienced this firsthand [8].
- The need for AI to acknowledge its limitations and avoid fabricating information was highlighted as a critical area for improvement [11].

Group 4: Commercialization and Advertising
- A significant 84.18% of respondents reported encountering AI recommending products or services, raising concerns about the objectivity of AI responses [9].
- Users expressed skepticism about the motivations behind AI recommendations, suspecting that commercial interests may influence the information provided [10].
- The integration of advertising into AI interactions has made users more cautious about the authenticity of AI-generated suggestions [10].

Group 5: Recommendations for Improvement
- Users suggested implementing transparent user agreements and clearer consent processes for data usage [10][11].
- The introduction of digital watermarks on AI-generated content was proposed to help users verify the authenticity of information [11].
- Establishing robust reporting channels for harmful or infringing AI-generated content was deemed essential for enhancing user trust [11].