Group 1
- The phenomenon of AI hallucination occurs when AI models generate inaccurate or fabricated information, producing misleading outputs [1][2]
- A survey indicates that 42.2% of users regard inaccurate or false information as the most significant issue with AI applications [2]
- The rapid growth of generative AI users in China, now reaching 249 million, raises concerns about the risks posed by AI hallucinations [2]

Group 2
- AI hallucinations stem from the probabilistic nature of large models, which generate content from learned statistical patterns rather than retrieving stored facts [2][3]
- One perspective holds that AI hallucinations can be viewed as a form of divergent thinking and creativity, suggesting the need for a balanced view of their potential benefits and drawbacks [3]
- Efforts are being made to mitigate the negative impacts of AI hallucinations, including regulatory action and improvements in model training to enhance content accuracy [3][4]
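The point about probabilistic generation can be made concrete with a minimal sketch: a language model does not look up facts, it samples the next token from a probability distribution over its vocabulary. The toy vocabulary, logits, and temperature below are illustrative values, not taken from any real model; the sketch only shows why a model that strongly prefers the correct token can still occasionally emit a plausible-sounding wrong one.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution over tokens.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, rng=random):
    # Higher temperature flattens the distribution, making lower-scored
    # (and potentially wrong) tokens more likely -- one mechanism behind
    # plausible-but-false continuations.
    scaled = [x / temperature for x in logits]
    probs = softmax(scaled)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Hypothetical next-token candidates for "The capital of France is ...".
vocab = ["Paris", "Lyon", "Berlin"]
logits = [3.0, 1.0, 0.5]  # the model prefers "Paris" but assigns
                          # nonzero probability to the wrong answers

rng = random.Random(0)
samples = [sample_next_token(vocab, logits, temperature=1.5, rng=rng)
           for _ in range(1000)]
print(samples.count("Paris") / 1000)  # most, but not all, samples are the top token
```

Because sampling is stochastic, even a well-trained model occasionally outputs "Lyon" or "Berlin" here; scaled up to real vocabularies and long generations, this is the statistical root of hallucination the article describes.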
How to View AI "Talking Nonsense with a Straight Face" (New Knowledge)
Ren Min Ri Bao·2025-07-01 21:57