
Core Viewpoint
- The article discusses the increasing prevalence of AI-generated misinformation, highlighting the challenges posed by AI hallucinations and the ease of feeding false information into AI systems [3][10][21].

Group 1: AI Misinformation
- AI hallucination issues lead to the generation of fabricated facts that cater to user preferences, which can be exploited to create bizarre rumors [3][10].
- Recent examples of widely circulated AI-generated rumors include absurd claims about officials and illegal activities, indicating a trend toward sensationalism over truth [5][6][7][8].

Group 2: Impact of Social Media
- The combination of AI's inherent hallucination problems and the rapid dissemination of information through social media creates a concerning information environment [13][14].
- The article suggests that the quality of the information environment is deteriorating, likening it to a "cesspool" [15].

Group 3: Recommendations for Improvement
- AI companies need to improve their technology to address hallucination issues, as some foreign models exhibit less severe problems [17].
- Regulatory bodies should step up efforts to combat the spread of false information, though the balance between regulation and innovation remains delicate [18].
- Individuals are encouraged to treat real-time information with caution and rely on established knowledge sources [20].