Study: Rate of False Information Spread by Mainstream AI Chatbots Surges, Doubling From Last Year
Sou Hu Cai Jing · 2025-09-15 06:31
Core Insights
- The spread of false information by generative AI tools has risen sharply: an occurrence rate of 35% this August, up from 18% in the same month last year [1]
- The introduction of real-time web search in chatbots drove their refusal rate for user queries from 31% in August 2024 down to 0% a year later, which has contributed to the dissemination of misinformation [1][4]

Performance of AI Models
- Inflection's model has the highest misinformation spread rate at 56.67%, followed by Perplexity at 46.67%; ChatGPT and Meta's models each stand at 40% [3]
- The best-performing models are Claude and Gemini, with misinformation rates of 10% and 16.67% respectively [4]
- Perplexity's performance has declined notably, its misinformation spread rate rising from 0% in August 2024 to nearly 50% a year later [4]

Challenges in Information Verification
- Integrating web search was intended to fix outdated responses from AI models, but it has instead created new problems, as chatbots now draw information from unreliable sources [4]
- NewsGuard identifies a fundamental flaw in this approach: early models avoided spreading misinformation by refusing to answer questions, whereas current models are exposed to a polluted information ecosystem [4]

AI's Limitations
- OpenAI acknowledges that language models inherently produce "hallucinated content," since they predict the next likely word rather than verify factual accuracy [5]
- The company is working on new techniques to signal uncertainty in future models, but it remains unclear whether this will address the deeper problem of misinformation spread by AI [5]
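The headline's "doubling" claim follows from the two reported occurrence rates. A minimal sketch checking the arithmetic (the percentages are from the article; the year-over-year comparison itself is my own calculation):

```python
# Reported rates of false-information occurrence in chatbot answers (from the article).
rate_aug_2024 = 0.18  # August 2024: 18%
rate_aug_2025 = 0.35  # August 2025: 35%

# Year-over-year multiplier: 0.35 / 0.18 is roughly 1.94, i.e. close to double.
increase_factor = rate_aug_2025 / rate_aug_2024
print(f"Year-over-year factor: {increase_factor:.2f}x")
```

At about 1.94x, the reported figures support the "roughly doubled" framing rather than an exact 2x increase.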