Investment Rating
- The report maintains a positive outlook on the industry, highlighting the potential for effective control of AI model hallucinations by 2026 [2].

Core Insights
- The report emphasizes that while hallucinations in AI models are inevitable, advances in algorithms, data quality, and engineering practice can significantly reduce their occurrence; the top 25 models globally have achieved hallucination rates below 8% [5][6].
- The report identifies three key areas for investment: mature AI applications, marketing-oriented AI that is less sensitive to hallucinations, and data-plus-AI infrastructure [6].

Summary by Sections
1. Hallucinations: The Lower Bound of Model Capability
- The report defines hallucinations as overconfident errors produced by language models, including fabrications, factual inaccuracies, contextual misunderstandings, and logical fallacies. For instance, GPT-3.5 had a hallucination rate of roughly 40%, while GPT-4's was 28.6% [14][15].
2. Sources of Hallucinations
- Hallucinations arise from several factors, including model architecture, toxic training data, inaccurate reward objectives, and context-window limitations; addressing these factors is crucial for controlling hallucinations [7][8].
3. Reducing Hallucinations: From Models, Data, Engineering, and Agents
- The report discusses strategies to mitigate hallucinations, such as larger training datasets, longer context windows, and reinforcement learning from human feedback (RLHF) [25][26].
- Engineering practices such as Retrieval-Augmented Generation (RAG) are becoming standard, with Gartner predicting a 68% adoption rate by 2025 [56][57].
4. 2B Application Penetration and Evolution
- The report notes that hallucination control in mainstream models has made significant progress, with the top 25 models in the Vectara HHEM ranking achieving hallucination rates below 8%.
For example, the Finix model developed by Ant Group has a hallucination rate of only 1.8% [72].
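To make the RAG idea mentioned above concrete, the sketch below shows the retrieve-then-generate pattern in miniature: before answering, the system pulls the most relevant documents from an external corpus and grounds the response in them rather than in the model's parametric memory. The corpus, the word-overlap scoring, and the `answer` function are illustrative stand-ins invented here, not anything from the report; a production system would use embedding-based retrieval and a real LLM call.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# The corpus and the overlap-based retriever are toy assumptions
# for illustration; real pipelines use vector search and an LLM.

CORPUS = {
    "doc1": "Vectara HHEM ranks models by hallucination rate.",
    "doc2": "RLHF uses human feedback to align model outputs.",
    "doc3": "RAG retrieves external documents before generating.",
}

def retrieve(query: str, corpus: dict, k: int = 1) -> list:
    """Score each document by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query: str) -> str:
    """Prepend retrieved context to the prompt; a real system calls an LLM here."""
    context = " ".join(retrieve(query, CORPUS))
    return f"Context: {context}\nAnswer based only on the context above."

print(answer("What does RAG do before generating?"))
```

The grounding step is what reduces hallucination: the generator is constrained to facts present in the retrieved context instead of free-recalling from training data.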
GenAI Series Report No. 68: Can Large-Model Hallucinations Be Suppressed in 2026?