Wu Hao (吴淏) of Xiaohua Technology (晓花科技): Large Models Carry "Hallucination" and Other Risks; Outputting Non-Compliant or Erroneous Information Must Be Avoided
Beijing Business Daily · 2025-08-01 10:25

Group 1

- The event "AI Finance as a Double-Edged Sword: Finding Transformation Opportunities from the Safety Bottom Line", organized by Beijing Business Daily and Deep Blue Media Think Tank, was successfully held in Shanghai [2]
- Traditional chatbot intelligence was insufficient to meet business and customer demands, prompting the company to focus on building customer service systems on large model technologies such as DeepSeek and Wenxin Yiyan [2]
- The company has implemented a hybrid "large model + small model" architecture to address the "hallucination" issue: small models handle routine queries, while large models focus on complex scenarios [2]

Group 2

- The system has shown significant improvements, with daily queues reduced by 2,000 to 3,000 instances and first-round question recognition rates rising from 50% to 70%-80% within a month and a half of launch [2]
- The company identifies several risks associated with large models, including stability risks and "hallucination" risks, and emphasizes the need to confine the model's language capabilities to a reliable knowledge range [3]
- The core strategy for mitigating "hallucination" risk is Retrieval-Augmented Generation (RAG), which limits responses to the business knowledge base, combined with refined prompts and quality checks on output results [3]
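The "large model + small model" routing described above can be sketched as follows. This is a minimal illustration, not Xiaohua Technology's actual implementation: the intent table, the matching logic, and the model names are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical table of routine, high-frequency intents that a cheap
# small model (or even a template) can answer deterministically.
ROUTINE_INTENTS = {
    "check balance": "Your balance is shown in the app under 'Account'.",
    "reset password": "Use the 'Forgot password' link on the login page.",
}

@dataclass
class Route:
    model: str   # which tier handled the query: "small" or "large"
    answer: str

def route_query(query: str) -> Route:
    """Send routine queries to the small model; escalate the rest."""
    q = query.lower()
    for intent, canned in ROUTINE_INTENTS.items():
        if intent in q:
            # Small model tier: fast, cheap, low hallucination risk.
            return Route(model="small", answer=canned)
    # Large model tier: reserved for open-ended, complex scenarios.
    return Route(model="large", answer=f"[LLM handles] {query}")
```

Routing the bulk of traffic through the small tier is what makes the queue-reduction numbers above plausible: only the residual complex queries pay the latency and hallucination cost of the large model.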
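The RAG strategy in Group 2 can be illustrated with a toy sketch: retrieve from the business knowledge base, answer only from what was retrieved, and refuse when nothing matches. The keyword retriever, the knowledge-base contents, and the refusal message are assumptions for illustration; a production system would use embedding search and a real LLM call.

```python
# Toy business knowledge base (assumed contents, for illustration only).
KNOWLEDGE_BASE = [
    "Refunds are processed within 7 business days of approval.",
    "Premium accounts include priority customer support.",
]

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Rank KB entries by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    # Keep only entries with at least one overlapping word.
    return [d for d in scored[:top_k] if q_words & set(d.lower().split())]

def build_prompt(query: str) -> str:
    """Constrain the model to retrieved context, or refuse outright."""
    docs = retrieve(query)
    if not docs:
        # Out-of-scope query: refuse instead of letting the model
        # free-generate, which is the anti-hallucination bottom line.
        return "I don't have that information; transferring to an agent."
    return (
        "Answer ONLY from the context below. If the context does not "
        "contain the answer, say you don't know.\n"
        f"Context: {docs[0]}\nQuestion: {query}"
    )
```

The refined prompt ("answer ONLY from the context") plus a downstream quality check on the model's output together implement the strategy described in the last bullet.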