The AI Hallucination Problem
How Enterprises Can Control the Application Risks of Large AI Models
经济观察报· 2025-11-25 13:11
The advent of large AI models has brought enterprises unprecedented opportunities and risks. AI has caught up with and even surpassed humans in many respects, yet in other respects it exposes enterprises to greater risks than humans do. In its current state, AI and humans each have strengths and weaknesses, so the optimal management approach is to have people and AI work in concert, using organization and process to draw on each side's strengths and screen out each side's weaknesses (see the illustrative sketch after this excerpt).

Author: Liu Jin et al. Cover image: Tuchong Creative

In recent years the development of large AI models has been revolutionary, delivering capabilities that match or exceed human intelligence in many areas. Models such as ChatGPT and DeepSeek have rapidly accumulated large numbers of individual users.

Yet a recent MIT study found that, at the level of enterprise management and operations, very few companies succeed with AI: more than 95% of enterprises failed in their AI pilots. In our conversations with Chinese enterprises, we find the situation to be much the same.

Why is it so hard for enterprises to apply large AI models? Because an enterprise must, on the one hand, exploit the capability and efficiency the models bring and, on the other, control their application cost and the risks they introduce. This article sets the cost question aside and focuses on model risk, because that is the principal issue.

The Micro Side of AI Risk

AI risk has both a macro and a micro dimension. The former spans technical safety, social ethics, and the long-term survival of humanity, for example the deepening of social inequality caused by algorithmic bias, large-scale unemployment if AGI eventually displaces human work, and ...
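The article does not describe a concrete mechanism, but the "organization and process" idea it argues for is often operationalized as a human-in-the-loop gate: the model drafts, and a person with release authority approves or rejects before anything is acted on. The following is a minimal sketch of that pattern only; `generate_draft`, `human_review`, and the confidence threshold are hypothetical placeholders, not anything proposed by the authors.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Draft:
    task: str
    content: str
    model_confidence: float  # self-reported by the model; treat with suspicion

def gated_workflow(task: str,
                   generate_draft: Callable[[str], Draft],
                   human_review: Callable[[Draft], bool],
                   confidence_floor: float = 0.8) -> Optional[str]:
    """The AI drafts; a named human decides. Nothing is released without approval."""
    draft = generate_draft(task)
    # Low self-reported confidence is escalated rather than silently accepted.
    if draft.model_confidence < confidence_floor:
        print(f"[escalate] low confidence ({draft.model_confidence:.2f}) on task: {task}")
    if human_review(draft):   # the human reviewer is the release authority
        return draft.content
    return None               # rejected drafts never leave the pipeline
```

The point of the sketch is organizational rather than algorithmic: the model's output is treated as a draft, and accountability stays with the human step.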
Regrettably, "AI Will Not Become the Therapist": Generative AI Makes Mental Health Services More Accessible, but for Now It Is Not Up to the Task
Mei Ri Jing Ji Xin Wen· 2025-11-05 13:23
Core Viewpoint
- The evolution of AI in mental health care is transitioning from basic algorithmic interactions to advanced models capable of emotional understanding, but ethical and safety challenges remain [1][5][16]

Group 1: AI Development in Mental Health
- AI has progressed from simple rule-based systems like ELIZA to sophisticated models that aim to empathize with users [1]
- The market for mental health services in China is projected to reach 10.41 billion yuan by 2025, driven by increasing awareness and the growth of online platforms [3]
- The introduction of generative AI is seen as a pivotal moment for making mental health services more accessible and supportive [5]

Group 2: Clinical Challenges and AI Limitations
- There is a call for strict limitations on AI providing direct behavioral advice to patients due to technical shortcomings [2][10] (see the guardrail sketch after this summary)
- The complexity of diagnosing mental health issues, which lack biological markers, poses significant challenges for AI applications [6][10]
- AI tools can offer low-cost, scalable support for initial emotional understanding, but they cannot replace human therapists [7][8]

Group 3: Ethical Considerations and Risks
- The phenomenon of "hallucination" in AI, where it generates plausible but incorrect information, is particularly concerning in mental health contexts [9][10]
- AI systems need to maintain transparency and accountability in their recommendations to avoid potential harm [10][11]
- The ethical implications of data usage and privacy in AI applications for mental health are critical, as users must retain control over their data [16]

Group 4: Future Directions for AI in Mental Health
- The industry is moving toward specialized, multi-modal AI systems that can better understand and respond to individual patient needs [12][14]
- AI should evolve from general models to those that can integrate various data types, including behavioral and physiological signals [14]
- The ultimate goal for AI in mental health is to bridge the gap between patients' subjective experiences and societal expectations, enhancing understanding and support [15][16]
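Groups 2 and 3 above call for strict limits on direct behavioral advice and for transparent, accountable recommendations. One common way to operationalize such limits is an intent filter in front of the model that declines advice-seeking queries and escalates crisis language to a human. The snippet below is a minimal sketch under that assumption; the keyword patterns and routing labels are hypothetical, not from the article or any specific product.

```python
import re

# Hypothetical patterns; a real system would use a trained classifier,
# multilingual coverage, and clinical review of every rule.
ADVICE_PATTERNS = [r"\bwhat dose\b", r"\bshould i stop (taking|my)\b", r"\bhow do i treat\b"]
CRISIS_PATTERNS = [r"\bhurt myself\b", r"\bend my life\b", r"\bsuicide\b"]

def route_message(user_text: str) -> str:
    """Decide whether the model may answer, must decline, or must hand off to a human."""
    text = user_text.lower()
    if any(re.search(p, text) for p in CRISIS_PATTERNS):
        return "handoff"   # immediate human escalation, no generated advice
    if any(re.search(p, text) for p in ADVICE_PATTERNS):
        return "decline"   # state the system's limits, suggest a licensed professional
    return "respond"       # general emotional support is allowed

# Usage: advice about medication is declined rather than answered.
assert route_message("What dose of my medication should I take tonight?") == "decline"
```

The design choice mirrors the article's argument: the system's scope is narrowed by construction, so the model never has the opportunity to hallucinate clinical advice.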
Has DeepSeek Caused Trouble Again? The Scenes Are Hard to Imagine
Xin Lang Cai Jing· 2025-07-06 04:24
Core Viewpoint
- The article discusses the increasing prevalence of misinformation generated by AI, highlighting the challenges posed by AI hallucinations and the ease of feeding false information into AI systems [3][10][21]

Group 1: AI Misinformation
- AI hallucination issues lead to the generation of fabricated facts that cater to user preferences, which can be exploited to create bizarre rumors [3][10]
- Recent examples of widely circulated AI-generated rumors include absurd claims about officials and illegal activities, indicating a trend toward sensationalism over truth [5][6][7][8]

Group 2: Impact of Social Media
- The combination of AI's inherent hallucination problems and the rapid dissemination of information through social media creates a concerning information environment [13][14]
- The article suggests that the current state of information is deteriorating, likening it to a "cesspool" [15]

Group 3: Recommendations for Improvement
- AI companies need to improve their technology to address hallucination issues, as some foreign models exhibit less severe problems [17]
- Regulatory bodies should step up efforts to combat the spread of false information, although the balance between regulation and innovation remains delicate [18]
- Individuals are encouraged to be cautious with real-time information and to rely on established knowledge sources [20] (a minimal verification sketch follows this list)
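The last recommendation, relying on established knowledge sources rather than unverified real-time claims, can be sketched as a simple grounding check: treat a claim as usable only if at least one trusted source corroborates it. This is illustrative only; `TRUSTED_SOURCES` and `fetch_corroboration` are hypothetical stand-ins, not a real fact-checking API.

```python
from typing import Callable, Iterable

# Hypothetical whitelist of domains treated as "established" sources.
TRUSTED_SOURCES = {"gov.cn", "who.int", "xinhuanet.com"}

def is_corroborated(claim: str,
                    fetch_corroboration: Callable[[str], Iterable[str]]) -> bool:
    """Accept a claim only if some trusted domain corroborates it.

    `fetch_corroboration` is assumed to return the domains of pages that
    repeat or support the claim; anything else counts as unverified.
    """
    domains = set(fetch_corroboration(claim))
    return bool(domains & TRUSTED_SOURCES)

def triage(claim: str, fetch_corroboration: Callable[[str], Iterable[str]]) -> str:
    # Uncorroborated claims are flagged for manual checking, not republished.
    return "publishable" if is_corroborated(claim, fetch_corroboration) else "flag for manual check"
```

The same pattern applies whether the checker is a person or a pipeline: the default for an uncorroborated, sensational claim is to withhold it, not to pass it on.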