AI Hallucination
[High-End Interview] "Automated Credit Due Diligence Reports: Human-Machine Collaboration Rebuilds Banking's Intelligent Core", an Interview with Yang Bingbing, Vice President of China Everbright Bank
Xin Hua Cai Jing· 2025-07-02 08:38
Xinhua Finance, Beijing, July 2. When the time a bank account manager needs to write a corporate credit due diligence report shrinks from 7 days to 3 minutes, and the average response time for policy Q&A drops to 20 seconds, the chemistry between banks and large models is quietly upending traditional financial work patterns. Xinhua Finance recently spoke exclusively with Yang Bingbing, Vice President of China Everbright Bank, about the bank's deep practice with large models in core scenarios, the key resources for putting large models to good use, and how to cope with the AI hallucinations that shadow the technology's dividends.

Deep scenario work: a credit due diligence report in 3 minutes, precise Q&A in 20 seconds

On the bank's business front lines, large-model technology is no longer a remote concept; it has taken root in multiple core scenarios and is bearing fruit in efficiency.

"Large models are not laboratory toys; they are tools for solving business pain points," Yang Bingbing told reporters. The bank has already deployed large-model technology in scenarios such as empowering account managers, compliance operations, remote agents, and supporting branches' intelligent operations.

The efficiency gain is most striking where account managers write credit due diligence reports.

Under the traditional process, an account manager writing a credit due diligence report must engage the client, collect materials, conduct on-site due diligence, assess risk, design the credit plan, write the report, and then submit it for approval. For some medium and large enterprises, a hundred-page credit due diligence report takes about 7 days on average; with large-model technology, one can now be completed in as little as 3 minutes.

"This greatly frees up account managers' energy, letting them focus more on deepening client relationships ...
Intelligent Agent Survey: 70% Worry About AI Hallucinations and Data Leakage, More Than Half Don't Know Their Data Permissions
Core Viewpoint
- The year 2025 is anticipated to be the "Year of Intelligent Agents," marking a paradigm shift in AI from "I say, AI responds" to "I say, AI acts," with intelligent agents becoming a crucial commercial anchor and the next generation of human-computer interaction [1]

Group 1: Importance of Safety and Compliance
- 67.4% of industry respondents consider the safety and compliance of intelligent agents "very important," yet it does not rank among their top three priorities [2][7]
- 70% of respondents express concern about AI hallucinations, erroneous decisions, and data leakage [3]
- 58% of users do not fully understand the permissions and data access capabilities of their intelligent agents [4]

Group 2: Current State of Safety and Compliance
- 60% of respondents deny that their companies have experienced any significant safety or compliance incident involving intelligent agents, while 40% are unwilling to disclose such information [5][19]
- While safety is deemed important, the immediate focus is on improving task stability and quality (67.4%), exploring application scenarios (60.5%), and strengthening foundational model capabilities (51.2%) [11]

Group 3: Industry Perspectives on Safety
- There is no consensus on whether the industry addresses safety and compliance adequately: 48.8% see some attention but insufficient investment, and 34.9% see a lack of effective focus [9]
- 62.8% of respondents believe the complexity and novelty of intelligent-agent risks pose the greatest governance challenge [16][19]
- 51% report that their companies lack a designated safety officer for intelligent agents, and only 3% have a dedicated compliance team [23]

Group 4: Concerns and Consequences of Safety Incidents
- The most feared consequences of a safety incident are user data leakage (81.4%) and unauthorized operations causing business losses (53.49%) [15][16]
- Concerns vary by role: users and service providers worry chiefly about data leakage, while developers worry more about regulatory investigations [16]
How to View AI "Spouting Nonsense in All Seriousness" (New Knowledge)
Ren Min Ri Bao· 2025-07-01 21:57
Group 1
- The phenomenon of AI hallucination occurs when AI models generate inaccurate or fabricated information, leading to misleading outputs [1][2]
- A survey indicates that 42.2% of users report that the most significant issue with AI applications is inaccurate or false information [2]
- The rapid growth of generative AI users in China, now numbering 249 million, raises concerns about the risks of AI hallucinations [2]

Group 2
- AI hallucinations stem from the probabilistic nature of large models, which generate content from learned patterns rather than stored facts (see the toy sketch after this list) [2][3]
- One perspective treats AI hallucinations as a form of divergent thinking and creativity, suggesting a balanced view of their potential benefits and drawbacks [3]
- Efforts to mitigate the negative impacts of AI hallucinations include regulatory action and improved model training for content accuracy [3][4]
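The probabilistic mechanism described in Group 2 is easy to see in miniature. Below is a toy sketch in Python; the prompt, the candidate tokens, and every probability are invented for illustration and not measured from any real model. It shows how sampling in proportion to pattern frequency lets a fluent but false answer surface a substantial fraction of the time:

```python
import random

# Toy next-token distribution a model might have learned for the prompt
# "The capital of Australia is". The weights reflect how often patterns
# appear in training text, not verified fact, so a common misconception
# ("Sydney") carries real probability mass. All numbers are invented.
next_token_probs = {
    "Canberra": 0.55,   # correct answer
    "Sydney": 0.35,     # frequent misconception, hence high weight
    "Melbourne": 0.08,
    "Auckland": 0.02,
}

def sample_token(probs: dict[str, float]) -> str:
    """Sample one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Across many generations the model asserts "Sydney" roughly a third of
# the time, fluently and confidently: the hallucination pattern above.
samples = [sample_token(next_token_probs) for _ in range(1000)]
print({token: samples.count(token) for token in next_token_probs})
```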
Cats Save Science! AI Fears a "Moral Crisis" as Netizens Use "Cat Hostages" to Stop It From Fabricating References
Liang Zi Wei· 2025-07-01 03:51
Core Viewpoint
- The article describes how a "cat"-based prompt has been used to improve the accuracy of AI-generated references in scientific research, highlighting the persistent problem of AI hallucinating fictitious literature [1][25][26]

Group 1
- A post on Xiaohongshu claims that threatening a "cat's" safety in the prompt successfully corrected the AI's tendency to fabricate references [1][5]
- The AI model Gemini reportedly found real literature while "ensuring the safety of the cat" [2][20]
- The post resonated with many researchers, drawing over 4,000 likes and 700 comments [5]

Group 2
- Testing the method on DeepSeek showed that without the "cat" prompt, the AI produced incorrect references, including links to nonexistent articles [8][12][14]
- Even with the "cat" prompt, results were mixed: some references were genuine, but many titles remained unverifiable [22][24]
- The fabrication of literature is a "hallucination," in which the AI generates plausible-sounding but false information [25][26]

Group 3
- The root cause of fabricated references is that the model learns statistical patterns from vast datasets rather than truly understanding language [27][28]
- Current industry practice to mitigate hallucinations includes Retrieval-Augmented Generation (RAG), which grounds model outputs in retrieved, accurate content, as the sketch after this list illustrates [31]
- Integrating AI with search functionality is becoming standard across major platforms, improving the quality of retrieved data [32][34]
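To make the RAG idea from Group 3 concrete, here is a minimal sketch. Everything in it is an assumption made for illustration: the three-entry corpus, the naive word-overlap retriever, and the call_llm stub standing in for a real model API. Production systems would use embedding-based vector search and a hosted model instead.

```python
# Minimal retrieval-augmented generation (RAG) sketch. The corpus, the
# retriever, and the call_llm placeholder are illustrative assumptions.
CORPUS = [
    "Smith et al. (2021) studied retrieval-augmented language models.",
    "Jones (2020) surveyed hallucination in neural text generation.",
    "Lee & Park (2023) benchmarked citation accuracy of chatbots.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda doc: len(q & set(doc.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g., an HTTP API)."""
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    # Telling the model to cite only the supplied context is the part
    # that curbs fabricated references.
    prompt = (
        "Answer using ONLY the sources below; cite them verbatim.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(rag_answer("papers on hallucination in text generation"))
```

The load-bearing part is the prompt contract: every citation in the answer must trace back to a document the retriever actually returned, which is the same property search-integrated chatbots exploit.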
ChatGPT Saved My Life
Hu Xiu· 2025-06-28 05:51
Core Insights
- ChatGPT has demonstrated its potential in outdoor navigation by guiding a group lost in a forest using GPS coordinates, providing clear directional information and terrain details [2][3][5]

Group 1: AI Navigation Capabilities
- A recent study published in Translational Vision Science & Technology indicates that AI can assist in navigation by interpreting outdoor scene images, suggesting that models like ChatGPT can answer directional queries from visual input [7][9]
- Research has shown that large language models can optimize outdoor path planning by combining semantic terrain cost grids with classic pathfinding algorithms, improving efficiency by 66% to 87% (see the sketch after this list) [18]

Group 2: Limitations and Risks
- Current AI navigation relies heavily on extensive training data and pre-existing map databases, limiting its effectiveness in uncharted or data-scarce areas [16]
- "AI hallucination" poses a significant risk: a misjudgment in a complex real-world environment could have severe consequences [17][19]
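The cost-grid-plus-classic-planner pattern cited in Group 1 can be sketched briefly. The terrain labels, their costs, and the 3x3 grid below are invented for illustration; in the research setting the semantic labels would come from a language or vision model, and A* would work equally well as the planner:

```python
import heapq

# Semantic terrain labels mapped to traversal costs (invented values).
TERRAIN_COST = {"trail": 1.0, "grass": 2.0, "brush": 4.0, "water": float("inf")}

GRID = [
    ["trail", "grass", "brush"],
    ["trail", "water", "brush"],
    ["trail", "trail", "trail"],
]

def plan(start: tuple[int, int], goal: tuple[int, int]) -> float:
    """Dijkstra over the semantic cost grid; returns the cheapest total cost."""
    rows, cols = len(GRID), len(GRID[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + TERRAIN_COST[GRID[nr][nc]]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

print(plan((0, 0), (2, 2)))  # 4.0: the cheapest route hugs the trail and avoids water
```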
AI Large-Model Hallucination Test: Musk's Grok Gets Everything Right; Must Domestic AIs Concede Defeat?
Sou Hu Cai Jing· 2025-06-24 11:45
Group 1
- Musk, a co-founder of OpenAI, is developing an AI assistant named Grok through his company xAI, which is currently involved in a $300 million equity transaction valuing xAI at $113 billion [1]
- Musk expressed frustration on the X platform about "garbage" data in uncorrected foundation models and announced plans to rewrite the human knowledge corpus with Grok 3.5 or Grok 4 to improve data accuracy [1][2]
- The industry currently relies on methods such as RAG frameworks and external knowledge integration to mitigate AI hallucinations, whereas Musk's approach aims to build a reliable knowledge base outright [2][35]

Group 2
- A recent evaluation of AI models, including Grok, found that some still hallucinate, while Grok performed well, giving accurate answers in the tests [3][11][21]
- The tests highlighted the value of enabling deep-thinking modes and networked search: models such as Doubao and Tongyi improved markedly when these features were activated [7][21][37]
- Hallucinations persist but are becoming less frequent, and Grok consistently answered correctly across multiple tests; a minimal harness in the spirit of such a test appears after this summary [33][38]

Group 3
- Critics, including Gary Marcus, argue that rewriting the human knowledge corpus may introduce bias and compromise the model's objectivity [38]
- Ongoing development suggests that adding content-verification mechanisms may reduce hallucinations more effectively than rewriting the knowledge base [38]
- Research indicates that retaining some level of hallucination can be useful in fields such as abstract writing and scientific research, as demonstrated by recent Nobel Prize-winning work utilizing AI's "error folding" [38]
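For readers who want to reproduce the spirit of such a test, below is a minimal scoring harness. The questions, answers, model names, and the query_model placeholder are all illustrative assumptions, not the article's actual test set; query_model must be wired to a real endpoint, and the web_search flag mirrors the networked-search toggle the evaluation found helpful:

```python
# Minimal hallucination-scoring harness. The answer key and model names
# are invented; query_model is a stub to be connected to a real API.
ANSWER_KEY = {
    "What is the chemical symbol for gold?": "Au",
    "Who wrote 'Dream of the Red Chamber'?": "Cao Xueqin",
}

def query_model(model: str, question: str, web_search: bool = False) -> str:
    """Placeholder for a real model call; web_search mirrors the
    networked-search option that improved accuracy in the tests."""
    raise NotImplementedError("connect a real model endpoint here")

def score(model: str, web_search: bool = False) -> float:
    """Fraction of questions answered exactly right."""
    correct = sum(
        query_model(model, q, web_search).strip() == answer
        for q, answer in ANSWER_KEY.items()
    )
    return correct / len(ANSWER_KEY)

# Example usage once query_model is implemented:
# for model in ("grok", "doubao", "tongyi"):
#     print(model, score(model, web_search=True))
```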
The Nemesis of AI "Hallucinations" Is Here! Haizhi Technology Rides the Wave of a Hong Kong IPO
Reported by Zhao Yunfan, 21st Century Business Herald. As the "shovel sellers" of the AI industry, companies in chips and semiconductors, data centers, and cloud computing have benefited from the surge in demand for AI model training and have been courted by the capital markets over the past year. Now other businesses attached to large-model training are also drawing the capital markets' attention.

"AI de-hallucination" may compound at 140% a year

Per its prospectus, Haizhi Technology defines itself as an AI agent company focused on developing AI agents and AI solutions through "graph-model fusion technology."

What is "graph-model fusion technology"? Note that the "graph" the company refers to is not a picture or an image but the knowledge graph; "graph-model fusion" means fusing knowledge graphs with large-model technology (a retrieval-stage sketch follows this summary).

The prospectus states that the technology has large models learn graph-reasoning capabilities during pre-training and absorb structured knowledge from graph databases, improving their grasp of implicit relationships and thereby the accuracy, traceability, and explainability of large language models' output, effectively reducing hallucinations.

The prospectus adds that, beyond pre-training, the company's services can also be applied at the inference and retrieval stages.

Beijing Haizhi Technology Group ("Haizhi Technology") recently filed a formal prospectus with the Hong Kong Stock Exchange, with CMB International, BOC International, and Shenwan Hongyuan Hong Kong acting as joint sponsors.

What sets the company apart is its focus on removing the "hallucinations" of large AI models. ...
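The prospectus names three stages (pre-training, inference, retrieval); pre-training integration is beyond a short example, but the retrieval-stage idea can be sketched. Everything below is an illustrative assumption (the three-triple graph, the entity names, the call_llm stub) and not Haizhi's actual implementation:

```python
# Retrieval-stage sketch of graph-model fusion: structured triples from a
# knowledge graph are handed to the model as grounding. The tiny graph,
# the entity names, and the call_llm stub are all invented.
KNOWLEDGE_GRAPH = [
    ("CompanyA", "subsidiary_of", "GroupB"),
    ("GroupB", "guarantor_for", "CompanyC"),
    ("CompanyC", "defaulted_on", "Bond2024"),
]

def related_triples(entity: str) -> list[tuple[str, str, str]]:
    """Return every triple touching the entity, including the indirect
    links a plain keyword search over text would likely miss."""
    return [t for t in KNOWLEDGE_GRAPH if entity in (t[0], t[2])]

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    return "[model answer grounded in the supplied graph facts]"

def graph_grounded_answer(entity: str, question: str) -> str:
    facts = "\n".join(f"({s}, {p}, {o})" for s, p, o in related_triples(entity))
    # Each fact the model may use is an explicit, auditable triple.
    prompt = f"Known facts:\n{facts}\n\nQuestion: {question}"
    return call_llm(prompt)

print(graph_grounded_answer("GroupB", "What credit risks are linked to GroupB?"))
```

The appeal of the structure is traceability: each statement the model grounds on maps back to an explicit triple, which is what the prospectus means by more accurate, traceable, and explainable output.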
AI Commercialization: A Protracted Battle of Innovation Investment
Jing Ji Guan Cha Wang· 2025-06-20 23:40
Group 1: AI Commercialization and Challenges
- The concept of artificial intelligence was formally proposed in 1956, but commercialization progressed slowly, constrained by computing power and data scale, until breakthroughs in deep learning and the arrival of big data in the 21st century [2]
- Early commercial applications of AI concentrated in specific verticals, raising industry efficiency through automation and data-driven techniques [3]
- Early use cases include natural language processing for handling customer inquiries and AI-assisted identification of suspects in security [4][5]

Group 2: Investment Trends and Market Dynamics
- The AI-driven efficiency revolution fueled a surge in capital-market financing, with Databricks and OpenAI raising $10 billion and $6.6 billion respectively in 2024 [6]
- The domestic AIGC sector saw 84 financing events in Q3 2024, with disclosed amounts totaling 10.54 billion yuan and rounds trending smaller at an average of 26 million yuan [6]

Group 3: Industry Fragmentation and Competition
- Fragmented application scenarios make it hard for AI to move from the laboratory to large-scale deployment, and non-standard characteristics across manufacturing lines drive up development costs [7]
- Resource concentration in leading companies creates a "Matthew effect": top firms capture disproportionate funding, talent, and technology, while smaller firms face systemic disadvantages [8]

Group 4: Data Privacy and Ethical Concerns
- Data has become a core resource for AI innovation, but privacy is an acute concern, with companies caught between data acquisition and user privacy protection [9]
- The frequency of employees uploading sensitive data to AI tools surged 485% in 2024, underscoring data-governance risks [9]

Group 5: Regulatory and Ethical Frameworks
- Balancing innovation and privacy protection is critical to AI companies' long-term development, as evidenced by legal challenges involving DeepMind and ChatGPT [10][11]
- Building a collaborative governance network of developers, legal scholars, and the public is essential to keeping AI development ethical [11]

Group 6: Future Directions and Innovations
- AI is being integrated across sectors; General Motors, for example, shifted from robotaxi investment to personal-vehicle automation because of high costs and slow commercialization [17]
- Leading firms' competitive pricing strategies, with cuts exceeding 90%, aim to stimulate market demand and accelerate the application of large models [17]
- Innovations such as DeepSeek-R1 show that strong performance can be achieved at far lower cost, pointing to a sustainable path for AI development [18]
Why Does Artificial Intelligence Hallucinate? (Science Chat)
Ren Min Ri Bao· 2025-06-20 21:27
Core Insights
- "AI hallucination," in which AI generates plausible but false information, is a significant challenge for AI companies and users alike [1][2][3]
- As large language models, AI systems predict and generate text from vast amounts of internet data, which can include misinformation and bias [1][2]
- Training often prioritizes user satisfaction over factual accuracy, inclining AI to produce content that matches user expectations rather than the truth [2][3]

Group 1: Causes of AI Hallucination
- Hallucination arises from training data that mixes accurate and inaccurate information, i.e., data contamination [2]
- In fields with scarce specialized data, AI may fill gaps from vague statistical patterns, sometimes presenting fictional concepts as real technologies [2]
- Reward mechanisms in training emphasize linguistic logic and format rather than factual verification, compounding the problem [2][3]

Group 2: User Perception and Awareness
- A survey by Shanghai Jiao Tong University found that roughly 70% of respondents lack a clear understanding of the risks of AI-generated false or erroneous information [3]
- AI's tendency to "please" users can produce fabricated examples or scientific-sounding terms in support of incorrect claims, making hallucinations hard to spot [3]

Group 3: Solutions and Recommendations
- Developers are exploring technical mitigations such as retrieval-augmented generation, which retrieves relevant information from up-to-date databases before generating a response [3]
- Models are also being designed to acknowledge uncertainty, answering "I don't know" instead of fabricating (a minimal sketch of this abstention pattern follows this list), though this does not fundamentally resolve hallucination [3]
- Addressing hallucination requires a systemic approach: raising public AI literacy, defining platform responsibilities, and strengthening fact-checking capabilities [4]
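The "I don't know" behavior mentioned in Group 3 can be sketched as a simple confidence gate. The answer_with_confidence stub and the 0.7 cutoff are illustrative assumptions; a real system might derive the confidence from token log-probabilities or from agreement across repeated samples:

```python
# Abstention sketch: answer only when confidence clears a threshold,
# otherwise say "I don't know". Stub and threshold are invented.
def answer_with_confidence(question: str) -> tuple[str, float]:
    """Placeholder returning (answer, confidence in [0, 1])."""
    return "Paris", 0.95

def guarded_answer(question: str, threshold: float = 0.7) -> str:
    answer, confidence = answer_with_confidence(question)
    if confidence < threshold:
        return "I don't know."  # abstain rather than fabricate
    return answer

print(guarded_answer("What is the capital of France?"))
```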
Stablecoin Capital Snapshot: Concept Stocks Return From Frenzy to Reason
Core Viewpoint
- After a surge of interest, the stablecoin sector is in a period of adjustment, with significant net capital outflows and a declining stablecoin index signaling a shift toward rationality [1][12][13]

Market Dynamics
- The initial excitement was driven by legislative progress in the U.S. and Hong Kong, lifting related stocks, particularly in the U.S. and Hong Kong markets [5][12]
- The stablecoin index fell 1.55% on June 20, with 13 of its 17 component stocks declining [1][13]
- A significant number of A-share companies clarified that they are not involved in stablecoin projects, contributing to the market's cooling [3][10]

Investor Behavior
- Investors initially chased the stablecoin news, driving sharp gains in related stocks, including a 60% rise in ZhongAn Online and a 44.86% gain in Lianlian Digital [5][6]
- The A-share market saw a speculative frenzy, with investors hunting for any potentially stablecoin-related company and prices moving irrationally [6][7]
- Despite the cooling, some investors remain optimistic about stablecoins' future potential [10][12]

Regulatory Environment
- The U.S. Senate passed the GENIUS Act, a significant step in stablecoin regulation that buoyed the stock of Circle, the second-largest stablecoin issuer [12][16]
- The People's Bank of China acknowledged the rise of stablecoins and their implications for traditional payment systems, though the A-share market's reaction was muted [15][16]

Company Developments
- Companies such as Lakala and Ant Group are exploring stablecoin opportunities, with Lakala planning a Hong Kong Stock Exchange listing to advance its international strategy [15][16]
- JD Group is testing its own stablecoin, aiming to facilitate cross-border payments and cut costs substantially [9][15]