Data Leaks

NVIDIA inference server hit by critical vulnerabilities, leaving cloud AI models fully exposed to attack
量子位· 2025-08-06 05:56
henry, from 凹非寺 (量子位 | QbitAI official account)

One wave has barely settled before the next one rises. NVIDIA's Triton Inference Server has been found by the security research firm Wiz Research to contain a chain of critical vulnerabilities.

Chained together, the flaws enable remote code execution (RCE): an attacker can read or tamper with data in shared memory, manipulate model outputs, and control the behavior of the entire inference backend. Possible consequences include model theft, data leakage, response manipulation, and even complete loss of control over the system.

NVIDIA has already released a patch, but every deployment earlier than release 25.07 remains exposed; users need to update Triton Inference Server to the latest version.

One flaw, and the whole system follows

How serious is this vulnerability chain? According to Wiz, it could allow an unauthenticated remote attacker to take control of the NVIDIA Triton Inference Server, which in turn could lead to the following cascade of severe consequences.

First is model theft: by precisely locating the shared memory region, an attacker can steal proprietary, expensive AI models.

CVE-2025-23320: when an attacker sends an oversized request that exceeds the shared memory limit, an exception is triggered, and the error message returned exposes the unique key of the backend's internal IPC (inter-process communication) shared memory region. ...
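For operators wondering whether their deployment is affected, a reasonable first step is simply to confirm which Triton release is running. Below is a minimal, hypothetical Python sketch that queries the standard KServe v2 health and metadata endpoints that Triton serves on its HTTP port (8000 by default); the host/port value and the mapping from the reported server version to an NGC container release such as 25.07 are assumptions to verify against NVIDIA's release notes, not details taken from the article.

```python
# Hypothetical helper: query a Triton Inference Server's metadata endpoint
# to see which server version it reports. Assumes the default HTTP port
# (8000) and the standard KServe v2 REST API that Triton implements.
import requests

TRITON_URL = "http://localhost:8000"  # assumption: adjust to your deployment


def check_triton_version(base_url: str = TRITON_URL) -> None:
    # Readiness probe: GET /v2/health/ready returns 200 when the server is up.
    ready = requests.get(f"{base_url}/v2/health/ready", timeout=5)
    print("server ready:", ready.status_code == 200)

    # Server metadata: GET /v2 returns the server name, version, and extensions.
    meta = requests.get(f"{base_url}/v2", timeout=5).json()
    print("reported server version:", meta.get("version", "unknown"))

    # Note: Triton reports its internal server version (e.g. "2.x.y"), which is
    # not the same string as the NGC container release tag (e.g. "25.07").
    # Map the reported version to a container release via NVIDIA's release
    # notes before concluding that the patched release is in place.


if __name__ == "__main__":
    check_triton_version()
```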
Giving red-hot intelligent agents a health check: can they pass on the key question of "safety"?
21世纪经济报道· 2025-07-04 06:55
Core Viewpoint
- The article discusses the emergence of "intelligent agents" as a significant commercial anchor and the next generation of human-computer interaction, highlighting the shift from "I say, AI responds" to "I say, AI does" [1]

Group 1: Current State and Industry Perspectives
- The concept of intelligent agents is currently the hottest topic in the market, with varying definitions leading to confusion [3]
- A survey indicates that 67.4% of respondents consider the safety and compliance issues of intelligent agents "very important," with an average score of 4.48 out of 5 [9]
- Most respondents believe the industry has not adequately addressed safety compliance, with 48.8% saying there is some awareness but insufficient investment [9]

Group 2: Key Challenges and Concerns
- The complexity and novelty of the risks associated with intelligent agents are seen as the biggest governance challenge, cited by 62.8% of respondents [11]
- The most concerning safety and compliance issues identified are AI hallucinations and erroneous decisions (72%) and data leaks (72%) [14]
- The industry is particularly worried about user data leaks (81.4%) and unauthorized operations leading to business losses (53.49%) [16]

Group 3: Collaboration and Security Risks
- The interaction of multiple intelligent agents raises new security risks, necessitating specialized security mechanisms [22]
- The industry is working on security solutions for agent-to-agent collaboration, such as ASL (Agent Security Link) technology [22]

Group 4: Data Responsibility and Transparency
- Responsibility for data handling in intelligent agents is often placed on developers, with platforms maintaining a neutral stance [35]
- There is a lack of clarity regarding data flows and responsibility, leading to potential blind spots in user data protection [34]
- Many developers are unaware of their legal responsibilities regarding user data, which complicates compliance efforts [36]
As intelligent agents sprint ahead, is security ready?
21世纪经济报道· 2025-07-03 23:07
Core Insights
- The year 2025 is referred to as the "Year of Intelligent Agents," marking a paradigm shift in AI development from "I say, AI responds" to "I say, AI acts" [1]
- The report, "Intelligent Agent Health Check Report - Safety Panorama Scan," aims to assess whether safety and compliance are ready amid the rapid development of intelligent agents [1]
- The core capabilities of intelligent agents, namely autonomy and actionability, are identified as potential sources of risk [1]

Dimension of Fault Tolerance and Autonomy
- The report establishes a model based on two dimensions, fault tolerance and autonomy, which are considered core competitive indicators for the future development of intelligent agents (a hypothetical sketch of such a grid appears after this summary) [2]
- Fault tolerance is crucial in high-stakes fields like healthcare, where errors can have severe consequences, while low-stakes fields like creative writing allow more flexibility [2]
- Autonomy measures an agent's ability to make decisions and execute actions without human intervention; higher autonomy brings greater efficiency but also greater risk [2]

Industry Perspectives on Safety and Compliance
- A survey revealed that 67.4% of respondents consider safety and compliance issues "very important," with an average score of 4.48 out of 5 [4]
- There is no consensus on whether the industry is adequately addressing safety and compliance, with 48.8% believing there is some attention but insufficient investment [4]
- The top three urgent issues identified are stability and quality of task execution (67.4%), exploration of application scenarios (60.5%), and enhancement of foundational model capabilities (51.2%) [5]

Concerns Over AI Risks
- The most common safety and compliance concerns include AI hallucinations and erroneous decisions (72%) and data leaks (72%) [6]
- The industry is particularly worried about user data leaks (81.4%) and unauthorized operations leading to business losses (53.49%) [6]

Responsibility and Data Management
- Responsibility for data management in intelligent agents is often unclear, with user agreements typically placing the burden on developers [14][15]
- Many developers lack awareness of their legal responsibilities regarding user data, which complicates compliance efforts [15]
- The report highlights the need for clearer frameworks and standards to ensure responsible data handling and compliance within the intelligent agent ecosystem [15]
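The report's exact scoring method is not published in this summary, but the idea of crossing fault tolerance with autonomy can be illustrated with a small, entirely hypothetical sketch. The class names, scales, and cutoffs below are illustrative assumptions, not the report's methodology.

```python
# Hypothetical illustration of the two-dimensional framing described above:
# an agent's risk tier rises as autonomy increases and fault tolerance falls.
# Scales, labels, and cutoffs are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class AgentProfile:
    name: str
    fault_tolerance: float  # 0.0 = errors are catastrophic, 1.0 = errors are cheap
    autonomy: float         # 0.0 = human approves every step, 1.0 = fully autonomous


def risk_tier(agent: AgentProfile) -> str:
    # Low tolerance for error combined with high autonomy is the worst quadrant.
    if agent.fault_tolerance < 0.5 and agent.autonomy >= 0.5:
        return "high risk: needs human-in-the-loop and strict guardrails"
    if agent.fault_tolerance < 0.5 or agent.autonomy >= 0.5:
        return "medium risk: add monitoring and rollback paths"
    return "low risk: lightweight oversight is enough"


if __name__ == "__main__":
    for profile in [
        AgentProfile("medical triage agent", fault_tolerance=0.1, autonomy=0.8),
        AgentProfile("creative writing assistant", fault_tolerance=0.9, autonomy=0.3),
    ]:
        print(profile.name, "->", risk_tier(profile))
```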
Agent survey: 70% worry about AI hallucinations and data leaks, over half unaware of data permissions
21世纪经济报道· 2025-07-02 00:59
Core Viewpoint
- The year 2025 is anticipated to be the "Year of Intelligent Agents," marking a paradigm shift in AI development from "I say, AI responds" to "I say, AI acts," with intelligent agents becoming a crucial commercial anchor and the next generation of human-computer interaction [1]

Group 1: Importance of Safety and Compliance
- 67.4% of industry respondents consider the safety and compliance issues of intelligent agents "very important," but the topic does not rank among the top three priorities [2][7]
- The majority of respondents (70%) express concerns about AI hallucinations, erroneous decisions, and data leakage [3]
- 58% of users do not fully understand the permissions and data access capabilities of intelligent agents [4]

Group 2: Current State of Safety and Compliance
- 60% of respondents deny that their companies have experienced any significant safety compliance incidents related to intelligent agents, while 40% are unwilling to disclose such information [5][19]
- While safety is deemed important, the immediate focus is on enhancing task stability and quality (67.4%), exploring application scenarios (60.5%), and improving foundational model capabilities (51.2%) [11]

Group 3: Industry Perspectives on Safety
- There is no consensus on whether the industry is adequately addressing safety and compliance, with 48.8% believing there is some attention but insufficient investment and 34.9% seeing a lack of effective focus [9]
- The majority of respondents (62.8%) believe that the complexity and novelty of intelligent agent risks pose the greatest challenge to governance [16][19]
- 51% of respondents report that their companies lack a designated safety officer for intelligent agents, and only 3% have a dedicated compliance team [23]

Group 4: Concerns and Consequences of Safety Incidents
- The most significant concerns regarding potential safety incidents are user data leakage (81.4%) and unauthorized operations leading to business losses (53.49%) [15][16]
- Different industry roles have different concerns: users and service providers worry primarily about data leakage, while developers are more concerned about regulatory investigations [16]