Workflow
大模型安全
icon
Search documents
喜报丨信安世纪荣膺ISC.AI 2025创新百强称号
Xin Lang Cai Jing· 2025-12-17 14:19
Core Insights - The ISC.AI 2025 Sixth Innovation Top 100 Awards Ceremony was successfully held on December 17, showcasing significant participation from over 500 companies and 100 universities across the country [1][4] - The event collected more than 800 innovative products and solutions, covering cutting-edge areas such as secure large models, secure intelligent agents, and large model security [1][4] - Beijing Xinan Century Technology Co., Ltd. (stock code: 688201) was awarded the title of "ISC.AI 2025 Sixth Innovation Top 100" for its outstanding performance in the field of identity security [1][4]
50位专家齐聚冰城 共探AI时代安全防护新路径
Zhong Guo Xin Wen Wang· 2025-11-30 06:23
Core Insights - The 2025 Northeast Academic Seminar on Information Network Security was held in Harbin, focusing on the challenges and advancements in cybersecurity related to artificial intelligence and industrial internet security [1][2]. Group 1: Seminar Overview - The seminar gathered 50 experts from over 20 universities and research institutions to discuss cutting-edge topics such as industrial internet security, AI data privacy, and deep forgery [1][2]. - The event was organized by the Ministry of Public Security's Third Research Institute and co-hosted by several universities and professional committees [5]. Group 2: Key Discussions - Professor Yao Yu from Northeast University presented on the challenges of industrial internet security and shared the "Listening" security capability system, highlighting AI's potential in industrial safety [2]. - Professor Lv Hongwu from Harbin Engineering University discussed the progress of AI-driven traffic classification methods and the challenges of imbalanced traffic, long sequence dependencies, and high labeling costs [2]. - A roundtable discussion addressed topics such as model governance, security evaluation systems, AI content safety, and data security, with experts proposing the establishment of a secure and controllable large model system and the improvement of model security standards [2]. Group 3: Specialized Forums - A sub-forum at Heilongjiang University focused on critical issues in content security and trusted computing, including image authenticity verification, risk assessment of deep forgery, and methods for detecting visual content tampering [5]. - The forum also explored key technologies in federated learning and efficient data structure design for trusted execution environments [5].
大模型“带病运行”,漏洞占比超六成
3 6 Ke· 2025-11-17 10:34
Core Viewpoint - The rapid integration of large models into critical sectors has transformed inherent risks related to data security, algorithm robustness, and output credibility from theoretical concerns into real threats, impacting public interest and social order [1]. Group 1: Security Risks and Vulnerabilities - The National Cybersecurity Center reported severe vulnerabilities in the open-source model tool Ollama, leading to risks such as data leakage, computational theft, and service interruptions [1]. - A significant increase in security vulnerabilities was noted, with 281 vulnerabilities identified during the first domestic AI model testing in 2025, over 60% of which were unique to large models [1]. - The monitoring report from the Frontier AI Risk Monitoring Platform indicated that the risk index for models has reached new highs, with network attack risks increasing by 31%, biological risks by 38%, chemical risks by 17%, and loss of control risks by 50% over the past year [3]. Group 2: Industry Response and Monitoring - The industry faces challenges in proactive security measures, often resorting to reactive fixes due to a lack of comprehensive risk management tools [2]. - The Frontier AI Risk Monitoring Platform was launched to assess and monitor catastrophic risks associated with cutting-edge AI models, providing targeted evaluations and regular monitoring of 15 leading model companies [2]. - The assessment methodology of the monitoring platform includes defining risk areas, selecting evaluation benchmarks, choosing leading models, conducting benchmark tests, and calculating risk indices [8]. Group 3: Trust and Integrity Issues - Data leakage, misleading outputs, and content violations are prevalent security risks, highlighting weaknesses in infrastructure protection [3]. - The integrity of models is a growing concern, with only 4 models scoring above 80 on the honesty assessment benchmark, while 30% scored below 50, indicating a significant risk of misinformation [5]. - The lack of a unified approach to risk assessment and transparency in evaluation reports contributes to uncertainty regarding the risk status of various models [7]. Group 4: Future Challenges and Innovations - The evolution of AI agents and multimodal models is expected to introduce new forms of security risks, with potential for malicious exploitation of enhanced capabilities [11]. - The anticipated risks over the next 12 to 24 months include "model supply chain poisoning" and "autonomous agent misuse," which could lead to significant security breaches [11]. - The complexity of large model risks necessitates collaborative efforts in technological innovation and industry standards to address the rapid pace of threat evolution [12].
360重磅发布《大模型安全白皮书》 推动AI应用“安全、向善、可信、可控”
Zheng Quan Ri Bao· 2025-11-09 11:07
Core Insights - The white paper systematically outlines five key risks threatening the security of large models, including infrastructure security risks, content security risks, data and knowledge base security risks, agent security risks, and user-end security risks [1][3] - The proposed dual governance strategy combines "external security" and "platform-native security" to create a comprehensive protection network for AI applications [1][3] Group 1: Key Risks - The first category of risks is infrastructure security risks, which include device control, supply chain vulnerabilities, denial-of-service attacks, and misuse of computing resources [1] - The second category is content security risks, involving non-compliance with core values, false or illegal content, model hallucinations, and prompt injection attacks [1] - The third category focuses on data and knowledge base security risks, highlighting issues such as data breaches, unauthorized access, privacy abuse, and intellectual property concerns [1] - The fourth category addresses agent security risks, where the increasing autonomy of agents blurs the security boundaries in areas like plugin invocation, computing resource scheduling, and data flow [1] - The fifth category is user-end security risks, which encompass permission control, API call monitoring, execution of malicious scripts, and security during MCP execution [1] Group 2: Security Solutions - The white paper emphasizes a dual governance strategy: "external security" acts as a flexible response to real-time risks, while "platform-native security" builds a robust security foundation from the ground up [1] - 360's products, including enterprise-level knowledge bases and agent construction platforms, are designed to embed security deeply within the platform, ensuring compliance with national and industry standards [2] - The three main platform products work together to address inherent security challenges, such as data leakage, uncontrolled agent behavior, and terminal misuse, thereby establishing a stable foundation for AI applications [2] - 360 has implemented these capabilities across various sectors, including government, finance, and manufacturing, transforming theoretical security into practical solutions [2] - The company aims to collaborate with academia and industry to promote security standards and technology sharing, contributing to a safer and more trustworthy AI ecosystem [2]
360发布《大模型安全白皮书》
Zhong Zheng Wang· 2025-11-09 03:29
Core Insights - The 360 Digital Security Group released the "Large Model Security White Paper" at the World Internet Conference, outlining five key risks associated with large model operations and proposing a dual-track governance strategy for security [1][2] Group 1: Key Risks Identified - The white paper identifies five critical risks threatening large model security: 1. Infrastructure security risks, including device control, supply chain vulnerabilities, denial-of-service attacks, and misuse of computing resources 2. Content security risks, involving non-compliance with core values, false or illegal content, large model hallucinations, and prompt injection attacks 3. Data and knowledge base security risks, highlighting issues like data leakage, unauthorized access, privacy abuse, and intellectual property concerns 4. Intelligent agent security risks, where the boundaries of security become blurred due to increased autonomy in agent operations 5. User-end security risks, which encompass permission control, API call monitoring, execution of malicious scripts, and security in MCP execution [1] Group 2: Proposed Security Solutions - The white paper advocates a "plug-in security + platform-native security" dual governance strategy, which offers two main advantages: 1. High adaptability and low deployment costs, allowing for quick integration into various enterprise environments without redundant development 2. Rapid response capabilities with independent monitoring and interception mechanisms that can identify and block real-time threats, such as abnormal computing consumption or malicious content, in milliseconds [2] Group 3: Implementation and Future Plans - 360 has successfully implemented these security capabilities across various sectors, including government, finance, and manufacturing, transforming large model security from theoretical concepts into practical, actionable solutions - The company plans to collaborate with academia and industry to promote the establishment of security standards and technology sharing, aiming to build a safe and trustworthy AI ecosystem [2]
360胡振泉谈AI换脸乱象:以现有识别鉴定技术看破有难度
Nan Fang Du Shi Bao· 2025-11-09 01:38
Group 1 - The core issue of AI-generated content, particularly the risks associated with AI face-swapping technology, has gained significant attention following an incident involving actor Wen Zhengrong [1] - Hu Zhenquan, president of 360 Digital Security Group, highlighted the challenges in identifying AI-generated content due to its realism, indicating a need for improved detection technologies [1][3] - The 2025 World Internet Conference in Wuzhen served as a platform for the release of the "Large Model Security White Paper," which outlines the security vulnerabilities associated with AI large models [3][4] Group 2 - The white paper identified 281 security vulnerabilities, with 177 being unique to large models, representing over 60% of the total [3] - Five key risk categories threatening large model security were outlined, including infrastructure security risks, content security risks, data and knowledge base security risks, user-end security risks, and the complexities arising from the interconnection of these risks [4] - The proposed dual governance strategy includes "external security" focusing on model protection and "native platform security" embedding security capabilities within core components [4] Group 3 - Despite the controversies surrounding AI intelligent agents, Hu Zhenquan expressed optimism about their future, likening their current stage to the early days of personal computers [5] - He emphasized that intelligent agents, as essential carriers for large model applications, are expected to evolve and become mainstream in AI applications [5] - The development of intelligent agents is anticipated to lead to significant advancements in efficiency and capability in the near future [5]
乌镇峰会上三六零首发《大模型安全白皮书》 拉起全链路安全防线
Core Viewpoint - The 360 Digital Security Group released the "Large Model Security White Paper" at the World Internet Conference, outlining five key risks associated with large model operations and proposing a dual-track governance strategy for security [1][2]. Summary by Sections Key Risks Identified - The white paper identifies five critical risks threatening large model security: 1. Infrastructure security risks, including device control, supply chain vulnerabilities, denial-of-service attacks, and misuse of computing resources 2. Content security risks, involving non-compliance with core values, false or illegal content, model hallucinations, and prompt injection attacks 3. Data and knowledge base security risks, highlighting data breaches, unauthorized access, privacy abuse, and intellectual property issues 4. Agent security risks, where the increasing autonomy of agents blurs security boundaries in areas like plugin calls, computing resource scheduling, and data flow 5. User-end security risks, covering permission control, API call monitoring, malicious script execution, and MCP execution security [1][2]. Governance Strategy - The white paper proposes a dual-track governance strategy of "external security + platform-native security": - External security acts as an "external bodyguard" to flexibly respond to real-time risks, while platform-native security serves as an "internal armor" to strengthen the foundational security [2][3]. - External security focuses on monitoring and defending against risks related to computing hosts, software ecosystems, input/output content, and model hallucinations [2]. - Platform-native security embeds security capabilities into core components, enhancing the safety of supporting components and ensuring compliance throughout the process [3][4]. Product Capabilities - The company has developed a comprehensive solution for large model security, consisting of seven core product capabilities that combine external and platform-native security: - External security capabilities do not intrude on the original architecture of large models and provide flexible, rapid dynamic protection through external tools [3]. - Key products include the Large Model Guardian computing host security system, detection system, protection system, and hallucination detection and mitigation system, which together form an external barrier against infrastructure and content risks [3][4]. Implementation and Future Plans - The platform-native security approach is reflected in three major products: an enterprise-level knowledge base, an agent construction and operation platform, and an agent client, which collectively address internal security challenges [4]. - The company has successfully implemented these capabilities across various sectors, including government, finance, and manufacturing, transforming large model security from theory into practical solutions [4][5]. - Future plans involve collaboration with academia and industry to promote security standards and technology sharing, aiming to build a safe and trustworthy AI ecosystem [5].
乌镇峰会,360首发《大模型安全白皮书》,拉起全链路安全防线
Zhong Jin Zai Xian· 2025-11-08 04:50
Core Insights - The 360 Digital Security Group released the "Large Model Security White Paper" at the World Internet Conference, outlining five key risks associated with large model operations and proposing a dual-track security strategy to enhance AI safety and reliability [1][4][12] Risk Summary - The white paper identifies five critical risks to large model security: 1. Infrastructure security risks, including device control, supply chain vulnerabilities, denial-of-service attacks, and misuse of computing resources [5] 2. Content security risks, which involve non-compliance with core values, false or illegal content, model hallucinations, and prompt injection attacks [5] 3. Data and knowledge base security risks, highlighting issues like data leakage, unauthorized access, privacy abuse, and intellectual property concerns [5] 4. Intelligent agent security risks, where the increasing autonomy of agents blurs security boundaries in areas like plugin invocation and data flow [5] 5. User-end security risks, including permission control, API call monitoring, malicious script execution, and security in multi-cloud platforms [5] Security Strategy - The white paper proposes a dual-track governance strategy of "External Security + Platform Native Security" to address the identified risks: - External security acts as an "external bodyguard" for real-time risk management, while platform native security serves as an "internal armor" to strengthen foundational safety [7][10] Implementation of Security Measures - The external security approach focuses on proactive monitoring and defense against threats to computing hosts, software ecosystems, input/output content, and model hallucinations, offering adaptability and rapid response capabilities [9] - The platform native security embeds safety features into core components, ensuring compliance with national and industry standards while providing comprehensive protection for intelligent applications [9][10] Comprehensive Defense Capabilities - The company has developed a comprehensive solution comprising seven core product capabilities that integrate external and platform native security, addressing risks from infrastructure to content layers [10] - The external security products include systems for computing host security, detection, protection, and hallucination detection, while platform native products safeguard data, control intelligent agent behavior, and secure user endpoints [10][12] Industry Application - The security capabilities have been successfully implemented across various sectors, including government, finance, and manufacturing, transforming theoretical security measures into practical solutions [12]
启明星辰前三季度毛利率提升7个百分点
Zheng Quan Ri Bao Wang· 2025-10-28 14:13
Core Insights - Q3 2025 report shows significant improvement in key financial metrics for Qimingxingchen, with revenue reaching 1.548 billion yuan and gross margin increasing by 7 percentage points year-on-year [1] - The company has achieved positive operating cash flow for two consecutive quarters, indicating a solid operational foundation [1] Financial Performance - Revenue for the first three quarters of 2025 was 1.548 billion yuan, with a gross margin increase of 7 percentage points year-on-year and a nearly 16 percentage point increase in Q3 alone [1] - Operating expenses were reduced by 161 million yuan compared to the same period last year, demonstrating effective cost management [1] - Net cash flow from operating activities increased by 443 million yuan year-on-year, a growth of 75%, maintaining positive inflow for two consecutive quarters [2] Cash and Debt Management - The company has cash reserves exceeding 4.2 billion yuan and no interest-bearing debt, positioning it strongly within the industry [2] - This financial strength supports increased investment in technology and market opportunities, as well as talent retention [2] Strategic Partnerships and Market Position - Collaboration with China Mobile has accelerated technology transfer and market expansion, leading to successful bids for key products like large model security products and the "Data Oasis" product line [2] - The company has secured a major contract for a national medical big data center, enhancing its market position in data security [3] Technological Advancements - Qimingxingchen is proactively engaging in quantum computing and other cutting-edge technology areas, integrating quantum key distribution into its VPN products to enhance security capabilities [3] - The company’s large model application firewall (MAF) has been recognized as a leading technology in AI application security, winning exclusive contracts with major internet firms [2][3]
腾讯云全栈安全能力亮剑国家网安周,筑牢数字时代“智能防线”
Sou Hu Cai Jing· 2025-10-15 08:55
Core Insights - Tencent Security showcased its advancements in AI security, cybersecurity, and domestic integration innovation at the 2025 National Cybersecurity Awareness Week in Kunming, Yunnan, emphasizing its commitment to building a "digital shield" for various industries [1][3]. AI Security - The rise of AI, particularly large models, presents new security challenges that traditional defense systems struggle to address, necessitating the construction of a robust defense line for the AI era [3]. - Tencent provides a comprehensive solution covering the entire lifecycle of large models, integrating years of practical experience into a structured risk governance framework [3][5]. Large Model Security - Tencent introduced the AI-SPM large model security posture management product to protect the operational environment of large models, enabling timely detection and handling of security risks [5]. - The LLM-WAF large model firewall was launched to offer full-link protection against various threats, including computational abuse and data leakage [5]. Data and Content Security - Tencent employs multiple technologies for end-to-end protection of data security and privacy throughout the lifecycle of large models, including data classification, encryption, and auditing [8]. - The Tencent Cloud Tianyu content risk control platform supports large model training and inference through a six-dimensional approach, ensuring effective content management [8]. Cloud Security - Tencent Cloud maintains over 1.5 million servers globally and has developed a "1+4+N" security defense system to address common and industry-specific security challenges faced by enterprises [10][12]. - The "1+4+N" defense system includes exposure management, data security, host security, web application firewalls, and cloud firewalls, providing a comprehensive security framework for cloud operations [12][14]. Cloud-Native Security - Tencent Cloud has established a cloud-native security system that adheres to principles such as security capability native integration and zero-trust architecture, ensuring the security of containerized applications throughout their lifecycle [15][17]. - The system includes capabilities for image security, runtime intrusion detection, and network security, addressing the unique risks associated with cloud-native architectures [15][17]. Mini Program Security - Tencent Cloud has launched an innovative security solution for mini programs, providing essential protections such as DDoS defense and web application security, ensuring a seamless user experience for retail businesses [18].