Workflow
蚁天鉴
icon
Search documents
2025国家网络安全周在昆明开幕 蚂蚁集团gPass等多款安全可信AI技术亮相
Core Viewpoint - The article highlights Ant Group's participation in the 2025 National Cybersecurity Publicity Week, showcasing its innovations in AI security, data protection, and intelligent risk control, particularly through the introduction of the gPass framework for AI glasses [1][2]. Group 1: gPass Framework - gPass is designed to create a trusted, seamless information bridge between AI glasses and intelligent agents, focusing on three core capabilities: security, interaction, and connectivity [1][2]. - The framework employs technologies such as trusted identity circulation, end-to-end encryption, and device authentication to ensure user information security and privacy [2]. - gPass has already partnered with brands like Rokid, Xiaomi, Quark, and Thunderbird, with plans to expand its applications to various life scenarios, including healthcare and travel [2]. Group 2: Advanced Security Technologies - Ant Group is promoting the ASL initiative to ensure security in the collaboration of intelligent agents, focusing on permissions, data, and privacy [3]. - The "Ant Tianjian" solution for large models includes features for intelligent agent security scanning and abuse detection, forming a comprehensive technology chain [3]. - The "Trusted Data Space" product under Ant Group's MiSuan division provides high-performance, low-cost, and secure data fusion capabilities, supporting various sectors [3]. Group 3: Risk Control Capabilities - Ant Group's financial technology division has demonstrated advanced risk control capabilities against document and voice forgery, achieving a 98% accuracy rate in fake document detection [4]. - The company has collaborated with judicial authorities to address illegal financial intermediaries, involving over 200 individuals since 2024 [4]. - Ant Group aims to build a trustworthy AI governance system to ensure the authenticity and reliability of AI-generated content and agent behavior [4]. Group 4: Commitment to Security Technology - Ant Group emphasizes that security technology is fundamental to its development, committing to enhancing AI security capabilities through responsible privacy protection and comprehensive AI governance [4][5]. - The company has received multiple awards for its advancements in business security, AI security, and content security, reflecting its leadership in the field [5].
2025国家网络安全周在昆明开幕,蚂蚁集团gPass等多款安全可信AI技术亮相
Core Viewpoint - The article highlights Ant Group's participation in the 2025 National Cybersecurity Publicity Week, showcasing its innovations in AI security, data protection, and intelligent risk control, particularly through the introduction of the gPass framework for AI glasses [1][2]. Group 1: gPass Framework - gPass is designed to provide a secure, interactive, and connected experience for AI glasses, addressing challenges such as fragmented ecosystems and limited application scenarios in the AI glasses industry [1][2]. - The framework employs technologies like trusted identity circulation, end-to-end encryption, and device authentication to ensure user information security and privacy [2]. - gPass has already partnered with brands like Rokid, Xiaomi, Quark, and Thunderbird, with plans to expand its applications to various life scenarios, including healthcare and travel [2]. Group 2: Advanced Security Technologies - Ant Group has introduced several advanced security technologies, including the ASL initiative for agent collaboration security and the "Ant Tianjian" model security solution, which includes features for detecting misuse and ensuring data privacy [3]. - The ZOLOZ Deeper technology effectively addresses threats from deep forgery, such as fake faces and voice synthesis [3]. - The "Trusted Data Space" product under Ant Group's Mican provides high-performance, low-cost, and secure data fusion capabilities, supporting various sectors [3]. Group 3: Risk Control Capabilities - Ant Group's financial technology division has demonstrated advanced risk control capabilities against document and voice forgery, achieving a 98% accuracy rate in fake document detection and covering over 50 types of voice synthesis [4]. - The company has collaborated with judicial authorities to address illegal financial intermediaries, involving over 200 individuals since 2024 [4]. - Ant Group aims to build a trustworthy AI governance system to ensure the authenticity and reliability of AI-generated content and agent behavior [4]. Group 4: Recognition and Awards - Ant Group's security technology has received multiple awards for its research and application in business security, AI security, and content security, including first prizes from various technology advancement awards [5].
AI时代未成年人需要“调控型保护”
Nan Fang Du Shi Bao· 2025-09-13 23:13
Core Insights - The forum titled "Regulating AI Content, Building a Clear Ecology Together" was held on September 12, focusing on the risks and challenges associated with AI-generated content and its dissemination [6][8][14] - The report "AI New Governance Direction: Observations on the Governance of Risks in AI-Generated Content and Dissemination" was released, highlighting the rapid development of generative AI and the emergence of new risks such as misinformation and privacy concerns [8][14][15] Group 1: AI Governance and Risk Management - The report emphasizes the need for a multi-faceted governance approach to address the risks associated with generative AI, including misinformation, deepfake scams, and privacy violations [15][19] - Key recommendations include strengthening standards and technical governance, promoting collaborative governance among government, enterprises, and associations, and prioritizing social responsibility and ethical considerations in AI development [7][22][23] Group 2: Findings from the Report - The report indicates that 76.5% of respondents have encountered AI-generated fake news, highlighting the widespread impact of misinformation [8][14][20] - It identifies various risks associated with generative AI, including misleading information, deepfake scams, privacy breaches, copyright infringements, and the potential harm to minors [15][18][19] Group 3: Expert Insights and Recommendations - Experts at the forum discussed the challenges of AI content governance, emphasizing the need for a dynamic approach to address the complexities of misinformation and the evolving nature of AI technology [9][10][19] - Recommendations include implementing mandatory identification for AI-generated content, enhancing data compliance mechanisms, and developing educational programs to improve AI literacy among minors [23][24]
首发首秀世界人工智能大会 智能体开启AI新赛道
Jing Ji Ri Bao· 2025-08-07 00:09
Core Insights - The World Artificial Intelligence Conference has seen a surge in the number of intelligent agents, with over three times the number of products launched in the past three months compared to the entire previous year [1][2] - Intelligent agents, defined as autonomous entities capable of perceiving their environment and taking actions to achieve specific goals, are becoming a focal point in the tech industry [2][3] Industry Developments - Numerous companies, including MiniMax, SenseTime, and JieYue XingChen, have launched new intelligent agent products, while Fudan University has introduced an ethical review intelligent agent called "YiJian" [2] - In the industrial sector, Shanghai MajiGeek has released the first real-time spatial multimodal interactive intelligent agent, "Installation XiaoLingTong," aimed at improving construction efficiency and reducing errors [2] - The AI-Scientist platform by Zhongke Wenge focuses on enhancing research efficiency through AI collaboration, transforming the research paradigm from human-led to AI-assisted exploration [2] Market Trends - The global intelligent agent market has surpassed $5 billion, with an annual growth rate of 40%, indicating a significant expansion in this sector [4] - Major tech companies are investing heavily in intelligent agents, with Alibaba Cloud launching "Wuying AgentBay," a cloud infrastructure designed for intelligent agents [5] Technical Challenges - A key challenge in the intelligent agent market is the limited computing power of local devices, which struggles to support high-demand tasks, particularly those requiring extensive GPU processing [4] - New companies are emerging to address these challenges, such as Xinghuan Technology, which offers a new AI infrastructure technology to facilitate the rapid development of industry-specific intelligent agents [4] Safety and Security - Concerns regarding the safety of intelligent agents are rising, with over 70% of industry practitioners worried about issues like AI hallucinations, erroneous decisions, and data breaches [6] - Ant Group has upgraded its large model security solution, "Ant Tianjian," to include intelligent agent safety assessment tools, enhancing security measures for AI applications [6] - PPIO has introduced the first domestic intelligent agent sandbox product, designed to ensure secure execution of tasks in isolated environments, preventing data leaks and resource conflicts [6]
首发首秀世界人工智能大会——智能体开启AI新赛道
Jing Ji Ri Bao· 2025-08-06 21:58
Core Insights - The World Artificial Intelligence Conference has seen a surge in intelligent agents, with more products launched in the past three months than in the entire previous year, indicating a significant trend in the tech industry [1] Group 1: Intelligent Agent Development - Intelligent agents, capable of perceiving environments and taking actions to achieve specific goals, are emerging rapidly, with new products from companies like MiniMax, SenseTime, and JieYue Star [2] - The "Installation Little Genius," a real-time spatial multimodal interactive intelligent agent, was launched to enhance construction efficiency and reduce human error [2] - The AI-Scientist platform aims to transform research methodologies by enabling AI collaboration in scientific exploration, thus improving research efficiency [2] Group 2: Market Growth and Challenges - The global intelligent agent market has surpassed $5 billion, with a year-on-year growth rate of 40%, highlighting its increasing importance [4] - A significant challenge remains in local device computing power, which struggles to support high-demand intelligent agent tasks, particularly those requiring extensive GPU processing [4] - New companies are emerging to address these challenges, such as Star Ring Technology, which offers a platform for quickly building industry-specific intelligent agents [4] Group 3: Industry Trends and Innovations - Major tech companies are investing in intelligent agents, with Alibaba Cloud launching the "Shadowless AgentBay," a cloud infrastructure designed for intelligent agents [5] - The transition of AI agents from mere tools to core engines of industry is reshaping market boundaries, presenting new challenges in human-agent collaboration [5] Group 4: Security Concerns - The rise of intelligent agents brings security challenges, with over 70% of industry professionals concerned about risks such as AI hallucinations and data breaches [6] - Ant Group has upgraded its security solution for intelligent agents, introducing tools for safety assessment and zero-trust defense [6] - PPIO has launched a sandbox product designed to isolate tasks in a secure cloud environment, minimizing risks of data leakage and resource conflicts [6]
应对AI新安全挑战,首份智能体安全白皮书发布
Group 1 - The AI field is transitioning from the era of large models to the era of intelligent agents, which brings security challenges such as overreach and excessive delegation [1] - The "2025 Terminal Intelligent Agent Security" white paper was jointly released by Shanghai AI Laboratory, CAICT, Ant Group, and IIFAA Alliance, providing a comprehensive risk assessment guide for terminal intelligent agents [1][2] - Intelligent agents are rapidly penetrating various terminal devices like smartphones, glasses, headphones, and car systems, redefining interaction methods across multiple industries including life, industrial, medical, and education [1] Group 2 - The white paper outlines three major protective paths: single intelligent agent security, multi-agent trusted interconnection, and AI terminal security, aiming to serve as a comprehensive and targeted security guideline [2] - The white paper introduces a terminal intelligent agent security system supported by a technical ecosystem, detailing security technologies for single agents and multi-agent interactions [2] - Over 70% of intelligent agent practitioners express concerns about issues like AI hallucinations, erroneous decisions, and data leaks, with more than half indicating their companies lack a designated security officer for intelligent agents [3] Group 3 - Ant Group's "Ant Tianjian" has announced an upgrade to its large model security solution, adding intelligent agent security assessment tools with a risk judgment accuracy rate exceeding 96% [3]
云姨夜话丨谁在“安全”前提下持续破解AI的“医”题?
Qi Lu Wan Bao· 2025-07-30 09:34
Group 1 - The core viewpoint of the articles highlights the rapid growth of the medical AI market, projected to exceed $2.7 billion in 2025 and reach $17 billion by 2034, indicating a significant transformation in traditional healthcare models through AI integration [2][3]. - Ant Group's AI health application AQ has made substantial progress by connecting with 269 doctor AI agents and launching the first intelligent agent standard system in collaboration with the China Academy of Information and Communications Technology [2][3]. - The AI application in clinical settings is advancing, particularly in chronic disease management, providing users with 24/7 access to professional health support through mobile devices [3][4]. Group 2 - Ant Group's AI safety solution "Ant Tianjian" has been upgraded to include an AI agent safety evaluation tool, achieving over 96% accuracy in risk assessment and supporting testing across 11 industries [4][5]. - The World Digital Academy has released new standards for AI agent operational safety testing, aligning with Ant Tianjian's capabilities to ensure the secure application of AI technologies in healthcare [5]. - The healthcare industry is transitioning from "usable" to "user-friendly" AI solutions while facing challenges such as data silos and ethical standards, necessitating comprehensive training for healthcare professionals [5].
WAIC 2025丨应对智能体安全挑战 蚂蚁集团升级“蚁天鉴”
Xin Hua Cai Jing· 2025-07-28 11:14
Core Insights - The AI field is transitioning from the era of large models to the era of intelligent agents, with Ant Group's "Yitianjian" upgrading its security solutions to include AI agent safety assessment tools [1][2] - The upgraded features of "Yitianjian" include four core functions: agent alignment, MCP security scanning, intelligent agent security scanning, and zero-trust defense [1] - Over 70% of AI agent practitioners express concerns about issues such as AI hallucinations, erroneous decision-making, and data leaks, highlighting the safety challenges posed by intelligent agents [1] Company Insights - "Yitianjian" is a collaborative development between Ant Group and Tsinghua University, designed to ensure the safe and reliable operation of large model technologies [2] - The risk assessment agent of "Yitianjian" boasts an accuracy rate of over 96% and supports testing for intelligent agents across 11 industries [2] - The safety philosophy of the upgraded "Yitianjian" is based on the concept of "attack to promote defense," creating a comprehensive protection system for intelligent agents [2]
蚂蚁集团大模型数据安全总监杨小芳:用可信AI这一“缰绳”,驾驭大模型这匹“马”
Mei Ri Jing Ji Xin Wen· 2025-06-09 14:42
Core Viewpoint - The rapid development of AI technology presents significant application potential in data analysis, intelligent interaction, and efficiency enhancement, while also raising serious security concerns [1][2]. Group 1: Current AI Security Risks - Data privacy risks are increasing due to insufficient transparency in training data, which may lead to copyright issues and unauthorized access to user data by AI agents [3][4]. - The lowering of security attack thresholds allows individuals to execute attacks through natural language commands, complicating the defense against AI security threats [3][4]. - The misuse of generative AI (AIGC) can lead to social issues such as deepfakes, fake news, and the creation of tools for cyberattacks, which can disrupt social order [3][4]. - The long-standing challenge of insufficient inherent security in AI affects the reliability and credibility of AI technologies, potentially leading to misinformation and decision-making biases in critical sectors like healthcare and finance [3][4]. Group 2: Protective Strategies - The core strategy for preventing data leakage in both AI and non-AI fields is comprehensive data protection throughout its lifecycle, from collection to destruction [4][5]. - Specific measures include scanning training data to remove sensitive information, conducting supply chain vulnerability assessments, and performing security testing before deploying AI agents [5][6]. Group 3: Governance and Responsibility - Platform providers play a crucial role in governance by scanning and managing AI agents developed on their platforms, but broader regulatory oversight is necessary to ensure effective governance across multiple platforms [7][8]. - The establishment of national standards and regulatory policies is essential for monitoring and constraining platform development, similar to the regulation of mini-programs [7][8]. Group 4: Future Trends in AI Security - Future AI security development may focus on embedding security capabilities into AI infrastructure, achieving "security by design" to reduce costs associated with security measures [15][16]. - Breakthroughs in specific security technologies could provide ready-to-use solutions for small and medium enterprises facing AI-related security risks [15][16]. - The importance of industry standards is emphasized as they provide a foundational framework for building a secure ecosystem, guiding technical practices, and promoting compliance and innovation [17][18].
专访蚂蚁集团大模型数据安全总监杨小芳:AI安全与创新发展不是对立的,而是互相成就
Mei Ri Jing Ji Xin Wen· 2025-06-03 11:26
Core Viewpoint - The rapid development of generative AI technology presents significant potential for applications in data analysis, intelligent interaction, and efficiency enhancement, while also raising serious security concerns [1] Group 1: Current AI Security Risks - Data privacy risks include insufficient transparency of training data, which may lead to copyright issues, and the potential for AI agents to access user data beyond their permissions [3][4] - The lowering of security attack thresholds allows individuals with minimal technical skills to execute attacks using AI models, complicating the defense against such threats [3][4] - The misuse of generative AI can lead to societal issues such as deepfakes, fake news, and the creation of tools for cyberattacks, which can disrupt social order [3][4] Group 2: Defensive Strategies - The core strategy for preventing data leakage is full lifecycle data protection, covering all stages from collection to destruction, specifically tailored for AI model training and deployment [5][6] - Key measures include scanning training data for sensitive information, conducting supply chain vulnerability assessments, and ongoing risk monitoring during AI agent operation [6][7] Group 3: Challenges and Blind Spots - Supply chain and ecological risks, as well as the rapid development of AI agents, pose significant challenges due to the involvement of multiple participants and the lack of mature governance [7][8] - The need for a credible authentication mechanism is critical to ensure the trustworthiness of AI agents, especially in collaborative environments [7][8] Group 4: Governance and Responsibility - Platform providers play a crucial role in governance, as they have the authority to scan and manage AI agents developed on their platforms, but broader regulatory oversight is also necessary [8][9] - Effective governance requires collaboration between platform providers and regulatory bodies to establish standards and monitoring mechanisms [8][9] Group 5: Future Trends in AI Security - Future AI security development may focus on embedding security capabilities into AI infrastructure, achieving "security by design" [16][18] - Breakthroughs in specific security technologies could help mitigate risks for small and medium enterprises, making AI applications safer [16][18] - Data governance will be essential at both enterprise and societal levels, emphasizing transparency and accountability in AI data usage [16][18] Group 6: Role of Industry Standards - Industry standards are vital for establishing a secure ecosystem, guiding technical practices, and promoting compliance and innovation [18][19] - The development of open standards and assessment tools can lower barriers for small enterprises, enhancing overall security levels across the ecosystem [18][19] - The company has actively participated in the formulation of over 80 domestic and international standards related to AI governance and security risk management [19]