蚁天鉴

Search documents
2025国家网络安全周在昆明开幕 蚂蚁集团gPass等多款安全可信AI技术亮相
Zheng Quan Shi Bao Wang· 2025-09-15 09:52
据现场工作人员介绍,在安全方面,gPass借助可信身份流通、端到端加密、设备认证等技术,确保"设 备是本人",构建信息传输的安全屏障,全面保障用户信息安全与隐私。在交互方面,该框架融合声 纹、虹膜、指纹等无感核身技术,实现"本人操作"的流畅安全认证。在连接方面,gPass可基于用户时 间与场景需求,自动调度多智能体间即时、安全的消息交互,显著提升AI眼镜的体验连贯性。 目前,gPass已完成与Rokid、小米、夸克、雷鸟等品牌眼镜合作,在"看一下支付"场景中率先落地。未 来,该框架还计划扩展至医疗、文旅、出行等更多生活场景,为用户带来无感、可信、全程伴随的智能 服务。 在gPass之外,蚂蚁集团还集中展示多项前沿安全技术与系统级解决方案。 9月15日,以"网络安全为人民,网络安全靠人民——以高水平安全守护高质量发展"为主题的2025年国 家网络安全宣传周开幕式在云南昆明举行。蚂蚁集团以"安全可信,让AI用得放心"为主题,参与网安周 多项重要论坛及展览活动,系统展示了其在AI安全、数据保护、网络安全和智能风控等领域的系列创 新成果与实践。其中,全球首个智能眼镜可信连接技术框架——gPass首次在网安周亮相,便成 ...
2025国家网络安全周在昆明开幕,蚂蚁集团gPass等多款安全可信AI技术亮相
Zheng Quan Shi Bao Wang· 2025-09-15 09:03
目前,gPass已完成与Rokid、小米、夸克、雷鸟等品牌眼镜合作,在"看一下支付"场景中率先落地。未 来,该框架还计划扩展至医疗、文旅、出行等更多生活场景,为用户带来无感、可信、全程伴随的智能 服务。 在gPass之外,蚂蚁集团还集中展示多项前沿安全技术与系统级解决方案。 9月15日,以"网络安全为人民,网络安全靠人民——以高水平安全守护高质量发展"为主题的2025年国 家网络安全宣传周开幕式在云南昆明举行。蚂蚁集团以"安全可信,让AI用得放心"为主题,参与网安周 多项重要论坛及展览活动,系统展示了其在AI安全、数据保护、网络安全和智能风控等领域的系列创 新成果与实践。其中,全球首个智能眼镜可信连接技术框架——gPass首次在网安周亮相,便成为关注 焦点,展示了蚂蚁在构建"以人为本"智能交互基础设施上的安全技术突破。 记者了解到,当前AI眼镜行业仍处于发展初期,普遍存在生态碎片化、应用场景有限、跨端协同能力 弱等挑战。硬件算力、系统分裂以及开发门槛高等因素,严重制约其从"功能机"向"智能体"的演进。 在这一背景下,蚂蚁集团gPass以"安全、交互、连接"三大核心能力为支撑,致力于为AI眼镜与智能体 之间搭建 ...
AI时代未成年人需要“调控型保护”
Nan Fang Du Shi Bao· 2025-09-13 23:13
9月12日,在外滩大会"规范AI内容 共筑清朗生态"见解论坛上,南都大数据研究院编制并发布《AI新 治向:生成式人工智能内容与传播风险治理观察》报告。 "AI新治向"重磅报告首发、实验"AI造假"路人反应、专家献策"AI谣言"治理、脱口秀抛梗"AI新生 活"……9月12日下午,以"规范AI内容 共筑清朗生态"为主题的外滩大会见解论坛在上海举办。论坛汇 聚AI治理专家学者、企业精英、青年学子及青少年代表,围绕人工智能生成合成内容及传播过程中的 风险挑战议题深入探讨。据悉,本次论坛由南方都市报社、南都大数据研究院、中国互联网协会人工智 能工作委员会、复旦大学传播与国家治理研究中心主办。 多元协同共筑清朗生态 最新报告聚焦AI风险治理 活动现场,本次活动的主办方代表、南方都市报社主编刘江涛致辞表示,AI已经渗入日常生活当中, 未来还将从根本上改变诸多方面,但社会也需要形成共识,要对AI保持一份足够的清醒,按照国家的 相关要求确保人工智能安全、可靠、可控。南都愿意做"铺路石",或是"吹哨者",与多方协力共建智慧 交流平台。 中国互联网协会人工智能工作委员会秘书长邓凯在致辞时指出,应对AI内容治理挑战可从三方面着力: ...
首发首秀世界人工智能大会 智能体开启AI新赛道
Jing Ji Ri Bao· 2025-08-07 00:09
Core Insights - The World Artificial Intelligence Conference has seen a surge in the number of intelligent agents, with over three times the number of products launched in the past three months compared to the entire previous year [1][2] - Intelligent agents, defined as autonomous entities capable of perceiving their environment and taking actions to achieve specific goals, are becoming a focal point in the tech industry [2][3] Industry Developments - Numerous companies, including MiniMax, SenseTime, and JieYue XingChen, have launched new intelligent agent products, while Fudan University has introduced an ethical review intelligent agent called "YiJian" [2] - In the industrial sector, Shanghai MajiGeek has released the first real-time spatial multimodal interactive intelligent agent, "Installation XiaoLingTong," aimed at improving construction efficiency and reducing errors [2] - The AI-Scientist platform by Zhongke Wenge focuses on enhancing research efficiency through AI collaboration, transforming the research paradigm from human-led to AI-assisted exploration [2] Market Trends - The global intelligent agent market has surpassed $5 billion, with an annual growth rate of 40%, indicating a significant expansion in this sector [4] - Major tech companies are investing heavily in intelligent agents, with Alibaba Cloud launching "Wuying AgentBay," a cloud infrastructure designed for intelligent agents [5] Technical Challenges - A key challenge in the intelligent agent market is the limited computing power of local devices, which struggles to support high-demand tasks, particularly those requiring extensive GPU processing [4] - New companies are emerging to address these challenges, such as Xinghuan Technology, which offers a new AI infrastructure technology to facilitate the rapid development of industry-specific intelligent agents [4] Safety and Security - Concerns regarding the safety of intelligent agents are rising, with over 70% of industry practitioners worried about issues like AI hallucinations, erroneous decisions, and data breaches [6] - Ant Group has upgraded its large model security solution, "Ant Tianjian," to include intelligent agent safety assessment tools, enhancing security measures for AI applications [6] - PPIO has introduced the first domestic intelligent agent sandbox product, designed to ensure secure execution of tasks in isolated environments, preventing data leaks and resource conflicts [6]
首发首秀世界人工智能大会——智能体开启AI新赛道
Jing Ji Ri Bao· 2025-08-06 21:58
Core Insights - The World Artificial Intelligence Conference has seen a surge in intelligent agents, with more products launched in the past three months than in the entire previous year, indicating a significant trend in the tech industry [1] Group 1: Intelligent Agent Development - Intelligent agents, capable of perceiving environments and taking actions to achieve specific goals, are emerging rapidly, with new products from companies like MiniMax, SenseTime, and JieYue Star [2] - The "Installation Little Genius," a real-time spatial multimodal interactive intelligent agent, was launched to enhance construction efficiency and reduce human error [2] - The AI-Scientist platform aims to transform research methodologies by enabling AI collaboration in scientific exploration, thus improving research efficiency [2] Group 2: Market Growth and Challenges - The global intelligent agent market has surpassed $5 billion, with a year-on-year growth rate of 40%, highlighting its increasing importance [4] - A significant challenge remains in local device computing power, which struggles to support high-demand intelligent agent tasks, particularly those requiring extensive GPU processing [4] - New companies are emerging to address these challenges, such as Star Ring Technology, which offers a platform for quickly building industry-specific intelligent agents [4] Group 3: Industry Trends and Innovations - Major tech companies are investing in intelligent agents, with Alibaba Cloud launching the "Shadowless AgentBay," a cloud infrastructure designed for intelligent agents [5] - The transition of AI agents from mere tools to core engines of industry is reshaping market boundaries, presenting new challenges in human-agent collaboration [5] Group 4: Security Concerns - The rise of intelligent agents brings security challenges, with over 70% of industry professionals concerned about risks such as AI hallucinations and data breaches [6] - Ant Group has upgraded its security solution for intelligent agents, introducing tools for safety assessment and zero-trust defense [6] - PPIO has launched a sandbox product designed to isolate tasks in a secure cloud environment, minimizing risks of data leakage and resource conflicts [6]
应对AI新安全挑战,首份智能体安全白皮书发布
Bei Jing Ri Bao Ke Hu Duan· 2025-07-30 11:38
Group 1 - The AI field is transitioning from the era of large models to the era of intelligent agents, which brings security challenges such as overreach and excessive delegation [1] - The "2025 Terminal Intelligent Agent Security" white paper was jointly released by Shanghai AI Laboratory, CAICT, Ant Group, and IIFAA Alliance, providing a comprehensive risk assessment guide for terminal intelligent agents [1][2] - Intelligent agents are rapidly penetrating various terminal devices like smartphones, glasses, headphones, and car systems, redefining interaction methods across multiple industries including life, industrial, medical, and education [1] Group 2 - The white paper outlines three major protective paths: single intelligent agent security, multi-agent trusted interconnection, and AI terminal security, aiming to serve as a comprehensive and targeted security guideline [2] - The white paper introduces a terminal intelligent agent security system supported by a technical ecosystem, detailing security technologies for single agents and multi-agent interactions [2] - Over 70% of intelligent agent practitioners express concerns about issues like AI hallucinations, erroneous decisions, and data leaks, with more than half indicating their companies lack a designated security officer for intelligent agents [3] Group 3 - Ant Group's "Ant Tianjian" has announced an upgrade to its large model security solution, adding intelligent agent security assessment tools with a risk judgment accuracy rate exceeding 96% [3]
云姨夜话丨谁在“安全”前提下持续破解AI的“医”题?
Qi Lu Wan Bao· 2025-07-30 09:34
Group 1 - The core viewpoint of the articles highlights the rapid growth of the medical AI market, projected to exceed $2.7 billion in 2025 and reach $17 billion by 2034, indicating a significant transformation in traditional healthcare models through AI integration [2][3]. - Ant Group's AI health application AQ has made substantial progress by connecting with 269 doctor AI agents and launching the first intelligent agent standard system in collaboration with the China Academy of Information and Communications Technology [2][3]. - The AI application in clinical settings is advancing, particularly in chronic disease management, providing users with 24/7 access to professional health support through mobile devices [3][4]. Group 2 - Ant Group's AI safety solution "Ant Tianjian" has been upgraded to include an AI agent safety evaluation tool, achieving over 96% accuracy in risk assessment and supporting testing across 11 industries [4][5]. - The World Digital Academy has released new standards for AI agent operational safety testing, aligning with Ant Tianjian's capabilities to ensure the secure application of AI technologies in healthcare [5]. - The healthcare industry is transitioning from "usable" to "user-friendly" AI solutions while facing challenges such as data silos and ethical standards, necessitating comprehensive training for healthcare professionals [5].
WAIC 2025丨应对智能体安全挑战 蚂蚁集团升级“蚁天鉴”
Xin Hua Cai Jing· 2025-07-28 11:14
Core Insights - The AI field is transitioning from the era of large models to the era of intelligent agents, with Ant Group's "Yitianjian" upgrading its security solutions to include AI agent safety assessment tools [1][2] - The upgraded features of "Yitianjian" include four core functions: agent alignment, MCP security scanning, intelligent agent security scanning, and zero-trust defense [1] - Over 70% of AI agent practitioners express concerns about issues such as AI hallucinations, erroneous decision-making, and data leaks, highlighting the safety challenges posed by intelligent agents [1] Company Insights - "Yitianjian" is a collaborative development between Ant Group and Tsinghua University, designed to ensure the safe and reliable operation of large model technologies [2] - The risk assessment agent of "Yitianjian" boasts an accuracy rate of over 96% and supports testing for intelligent agents across 11 industries [2] - The safety philosophy of the upgraded "Yitianjian" is based on the concept of "attack to promote defense," creating a comprehensive protection system for intelligent agents [2]
蚂蚁集团大模型数据安全总监杨小芳:用可信AI这一“缰绳”,驾驭大模型这匹“马”
Mei Ri Jing Ji Xin Wen· 2025-06-09 14:42
Core Viewpoint - The rapid development of AI technology presents significant application potential in data analysis, intelligent interaction, and efficiency enhancement, while also raising serious security concerns [1][2]. Group 1: Current AI Security Risks - Data privacy risks are increasing due to insufficient transparency in training data, which may lead to copyright issues and unauthorized access to user data by AI agents [3][4]. - The lowering of security attack thresholds allows individuals to execute attacks through natural language commands, complicating the defense against AI security threats [3][4]. - The misuse of generative AI (AIGC) can lead to social issues such as deepfakes, fake news, and the creation of tools for cyberattacks, which can disrupt social order [3][4]. - The long-standing challenge of insufficient inherent security in AI affects the reliability and credibility of AI technologies, potentially leading to misinformation and decision-making biases in critical sectors like healthcare and finance [3][4]. Group 2: Protective Strategies - The core strategy for preventing data leakage in both AI and non-AI fields is comprehensive data protection throughout its lifecycle, from collection to destruction [4][5]. - Specific measures include scanning training data to remove sensitive information, conducting supply chain vulnerability assessments, and performing security testing before deploying AI agents [5][6]. Group 3: Governance and Responsibility - Platform providers play a crucial role in governance by scanning and managing AI agents developed on their platforms, but broader regulatory oversight is necessary to ensure effective governance across multiple platforms [7][8]. - The establishment of national standards and regulatory policies is essential for monitoring and constraining platform development, similar to the regulation of mini-programs [7][8]. Group 4: Future Trends in AI Security - Future AI security development may focus on embedding security capabilities into AI infrastructure, achieving "security by design" to reduce costs associated with security measures [15][16]. - Breakthroughs in specific security technologies could provide ready-to-use solutions for small and medium enterprises facing AI-related security risks [15][16]. - The importance of industry standards is emphasized as they provide a foundational framework for building a secure ecosystem, guiding technical practices, and promoting compliance and innovation [17][18].
专访蚂蚁集团大模型数据安全总监杨小芳:AI安全与创新发展不是对立的,而是互相成就
Mei Ri Jing Ji Xin Wen· 2025-06-03 11:26
Core Viewpoint - The rapid development of generative AI technology presents significant potential for applications in data analysis, intelligent interaction, and efficiency enhancement, while also raising serious security concerns [1] Group 1: Current AI Security Risks - Data privacy risks include insufficient transparency of training data, which may lead to copyright issues, and the potential for AI agents to access user data beyond their permissions [3][4] - The lowering of security attack thresholds allows individuals with minimal technical skills to execute attacks using AI models, complicating the defense against such threats [3][4] - The misuse of generative AI can lead to societal issues such as deepfakes, fake news, and the creation of tools for cyberattacks, which can disrupt social order [3][4] Group 2: Defensive Strategies - The core strategy for preventing data leakage is full lifecycle data protection, covering all stages from collection to destruction, specifically tailored for AI model training and deployment [5][6] - Key measures include scanning training data for sensitive information, conducting supply chain vulnerability assessments, and ongoing risk monitoring during AI agent operation [6][7] Group 3: Challenges and Blind Spots - Supply chain and ecological risks, as well as the rapid development of AI agents, pose significant challenges due to the involvement of multiple participants and the lack of mature governance [7][8] - The need for a credible authentication mechanism is critical to ensure the trustworthiness of AI agents, especially in collaborative environments [7][8] Group 4: Governance and Responsibility - Platform providers play a crucial role in governance, as they have the authority to scan and manage AI agents developed on their platforms, but broader regulatory oversight is also necessary [8][9] - Effective governance requires collaboration between platform providers and regulatory bodies to establish standards and monitoring mechanisms [8][9] Group 5: Future Trends in AI Security - Future AI security development may focus on embedding security capabilities into AI infrastructure, achieving "security by design" [16][18] - Breakthroughs in specific security technologies could help mitigate risks for small and medium enterprises, making AI applications safer [16][18] - Data governance will be essential at both enterprise and societal levels, emphasizing transparency and accountability in AI data usage [16][18] Group 6: Role of Industry Standards - Industry standards are vital for establishing a secure ecosystem, guiding technical practices, and promoting compliance and innovation [18][19] - The development of open standards and assessment tools can lower barriers for small enterprises, enhancing overall security levels across the ecosystem [18][19] - The company has actively participated in the formulation of over 80 domestic and international standards related to AI governance and security risk management [19]