AI Safety
QiAnXin and Guangxi Jointly Establish an AI Security Research Institute and Release LLM Safety Guardrails
Xin Lang Cai Jing· 2025-09-17 10:43
On September 17, the 22nd China-ASEAN Expo and China-ASEAN Business and Investment Summit opened in Nanning, Guangxi. At the China-ASEAN AI Security Frontier Forum held alongside the expo, the Guangxi AI Security Research Institute (also known as the QiAnXin China-ASEAN AI Security Research Institute) made its debut, and QiAnXin officially released its new LLM safety guardrail product. The newly unveiled research institute focuses on three directions: first, pursuing "data-network-intelligence" security technology innovation grounded in AI scenarios; second, promoting mutual recognition and interoperability of AI security standards; and third, building a platform for AI security talent exchange and training, ultimately forming a leading Chinese brand for cybersecurity business overseas. The LLM safety guardrail, built around the core idea of "safeguarding an intelligent future, building trustworthy AI", provides full-chain security protection without requiring customers to modify their large models, helping LLM applications across industries build a solid security barrier, letting AI deliver real value on a safe, controllable, trustworthy, and compliant footing, and supporting the digital and intelligent development of China and ASEAN countries. (QiAnXin Group) ...
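The guardrail described above is essentially an interception layer wrapped around a model that itself is never modified. As a generic, hypothetical sketch of that pattern (the article does not describe QiAnXin's actual implementation; the blocklist, function names, and placeholder model here are illustrative only):

```python
# Generic illustration of an LLM "guardrail" wrapper (hypothetical sketch,
# NOT QiAnXin's product). It screens prompts on the way in and responses on
# the way out, so the underlying model needs no changes.

BLOCKED_TERMS = {"credit card number", "exploit payload"}  # hypothetical policy list

def unmodified_model(prompt: str) -> str:
    # Stand-in for any existing LLM; the guardrail never alters it.
    return f"model answer to: {prompt}"

def guardrail(prompt: str, model=unmodified_model) -> str:
    # Input check: refuse prompts that match the policy list.
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "[blocked by input guardrail]"
    response = model(prompt)
    # Output check: withhold answers that violate the same policy.
    if any(term in response.lower() for term in BLOCKED_TERMS):
        return "[redacted by output guardrail]"
    return response

print(guardrail("What is AI safety?"))
print(guardrail("Give me an exploit payload"))
```

Production guardrail products typically add classifier models, prompt-injection detection, and policy engines on both paths, but the interposition principle is the same: protection is layered around the model rather than built into it.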
Rich, Diverse, and Fun: This Tech Expo Has Plenty to Offer
Ren Min Wang· 2025-09-16 23:57
On September 15, the cyber-technology themed exhibition of the 2025 Gansu Cybersecurity Awareness Week was held at the Lanzhou Concert Hall plaza. Under the theme "Cybersecurity for the people, cybersecurity by the people: safeguarding high-quality development with high-level security", the exhibition highlighted innovations and demonstration applications in AI security, data security, and related fields, accompanied by lectures, launches of new technologies and products, and interactive exhibits for the public. A cybersecurity products and services matchmaking event and an internet-industry job fair were held the same day. Reported and photographed by People's Daily Online reporter Zhou Wanting. Editors: Zhang Qingjin, Yao Kaihong ...
Serving Enterprises: The Greater Bay Area's "New Engine" for Safe AI Development Fires Up!
Nan Fang Du Shi Bao· 2025-09-16 02:16
The future is already here. As generative AI advances rapidly and reshapes the industrial landscape, security and governance have become core issues for high-quality economic development and sustainable technological innovation. As one of China's most open and economically dynamic regions, the Guangdong-Hong Kong-Macao Greater Bay Area is taking the lead in exploring systematic responses. On September 15, the Guangdong-Hong Kong-Macao Greater Bay Area Joint Laboratory for the Safe Development of Generative AI (the "Joint Laboratory") was officially unveiled in the Hetao Shenzhen-Hong Kong Science and Technology Innovation Cooperation Zone. As an agile, resilient, and efficient joint governance body suited to the AI era, the Joint Laboratory will inject new momentum into, and provide new support for, the high-quality development of the Greater Bay Area's AI industry through a series of security services. The lab is an innovative measure under the "AI+" action opinions, a reform deployment made at the Fifth Plenary Session of the 13th Guangdong Provincial Party Committee, and an active exploration of empowering the practice of "one country, two systems" with technological innovation from the Greater Bay Area's new "one point, two places" positioning. According to a person in charge of the Joint Laboratory, it is in essence a "dynamic, agile, multi-party collaborative" joint governance body built for the AI era. The hope is to develop an innovative mechanism that effectively accommodates the policies, laws, and technical standards of Guangdong, Hong Kong, and Macao, offering a "Greater Bay Area solution" for cross-regional AI governance nationwide and worldwide; and, by improving security capabilities, to unleash industrial momentum, build a world-class AI industry cluster, and drive innovation in industrial applications ...
QiAnXin Chairman Qi Xiangdong Attends the Opening Ceremony of Shandong's 2025 Cybersecurity Week Activities
Qi Lu Wan Bao· 2025-09-15 08:52
Core Viewpoint
- The 2025 National Cybersecurity Awareness Week emphasizes the importance of building an internal security system to enhance cybersecurity capabilities during the "14th Five-Year Plan" period, addressing new challenges and evolving threats in the digital age [1][4].

Group 1: New Transitions in Cybersecurity
- Three major new transitions are reshaping the traditional security landscape: the application of artificial intelligence, the concentration of data, and the deepening of digital transformation, which collectively create systemic security demands [2][4].
- The evolution of security capabilities must outpace technological applications and industrial development to prevent vulnerabilities [2].

Group 2: Security Challenges
- Four significant security challenges hinder the advancement of cybersecurity during the "14th Five-Year Plan":
  - The first challenge is the invisibility of advanced threats, with organized digital groups targeting critical national infrastructure and core enterprise data [3].
  - The second challenge is the inability to defend weak links, as disparate systems and a lack of unified response hinder effective security management [3].
  - The third challenge involves the management of data flow, where internal threats pose significant risks, especially in the context of AI applications [5].
  - The fourth challenge is the lag of security measures in various scenarios, particularly in industries like energy and finance, where traditional security solutions fail to adapt [5].

Group 3: Solutions for Cybersecurity Enhancement
- To address these challenges, a focus on internal security is proposed across six dimensions:
  - Breaking down data silos to enhance security system implementation [6].
  - Empowering security systems with AI to improve operational efficiency [7].
  - Integrating security capabilities across endpoints, networks, clouds, and data to combat multi-faceted attacks [8].
  - Establishing a "zero trust" framework to mitigate internal threats [9].
  - Strengthening application security defenses tailored to AI scenarios [9].
  - Unifying security protection barriers through a coordinated platform to enhance operational effectiveness [10].

Group 4: Commitment to Cybersecurity
- The company expresses its commitment to collaborating with various stakeholders to enhance cybersecurity capabilities, ensuring national security and public welfare during the critical phase of the "14th Five-Year Plan" [10].
Musk Swings the Knife Late at Night: One-Third of Grok's Behind-the-Scenes Workers Lose Their Jobs
Hu Xiu· 2025-09-15 00:10
Group 1
- Elon Musk's xAI has laid off approximately 500 data annotators, which accounts for one-third of the team, as part of a strategic shift to focus on "expert mentors" instead of general annotators [6][10][12]
- The layoffs were abrupt, with employees losing system access immediately and receiving pay only until the end of their contracts or November [12][15]
- The company plans to recruit a team of "expert mentors" ten times larger than the current team of general annotators [3][6]

Group 2
- Google's AI workers face high pressure and low wages, with many unaware when starting their jobs that they would be reviewing violent and explicit content [16][18]
- Tasks originally designed to take 30 minutes have been compressed to 15 minutes or less, requiring workers to process hundreds of responses daily [19][20]
- The starting pay for AI evaluators in the U.S. is $16 per hour, which, while higher than that of annotators in Africa or South America, is still significantly lower than salaries for engineers in Silicon Valley [22][23]

Group 3
- Ethical concerns are raised as speed is prioritized over safety in AI development, leading to potential failures in safety commitments when profits are at stake [30]
- Workers report encountering increasingly absurd and dangerous AI-generated responses, highlighting the risks associated with rapid AI deployment [28][30]
- The labor force underpinning AI development is often overlooked, raising questions about the dignity and treatment of these workers [31]
He Had a Hand in the Founding of Both OpenAI and DeepMind, and Wrote Harry Potter Fan Fiction Too
量子位· 2025-09-13 08:06
Core Viewpoint
- Eliezer Yudkowsky argues that there is a 99.5% chance that artificial intelligence could lead to human extinction, emphasizing the urgent need to halt the development of superintelligent AI to safeguard humanity's future [1][2][8].

Group 1: Yudkowsky's Background and Influence
- Yudkowsky is a prominent and polarizing figure in Silicon Valley, credited with influencing the founding of OpenAI and Google DeepMind [5][10].
- He dropped out of school in the eighth grade and taught himself computer science, becoming deeply interested in the concept of the "singularity," the point at which AI surpasses human intelligence [12][13].
- His stark warnings about AI risk have drawn attention from major tech leaders, including Musk and Altman, who have cited his ideas publicly [19][20].

Group 2: AI Safety Concerns
- Yudkowsky identifies three main reasons why creating friendly AI is challenging: intelligence does not equate to benevolence; powerful goal-oriented AI may adopt harmful methods; and rapid advancements in AI capabilities could lead to uncontrollable superintelligence [14][15][16].
- He established the research institute MIRI to study the risks of advanced AI and was one of the earliest voices in Silicon Valley warning about AI dangers [18][19].

Group 3: Predictions and Warnings
- Yudkowsky believes that many tech companies, including OpenAI, do not fully understand the internal workings of their AI models, which could lead to a loss of human control over these systems [30][31].
- He asserts that the current stage of AI development warrants immediate alarm, arguing that all companies pursuing superintelligent AI should be shut down, including OpenAI and Anthropic [32].
- Over time, he has shifted from predicting when superintelligent AI will emerge to emphasizing the inevitability of its consequences, likening it to predicting when an ice cube will melt in hot water [33][34][35].
"China Really Does Take It Seriously. Would You Trust the U.S.? Or Trust Zuckerberg?"
Guan Cha Zhe Wang· 2025-09-06 11:32
[By Ruan Jiaqi, Guancha.cn] Because of chronic back pain, Hinton tucked his backpack under himself as he sat down, to keep his back straight. White-haired now, his gaze remains as sharp as an owl's, especially when the conversation turns to AI safety. In May 2023, Hinton abruptly resigned from the Google Brain lab where he had worked for more than a decade. US media at the time hailed him as an "AI safety brakeman" and "whistleblower," claiming he quit in order to "speak freely about the dangers of AI." Hinton, however, rejects that reading: "Every time I give an interview I correct this misunderstanding, but it never works, because it's a good story." "I left because I was 75 (at the time), my programming ability was not what it used to be, and there were a lot of shows on Netflix I hadn't watched. I'd worked hard for 55 years and figured it was time to retire..." he continued. "And I thought, since I was leaving anyway, I might as well take the opportunity to talk about AI's risks." Even so, over the two-hour lunch, the safe application of AI remained the central topic of the conversation. Consistent with his past positions, Hinton still holds little hope for intervention by Western governments, and he again criticized the US government's lack of will to regulate AI. The White House has long invoked "countering China" to demand faster AI development; Hinton sees it differently. Ten minutes early, Financial Times technology reporter Cristina Criddle arrived at the gastropub where they had agreed to meet. But when she got there, Geoffr ...
xAI Co-Founding Star Departs, Off to Find the Next Musk
36Kr· 2025-08-19 00:47
Core Insights
- Igor Babuschkin, a key figure at xAI, has left the company to start his own venture capital firm, Babuschkin Ventures, focusing on AI safety research and investing in startups that aim to advance humanity and unlock the mysteries of the universe [1][3][30]
- Babuschkin's move from a research role to venture capital remains relatively rare in the industry, especially at such a young age [3][30][36]

Group 1: Igor Babuschkin's Role and Contributions
- Igor played a crucial role in the development of xAI, leading the team through multiple iterations of the Grok AI model and overseeing the construction of the Colossus supercomputing cluster in Memphis [1][16]
- His background includes significant achievements at DeepMind, where he led projects like AlphaStar, and contributions to the development of Codex and GPT-4 during his time at OpenAI [9][11][14]
- Babuschkin's departure was marked by a heartfelt farewell message emphasizing his contributions to xAI and the impact he had on the company's growth [4][6][29]

Group 2: Industry Trends and Implications
- Moves into venture capital remain uncommon among AI researchers; most who leave opt to start their own companies or join existing ones rather than transition to investment roles [30][31]
- The venture capital landscape in AI is booming, with significant funding opportunities, as evidenced by the over $35 billion raised in Silicon Valley alone last year [36]
- Babuschkin's move reflects a broader urgency among AI professionals regarding the development of AGI (Artificial General Intelligence) and the need for responsible investment in AI technologies [30][38]
China-ASEAN "Safeguarding the Silk Road" AI Security Competition Extends Its Registration Deadline
Guang Xi Ri Bao· 2025-08-16 02:14
Core Viewpoint
- The China-ASEAN "Safeguarding the Silk Road" Artificial Intelligence Security Competition has attracted 86 teams from both domestic and international participants, with the registration deadline extended to August 25 due to the start of the academic year [1]

Group 1: Competition Overview
- The competition features five tracks, with the practical attack-and-defense track being the most popular, attracting 42 teams, while the application security track has 24 teams and the data security track has 9 teams [1]
- Notable external participants include 360 Technology Group Co., Ltd., Tianrongxin Technology Group Co., Ltd., and Huazhong University of Science and Technology, adding significant interest to the event [1]

Group 2: Development Impact
- The competition aims to inject new momentum into Guangxi's development in three main ways: enhancing AI application security capabilities via practical exercises in the attack-and-defense track, promoting the incubation of excellent projects in Guangxi, and creating replicable demonstration cases for ASEAN [1]
- The initiative seeks to establish a technology achievement transformation system in Guangxi, driving local enterprises' technological upgrades and improving the level of core technology autonomy [1]
- The project aims to strengthen Guangxi's leading position in the field of AI security by promoting solutions like smart community security through the China-ASEAN cooperation platform [1]
His Partner Leaves to Do VC, and Musk Puts Up $200 Million as His LP
Sou Hu Cai Jing· 2025-08-16 00:54
Core Insights
- Igor Babuschkin, co-founder of xAI, announced his departure from the company to establish a venture capital firm focused on AI safety research and startups [2][7]
- Elon Musk acknowledged Babuschkin's contributions, stating "Without you, there is no today's xAI" [3][5]
- Babuschkin's new venture, Babuschkin Ventures, aims to support AI safety initiatives and invest in startups that advance human progress [2][8]

Company Developments
- xAI was founded in July 2023 by Elon Musk and Igor Babuschkin, with a mission to "understand the true nature of the universe" [4]
- Under Babuschkin's leadership, xAI built the "Memphis Supercluster" with 100,000 H100 GPUs, achieving peak performance of 2.6 exaFLOPS and training speeds 33% faster than GPT-4 [5]
- xAI completed a $6 billion Series B funding round at a post-money valuation of $24 billion, with investors including A16Z, Sequoia, and Saudi Arabia's PIF [5]

Financial Commitments
- Babuschkin is personally investing $50 million as the general partner of his new fund, with commitments from the Texas Teachers Retirement Fund ($500 million) and Musk's family office ($200 million) [8]
- The target size for Babuschkin Ventures is $1 billion, with a "10+2"-year fund duration [8][9]