AI Safety Governance
Heavyweights Gather! A Preview of the 9th Woodpecker Data Governance Forum on the Boundaries of AI Safety
Nan Fang Du Shi Bao· 2025-12-16 03:35
In 2025, the wave of generative AI has moved from technical fervor into the calmer phase of deep application. AI tools grow ever more powerful and widespread, yet the safety boundaries drawn around them are increasingly blurred: from AI-generated fakes and copyright disputes to deep emotional dependence, and even the physical safety risks posed by embodied intelligence, public expectations and anxieties have never been so intertwined.

How should the controversies sparked by AI smartphone agents be viewed? How can AI safety mechanisms move from "patching" to "native design"? In the AI era, how should the boundaries of "fair use" be redefined?

The most authoritative policy and strategy interpretations, the most cutting-edge legal and ethical debates, the most hardcore security technology demonstrations, and the most grounded observations of industry practice: centered on the theme "AI Safety Boundaries: Technology, Trust, and a New Order of Governance," this forum will attempt to answer how, in an era of runaway technology, we can build trustworthy guardrails and strike the crucial balance between innovation and order. Scan the QR code to register!

As AI agents begin making decisions on our behalf and digital avatars become ubiquitous, we stand at a critical crossroads where leaps in technical capability coexist with blurring safety boundaries. Against this backdrop, the 9th "Woodpecker Data Governance Forum," hosted by Southern Metropolis Daily and the Nandu Digital Economy Governance Research Center, will be held in Beijing on December 18. Wang Jiangping, member of the 14th CPPCC National Committee and former Vice Minister of Industry and Information Technology, will sketch, from a philosophical perspective, the AI governance concept of "seeking good governance through alignment"; the Vice President of the China Law Society and first-rank professor at Renmin University of China ...
Nobel Laureate Geoffrey Hinton in Dialogue with Intellifusion Chairman Chen Ning: AI Training Costs Could Fall by 99%
Shen Zhen Shang Bao· 2025-12-03 23:06
At the summit, Hinton described the powerful learning efficiency of AI systems and again stressed the importance of training AI "for good." "We are building very large AI systems, and 'distillation' between AI systems is far more efficient; in other words, large AI models can rapidly absorb information from across the entire web. This kind of knowledge 'distillation' and sharing is extremely efficient, billions of times faster than person-to-person or intergenerational information transfer."

(Shenzhen Economic Daily chief reporter Chen Xiaohui) In 2025, AI is moving from large-model algorithms into the stage of real-world deployment. Which technology trends deserve attention going forward? How do we train AI "for good"? A recent global summit dialogue offered the latest answers.

On December 2, at the 2025 GIS Global Innovation Exhibition and Global Innovation Summit, themed "Gathering Global Intelligence, Driving a Green Future," Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics, the "Godfather of AI," and a 2018 Turing Award laureate, joined Wu Jun, the noted Silicon Valley computer scientist and author of On Top of Tides, and Chen Ning, chairman and CEO of Shenzhen-based Intellifusion, for an in-depth dialogue on how AI is changing the world, AI safety governance, and breakthroughs in inference chips.

In the roughly hour-long conversation, Hinton again underscored the importance of AI safety governance, saying AI's learning efficiency and speed of knowledge transfer exceed humans' by billions of times. "AI must develop in the right direction, toward good" became the consensus of the dialogue.

AI must develop in the right direction, toward good. In Chen Ning's view ...
AI "for Good," Training Costs, Inference Chips: "Godfather of AI" Hinton in Dialogue with Intellifusion Chairman Chen Ning
Sou Hu Cai Jing· 2025-12-03 10:43
Core Insights
- The dialogue emphasized the importance of AI safety governance and the need for AI to develop in a "good" direction, as highlighted by Geoffrey Hinton, a prominent figure in AI research [5][6][8]
- The transition from AI training to application-stage inference is expected to occur by 2025, with a significant focus on reducing AI training costs and improving efficiency [7][14]

Group 1: AI Safety and Governance
- Geoffrey Hinton reiterated the necessity of ensuring AI develops safely and beneficially for humanity, stating that AI's learning efficiency surpasses human capabilities by billions of times [5][6]
- The consensus among experts is that while AI development cannot be halted, measures must be taken to ensure its safety and ethical use [5][6]

Group 2: Cost Reduction in AI Training
- The current cost of training large AI models can reach billions of dollars, and there is a strong push to reduce this cost significantly, with the stated aim of lowering it from $1 to just $0.01 per token [8][14]
- Chen Ning emphasized that making AI affordable and accessible to a broader population is crucial for its meaningful application in various sectors, including education and healthcare [6][8]

Group 3: Future of AI Chips
- The industry is transitioning from training chips to inference chips, with predictions that the inference-chip market could reach $4 trillion by 2030, surpassing the $1 trillion market for training chips [14]
- Chen Ning highlighted AI's potential to redefine digital applications and consumer electronics, suggesting that AI inference chips could become as ubiquitous as utilities like water and electricity [14]
Yao Qizhi and Wang Xingxing Speak Out: Foreseeing AI's "Next Decade"
Xin Lang Cai Jing· 2025-11-16 09:51
Core Viewpoint
- The future development of artificial intelligence (AI) centers on achieving satisfactory artificial general intelligence (AGI), which will significantly impact various sectors including science, strategy, and economic competition [2][3].

Group 1: Directions Towards AGI
- The journey towards AGI will inevitably focus on four key directions: continuous evolution of large models, embodied general intelligence, AI for science, and AI safety governance [5][8].
- In the past five years, China has made remarkable progress in large model development, reaching a competitive level internationally [7].
- Embodied intelligence is crucial for enhancing robots' capabilities, allowing them to perform tasks that their previously rigid designs made difficult [8].
- AI for science is expected to revolutionize scientific research methodologies within the next 5 to 10 years, making collaboration between scientists and AI essential for competitive advantage [9].

Group 2: Risks and Governance
- The development of AI poses significant safety risks, as it can potentially lead to loss of control and conflict with human intentions [10][11].
- AI algorithms inherently exhibit a lack of robustness, uncertainty, and non-interpretability, which can affect societal values and ethics [11].
- Addressing the "survival risk" associated with AI requires developing provably safe AI systems, leveraging theories from cryptography and game theory [12].

Group 3: Future of Robotics
- The next decade is anticipated to transform robots from mere tools into life partners, capable of understanding the world and performing various tasks [14][17].
- Robots will increasingly collaborate with humans in industrial settings and provide assistance in community services, such as elderly care [17].
- The robotics industry will benefit from open-source collaboration to accelerate technological advancement and reduce innovation costs [17].
Group 4: Market Potential
- The AI market is projected to reach a trillion-dollar scale as it empowers various industries, with open-source initiatives playing a crucial role in fostering commercial growth [19][20].
- The focus on intelligent terminals as potential AI entry points highlights the importance of integrating AI into everyday life, particularly in the automotive sector [22].
360 Digital Security President Hu Zhenquan: An Effective Path for AI Safety Governance Has Emerged
Xin Lang Ke Ji· 2025-11-09 08:48
Core Viewpoint
- The 2025 World Internet Conference in Wuzhen highlighted the release of the "Large Model Security White Paper" by 360 Digital Security Group, addressing complex AI security issues through a comprehensive set of practical security solutions [1][3].

Group 1: Security Solutions
- The proposed security solutions include an "external" security capability focused on model protection, utilizing the Large Model Guardian to create flexible and rapid dynamic defenses [3].
- Additionally, the solutions incorporate "native security capabilities" that embed security into core components such as enterprise knowledge bases, intelligent agent construction, and operation platforms [3].
- The external protection acts as an "external bodyguard" for AI, while the internal security functions as "internal armor," establishing a robust security foundation from the outset [3].

Group 2: Industry Expertise
- The company emphasizes the necessity of a profound understanding of AI, extensive practical experience with AI products, and a solid background in the security industry to effectively address AI security challenges [3].
- 360 Digital Security Group is recognized as one of the few companies capable of providing mature solutions in the AI security sector, owing to its accumulated AI security data and practical experience [3].
- The company's approach assumes that security incidents will inevitably occur, advocating for immediate detection, response, handling, and recovery to ensure smooth operations [3].
China's AI Breakthrough
36Kr· 2025-08-13 00:03
Core Insights
- ChatGPT-5 was launched on August 8, 2025, but was quickly criticized for slow response times and frequent errors, leading to the reintroduction of GPT-4o by OpenAI [1]
- The AI industry is facing two major challenges: data exhaustion and computational cost limitations [1]
- China is addressing these challenges through open-source initiatives and algorithm innovations, positioning itself as a key player in the AI landscape [1][2]

Group 1: Current AI Development Challenges
- The latest AI model, GPT-5, has been criticized for its slow response and frequent errors, raising questions about the effectiveness of generative AI algorithms [10]
- AI systems, particularly deep learning models, require significant computational power, with Nvidia's H100 chips consuming up to 700W each, leading to concerns about energy consumption [11]
- The depletion of high-quality training data is forcing a reevaluation of current pre-training methods for AI models [12]

Group 2: China's AI Advantages and Contributions
- China is leveraging open-source models like DeepSeek-V3, which has a training cost of less than $6 million, to drive global AI accessibility [24]
- The country is actively integrating AI into its real economy, focusing on innovative production models and large-scale replication [1][24]
- Chinese companies are increasingly becoming key players in the AI landscape, with a focus on collaboration and technological breakthroughs [24][32]

Group 3: Future Trends in AI Development
- The AI revolution is being driven by algorithm innovations, autonomous chips, and application scenarios, with China leading the charge [6]
- The emergence of photonic and quantum chips is expected to significantly enhance AI computational capabilities [40][43]
- The trend towards open-source AI models is seen as a necessary evolution for the industry, promoting collaboration and innovation [20][24]

Group 4: AI Application Areas
- The automotive industry is a primary battleground for AI applications, particularly in autonomous driving technology [51][52]
- Humanoid robots are increasingly integrating AI technology, with a growing number of companies involved in this sector [55]
- AI agents are expected to play a crucial role in various sectors, enhancing decision-making and operational efficiency [57][58]

Group 5: Global AI Governance and Cooperation
- China is advocating for global cooperation in AI governance, emphasizing the need for a shared ethical framework [63][66]
- The country has been proactive in establishing international agreements and frameworks for AI safety and governance [67]
- The focus on collaborative efforts in AI development is seen as essential for ensuring the technology aligns with human values and long-term interests [66][68]
WAIC 2025 Revelations: Safety Governance Takes Center Stage
Core Insights
- The 2025 World Artificial Intelligence Conference (WAIC) highlighted the importance of global cooperation and governance in AI, with a focus on safety and ethical considerations [1][6]
- Key figures in AI, including Geoffrey Hinton and Yao Qizhi, emphasized the need for AI to be trained with a focus on benevolence and the societal implications of training data [2][3]
- AI hallucinations were identified as a significant barrier to the reliability of AI systems, with over 70% of surveyed industry professionals acknowledging their impact on decision-making [3]

Group 1: AI Governance and Safety
- The release of the "Global Governance Action Plan for Artificial Intelligence" and the establishment of the "Global AI Innovation Governance Center" aim to provide institutional support for AI governance [1][6]
- Hinton's metaphor of "taming a tiger" underscores the necessity of controlling AI to prevent potential harm to humanity, advocating for global collaboration to keep AI beneficial [2]
- Yao Qizhi called for a dual governance approach, addressing both AI ethics and the societal conditions that shape AI training data [2]

Group 2: Data Quality and Training
- The quality of training data is critical for developing "gentle" AI, with Hinton stressing the need for finely-tuned datasets [4]
- Industry leaders, including Nvidia's Neil Trevett, discussed challenges in acquiring high-quality data, particularly in graphics generation and physical simulation [4]
- SenseTime CEO Xu Li highlighted the importance of multimodal interaction data, suggesting it can enhance AI's understanding of the physical world [5]

Group 3: Addressing AI Hallucinations
- The hallucination problem in AI is a pressing concern, with experts noting that current models lack structured knowledge representation and causal reasoning capabilities [3]
- Solutions such as text authenticity verification and AI safety testing are being developed to tackle the hallucination issue [3]
- The industry recognizes that overcoming the hallucination challenge is essential for fostering a positive human-AI relationship [3]
When Safety Governance Becomes a WAIC Keyword | Nancai Compliance Weekly (Issue 200)
AI Governance
- AI safety emerged as a key topic at the 2025 World Artificial Intelligence Conference (WAIC), with notable figures like Geoffrey Hinton emphasizing the need to train AI to be beneficial, likening the relationship between humans and AI to raising a tiger [1][2]
- Hinton highlighted the challenges of training AI, stating it is more difficult than raising children, as it requires precise data to instill good behavior [2]
- The conference also featured a global governance action plan that includes 13 action directions, emphasizing the importance of quality data supply and the protection of personal privacy [3]

AI Browser Development
- The industry consensus indicates a shift in AI competition from chatbots to browsers, which are seen as the primary entry point for AI in the internet era [4]
- Companies are actively developing AI browsers to enhance user experience through personalized AI agents, with Perplexity's CEO revealing plans to pre-install their Comet AI mobile browser on smartphones, challenging Google's dominance [5]
- OpenAI is also advancing its AI browser, integrating chat interfaces and AI agent functionalities to streamline user interactions [5]

Personal Information Protection
- New guidelines require "shake to activate" ads to include a prominent "one-click close" option to enhance user autonomy and prevent misleading practices [6]
- The guidelines specify three principles: transparency, autonomy, and personal information protection, detailing the responsibilities of app operators and third-party ad SDKs [6]
- A draft guideline on QR code dining services prohibits the forced collection of personal information, emphasizing user consent and the right to delete personal data [7]
Exclusive | Yao Qizhi: The AGI Era Is Coming Faster Than Expected, and Safety Governance Is Long-Term Work
Di Yi Cai Jing· 2025-07-26 13:35
Group 1
- The core issue of AI governance is the potential for artificial intelligence to surpass human intelligence, raising concerns about control and alignment with human values [1][2]
- The WAIC event highlighted the urgency of AI safety, emphasizing that AI's security lacks theoretical guarantees compared to traditional algorithm designs [1][2]
- The "Shanghai Consensus" calls for global cooperation among governments and researchers to ensure advanced AI systems remain aligned with human control and welfare [2][3]

Group 2
- The consensus stresses the need for major countries to coordinate on credible safety measures and invest in AI safety research to build trust at the international level [3]
- It advocates for AI developers to provide safety assurances and for the establishment of verifiable global behavioral red lines [3]
- The future impact of AI is uncertain, but effective governance could lead to improved living conditions for people worldwide [3]

Group 3
- Young individuals are encouraged to strengthen foundational skills in mathematics, physics, and computer science to adapt to rapid technological changes [5]
- Continuous self-learning and adaptability are identified as essential for future job security in a fast-evolving world [5]
AI Unicorn Anthropic Gains a Strong Ally: Netflix (NFLX.US) Founder Joins the Board
Zhi Tong Cai Jing Wang· 2025-05-29 04:02
Group 1
- Anthropic has appointed Reed Hastings, chairman of Netflix, to its board of directors, a decision made by the company's Long-Term Benefit Trust [1]
- Hastings brings extensive board experience from notable companies such as Microsoft, Bloomberg, and Meta, and served as co-founder and CEO of Netflix from its founding in 1997 until 2023 [1]
- Hastings expressed his belief in AI's potential benefits for humanity while acknowledging its economic, social, and security challenges, aligning with Anthropic's vision for AI development [1]

Group 2
- Anthropic's research agenda aligns closely with Hastings' focus on technology's human impact, both during his tenure at Netflix and through his global health and education initiatives [2]
- The company, currently valued at $61.5 billion, aims to remain competitive in the AI arms race alongside OpenAI, Google, and Microsoft, while emphasizing the importance of AI safety governance [2]