AI Governance
OneConnect and Italy's Generali Group Hold High-Level Strategic Exchange Meeting
Zheng Quan Ri Bao Wang· 2025-11-25 11:22
(Reporter: Li Bing) A high-level strategic exchange meeting between OneConnect (金融壹账通) and Italy's Generali Group (hereinafter "Generali") was recently held in Singapore. The meeting was intended to give the visitors a systematic understanding of Ping An Group's experience in customer operations, digital transformation, and artificial intelligence applications, and to explore potential directions for future cooperation between the two sides.

In recent years, OneConnect has exported core banking systems, core insurance systems, auto-ecosystem capabilities, and eKYC and anti-fraud capabilities to markets in Southeast Asia, the Middle East, and South Africa, providing technical support that improves operational efficiency, risk management, and customer experience across different markets, and gradually building a replicable, scalable global service capability.

Both sides agreed that as the insurance industry moves toward intelligence and high-quality growth, systematic exchanges on customer value, AI innovation, and digital operations will open up greater possibilities for cross-regional cooperation.

OneConnect said it will continue to deepen exchanges with international peers and actively explore potential cooperation in customer operations, technological innovation, and AI governance, jointly advancing the digital development of the global insurance industry.

As Ping An's sole fintech export window, OneConnect gave Generali's senior executives a systematic introduction to the development of Ping An Group's integrated-finance model and its customer operation system, along with practices such as life-insurance channel reform, auto-ecosystem construction, and multi-channel coordination, demonstrating how an "insurance + ecosystem" approach builds long-term customer touchpoints and forms a sustainable, customer-centric growth model. ...
Hong Kong FSDC Report: Hong Kong's AI Development Must Deepen Integration with the Guangdong-Hong Kong-Macao Greater Bay Area and Accelerate Commercialization of AI R&D
Zhi Tong Cai Jing· 2025-11-19 13:04
The report notes that Hong Kong's AI industrialization takes a pragmatic, finance-led approach. As an international financial hub, Hong Kong is also working to become a regional leader in AI and data technology. One of its strongest advantages is its leading position in AI for finance and regulatory technology, and its measures have made Hong Kong's financial ecosystem a global exemplar of responsible and trustworthy AI integration.

The report states that Hong Kong aims to use AI to consolidate its leadership in financial services, biomedicine, and logistics, including developing AI-driven smart-city solutions, expanding public-private cooperation in AI R&D, cultivating relevant talent, and building AI governance and regulatory capabilities.

Looking ahead, FSDC Executive Director Au King-lun said that the national 15th Five-Year Plan calls for strengthening the integration of AI with industrial development, cultural construction, people's livelihood, and social governance, seizing the commanding heights of AI industrial applications and empowering all industries, so the mainland's AI industry will have more room to grow. Hong Kong, for its part, issued a policy statement on the responsible application of AI in financial markets in October 2024. He therefore said that, as both an international financial center and an international innovation and technology center, Hong Kong can play its role as a "super-connector" and contribute to the development of the AI industry on the mainland and overseas.

On November 19, the Hong Kong Financial Services Development Council (FSDC) and Deep Knowledge Group jointly released the Global AI Competitiveness Index Report (4th edition). The FSDC said that as artificial intelligence increasingly becomes a driving force behind ...
2025 Global Internet Talent Excellence Program: A Global Talent Dialogue at the AI Governance Inflection Point | Barron's Picks
Tai Mei Ti APP· 2025-11-16 06:57
Core Insights
- The article emphasizes the importance of talent development, technological governance, and the future of the digital society as systemic challenges faced by global industries, governments, and international organizations [2][4].

Group 1: Digital Economy and Sustainable Development
- The "2025 Global Internet Talent Excellence Program" focuses on the digital economy and sustainable development, featuring courses on green development, investment, and AI's role in transformation [2][4].
- Experts highlighted that digital technology is not only an engine for economic growth but also a key driver for achieving global sustainable development goals (SDGs) [8][9].

Group 2: AI Governance and Security
- AI governance is becoming urgent as AI reshapes cybersecurity, data flow, and industrial efficiency, necessitating a proactive approach to manage new risks [9][10].
- The need for a "safe and controllable AI productivity" system is emphasized, addressing model security, data privacy, and ethical review [9][10].

Group 3: Data Flow and Protection
- The concept of "dynamic data security" is introduced, advocating for real-time protection of data within a secure and compliant framework [10][11].
- The interconnectivity and interoperability of data infrastructure are deemed essential for bridging the digital divide [10][11].

Group 4: International Cooperation and Inclusive Development
- The article stresses that global challenges cannot be addressed by any single country or enterprise, highlighting the importance of international cooperation in digital infrastructure and capacity building [11][12].
- Open-source models and digital public goods are identified as trends that enhance accessibility for developing countries [11][12].

Group 5: Talent and Innovation as Drivers
- The interaction between R&D investment, entrepreneurial ecosystems, and venture capital is crucial for driving digital transformation [12][13].
- Leadership qualities such as professional competence, empathy, courage, and execution are essential for navigating the rapidly changing digital landscape [12][13].

Group 6: Future of Digital Civilization
- The program aims to provide a strategic framework for the next generation of internet talent, focusing on global governance, data infrastructure, and AI safety [13].
- The article raises critical questions about the future of digital economy governance, including the responsibilities for managing AI systems and the regulatory boundaries in the era of data flow [13].
From the Digital World into the Physical World: AI's Leap Upward and Push Downstream | Observations from the 2025 Wuzhen Summit
Mei Ri Jing Ji Xin Wen· 2025-11-12 16:53
Core Insights
- The 2025 World Internet Conference in Wuzhen showcased the integration of AI into the physical world, moving beyond digital applications to real-world interactions [1]
- The Chinese government aims for a comprehensive transition to an intelligent economy and society by 2035, emphasizing the transformative potential of AI on productivity and human liberation [1]
- AI is increasingly penetrating various industries, including healthcare, logistics, and commodity trading, enhancing productivity and efficiency [1][2]

Industry Applications
- Ant Group demonstrated its AI health manager, AQ, which interprets data from health devices and provides real-time alerts for anomalies, indicating a strong focus on AI in healthcare [2]
- The steel industry is leveraging AI for better resource allocation and price trend predictions, with solutions developed by Wanlian Yida Group to assist in production planning and supply chain optimization [4]
- AI is also being applied in industrial settings, with companies like Qunke Technology introducing cloud-native industrial AI platforms to facilitate zero-deployment solutions [6]

Human-Machine Interaction
- AI is evolving through human-machine interaction, with advancements in robotics and AI glasses showcased at the conference, indicating a shift towards practical applications in everyday life [5][7]
- The development of embodied intelligent robots is expected to first occur in industrial environments before expanding into more complex tasks in human settings [5]

Data Security and AI Governance
- The rapid increase in token consumption for AI models highlights the urgent need for high-quality data and effective governance frameworks to manage AI applications [8][10]
- Concerns regarding the reliability of AI models and the potential for misinformation necessitate the establishment of policies and regulations to guide AI technology applications [11]
- The integration of data elements with AI architecture is anticipated to lead to a paradigm shift in data circulation, emphasizing the importance of protecting sensitive information [10]
A Homeless Man Breaking into the House? AI Prank Images Frighten a Residential Community
Nan Fang Du Shi Bao· 2025-11-10 23:06
Core Viewpoint
- The rise of AI-generated prank videos, particularly involving scenarios like "homeless people breaking into homes," has sparked significant public concern and highlighted the challenges of AI governance and misuse [2][5][9].

Group 1: Incident Overview
- A recent incident in Guangzhou involved a child using AI to create a realistic image of a homeless person attempting to enter their home, which led to panic among residents and calls for security investigations [3][5].
- Similar incidents have occurred in other regions, where AI-generated images have caused police to respond to false alarms, wasting public resources [3][6].

Group 2: Social Media Influence
- The trend of AI pranks originated from overseas social media platforms, particularly TikTok, where users began sharing videos of these pranks, leading to widespread imitation among teenagers [5][6].
- The hashtag "homelessmanprank" on TikTok has accumulated over 1,600 videos, with some receiving significant engagement, indicating a viral spread of this content [5].

Group 3: Legal and Ethical Implications
- Legal experts warn that creating and sharing AI-generated images that could mislead others may lead to criminal liability, as seen in cases where individuals have been arrested for causing false alarms [7][8].
- The misuse of AI technology raises concerns about the blurred lines between reality and fiction, necessitating increased education on AI literacy and ethical responsibilities, especially among youth [9].
Liaowang | Keeping a Close Eye on the Risk of AI Losing Control
Xin Hua She· 2025-11-10 08:27
Core Viewpoint
- The article emphasizes the urgent need to establish a resilient and inclusive intelligent society amidst the explosive growth of computing power and the inherent risks associated with AI, particularly the potential for AI to become uncontrollable [1][2].

Group 1: AI Control Risks
- Experts, including Geoffrey Hinton, estimate the probability of AI becoming completely uncontrollable to be between 10% and 20% [2].
- The rapid evolution of AI systems, driven by intense competition among companies and nations, often lacks adequate consideration of potential consequences [2].
- There is a consensus among many professionals that the risk of AI losing control is a real concern, necessitating serious attention [2].

Group 2: Regulatory Challenges
- The article identifies three main challenges contributing to the risk of AI losing control: lagging regulatory mechanisms, deficits in collaborative governance, and insufficient safety measures [3].
- Regulatory policies struggle to keep pace with rapid technological advancements, as seen with the swift release of competing AI models following OpenAI's GPT-4 [3].
- The lack of international consensus on AI governance, highlighted by the refusal of some countries to sign collaborative agreements, exacerbates the regulatory challenges [3][4].

Group 3: Safety and Governance Improvements
- Experts advocate for a shift towards agile governance that supports the healthy and sustainable development of AI [6].
- Recommendations include updating governance frameworks, enhancing communication between regulators and stakeholders, and adopting flexible regulatory measures [6][7].
- There is a call for improved risk assessment and management mechanisms for large AI models, as well as clearer definitions of rights and responsibilities for AI developers and users [7][8].

Group 4: Global Collaboration
- Addressing the risks of AI control requires global cooperation, yet there is currently a lack of effective communication among leading AI companies [8].
- Strengthening bilateral dialogues, particularly between the US and China, and implementing existing international agreements on AI governance are essential steps [8].
Legal Testimony from OpenAI's Internal Power Struggle Surfaces; "Godfather of AI" Says Bluntly: Tech Giants Need Layoffs to Profit from AI | AIGC Daily
创业邦· 2025-11-03 02:28
Group 1
- OpenAI's former chief scientist Ilya Sutskever submitted a 52-page memo revealing internal conflicts leading to the dismissal of Sam Altman, accusing him of "persistent lying and misleading behavior," which eroded the board's trust [2]
- The board discussed a potential merger with competitor Anthropic and considered appointing Anthropic's founder Dario Amodei as CEO, marking a significant internal crisis [2]
- The ongoing legal case involving Elon Musk and OpenAI is expected to provide critical insights into AI governance and regulatory policies, as internal documents are becoming key historical records [2]

Group 2
- Microsoft CEO Satya Nadella highlighted that the shortage of power supply in data centers is hindering the expansion of AI computing capabilities, resulting in many AI chips remaining unused in warehouses [2]
- Geoffrey Hinton, known as the "father of AI," warned that tech giants may need to lay off workers to profit from AI, as companies are betting on AI to replace human labor for significant profit margins [2]
- Google CEO Sundar Pichai confirmed that the next-generation AI model, Gemini 3, is set to be released in 2025, indicating ongoing advancements in AI technology [2]
Ilya Undergoes 10 Hours of Courtroom Questioning in Musk's Lawsuit Against OpenAI, Disclosing Stunning Inside Details for the First Time
Tai Mei Ti APP· 2025-11-02 02:46
Core Insights
- The recent court testimony by Ilya Sutskever, co-founder and former chief scientist of OpenAI, has provided a detailed account of the decision to remove CEO Sam Altman, marking a significant moment in the ongoing legal battle involving Elon Musk and OpenAI [4][6]
- Sutskever's testimony accuses Altman of a "persistent pattern of lying," which has led to a breakdown of trust between the board and the CEO, highlighting governance issues within OpenAI [4][7]
- The board's consideration of a merger with Anthropic and appointing Dario Amodei as CEO indicates a drastic shift in strategy during a crisis, emphasizing the competitive landscape between OpenAI and Anthropic [5][6]

Governance and Trust Issues
- Sutskever's claims suggest that the core issue in the OpenAI crisis is not merely a difference in AI vision but a complete collapse of governance and trust structures [7]
- The testimony has transformed previous speculations about a lack of transparency into formal legal evidence, reinforcing concerns about the board's ability to oversee the CEO effectively [4][6]

Legal Proceedings and Evidence
- The emergence of the "Brockman memo" as a critical document in the case may further illuminate the governance narrative of OpenAI from 2019 to 2023, pending its alignment with other evidence [6][7]
- The ongoing legal proceedings are expected to reveal more internal documents and communications, which will serve as essential historical records for understanding AI governance and regulatory policies [7]
[Global Times In-Depth] After the $1.5 Trillion Pledge, How Much Has the Silicon Valley-White House Relationship Changed?
Huan Qiu Wang· 2025-10-19 23:05
Group 1
- Major tech CEOs from Silicon Valley made a total investment commitment of $1.5 trillion during a White House dinner in September [1][2]
- Apple announced an increase in its investment in U.S. manufacturing to $600 billion over four years, focusing on supply chain and high-end manufacturing [3]
- Meta plans to invest significantly in building data centers and infrastructure in the U.S., with projected spending reaching $66 to $72 billion by 2025 [4]

Group 2
- Microsoft expects to invest around $80 billion globally in AI data centers in fiscal year 2025, with over half of that investment in the U.S. [5]
- Google announced a $25 billion investment over the next two years for building more data centers and AI infrastructure in the U.S. [4]
- The investments from these tech giants are primarily directed towards foundational projects such as data centers, fiber networks, and clean energy [5]

Group 3
- The relationship between the White House and Silicon Valley has evolved from friction to closer cooperation, impacting the tech industry and political landscape [6]
- Tech companies are seeking support from the government on various issues, including energy access, talent acquisition, and regulatory clarity [7][8]
- The tightening of U.S. immigration policies may lead tech companies to hire more foreign employees outside the U.S. [11]

Group 4
- The evolving relationship between the White House and Silicon Valley is expected to reshape the global tech landscape, with implications for international business and political dynamics [12]
- Concerns have been raised about the ability of the U.S. to attract top talent and lead in AI development due to policy uncertainties [10][12]
- The political influence of Silicon Valley is likely to increase, making it a significant force in U.S. politics [12]
Full Text | Petar Radanliev, Author of "The Rise of AI Agents": AI May Narrow the Digital Divide, and Global Consensus Is the Key to Governance
Xin Lang Zheng Quan· 2025-10-17 04:27
Core Insights
- The 2025 Sustainable Global Leaders Conference is set to take place from October 16 to 18 in Shanghai, focusing on global action, innovation, and sustainable growth [5][6]
- Petar Radanliev emphasizes the dual potential of AI to bridge the global digital divide while also risking increased inequality if monopolized by wealthy nations [2][3]

Group 1: AI Governance and Global Development
- Radanliev argues that AI can integrate knowledge from around the world to provide equal learning opportunities in resource-scarce regions like Africa [2]
- He highlights the current lack of consensus in global AI governance, exacerbated by geopolitical competition, which hinders the establishment of unified standards [2][13]
- The need for transparency and safety in AI governance is crucial, suggesting the creation of an "AI material list" to clarify data elements and sources [2][12]

Group 2: Human Oversight and Collaboration
- Radanliev stresses the importance of maintaining human oversight in AI development to prevent technology from becoming uncontrollable [3][12]
- He calls for global collaboration over technological monopolization, advocating for AI to be a common tool for humanity to reduce the digital divide and promote sustainable development [3][12]

Group 3: Conference Details and Participation
- The conference is co-hosted by the World Green Design Organization and Sina Group, with support from the Shanghai Huangpu District Government [5][6]
- Approximately 500 prominent guests, including Nobel laureates and leaders from Fortune 500 companies, will participate in discussions covering nearly 50 topics related to sustainable development [6]