AI Governance
Taming the AI "Tiger": Enterprises Need Governance Thinking
Core Insights
- The discussion at the 2025 World Artificial Intelligence Conference highlighted the urgent need for AI safety measures, with experts emphasizing that safety is now a core production factor rather than a marginal cost [1][2]
- The shift in focus from computational power to safety benefits is expected to drive a significant restructuring of the industry [2][3]

Group 1: AI Safety and Governance
- Geoffrey Hinton's warning that AI is like an "untrainable tiger" stresses the necessity of embedding safety into AI models to avoid catastrophic risks [1]
- Eric Schmidt advocates that companies establish their own safety standards before international regulations are in place, suggesting that investments in safety today can secure regulatory advantages tomorrow [2]
- The concept of "Safety-as-a-Service" is emerging, in which companies that successfully implement safety measures turn compliance costs into competitive advantages [2]

Group 2: Market Dynamics and Value Distribution
- The AI safety revolution is disrupting the profit distribution previously dominated by GPU manufacturers, as companies providing algorithm-auditing services command significant order premiums [3]
- The strategy of trading "safety for market share" is reshaping traditional business logic, allowing companies to gain market presence through open-source strategies despite short-term losses [3]
- Companies are adopting different strategies in different markets, such as prioritizing compliance in Europe while leveraging technology exchanges in emerging markets [3]

Group 3: Organizational Restructuring and Collaboration
- Some companies are integrating AI ethics committees into their product development processes to enable real-time corrections between engineers and AI systems [3]
- The rise of distributed development models, in which platforms connect global developers, is becoming essential for responding to complex governance requirements [3]

Group 4: Future Industry Trends
- The AI industry is expected to evolve into a form characterized by intertwined "technological spirals" and "institutional spirals," moving from single-model competition to composite-capability competition [4]
- The establishment of international cooperation frameworks for AI, potentially led by China, may create a new governance structure akin to a "digital WTO" [5]
- Companies that can embed governance thinking into their business models will likely lead the next phase of the global economic order, underscoring the importance of integrating ethical considerations into technological advancement [5]
WAIC 2025 On the Scene | Interview with Cheng Zhong, Deloitte TMT Industry Leading Partner: Effective AI Governance Paradigms Should Shift from Passive to Proactive
Mei Ri Jing Ji Xin Wen· 2025-07-28 13:49
Core Insights
- The imbalance between value extraction and risk management in generative AI has become a critical gap for enterprises to bridge [1]
- Deloitte emphasizes that generative AI governance is not an option to delay: companies must act quickly to clarify responsibilities, enhance skills, and integrate risk management throughout the AI lifecycle [1]

Group 1: AI Investment and ROI
- The AI transformation process typically involves four stages: establishing an AI strategic vision, pilot exploration, deep integration into core business processes, and financial mapping [4]
- In the initial stage, there is often a significant gap between management's ROI expectations and reality, with departments pursuing projects independently [4]
- The final stage links AI investments directly to financial metrics, although companies still struggle to quantify indirect benefits such as customer satisfaction [4]

Group 2: AI Architecture and Cost Management
- Traditional enterprises face challenges such as complex legacy systems and limited budgets, which can be addressed through "light architecture, soft integration, and distributed evolution" [5]
- Light architecture means encapsulating AI capabilities as API services, reducing the need to overhaul core systems [5]
- Companies should keep technology selection flexible and negotiate flexible contracts with vendors to mitigate the cost risks of technology shifts [5]

Group 3: Addressing AI Hallucinations and Black-Box Issues
- "Hallucinations" in AI outputs can mislead business decisions and compliance, necessitating a multi-layered defense strategy [6]
- Structural hallucinations, which often appear in AI-generated tables and data analyses, should be prioritized because of their high risk of misleading decision-makers [6]
- To quantify the hidden costs of hallucinations, companies can assess model output accuracy and the impact on operational data [6]

Group 4: Risk Mitigation in High-Stakes Scenarios
- In high-stakes environments such as healthcare and finance, a systematic approach to building hallucination-mitigation mechanisms is recommended [7]
- A mixed architecture of small models and expert rules is suggested for better reliability in regulated fields [7]
- Detailed logging capabilities are essential for traceability and accountability in AI outputs [7]

Group 5: Strategic AI Governance
- Effective AI governance should shift from passive to proactive, with clear strategic goals and dedicated governance teams [11]
- Companies should adopt explainable-AI technologies and data governance tools to ensure transparency and control [11]
- Cultivating employee AI literacy is crucial for fostering a responsible AI usage culture [11]

Group 6: AI Security and Revenue Impact
- Companies should integrate AI into a unified architecture rather than treating it as an add-on to legacy systems [12]
- A secure AI system can enhance customer satisfaction and loyalty, indirectly boosting revenue [12]
- Real-world examples show that integrating AI into cybersecurity can significantly reduce response times and downtime, supporting revenue growth [12]
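The interview's prescription for regulated fields, expert rules backed by a small model with detailed logging, can be sketched roughly as follows. This is a minimal illustration, not Deloitte's actual architecture: the rule set, the scoring function, and the 100,000 auto-approval threshold are all hypothetical placeholders.

```python
import json
import logging
import time

# Audit logger: every decision is written out so outputs remain
# traceable and accountable, as the interview recommends.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")


def expert_rules(request: dict):
    """Deterministic compliance checks that run before any model.

    Returns a (decision, reason) pair, or None to defer to the model.
    Thresholds here are illustrative, not real policy.
    """
    if request.get("amount", 0) > 100_000:
        return ("escalate", "amount exceeds auto-approval limit")
    if request.get("region") in {"sanctioned"}:
        return ("reject", "region blocked by compliance rule")
    return None  # no rule fired; defer to the model


def small_model(request: dict):
    """Stand-in for a narrow, auditable model (here: a trivial score)."""
    score = min(request.get("amount", 0) / 100_000, 1.0)
    if score < 0.5:
        return ("approve", f"model risk score {score:.2f}")
    return ("review", f"model risk score {score:.2f}")


def decide(request: dict) -> str:
    """Rules first, model second; each step lands in the audit log."""
    entry = {"ts": time.time(), "request": request}
    ruled = expert_rules(request)
    if ruled is not None:
        decision, reason = ruled
        entry.update(stage="rule", decision=decision, reason=reason)
    else:
        decision, reason = small_model(request)
        entry.update(stage="model", decision=decision, reason=reason)
    audit_log.info(json.dumps(entry))  # one traceable record per output
    return decision
```

In a regulated deployment, the audit line would go to durable storage so that any output can later be traced back to the specific rule or model score that produced it.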
East Meets West Q&A | Song Haitao: Why Is International Cooperation the "Defining Backdrop" of the AI Era?
Huan Qiu Wang Zi Xun· 2025-07-27 06:35
Core Insights
- International cooperation is increasingly recognized as a vital element in the development of artificial intelligence (AI), particularly in the context of global governance and ethical standards [3][6][8]
- The 2025 World Artificial Intelligence Conference (WAIC) in Shanghai highlighted the significance of cultural inclusivity in AI governance, emphasizing the need for a collaborative approach that respects diverse values and ethical frameworks [5][11]

Group 1: AI Governance and International Cooperation
- AI governance encompasses the establishment of a framework that balances technological ethics, cultural diversity, and the provision of global public goods [6]
- The development of AI requires a new governance paradigm that addresses the "black box" effect of AI technologies, ensuring transparency and accountability in decision-making processes [6][10]
- International collaboration is essential for the standardization and regulation of AI technologies, as it fosters mutual recognition of ethical standards and promotes shared governance mechanisms [7][10]

Group 2: China's Role in AI Development
- China advocates for global open cooperation in AI, leveraging its comprehensive technology research and manufacturing capabilities to facilitate rapid development and application of AI technologies [8][10]
- The country has initiated training programs in collaboration with the United Nations to assist developing nations in understanding AI technology, aiming to bridge the technological gap [8][10]
- China's approach to AI governance emphasizes inclusivity and shared standards, promoting a cooperative framework that benefits all nations, particularly those in the Global South [10][11]

Group 3: Challenges in AI Global Governance
- The current landscape of AI governance is characterized by a complex interplay of three paradigms: technological hegemony, ethical regulation, and the prioritization of development rights [10]
- The European Union's AI Act represents a significant regulatory effort but may inadvertently stifle innovation and competitiveness within the European AI ecosystem [10]
- There is a pressing need for a governance path that maintains technological openness while respecting cultural diversity, as disparities in AI development and application persist across countries [12]

Group 4: Future Opportunities with Embodied Intelligence
- The emergence of embodied intelligence represents a new phase in AI evolution, necessitating international collaboration to address the complexities of integrating the physical and digital realms [14][16]
- Building a complete embodied-intelligence industry chain requires cooperation across multiple disciplines and sectors, making international partnerships essential for success [16]
- As the industry evolves, early consensus on collaborative frameworks will be crucial to minimize wasted resources and maximize the benefits of new technologies [16]
Fireside Chat | Schmidt and Shum on AI: Develop Through Competition, Hold the Line Through Cooperation
36Ke· 2025-07-26 13:59
Core Insights
- The dialogue between Harry Shum and Eric Schmidt at the 2025 World Artificial Intelligence Conference (WAIC) highlights global competition and cooperation in the field of artificial intelligence (AI) [3][5]
- AI is recognized as a transformative technology that impacts not only engineering and business but also social governance, ethical order, and global dynamics [5][6]

Group 1: AI Development and Governance
- The discussion emphasizes the importance of determining who sets the boundaries for AI technology and how this process requires international cooperation and shared values [5][6]
- Schmidt points out that competition has driven industry progress, citing his experiences with Microsoft and Apple during his time at Google [6]
- The need for dialogue on critical issues such as AI's role in weapons control and self-replication is highlighted, suggesting that common goals can facilitate cooperation between the US and China [6][8]

Group 2: Ethical Considerations and AI Regulation
- Schmidt identifies the core issue of AI governance as rooted in values, noting that existing communication mechanisms between the US and China are insufficient to ensure AI compliance with ethical standards [8]
- He proposes an ideal scenario in which AI systems are designed from the training phase to avoid learning harmful behaviors [8][9]
- The risks associated with open-source AI models are discussed, emphasizing that while open source promotes participation and innovation, it also poses security challenges compared with closed-source models [9]

Group 3: Philosophical and Ethical Frameworks
- The conversation reflects on the need to frame AI within philosophical, ethical, and governance contexts to ensure it serves humanity positively [11]
- This perspective is echoed in the upcoming book "Genesis," co-authored by Schmidt, Kissinger, and Craig Mundie, which argues that AI could be a pivotal point in the evolution of human civilization [11]
"Bridging East and West: Chinese and American Youth Discuss the Future" Launched in Beijing
Zhong Guo Xin Wen Wang· 2025-07-09 01:41
Group 1
- The event "Bridging East and West: Chinese and American Youth Discuss the Future" was launched in Beijing, aiming to enhance cultural exchange and mutual understanding between Chinese and American youth [1][4]
- 25 youth representatives from China and the U.S. will participate in visits and dialogues in Xi'an, Suzhou, and Shanghai, including a "Future Diplomats" summer camp in Suzhou [1][4]
- The event is co-hosted by the China Foreign Languages Publishing Administration, the American International Student Conference, and Xi'an Jiaotong-Liverpool University, with the goal of solidifying the public foundation for Sino-U.S. friendly relations [4]

Group 2
- The opening ceremony featured speeches emphasizing the importance of youth dialogue in addressing global uncertainties and fostering cooperation for peace and prosperity [3]
- Participants engaged in roundtable discussions on topics such as educational cooperation and future economies, focusing on technology innovation and AI governance [3]
- The event included cultural performances, with youth representatives singing songs in both Chinese and English, symbolizing cross-cultural collaboration [3]
Guangdong's "Clear and Bright: Rectifying AI Technology Abuse" Special Campaign Achieves Phased Results
Zhi Tong Cai Jing Wang· 2025-06-16 11:54
自"清朗·整治AI技术滥用"专项行动开展以来,广东省委网信办深入贯彻落实中央网信办有关工作部 署,聚焦重点平台、重点环节、重点领域,压紧压实平台主体责任,全面强化AI技术源头治理,深入 清理整治违规AI应用程序,加强AI生成合成技术和内容标识管理,专项整治取得阶段性成效。现向社 会通报有关工作情况。 一、聚焦重点平台,压实主体责任 制定专项行动工作方案,建立政企直联机制,深入指导华为、腾讯、网易、夸克、OPPO、vivo、荣 耀、唯品会、金山办公、迅雷等20余个重点平台集中开展专项治理,围绕违规AI产品宣推、训练语料 管理不严、内容标识要求落实不力、安全审核措施薄弱等重点风险问题,深入开展自查自纠,集中清 理"一键脱衣"、未经授权的人声或人脸克隆编辑等违规AI功能和应用,严厉打击违规售卖AI账号、教程 及传授技术规避手段、利用AI技术刷量涨粉、虚假互动、恶意引流等违法违规行为。截至目前,各重 点网站平台拦截清理售卖违规AI产品教程或商品、假冒仿冒、不当营销等信息8260余条,处置违规账 号470余个。 三、聚焦重点领域,筑牢安全屏障 加强对医疗、金融、教育以及涉未成年人等重点领域的AI服务应用的督导,要求平台 ...
Fake Content Runs Rampant: Beware of Circumventing "AI Labeling" Becoming a Hidden Danger
Mei Ri Shang Bao· 2025-05-27 23:15
Core Viewpoint
- The proliferation of AI-generated content has led to a rise in misinformation and low-quality content, necessitating regulatory measures to ensure responsible use of AI technology [1][3][4]

Group 1: Current Issues with AI Content
- AI-generated accounts are becoming breeding grounds for false information, particularly in areas like health and education, posing significant public risks [1][2]
- The phenomenon of "AI hallucination" contributes to the spread of misleading information, as AI can fabricate seemingly credible content [1][2]
- Existing mechanisms for detecting AI-generated content are insufficient, with only a small percentage of videos being flagged as AI content [2]

Group 2: Regulatory Responses
- In March 2025, four departments jointly issued measures for labeling AI-generated content, which will take effect on September 1, 2025 [1][3]
- A nationwide campaign titled "Clear and Bright: Rectifying AI Technology Abuse" was launched to address the misuse of AI, focusing on cleaning up false information and inappropriate content [3][4]
- Platforms like Douyin have begun to act against AI-generated low-quality content, with significant numbers of violations addressed [3][4]

Group 3: Future of AI Governance
- Experts emphasize the need for a balanced approach to AI regulation, encouraging innovation while preventing misuse [4][6]
- The development of a legal and ethical framework for AI is seen as essential for promoting healthy and orderly growth in the sector [5][6]
- The ongoing evolution of AI technology presents both opportunities and challenges, necessitating continuous dialogue on governance strategies [5][6]
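The labeling requirement discussed above asks platforms to mark AI-generated content both visibly and in machine-readable form. As a rough illustration only, a platform-side tagger might attach both markers to a content record before publication; the field names below are hypothetical placeholders, not identifiers from the official measures.

```python
import json


def label_aigc(content: dict) -> str:
    """Attach hypothetical AI-generated-content markers to a content record.

    The field names ("aigc", "label") are illustrative stand-ins for
    whatever implicit and explicit markers a platform's spec defines.
    """
    record = dict(content)                       # avoid mutating the caller's dict
    record["aigc"] = True                        # implicit, machine-readable flag
    record["label"] = "AI-generated content"     # explicit, human-readable notice
    return json.dumps(record, ensure_ascii=False)
```

Downstream detection tools could then filter on the machine-readable flag rather than trying to infer AI origin from the content itself, which is the gap the article describes.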
"The Real Value of AI Lies Not in How Cool It Is, but in How Useful and Reliable It Is"
Tencent Research Institute· 2025-05-26 09:02
Guo Kaitian believes that AI should respect humanity's unique role as the source of values: AI's real value lies not in "how cool it looks" but in "how usable and reliable it is in practice." To that end, Tencent attaches great importance to an open and transparent technology ecosystem, advocates a governance model in which openness, participation, and oversight proceed in parallel, and promotes building a foundation of trust for the AI era. He also said that the chapter of AI civilization has only just opened, and Tencent is willing to join hands with all parties to shape an open and inclusive future that values technology and humanity equally.

Generative AI Is Accelerating, and Governance Must Evolve in Step

On the afternoon of May 22, the AI and Society seminar "Advances in Generative AI: Applications, Governance, and Social Impact," co-hosted by Tencent Research Institute and the SMU Centre for Digital Law at Singapore Management University, was held at Singapore Management University. Nearly one hundred industry and academic experts from China and Singapore attended, sharing views on generative AI's technology trends, industrial applications, regulation and governance, and social ethics, and exploring approaches for building an open, shared, healthy, and sustainable AI development ecosystem and AI society.

Guo Kaitian, Senior Vice President of Tencent Group, delivered the welcome address on behalf of the hosts. He argued that AI is not only a technological revolution but also a profound transformation of the relationship among humanity, society, and intelligence. We stand at a critical juncture of technological leap: the rapid evolution of large-model technology is pushing artificial intelligence from "being able to perceive" toward "being able to act," becoming humanity's ...
McKinsey Global AI Survey: The State of Enterprise AI Deployment (Part 1)
McKinsey· 2025-05-07 10:54
Core Insights
- The development of generative AI is prompting companies to restructure their organizational frameworks and business processes to unlock its potential value. Although AI deployment is still in its early stages, more companies are reshaping workflows, enhancing governance mechanisms, and actively addressing related risks [1]

Group 1: Organizational Changes and AI Deployment
- Companies are initiating organizational transformations to leverage generative AI for future value, with larger enterprises moving faster and more decisively. A McKinsey global AI survey indicates that many companies have taken substantial steps to drive AI deployment for tangible financial returns [1]
- Among companies that have deployed generative AI, 21% of respondents reported that their organizations have thoroughly restructured certain workflows [6][14]

Group 2: AI Governance and Leadership
- AI governance involves establishing a series of policies, processes, and technologies to ensure responsible development and deployment of AI systems. The survey analysis shows that direct oversight by the CEO is a key factor for companies to enhance financial performance through generative AI [2]
- In companies that have deployed AI, 28% of respondents indicated that the CEO is responsible for AI governance, while 17% stated that the board is responsible. Typically, this work is co-led by an average of two leaders [2][3]

Group 3: Risk Management and Compliance
- Many companies are intensifying efforts to manage risks associated with generative AI, particularly concerning inaccuracies, cybersecurity, and intellectual property infringement. These three issues are the most frequently mentioned risk types and have already affected several companies [10][13]
- Larger enterprises are more proactive in managing potential cybersecurity and privacy risks, although they have not significantly outpaced smaller companies in addressing risks related to the accuracy or explainability of AI outputs [13]

Group 4: Best Practices and Performance Metrics
- Most respondents have not yet perceived a significant impact of generative AI on overall corporate profits, and many companies have not adopted best practices that could create value in deploying new technology. Only 1% of executives believe their generative AI initiatives have reached a "mature" stage [14][15]
- The survey identified 12 practices related to generative AI application and promotion, each positively correlated with improvements in earnings before interest and taxes (EBIT). Setting and tracking clear KPIs for generative AI solutions has the most significant impact on actual returns [14][15]

Group 5: Workforce and Skills Transformation
- The survey explored recruitment for AI-related positions and its impact on workforce structure. Among companies that have deployed AI, the proportion of respondents reporting recruitment of AI-related personnel over the past 12 months remained stable compared with early 2024 [17][18]
- Many respondents expect AI-related skills retraining to exceed that of the past year, and companies are actively managing the time saved by AI deployment. Most employees are expected to use this time for new tasks or to focus more on existing responsibilities that have not yet been automated [21][22]
Express | Musk May Still Have a Chance to Block OpenAI's For-Profit Conversion
Z Potentials· 2025-03-10 03:07
Core Viewpoint
- Elon Musk's lawsuit against OpenAI over its shift to a for-profit model has suffered a setback, but a federal judge has expressed concerns that may give hope to Musk and others opposing the transition [1][3][4]

Group 1: Lawsuit and Court Ruling
- Musk's lawsuit names Microsoft and OpenAI CEO Sam Altman as defendants, accusing OpenAI of abandoning its non-profit mission to ensure AI research benefits humanity [1][2]
- The federal judge, Yvonne Gonzalez Rogers, rejected Musk's request for a preliminary injunction but raised legal concerns about OpenAI's transition to a for-profit entity [3][4]
- The judge noted that using public funds to support a non-profit's shift to a for-profit could cause "significant and irreparable harm" [4]

Group 2: OpenAI's Transition and Financial Implications
- OpenAI's non-profit organization currently holds a majority stake in its for-profit business and is reportedly set to receive billions in compensation as part of the transition [5]
- The judge acknowledged that OpenAI's founders, including Altman and President Greg Brockman, have made "fundamental commitments" not to use OpenAI for personal enrichment [6]
- The stakes are high for OpenAI: it must complete its for-profit conversion by 2026 or risk recent funding converting into debt [12]

Group 3: Regulatory and Safety Concerns
- Concerns have been raised about the potential impact of OpenAI's for-profit transition on AI safety, with investigations by the attorneys general of California and Delaware already underway [9]
- A former OpenAI employee expressed worries that the shift could prioritize profit over the organization's mission to ensure AI research benefits humanity [12][13]
- The regulatory landscape and scrutiny from AI safety advocates and tech investors will be critical as OpenAI navigates its transition [14]