AI Safety
Google DeepMind CEO on the AGI Vision: Reality Within a Decade, and Safety Concerns That Keep Him Up at Night
36Kr · 2025-04-28 11:06
Group 1
- The core viewpoint of the article is that AGI (Artificial General Intelligence) may be achieved within the next decade, with companies such as Google DeepMind making significant advances in AI research and development [1][3][4]
- Demis Hassabis, CEO of Google DeepMind, emphasizes the transformative potential of AGI in addressing major global challenges such as disease and energy crises, while also warning of the risks of its misuse [1][4][5]
- The article discusses the importance of defining AGI accurately: Hassabis states that the timeline for its realization depends on how AGI is defined, highlighting the need for a consistent definition that encompasses the full range of human cognitive abilities [3][12]

Group 2
- Hassabis identifies two main risks associated with AI: the potential for malicious use of AI technologies, and the challenge of maintaining human control over increasingly autonomous systems [5][7]
- He calls for a global governance framework for AI, emphasizing the need for international cooperation on safety standards and regulatory measures [7][10]
- The article highlights the need for a multi-dimensional risk assessment system to proactively address the potential dangers posed by AI technologies [9][10]

Group 3
- The discussion covers the philosophical implications of AGI, particularly wealth distribution and the potential need for political reform in a future of extreme abundance enabled by AI [21][22][24]
- Hassabis suggests that achieving extreme abundance through technological advances could fundamentally change the nature of resource scarcity and inequality, necessitating new political philosophies [22][23][24]
- The article concludes with a call for new philosophical frameworks to address the societal changes brought about by AGI and its impact on human life and governance [20][24][25]
Koal Software (603232): Market Expansion Expected to Drive a 2025 Earnings Recovery
HTSC· 2025-04-27 09:08
Investment Rating
- The report maintains a "Buy" rating for the company with a target price of 16.00 RMB [8][9].

Core Views
- The company's 2024 revenue was 529.28 million RMB, down 5.71% year over year, while net profit attributable to the parent company was 36.81 million RMB, a slight decrease of 0.42%. However, net profit excluding non-recurring items rose sharply, up 233.45% to 15.69 million RMB, indicating a shift toward higher-margin business segments [1][2].
- Performance is expected to recover in 2025, driven by expansion into new industry clients such as the judiciary, telecommunications, and tobacco sectors, as well as overseas clients along the Belt and Road [1][3].

Summary by Sections

Financial Performance
- In 2024, revenue from PKI infrastructure products, PKI security application products, and general security products was 158 million RMB, 248 million RMB, and 123 million RMB respectively, with year-over-year growth of 41.41% and 10.64% for the first two and a decline of 45.28% for the third [2].
- The overall gross margin improved by 4.52 percentage points to 51.99% as high-margin businesses contributed a larger share of revenue [2].

Business Opportunities
- The company is focusing on new security scenarios such as quantum security and AI security, and has made significant progress in product development [3].
- New industry clients are expected to sustain growth in 2025, with successful projects such as the digital trust system in Algeria pointing to potential for international expansion [3].

Earnings Forecast and Valuation
- Revenue projections for 2025, 2026, and 2027 are 642.05 million RMB, 774.72 million RMB, and 919.08 million RMB respectively, with corresponding EPS estimates of 0.37 RMB, 0.54 RMB, and 0.74 RMB [5][7].
- The report lowers its revenue and profit forecasts because the transition away from low-margin businesses weighs on the near term, but maintains the target price based on a 43.3x PE multiple on 2025 earnings [5].
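The report's valuation can be sanity-checked with simple arithmetic. Assuming the common convention that target price = forward EPS × assigned PE multiple (the report itself does not spell out the formula), the 2025 EPS estimate of 0.37 RMB at a 43.3x PE implies roughly the stated 16.00 RMB target:

```python
def target_price(eps: float, pe: float) -> float:
    """Implied target price from a forward EPS estimate and a PE multiple."""
    return eps * pe

# Report inputs: 2025 EPS estimate 0.37 RMB, 43.3x PE multiple
implied = target_price(0.37, 43.3)
print(f"Implied 2025 target price: {implied:.2f} RMB")  # ~16.02, vs the stated 16.00
```

The small gap (16.02 vs 16.00) is consistent with the report rounding either the EPS estimate or the multiple.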
Latest Warning from the "Godfather of AI": Up to a 20% Risk That AI Wipes Out Humanity, and Time Is Running Out!
AI科技大本营· 2025-04-18 05:53
Editor | Meng Yidan
Produced by | AI科技大本营 (ID: rgznai100)

After winning the Nobel Prize in Physics last year and drawing global attention, "Godfather of AI" Geoffrey Hinton, a founding figure of deep learning, admitted in a recent interview: "Almost all top researchers believe AI will become smarter than humans." He had previously said in an official Nobel Prize interview that AI could surpass human intelligence in as little as five years. See: Nobel interview with deep-learning godfather Hinton: within five years AI has a 50% chance of surpassing humans, and anyone who says "everything will be fine" is crazy.

At the start of the interview, he humorously recalled an anecdote from receiving the Nobel Prize in Physics: "They just pretended that what I do is physics." Behind the light banter, however, lies deep worry about the future: "I think the AI risk facing humanity is far more serious than we imagine." Most strikingly, Hinton for the first time offered a chilling prediction: the probability that AI causes human extinction is as high as 10% to 20%. He said bluntly that we are at a critical juncture that will decide the future, and that massive resources urgently need to be invested in AI safety research, or the consequences will be unthinkable.

He also took the rare step of publicly criticizing tech magnate Elon Musk, arguing that Musk's actions are damaging the scientific foundations of the United States. This long-distance clash between the "godfather" and the world's richest man also reflects the complex technological, ethical, and ...
Venustech Forecasts a Q1 Profit, Anchors the DeepSeek Large-Model Track, and Uses Weekly Iteration to Seize a Strategic Lead in AI Security
Cai Jing Wang· 2025-04-15 06:26
On the evening of April 14, Venustech (002439.SZ) released its Q1 2025 earnings forecast and its 2024 annual report. Gross margin in Q1 2025 improved markedly year over year, rising 8 percentage points, and the company achieved a profit breakthrough in the quarter, laying the foundation for steady operations and high-quality development over the full year.

Embracing the AI era: building scenario-based applications of AI-security large models and intelligent agents

In 2025, as a leading domestic cybersecurity vendor with full DeepSeek integration, Venustech launched within a single month a "new three-piece suite" of large-model application security products, a portfolio of large-model application security services, and a white-paper series, "Security Foundation for Deep Applications of Large Models." Leveraging DeepSeek, Venustech has set off an efficiency revolution: threat detection compressed from hours of manual analysis to minutes, the vulnerability remediation cycle shortened from 30 days to 7, and 90% of high-frequency attacks automatically closed out within 30 seconds.

Behind these results is a weekly product-iteration loop: in Q1 the company precisely targeted the new security track driven by AI large models and seized a strategic first-mover position in the large-model security business. Through the dual drivers of "AI-empowered security" and "securing AI," it quickly produced industry-benchmark deployments. The report shows that in Q1 Venustech's large-model security products already landed benchmark projects in highly sensitive scenarios such as healthcare and public security, successfully converting a technical first-mover advantage into an industry demonstration effect.

In fact, since 2024, alongside new quality productive forces ...
AGI by 2030? Google DeepMind Has Written a "Self-Preservation Guide for Humanity"
虎嗅APP· 2025-04-07 23:59
Core Viewpoint
- The article discusses the dual concerns surrounding Artificial General Intelligence (AGI), highlighting both the potential risks and the need for safety measures outlined in a report by Google DeepMind, which predicts AGI could emerge by 2030 [5][21].

Group 1: AGI Predictions and Concerns
- DeepMind's report predicts the emergence of "Exceptional AGI" by 2030, a system that would surpass 99% of human adults at non-physical tasks [5][6].
- The report emphasizes the potential for "severe harm" from AI, including manipulation of political discourse, automated cyberattacks, and biological safety risks [7][8][9].

Group 2: Types of Risks Identified
- DeepMind categorizes risks into four main types: malicious use, model misalignment, unintentional harm, and systemic risks [18].
- The report highlights "malicious use," in which AI is exploited for harmful purposes, and "misalignment," in which an AI's actions diverge from human intentions [11][18].

Group 3: Proposed Safety Measures
- DeepMind proposes two defensive strategies: ensuring the AI is compliant during training, and imposing strict controls at deployment to prevent harmful actions [12][13].
- The focus is on building a system that minimizes the risk of "severe harm" even when the AI makes mistakes [14].

Group 4: Industry Perspectives on AI Safety
- AI companies differ in their approaches to safety: OpenAI focuses on automated alignment, while Anthropic advocates a safety grading system [16][20].
- DeepMind's approach is more engineering-oriented, emphasizing immediately deployable systems over theoretical frameworks [20].

Group 5: Broader Implications and Concerns
- There is skepticism within the academic community about the feasibility of AGI and the clarity of its definition [22].
- Concerns are raised about a self-reinforcing cycle of data pollution, in which AI models learn from flawed outputs, potentially leading to widespread misinformation [23][24].
Express | Musk May Still Have a Chance to Block OpenAI's For-Profit Conversion
Z Potentials· 2025-03-10 03:07
Image source: Unsplash

This week, Elon Musk lost the latest round in his lawsuit against OpenAI, but a federal judge appears to have given Musk, and others opposed to OpenAI's shift to a for-profit model, reason to keep hoping.

Musk's lawsuit against OpenAI also names Microsoft and OpenAI CEO Sam Altman as defendants, accusing OpenAI of abandoning its nonprofit mission of ensuring that AI research benefits all of humanity. OpenAI was founded in 2015 as a nonprofit, converted to a "capped-profit" structure in 2019, and is now seeking to reorganize again as a public benefit corporation.

Judge Rogers' comments on OpenAI's conversion to a for-profit company are not good news for the company. Tyler Whitmer, a lawyer representing the nonprofit Encode, told TechCrunch that Judge Rogers' ruling leaves a "cloud" of regulatory uncertainty hanging over OpenAI's board. Encode filed an amicus brief in the case arguing that OpenAI's for-profit conversion could endanger AI safety. Whitmer said the attorneys general of California and Delaware are already investigating the conversion, and the concerns Judge Rogers raised could prompt them to investigate more aggressively.

In Rogers ...