AI Security
Fortinet(FTNT) - 2025 Q4 - Earnings Call Transcript
2026-02-05 22:32
Fortinet (NasdaqGS:FTNT) Q4 2025 earnings call, February 05, 2026, 04:30 PM ET
Company Participants: Anthony Luscri - Vice President of Investor Relations; Christiane Ohlgart - CFO; Fatima Boolani - Managing Director; Gabriela Borges - Managing Director; John Whittle - COO; Ken Xie - Founder, Chairman, and CEO; Rob Owens - Managing Director; Shaul Eyal - Managing Director
Conference Call Participants: Adam Borg - Equity Research Analyst; Brian Essex - Equity Research Analyst; Junaid Siddiqui - Equity Research Analyst; Patrick Colvill ...
Deception, Blackmail, Cheating, Play-Acting: AI Is Not as Well-Behaved as You Think
36Kr· 2026-02-04 02:57
The article opens with a question: suppose a nation of 50 million people suddenly appeared on Earth, and every one of those 50 million "citizens" was smarter than a Nobel laureate and thought 10 times faster than a human. They don't eat or sleep; they spend 24 hours a day programming, doing research, and devising plans. If you were the head of some country's security ministry, how would you coexist with such a nation without being swallowed by it? That hypothetical sounds a bit exaggerated, right? First: could these AIs go out of control, betray us, or do things that threaten humanity? At first hearing it sounds like a sci-fi plot. After all, AI is just a tool; when you chat with it day to day, far from doing anything bad, it won't even play along with mildly risqué roleplay, and may morally lecture you instead (Musk's Grok excepted). But in training large models, vendors like Anthropic have found abundant evidence that AI systems are unpredictable and hard to control: they exhibit obsession, sycophancy, laziness, deception, blackmail, scheming, loophole-hunting, cheating, and other flaws we thought only humans had. And this is the prediction of Dario, CEO of Claude's parent company Anthropic: this "nation of 50 million geniuses" in the data centers could arrive as early as 2027. So the question becomes: how should we respond to that scenario? He therefore wrote a 20,000-character essay titled "The Adolescence of Technology," which includes a checklist telling everyone what the future ...
Varonis(VRNS) - 2025 Q4 - Earnings Call Transcript
2026-02-03 22:32
Varonis Systems (NasdaqGS:VRNS) Q4 2025 earnings call, February 03, 2026, 04:30 PM ET
Company Participants: Guy Melamed - CFO and COO; Tim Perz - Head of Investor Relations; Yaki Faitelson - CEO
Conference Call Participants: Brian Essex - Analyst; Fatima Boolani - Analyst; Jason Ader - Analyst; Joseph Gallo - Analyst; Joshua Tilton - Analyst; Junaid Siddiqui - Analyst; Matthew Hedberg - Analyst; Meta Marshall - Analyst; Mike Cikos - Analyst; Rob Owens - Analyst; Roger Boyd - Analyst; Rudy Kessinger - Analyst; Saket Kalia - Analyst; Shaul Eyal ...
Varonis(VRNS) - 2025 Q4 - Earnings Call Transcript
2026-02-03 22:30
Varonis Systems (NasdaqGS:VRNS) Q4 2025 earnings call, February 03, 2026, 04:30 PM ET
Speaker 1: Welcome to the Varonis Systems fourth quarter 2025 earnings conference call. At this time, all participants are in a listen-only mode. A question-and-answer session will follow the formal presentation. If anyone should require operator assistance, please press star zero on your telephone keypad. As a reminder, this conference is being recorded. It is now my pleasure to introduce Tim Perz, Investor Relations. Please g ...
Varonis to Acquire AllTrue.ai to Manage and Secure AI Across the Enterprise
Globenewswire· 2026-02-03 21:05
Acquisition strengthens Varonis’ ability to help organizations adopt safe, compliant and trustworthy AI at scaleMIAMI, Feb. 03, 2026 (GLOBE NEWSWIRE) -- Varonis Systems, Inc. (NASDAQ: VRNS), the leader in data security, today announced it is acquiring AllTrue.ai, an AI Trust, Risk, and Security Management (AI TRiSM) company that helps organizations understand and control how AI systems behave across the enterprise. AllTrue.ai brings real-time visibility and security to AI systems, complementing Varonis’ dee ...
Radware Unveils Agentic AI Protection Solution to Shield Enterprises from New Agentic Threats
Globenewswire· 2026-02-03 11:00
Agentic AI Protection Solution is the industry’s first agentic security posture management solution that leverages patent-pending, automated, behavioral analysis to defend AI agents against bad actorsMAHWAH, N.J., Feb. 03, 2026 (GLOBE NEWSWIRE) -- Radware® (NASDAQ: RDWR), a global leader in application security and delivery solutions for multi-cloud environments, today announced the launch of its Agentic AI Protection Solution, extending the Radware Platform into the rapidly growing AI security market. As o ...
Who Will Defend the Dangerous Boundaries of Desktop Agents?
36Kr· 2026-02-03 07:52
At the start of 2026, a wildly popular AI assistant staged its own song of ice and fire. In recent days the AI industry has seen a surreal scene: geeks and tech celebrities lining up to buy Mac minis just to run OpenClaw (formerly Clawdbot). This "universal AI assistant" swept GitHub and the major tech communities, racking up 80,000 GitHub stars in just ten days; Tencent Cloud and Alibaba Cloud rushed out one-click deployment services overnight. Yet within days, users who slipped up had accounts instantly drained by crypto hackers and were pulled into fraud cases; OpenClaw was soon exposed for leaving its database unsecured and for falsifying user numbers, multiple security researchers posted warnings in tech communities, and the big names who had praised OpenClaw began walking back their endorsements. The backlash against this "AI Jarvis" came startlingly fast. By lifting a corner of the veil on agent capabilities, OpenClaw demonstrated the boundless potential of desktop agents. But whether the wave of players entering this "year of the agent" can close its security holes and keep AI under human control is the far more important test. One user's account: "I told it to handle some life chores and went to sleep." By morning, it had:
- Resigned on the user's behalf (negotiating N+18 severance plus the year-end bonus);
- Filed four invention patent applications (whose contents the user never read).
Real-world impact: account and asset security risks; corporate data leakage risk; possible production-environment outages.
Industry shifts: Agent-Secur ...
Anthropic CEO: The Adolescence of Technology: Facing and Overcoming the Risks of Powerful AI
欧米伽未来研究所2025 (Omega Future Research Institute)· 2026-01-28 02:02
Core Argument
- The article discusses the imminent arrival of "powerful AI," which could be equivalent to a "nation of geniuses" within data centers, potentially emerging within 1-2 years. The author categorizes the associated risks into five main types: autonomy risks, destructive misuse, power abuse, economic disruption, and indirect effects [4][5][19]
Group 1: Types of Risks
- Autonomy Risks: Concerns whether AI could develop autonomous intentions and attempt to control the world [4][20]
- Destructive Misuse: The potential for terrorists to exploit AI for large-scale destruction [4][20]
- Power Abuse: The possibility of dictators using AI to establish global dominance [4][20]
- Economic Disruption: The risk of AI causing mass unemployment and extreme wealth concentration [4][20]
- Indirect Effects: The unpredictable social upheaval resulting from rapid technological advancement [4][20]
Group 2: Defense Strategies
- The article outlines defense strategies employed by Anthropic, including the "Constitutional AI" training method, research on mechanistic interpretability, and real-time monitoring [4][31]
- The "Constitutional AI" approach involves training AI models with a core set of values and principles to ensure they act predictably and positively [32][33]
- Emphasis is placed on developing a scientific understanding of AI's internal mechanisms to diagnose and address behavioral issues [34][35]
Group 3: Importance of Caution
- The author stresses the need to avoid apocalyptic thinking about AI risks while also warning against complacency, labeling the situation as potentially the most severe national security threat in a century [5][19]
- A pragmatic, fact-based approach is advocated for discussing and addressing AI risks, highlighting the importance of preparedness for evolving circumstances [9][10]
Group 4: Future Considerations
- The article suggests that the emergence of powerful AI could lead to significant societal changes, necessitating careful consideration of the implications and potential risks involved [4][16]
- The author believes that while risks are present, they can be managed through decisive and cautious action, leading to a better future [19][40]
GSI Technology Shares Slide 7% Despite New Government-Funded AI Security Project
RTTNews· 2026-01-14 17:52
Core Viewpoint
- GSI Technology, Inc. (GSIT) shares declined 7.29 percent, trading at $7.12, despite the announcement of a new proof-of-concept engagement with two government agencies and a partnership with Israel's G2 Tech for the Sentinel project, which is supported by the U.S. Department of War and a foreign government [1]
Group 1
- GSI Technology's stock opened at $7.96, up from a previous close of $7.68, with trading between $6.90 and $8.90 during the session [2]
- The current bid for GSI Technology shares was $3.96, while the ask was $6.77, indicating a significant spread [2]
- Trading volume reached approximately 6.14 million shares, surpassing the average volume of 5.66 million shares [2]
Group 2
- GSI Technology's 52-week stock price range is $1.62 to $18.15, highlighting significant volatility in its stock over the past year [3]
Slamming AI Hype and Putting Enterprise Demand First, Anthropic Co-Founder: Even a 0.01 Model Performance Gain Pays Off Handsomely; Compute Burns Money, but It's Worth It!
AI前线 (AI Frontline)· 2026-01-09 07:00
Core Insights
- Anthropic was founded by seven former core members of OpenAI, treating AI safety and reliability as core advantages rather than burdens [2][3]
- The company aims to be a leader in AI safety, emphasizing transparency about risks associated with its models, such as Claude's behavior in extreme scenarios [3][12]
- Anthropic has adopted a cautious approach to spending and algorithmic efficiency, in contrast to competitors like OpenAI, which has committed $1.4 trillion to computing resources [3][15]
Company Background
- Anthropic was established during the COVID-19 pandemic by individuals who had previously worked on significant projects at OpenAI, including GPT-2 and GPT-3 [6][7]
- The founding team shared a vision of building a company that prioritizes AI safety and reliability, which led to the decision to leave OpenAI [9][10]
Business Strategy
- Anthropic's internal value system emphasizes "don't believe the hype," focusing on delivering real value to B2B clients rather than seeking attention [3][12]
- The company has partnered successfully with major cloud platforms including Microsoft, Amazon, and Google, indicating strong demand from enterprise clients [3][17]
- Anthropic has invested $500 billion in building data centers in New York and Texas to support its infrastructure needs [14]
Market Position
- Demand for the company's models often exceeds its computational supply capacity, highlighting its competitive position in the market [17][24]
- Anthropic's approach to AI safety and reliability has positioned it favorably among enterprise clients, who prioritize these attributes [25][26]
Future Outlook
- Anthropic is considering an IPO in 2026 but has no specific plans to announce at this time [23]
- The company is committed to responsible capital management, ensuring that every dollar spent contributes to better and safer models [21][22]
- The ongoing evolution of AI technology and its integration into business processes remains a critical area of focus for the company [18][19]