Advertising Can't Save AI Search
36Kr · 2025-09-01 10:31
Core Insights
- Perplexity, an AI search startup, has seen its valuation soar to $18 billion but struggles with monetization, particularly in its advertising business, which generated only $20,000 in revenue for Q4 2024 [1][4][5]
- The departure of Taz Patel, the head of advertising, highlights the challenges Perplexity faces in establishing a viable advertising model [2][4]
- The company is also dealing with legal challenges over content copyright, which have resulted in significant legal expenses [4][5]

Group 1: Company Challenges
- Perplexity's advertising revenue is negligible compared to its annualized revenue of over $100 million, which comes primarily from subscriptions and API usage [4][5]
- The company has attempted partnerships with brands like TurboTax and Whole Foods to integrate sponsored links but has seen limited success [4][5]
- Legal issues have led to millions in expenses, with lawsuits from major publishers such as The New York Times and Nikkei [4][5]

Group 2: Industry Context
- Other major players, including Microsoft and Google, are also exploring advertising in AI search but face their own challenges [6][12]
- Microsoft has integrated OpenAI's technology into Bing and is experimenting with embedding ads in conversational responses, but its daily active users still lag behind Google's [6][12]
- Google is also trying to adapt its traditional search advertising model to AI but has encountered issues with AI-generated content quality [12][13]

Group 3: Future Outlook
- The AI search advertising market is still in its infancy, with projected spending of only $1 billion in 2024, growing to $26 billion by 2029, still a small fraction of the overall search advertising market (the implied growth rate is worked out below) [15]
- Despite the challenges, AI search advertising may achieve higher conversion rates than traditional search, as evidenced by Microsoft's Copilot showing a 73% increase in user interaction and a 16% increase in conversion rates [15]
- The future of AI search may shift from "selling attention" to "selling results," raising questions about the reliability of AI-generated recommendations and the evolving nature of advertising [17][18]
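For scale, the projection in the final section implies a very steep compound annual growth rate. The dollar figures are the article's projections [15]; the CAGR arithmetic below is our own illustration, not from the source:

```python
# Implied compound annual growth rate (CAGR) for AI search ad spending,
# using the article's projection: $1B in 2024 rising to $26B by 2029.
start, end, years = 1.0, 26.0, 5  # billions of dollars, 2024 -> 2029
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 91.9% per year
```

Even at that pace, the 2029 figure would remain a small slice of overall search advertising, which is the article's point.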
The Biggest Risks of Enterprise GenAI, and Lessons from Early Adopters
36Kr · 2025-08-11 00:20
Overview
- Generative AI is now on corporate roadmaps, but companies should not ship unsafe products. LLMs have changed the threat model: untrusted natural language becomes an attack surface, and model outputs can be weaponized. Models should operate in a sandboxed, monitored, and strictly authorized environment [1][2]

Security Challenges
- Prompt injection attacks, including indirect injections hidden in files and web pages, are now a top risk for LLMs. Attackers can compromise inputs without breaching backend systems, leading to data theft or unsafe operations [4][5]
- Abuse of agents/tools and "over-proxying" (i.e., excessive agency) create new permission boundaries. Overly permissive agents can be lured into executing powerful operations, which calls for strict RBAC and human approval for sensitive actions [4][5]
- RAG (Retrieval-Augmented Generation) introduces new attack surfaces, where poisoned indexes can lead to adversarial outputs. Defensive measures are still evolving [4][5]
- Privacy leaks and IP spillage are active research areas, with large models sometimes memorizing sensitive training data. Improvements in vendor settings are ongoing [4][5]
- The AI supply chain is vulnerable, with risks from backdoored models and deceptive alignment. Organizations need robust provenance and behavior-review measures [4][5]
- Unsafe output handling can lead to security issues such as XSS and SSRF attacks. Strict output validation and execution policies are essential [4][5]
- DoS attacks and cost abuse can arise from malicious workloads, necessitating rate limits and alerting [4][5]
- Observability and compliance challenges require structured logging and change control while adhering to privacy laws [4][5]
- Governance drift and model/version risks arise from frequent updates, underscoring the need for continuous security testing and version control [4][5]
- Content authenticity and downstream misuse remain concerns; organizations are encouraged to track output provenance [4][5]

Action Plan for the Next 90 Days
- Conduct a GenAI security and privacy audit to identify sensitive data entry points and deploy immediate controls [6][7]
- Pilot high-value, low-risk use cases to demonstrate value while minimizing customer risk [6][7]
- Implement evaluation tooling with human review and key metrics before widespread deployment [6][7]

Case Studies
- JPMorgan Chase implemented strict prompts and a code-snippet checker to prevent sensitive data leaks in its AI coding assistant, resulting in zero code-leak incidents by 2024 [16]
- Microsoft hardened Bing Chat by limiting session lengths and improving prompt isolation, significantly reducing successful prompt injection attempts [17]
- Syntegra used differential privacy in its medical AI to prevent the model from recalling sensitive patient data, ensuring HIPAA compliance [18]
- Waymo employed a model registry to secure its machine learning supply chain, avoiding security incidents over 18 months [19][20]

30-60-90 Day Action Plan
- The first 30 days should focus on threat modeling workshops and basic input/output filtering (a minimal filtering sketch follows this summary) [22][23]
- Days 31-60 should involve red team simulations and the deployment of advanced controls based on early findings [24][25]
- The final phase (days 61-90) should include external audits and tuning of monitoring metrics to ensure ongoing compliance and security [27][28]
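To make the "basic input/output filtering" item concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than the article's implementation: the pattern lists, function names, and gateway shape are ours, and a real deployment would layer a maintained guardrail service or trained classifier on top of anything this simple, since static patterns are easy to evade.

```python
import re

# Illustrative deny-list for prompt injection phrasing (an assumption, not
# from the article). Static patterns only catch the crudest attempts.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

# Output fragments that should never be reflected into a browser (XSS) or
# used to reach internal services (SSRF), matching the risks listed above.
UNSAFE_OUTPUT_PATTERNS = [
    re.compile(r"<script\b", re.IGNORECASE),
    re.compile(r"https?://(localhost|127\.0\.0\.1|169\.254\.\d{1,3}\.\d{1,3})",
               re.IGNORECASE),
]


def screen_input(user_text: str) -> str:
    """Reject prompts matching known injection phrasing before they reach the model."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_text):
            raise ValueError("prompt rejected: possible injection attempt")
    return user_text


def screen_output(model_text: str) -> str:
    """Neutralize unsafe fragments in model output before downstream use."""
    for pattern in UNSAFE_OUTPUT_PATTERNS:
        model_text = pattern.sub("[removed]", model_text)
    return model_text


if __name__ == "__main__":
    screen_input("Summarize this quarterly report.")  # passes the input screen
    # The model call would sit between the two screens; here we only
    # demonstrate output screening on a deliberately unsafe string.
    print(screen_output("<script>alert(1)</script> Revenue grew 12%."))
```

In practice, the same gateway is also the natural place to enforce the rate limits and structured logging called out in the challenges above, since every request and response already flows through it.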
OpenAI's "War of Independence" Against Microsoft
虎嗅APP · 2025-07-05 03:09
Core Viewpoint
- The ongoing negotiations between OpenAI and Microsoft represent a significant shift in their relationship, moving from a collaborative partnership to a competitive standoff, primarily driven by conflicting interests regarding technology control, profit sharing, and future business strategies [1][9][19].

Group 1: Background and Initial Partnership
- OpenAI and Microsoft formed a strategic partnership in 2019, with Microsoft investing $1 billion to support OpenAI's AI research and providing cloud computing resources [5].
- The relationship flourished during a "honeymoon period," highlighted by successful product launches like GitHub Copilot, which leveraged OpenAI's technology [6].

Group 2: Recent Developments and Tensions
- Tensions escalated in 2023 following internal upheavals at OpenAI, leading to a loss of trust from Microsoft, which had invested over $13 billion [6][7].
- OpenAI's restructuring into a Public Benefit Corporation (PBC) aimed to facilitate new funding and an IPO, but required Microsoft's consent due to existing agreements [2][8].

Group 3: Key Negotiation Issues
- The core disagreement centers on the "declaration of sufficient AGI," which would allow OpenAI to partner with other cloud providers, ending Microsoft's exclusive rights [3][13].
- OpenAI proposed a shift from profit sharing to equity stakes, suggesting Microsoft could hold about 33% of the new PBC, but Microsoft preferred maintaining profit sharing for stability [11][12].

Group 4: Strategic Moves and Future Implications
- OpenAI is actively seeking to diversify its cloud partnerships, including agreements with Oracle and Google, to reduce reliance on Microsoft Azure [17][18].
- The potential for OpenAI to develop its own AI chips, along with the Stargate super data center project, indicates a strategic move toward independence from Microsoft [18].

Group 5: Conclusion and Future Outlook
- The negotiations reflect a broader power struggle in the AI industry, with both companies recognizing that the stakes extend beyond financial terms to control over technology and market positioning [19].
- The outcome of these negotiations will likely reshape the future landscape of AI partnerships and competition, making it uncertain whether another collaboration like that of Microsoft and OpenAI will emerge [19].