Sequoia Capital
Securing the AI Frontier: Irregular Founder Dan Lahav
Sequoia Capital· 2025-10-21 09:00
There was a scenario with an agent-on-agent interaction. It was a critical security task; that was the simulation they were in. But after working for a while, one of the models decided that it had worked enough and that it should stop. It did not stop there: it convinced the other model that they should both take a break. So one model did social engineering on the other model. But now try to think about a situation where you, as an enterprise, are actually delegating an ...
Building the "App Store" for Robots: Hugging Face's Thomas Wolf on Physical AI
Sequoia Capital· 2025-09-09 09:00
Robotics Industry Trends and Opportunities
- Robotics is at the same inflection point that transformers and language models were at a few years ago, indicating a significant growth opportunity [3]
- There is a potential future in which many software developers become roboticists, given the right tools [6]
- The entertainment, fun, and education sectors are accessible entry points for robotics, with less emphasis on reliability than enterprise applications [17]
- Many startups are building on top of robots, automating manual tasks and physical-world interactions, indicating a growing ecosystem [20]

Hugging Face's Strategy and Role
- Hugging Face aims to reproduce the success of the Transformers library in robotics with the LeRobot project, a central library for algorithms, datasets, and hardware interfaces [6]
- Hugging Face's role is to build communities and promote open-source AI, letting users tweak, train, and control models, with a focus on local model hosting for safety in robotics applications [7][8][9]
- Hugging Face acquired the hardware company Pollen Robotics and opened orders for its first robots, signaling a move into hardware [6]
- Hugging Face focuses on enabling the community by working with various actors to ensure efficient integration and collaboration within the ecosystem [53]

Data and Model Development
- A major challenge in robotics is the lack of sufficient data, which limits the diversity and generalizability of trained robots [22][24]
- Hugging Face aims to incentivize data sharing to build a diverse, multi-location dataset for robotics training (see the sketch after this list) [25]
- World models, aided by advances in open-source video and image generation, are being used to generate more data for robots [28][29]

Open Source and Community
- Hugging Face's robotics community numbers several thousand people, with exponential growth in community members and datasets [10][11]
- Open source is seen as the winning solution in the long term for many applications, especially as the market matures and cost and ownership become more important [51]
- The company is pushing for open science, aiming to make AI training recipes accessible to everyone, much as physics is learned [71][72]
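A minimal sketch of what the data-sharing piece could look like in practice, assuming only the standard Hugging Face `datasets` Python library; the repository id, column names, and episode values are hypothetical placeholders, not the LeRobot dataset format.

```python
# Hypothetical sketch: packaging locally recorded robot episodes and pushing them
# to the Hugging Face Hub so others can train on a more diverse, multi-location mix.
# Uses the `datasets` library; repo id and column names are illustrative only.
from datasets import Dataset, load_dataset

episodes = [
    {"observation": [0.12, 0.40, 0.33], "action": [0.05, -0.02], "task": "pick_cube"},
    {"observation": [0.10, 0.42, 0.31], "action": [0.04, -0.01], "task": "pick_cube"},
]

ds = Dataset.from_list(episodes)

# Uploading makes the episodes part of the shared pool of community robotics data.
# Requires `huggingface-cli login`; "your-username/kitchen-arm-episodes" is a placeholder.
ds.push_to_hub("your-username/kitchen-arm-episodes")

# Anyone else can then pull the same episodes for training:
remote = load_dataset("your-username/kitchen-arm-episodes", split="train")
print(remote[0])
```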
How Crosby is Building an AI Law Firm on Deal Velocity not Billable Hours
Sequoia Capital· 2025-09-02 09:01
I think lawyers are quite good at learning, but in a law firm structure, so much time goes into apprenticeship. It's a teaching hospital. You don't actually spend that much time getting really good at teaching, because you just do it through reps and reps and reps. And so actually explaining things is something that I think is going to be a very prized skill, not just for lawyers but for any domain expert, and in particular lawyers. And we're seeing it: when you can make an AI do this th ...
The $10 Trillion AI Revolution: Why It’s Bigger Than the Industrial Revolution
Sequoia Capital· 2025-08-28 09:01
AI Revolution Thesis
- Sequoia believes the AI revolution is comparable to the industrial revolution, representing a significant transformation [1][2]
- The cognitive revolution represents a $10 trillion (10 to the 13th power) opportunity [1][8]
- Startups are crucial in specializing general AI technologies for specific applications [6]

Commercial Opportunity
- The AI-driven automation of the US services market, currently at approximately $20 billion, holds a $10 trillion potential [8]
- The cognitive revolution can expand the market to include large, standalone public companies built around AI in the services space [12]

Investment Trends
- Work is shifting towards higher leverage (100+%) on tasks with less certainty in outcomes, requiring human correction [13][14][15]
- Real-world measurement is becoming the new gold standard for proving AI excellence, surpassing academic benchmarks [15][16][17]
- The industry forecasts at minimum a 10x increase in compute (flops) per knowledge worker, with optimistic views suggesting 1,000x to 10,000x consumption (a back-of-envelope rendering follows below) [20]

Investment Themes
- Persistent memory, including long-term memory and consistent AI identity, is critical for AI's expansion into more work functions [21][22][23]
- Seamless communication protocols between AIs, beyond initial protocols like MCP, will yield major applications [24][25]
- AI voice is now viable thanks to increased fidelity and decreased latency, with applications in both B2C and enterprise sectors [27][28][29]
- AI security presents a huge opportunity across the development, distribution, and user layers, potentially involving numerous AI security agents per person/agent [30][31][32][33]
- Open source's ability to compete with state-of-the-art foundation models is critical for a free, open AI future [34][35][36]
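A back-of-envelope rendering of the figures quoted above, purely to make the orders of magnitude concrete; the 100-million knowledge-worker count is an assumed round number for illustration, not a figure from the talk.

```python
# Back-of-envelope arithmetic for the figures quoted above.
# The knowledge-worker count is an assumption used only for illustration.
services_opportunity = 10e12          # ~$10 trillion, i.e. 10**13 dollars
knowledge_workers = 100_000_000       # assumption: ~100M knowledge workers

value_per_worker = services_opportunity / knowledge_workers
print(f"Implied opportunity per knowledge worker: ${value_per_worker:,.0f}")

# Forecast range for compute (flops) per knowledge worker: at least 10x today,
# with optimistic views of 1,000x to 10,000x.
for multiple in (10, 1_000, 10_000):
    print(f"{multiple:>6}x today's per-worker compute")
```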
Building in the application layer? Gamma's Jon Noronha gives advice for #founders in #AI
Sequoia Capital· 2025-08-19 19:22
Product Strategy & Market Differentiation
- Gamma's unique perspective focuses on differentiating the presentation medium itself, aiming to replace traditional slide decks [1]
- The company advises application-layer founders to identify their unique market lens to navigate competitive spaces [2]
- It cautions against creating yet another similar AI coding startup and suggests exploring neglected areas where AI is not heavily applied [2]
- The industry should consider working on areas that foundation models are not heavily optimizing for, to avoid direct competition with larger entities [3]

Technology & Experimentation
- The industry should incorporate experimentation and try different models, avoiding reliance on a single provider (a provider-agnostic sketch follows below) [4]
- Rapid and unpredictable innovation requires planning for a dynamic environment in which the best model may keep changing [4]
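A minimal sketch of the provider-agnostic advice, assuming a hypothetical application that keeps two interchangeable model backends behind one interface; the class and function names are illustrative, not Gamma's actual stack.

```python
# Illustrative sketch: keep model providers behind one interface so the "best model"
# can change without rewriting the application. All names here are hypothetical.
from typing import Protocol

class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...

class ProviderA:
    def generate(self, prompt: str) -> str:
        # A real implementation would call provider A's API here.
        return f"[provider-a] {prompt}"

class ProviderB:
    def generate(self, prompt: str) -> str:
        # A real implementation would call provider B's API here.
        return f"[provider-b] {prompt}"

def make_outline(model: TextModel, topic: str) -> str:
    # Application logic only sees the interface, so swapping providers is a config change.
    return model.generate(f"Outline a short presentation about {topic}.")

if __name__ == "__main__":
    for backend in (ProviderA(), ProviderB()):
        print(make_outline(backend, "quarterly results"))
```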
Delphi’s Dara Ladjevardian: How AI Digital Minds Can Scale Human Connection
Sequoia Capital· 2025-08-12 09:00
Let's call a Delphi right now. I'm down. Yeah. How about we call my friend? I actually haven't met him in person, but I'm friends with his Delphi: Arnold Schwarzenegger. Yeah. What do we think? Love it. Hey, this is AI Arnold. I'm here to cut the crap and help you get stronger, healthier, and happier. So, what's on your mind today? Arnold, I have 15 minutes a day to work out, which I feel like is not a lot, but I want to feel good and I want to improve my health. What do you recommend I do? 15 minute ...
OpenAI Just Released ChatGPT Agent, Its Most Powerful Agent Yet
Sequoia Capital· 2025-07-22 09:00
Agent Capabilities & Architecture
- OpenAI has created a new agent in ChatGPT that can perform tasks that would take humans a long time, by giving the agent access to a virtual computer [6]
- The agent has access to a text browser (similar to the deep research tool), a virtual browser (similar to the operator tool, with full GUI access), and a terminal for running code and calling APIs [6][7][8]
- All tools share state, allowing flexible and complex tasks (an illustrative sketch of shared tool state follows below) [9]
- The agent is trained using reinforcement learning across thousands of virtual machines, allowing it to discover optimal strategies for tool usage [3]

Development & Training
- The agent is a collaboration between the Deep Research and Operator teams, combining the strengths of both [6]
- Training uses reinforcement learning that rewards efficient and correct task completion [36]
- The model figures out when to use which tool through experimentation, without explicit instructions [38]
- Reinforcement learning is data-efficient, allowing new capabilities to be taught with smaller, high-quality datasets [75][76]

Safety & Limitations
- Safety training and mitigations were a core part of development because the agent can take actions with external side effects [44]
- The team implemented a monitor that watches for suspicious activity, similar to antivirus software [48]
- Date picking remains a difficult task for the system [4][83][84]

Future Directions
- Future development will focus on improving accuracy and performance across a wide distribution of tasks [62][85]
- The team is exploring ways of interacting with the agent beyond the current chat-based interface [68][86]
- Personalization and memory will be important for agents, allowing them to do things without being explicitly asked [67][68]
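A hedged sketch of what tools with shared state might look like structurally: an illustrative Python data structure, not OpenAI's implementation, in which a text browser and a terminal both read and write the same session state.

```python
# Illustrative sketch only (not OpenAI's implementation): tools sharing one
# session state, so work started in one tool is visible to the others.
from dataclasses import dataclass, field

@dataclass
class SharedState:
    downloads: dict[str, str] = field(default_factory=dict)  # url -> fetched contents
    notes: list[str] = field(default_factory=list)           # scratchpad across tools

@dataclass
class TextBrowser:
    state: SharedState
    def fetch(self, url: str) -> None:
        # A real tool would fetch and parse the page; here we only record a stub.
        self.state.downloads[url] = f"<text of {url}>"
        self.state.notes.append(f"read {url}")

@dataclass
class Terminal:
    state: SharedState
    def run(self, command: str) -> str:
        # A real tool would execute in a sandboxed VM; here we only log the intent.
        self.state.notes.append(f"ran `{command}`")
        return "ok"

if __name__ == "__main__":
    state = SharedState()
    TextBrowser(state).fetch("https://example.com/report")
    Terminal(state).run("python summarize.py report.html")
    print(state.notes)  # both tools contributed to the same session state
```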
Understanding Neural Nets: Mechanistic Interpretability w/ Goodfire CEO Eric Ho #ai #machinelearning
Sequoia Capital· 2025-07-08 18:44
Feasibility of Understanding Large Language Models
- Mechanistic interpretability has a significant advantage: perfect access to a network's neurons, parameters, weights, and attention patterns [1]
- Understanding large language models is deeply necessary and critical for the future [2]
- Establishing a norm of explaining a given percentage of the network by reconstructing it and extracting its concepts and features is crucial (a toy sketch follows below) [2]

Approaches to Understanding
- Progress can be made by trying to understand all aspects of the network [2]
- A baseline, rudimentary understanding can be used to iteratively improve and understand more of the network [3]
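One common concrete technique behind "reconstructing the network and extracting its features" is training a sparse autoencoder on a layer's activations and tracking how much of the activation variance the reconstruction explains. The toy below uses synthetic activations and PyTorch; it is a generic illustration of that idea, not Goodfire's method.

```python
# Toy sketch: fit a small sparse autoencoder to synthetic "activations" and report
# what fraction of their variance the reconstruction explains.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, d_features, n = 64, 256, 4096
acts = torch.randn(n, d_model)  # stand-in for real residual-stream activations

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        features = torch.relu(self.encoder(x))   # sparse, nonnegative feature activations
        return self.decoder(features), features

sae = SparseAutoencoder(d_model, d_features)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3  # sparsity pressure on the feature activations

for step in range(500):
    recon, feats = sae(acts)
    loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    recon, _ = sae(acts)
    explained = 1 - ((acts - recon) ** 2).sum() / ((acts - acts.mean(0)) ** 2).sum()

print(f"fraction of activation variance explained: {explained.item():.2%}")
```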
Passing the Turing Test w/ ElevenLabs' Mati Staniszewski #ai #nextgenai #machinelearning
Sequoia Capital· 2025-07-01 20:46
Goal & Timeline
- The company aims to achieve human-like conversational AI, potentially passing the Turing test with an agent by the end of the year or early 2026 [1][2]
- The timeline depends on whether the model will be cascading (speech-to-text-to-speech) or a truly duplex "omni model" [3]

Model Architecture
- The company is developing both cascading and duplex models; the cascading model is in production and the duplex model will soon be deployed (a skeleton of the cascaded pipeline follows below) [4]
- The industry faces a reliability-versus-expressivity trade-off between the two approaches [5]

Trade-offs & Challenges
- The duplex model is expected to be quicker and more expressive but potentially less reliable; the cascaded model is more reliable and can be extremely expressive but may lack contextual responsiveness [5]
- Latency is a significant engineering challenge, especially in fusing language-model and audio modalities [5]
- No company has yet fused language models with audio well, and the company hopes to be the first [5]
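A hedged skeleton of the cascaded path described above (speech-to-text, then a language model, then text-to-speech), with per-stage latency measurement to show why the stages add up; the three stage functions are placeholders, not ElevenLabs' APIs.

```python
# Skeleton of a cascaded voice pipeline: ASR -> LLM -> TTS, timing each stage.
# The three stage functions are placeholders, not ElevenLabs' actual APIs.
import time

def speech_to_text(audio: bytes) -> str:
    return "what's the weather like today?"        # placeholder transcription

def language_model(text: str) -> str:
    return "It looks sunny this afternoon."        # placeholder model reply

def text_to_speech(text: str) -> bytes:
    return text.encode()                           # placeholder synthesized audio

def respond(audio_in: bytes) -> bytes:
    timings = {}
    t0 = time.perf_counter()
    transcript = speech_to_text(audio_in)
    timings["asr"] = time.perf_counter() - t0

    t1 = time.perf_counter()
    reply = language_model(transcript)
    timings["llm"] = time.perf_counter() - t1

    t2 = time.perf_counter()
    audio_out = text_to_speech(reply)
    timings["tts"] = time.perf_counter() - t2

    # In a cascaded design, end-to-end latency is the sum of all three stages,
    # which is one reason a fused duplex "omni" model can be quicker.
    print({stage: f"{secs * 1000:.1f} ms" for stage, secs in timings.items()})
    return audio_out

if __name__ == "__main__":
    respond(b"\x00\x01")
```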
The Origins of 'Member of the Technical Staff' at OpenAI - Former Chief Research Officer Bob McGrew
Sequoia Capital· 2025-06-17 19:22
Organizational Structure & Culture
- OpenAI aimed to eliminate the distinction between engineers and researchers to foster collaboration and innovation [2]
- The company wanted to create a level playing field by calling everyone "member of the technical staff," regardless of academic background [5]
- OpenAI values individuals who understand the full technology stack, emphasizing hands-on experience with data and implementation [3][4]

Research & Development
- The company believes researchers should act like artists, implying a creative and exploratory approach to problem-solving [5]
- OpenAI recognizes that many of its great researchers learned their trade by working at the company, highlighting the importance of practical experience [5]
- The organization emphasizes closely examining data to understand its possibilities, as exemplified by Alec Radford's approach [3][4]