A Programmer with 43 Years in the Field Speaks Bluntly: AI Won't Replace Programmers, and the Core of Software Development Has Never Changed
程序员的那些事· 2026-01-12 00:48
Core Viewpoint
- The article argues that AI will not replace software developers, emphasizing that the future of software development remains in the hands of developers who can translate ambiguous human thoughts into precise computational logic [1][2]

Group 1: Historical Context
- The prediction that "programmers will be replaced" has never come true over the author's 43-year career in computing [3]
- The author has witnessed multiple technological revolutions, each heralded as the end of programmers, such as the rise of Visual Basic and low-code platforms [4][6]
- Each wave of technology has instead increased the number of programs and programmers, an instance of the "Jevons Paradox", in a market now worth $1.5 trillion [9]

Group 2: Differences with Current Technology
- The current wave of Large Language Models (LLMs) differs significantly from past technologies in scale and impact, and LLMs do not reliably improve development speed or software reliability [10][11]
- Unlike previous technologies, which offered stable and reliable abstractions, LLMs can slow development down and produce a dual loss of time and quality unless they are aimed at real bottlenecks [11]

Group 3: Essence of Programming
- The core challenge of programming has always been converting vague human ideas into logical, precise computational expressions, a difficulty that persists regardless of the programming tools used [12][17]
- The complexity of programming lies not in syntax but in understanding what needs to be achieved, a challenge unchanged over decades [17][18]

Group 4: Future Outlook
- AI will not eliminate the need for programmers; demand for skilled developers will continue to grow, especially as companies confront the true costs and limitations of AI technologies [19][20]
- AI will likely play a supporting role, assisting with tasks such as prototype code generation, while critical decision-making and understanding remain with human developers [19][20]
Why History Says We Could See the Nasdaq Double
ZACKS· 2026-01-09 19:55
Core Insights
- The market has staged a significant turnaround, with the Nasdaq Composite rising 50% in under a year and the mood shifting from panic to talk of an AI bubble [2][12]
- Paul Tudor Jones compares the current market to 1999, suggesting substantial growth may still lie ahead, similar to the Nasdaq's doubling between 1999 and 2000 [3][4]
- The Federal Reserve's interest rate cuts are seen as a positive catalyst, with historical data indicating that markets tend to rise after such cuts [5][7]

Market Trends
- The current bull market, which began after the initial panic over tariff announcements, is still in its early stages; the average bull market lasts about four years [4]
- AI-related gains are broadening beyond hardware and infrastructure to software companies, exemplified by partnerships such as Figma and Shopify with OpenAI [5][6]

Investor Sentiment
- Despite large stock price gains, investor sentiment remains neutral on the CNN Fear/Greed indicator, and roughly $7 trillion sits in low-risk money market funds [9][10]
- Fear of missing out (FOMO) is expected to pull that sidelined cash back into the market, further fueling gains [10]

Valuation Considerations
- The S&P 500's P/E ratio is currently 23x, which, while high, is well below the 40x peak of 2000, suggesting investors may still be willing to pay a premium for innovative companies [11]

Investment Opportunities
- Four stocks have been identified as having significant upside potential for Q1 2026, selected from a broader pool of companies for their strong growth prospects [8][12]
What Surprised Us Most In 2025
Y Combinator· 2025-12-22 15:01
I think perhaps the thing that most surprised me is the extent to which I feel like the AI economy has stabilized. We have the model-layer companies, the application-layer companies, and the infrastructure-layer companies. It seems like everyone is going to make a lot of money, and there's a relative playbook now for how to build an AI-native company on top of the models. Many episodes ago, we talked about how it felt easier than ever to pivot and find a startup idea, because if you could just s ...
Tracing Claude Code to LangSmith
LangChain· 2025-12-19 21:05
Are you curious about what Claude Code is doing behind the scenes? Or do you want observability into the critical workflows you've set up with Claude Code? Hey, I'm Tanish from LangChain, and we built a Claude Code to LangSmith integration so that you can see each step that Claude takes, whether that's an LLM call or a tool call. It's pretty fascinating to see the entire trace, so I want to show you what this looks like. I have a project here. It's a very simple agent that I built with ...
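The demo walks through the integration in the LangSmith UI; as a rough illustration of what step-level tracing looks like in code, here is a minimal sketch using the LangSmith Python SDK's `traceable` decorator. The decorator and `run_type` values are the SDK's own; the function names are invented, and treating this as a stand-in for the trace shape of the Claude Code integration is an assumption.

```python
# Minimal sketch of per-step tracing with the LangSmith Python SDK.
# This is NOT the Claude Code integration from the video; it only
# illustrates the kind of step-level trace (tool calls nested inside
# an agent step) that shows up in LangSmith. Assumes LANGSMITH_API_KEY
# and LANGSMITH_TRACING=true are set in the environment.
from langsmith import traceable

@traceable(run_type="tool")
def search_docs(query: str) -> str:
    # Stand-in for a real tool call; each invocation appears as a
    # child run in the trace tree.
    return f"results for {query!r}"

@traceable(run_type="chain")
def agent_step(task: str) -> str:
    # One agent step: call a tool, then (in a real agent) an LLM.
    evidence = search_docs(task)
    return f"answer based on {evidence}"

if __name__ == "__main__":
    print(agent_step("what is Claude Code doing behind the scenes?"))
```

Run with a valid LangSmith key and this produces a two-level trace (a chain run with a nested tool run), roughly the shape of trace tree the video walks through for Claude Code sessions.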
Three Weeks to Departure: LeCun's Final Warning That Silicon Valley Has Fallen Into a Collective Delusion
36Kr· 2025-12-16 07:11
Core Viewpoint
- LeCun criticizes Silicon Valley's obsession with large language models (LLMs), asserting that this approach is a dead end and will not lead to artificial general intelligence (AGI) [1][3][26]

Group 1: Critique of Current AI Approaches
- LeCun argues that the current trend of stacking LLMs and relying on extensive synthetic data is misguided and ineffective for achieving true intelligence [1][3][26]
- He emphasizes that the real challenge in AI is not achieving human-like intelligence but understanding basic intelligence of the kind demonstrated by cats and young children [3][12]
- The focus on LLMs reflects a dangerous "herd mentality" in the industry, with major companies such as OpenAI, Google, and Meta all pursuing similar strategies [26][30]

Group 2: Introduction of World Models
- LeCun advocates a different approach, "world models", which make predictions in an abstract representation space rather than relying solely on pixel-level outputs [3][14]
- He believes world models can handle the high-dimensional, continuous, and noisy data that LLMs struggle with [12][14]
- World models are tied to planning: the system predicts the outcomes of candidate actions in order to optimize task completion (a schematic sketch follows this summary) [12][14]

Group 3: Future Directions and Company Formation
- LeCun plans to establish a new company, Advanced Machine Intelligence (AMI), focused on world models and committed to an open research tradition [4][5][30]
- AMI aims not only to conduct research but also to develop practical products around world models and planning [9][30]
- The company will be global, headquartered in Paris with offices elsewhere, including New York [30]

Group 4: Perspectives on AGI and AI Development Timeline
- LeCun dismisses the concept of AGI as meaningless, arguing that human intelligence is highly specialized and cannot be replicated in a single model [31][36]
- He predicts significant advances within 5-10 years, potentially reaching intelligence comparable to a dog's, while acknowledging that unforeseen obstacles could extend the timeline [31][33]

Group 5: Advice for Future AI Professionals
- LeCun advises against making computer science a primary focus, suggesting instead subjects with lasting relevance such as mathematics, engineering, and physics [45][46]
- He stresses learning how to learn and adapting to rapid technological change in the AI field [45][46]
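The planning idea in Group 2 (predict outcomes of candidate actions in an abstract representation space, pick the best sequence) can be made concrete with a schematic sketch. Everything here is illustrative: the names, the toy linear dynamics, and the random-shooting planner are assumptions for exposition, not LeCun's or AMI's actual design.

```python
import numpy as np

# Schematic model-predictive planning with a learned world model.
# encode(), predict(), and cost() stand in for learned components.

def encode(observation):
    # Map a raw observation into an abstract latent state.
    return np.asarray(observation, dtype=float)

def predict(state, action):
    # World model: predicted next latent state after taking `action`.
    return state + action  # toy linear dynamics for illustration

def cost(state, goal):
    # Task cost in latent space: distance to the goal representation.
    return float(np.sum((state - goal) ** 2))

def plan(observation, goal, horizon=5, n_candidates=256, rng=None):
    rng = rng or np.random.default_rng(0)
    state0 = encode(observation)
    best_seq, best_cost = None, float("inf")
    for _ in range(n_candidates):
        seq = rng.normal(size=(horizon, state0.shape[0]))
        state = state0
        for action in seq:
            state = predict(state, action)  # roll the model forward
        c = cost(state, encode(goal))
        if c < best_cost:
            best_seq, best_cost = seq, c
    return best_seq[0]  # execute the first action, then replan

first_action = plan(observation=[0.0, 0.0], goal=[1.0, -1.0])
print(first_action)
```

A real system would learn `encode` and `predict` from data and use a better optimizer than random shooting; the point is only the loop structure: encode, roll the model forward, score, act, replan.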
Insurers and AI, a systemic risk
Freakonometrics· 2025-11-25 05:00
Core Viewpoint
- Major insurers are retreating from providing coverage for risks associated with artificial intelligence, citing the potential for multibillion-dollar claims and the systemic risk posed by correlated losses across multiple incidents [1][2][12]

Group 1: Insurers' Response to AI Risks
- Insurers such as AIG, Great American, and WR Berkley are introducing explicit exclusions for AI-related risks, particularly those involving agents and language models [1]
- Potential AI-related losses could reach several hundred million dollars, with the primary concern being simultaneous, massive losses that cannot be mutualized [1][2]

Group 2: Systemic Risk and Interconnectedness
- The interconnected nature of AI systems creates a breeding ground for contagion, where a single error can propagate rapidly across a network and affect thousands of users simultaneously [5][10]
- Financial systems exhibit a "robust-yet-fragile" dynamic: they withstand many shocks but can collapse suddenly when a specific shock travels through interconnected channels [3][4]

Group 3: Challenges in Insurability
- Insurability relies on the law of large numbers, which requires events to be independent; cyber risk and generative AI instead create environments where losses are highly correlated and difficult to attribute (a toy calculation after this summary makes the point concrete) [6][8]
- Generative AI amplifies the structural fragility of cyber insurance: a single defect or vulnerability can produce widespread, identical losses across an entire sector [7][8]

Group 4: Legal and Regulatory Implications
- The question of "AI liability" remains largely unexplored, with significant contractual asymmetry: AI providers limit their own liability and transfer risk to users [19][20]
- The result is a regulatory gap, a contractual gap, and an insurance gap, amounting to a legal systemic risk of diffuse responsibility and concentrated dependency [23]
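To make the law-of-large-numbers point concrete: with n independent policies of loss standard deviation σ, the standard deviation of the average loss is σ/√n and shrinks as the pool grows; with pairwise correlation ρ > 0 it tends to √ρ·σ instead, no matter how large n gets. A toy simulation of that effect (a minimal sketch; the one-factor loss model and all numbers are illustrative assumptions):

```python
import numpy as np

# Toy illustration: correlation destroys diversification.
# n policies, each with loss std sigma; rho is the pairwise correlation
# (e.g. every insured runs the same AI model, so one defect hits all).
rng = np.random.default_rng(0)
n, sigma, rho, trials = 10_000, 1.0, 0.3, 2_000

# Correlated losses via a one-factor model:
# loss_i = sqrt(rho)*common + sqrt(1-rho)*idiosyncratic_i
common = rng.normal(size=(trials, 1))
idio = rng.normal(size=(trials, n))
losses = sigma * (np.sqrt(rho) * common + np.sqrt(1 - rho) * idio)

avg_loss = losses.mean(axis=1)
print(f"independent case: std of mean ~ {sigma / np.sqrt(n):.4f}")
print(f"correlated case:  std of mean ~ {avg_loss.std():.4f}")
# With rho=0.3 the std of the mean stays near sqrt(0.3)*sigma ~ 0.55
# however large n gets: the risk cannot be mutualized away.
```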
From Stateless Nightmares to Durable Agents — Samuel Colvin, Pydantic
AI Engineer· 2025-11-24 20:16
Pydantic AI Products & Features
- Pydantic AI supports Temporal and other durable execution frameworks, with ongoing efforts to integrate more workflow orchestration backends [1]
- Pydantic AI offers tools for building AI agents, including the ability to perform web searches and analyze data [11][41]
- Pydantic AI's temporal agent handles the IO needed to call an LLM, including tool calls, by turning them into activities (see the sketch after this summary) [16]
- Pydantic AI is developing a gateway for buying inference from various models, including observability features [61]

Temporal & Durable Execution
- Temporal is highlighted as a leading solution for durable execution, crucial for long-running workflows where preserving progress is essential [2]
- Temporal records every activity with its inputs and outputs, enabling a rerun from any point by plugging the recorded answers back in [15]
- Temporal enables workflows to resume without adding resume code to the agent code [29]
- Temporal's retry logic handles runtime errors and keeps the workflow running [22][25]

Deep Research & Agent Architecture
- Deep research is presented as analogous to a game of 20 questions, with web search or RAG as the intermediate steps [11]
- The company is shifting toward viewing agents as micro-tasks that compose into larger autonomous task-completion systems [40]
- A deep research agent can be composed of multiple specialized agents, such as a plan agent, a search agent, and an analysis agent [41]

Evaluation & Performance
- Pydantic AI evals are used to compare the performance of different models on factors like cost, speed, and accuracy [33]
- Gemini was initially found to be faster and cheaper, but was later seen to sometimes invent incorrect answers [33][35]
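As a rough sketch of the pattern described in the talk (wrapping a Pydantic AI agent so its LLM and tool calls become Temporal activities): the `TemporalAgent` import path follows Pydantic AI's durable-execution integration as I understand it, and the model string, tool body, and naming are illustrative assumptions rather than the speaker's actual code.

```python
# Hedged sketch of durable execution with Pydantic AI + Temporal.
# Import path and constructor usage are assumed from Pydantic AI's
# durable-execution integration; verify against current docs.
from pydantic_ai import Agent
from pydantic_ai.durable_exec.temporal import TemporalAgent

agent = Agent(
    'openai:gpt-4o',
    name='research-agent',  # durable agents need a stable name
    instructions='Answer research questions; use web_search when needed.',
)

@agent.tool_plain
def web_search(query: str) -> str:
    # Stand-in tool. Run as a Temporal activity, its inputs and outputs
    # are recorded, so a crashed workflow can replay past this call.
    return f'search results for {query!r}'

# Wrapping the agent turns its LLM and tool calls into Temporal
# activities; a worker can then resume a run from the last recorded step.
temporal_agent = TemporalAgent(agent)
```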
Ice Cold, Zen-Like Investing With Alex King
Seeking Alpha· 2025-10-26 20:00
AI and Technology Sector
- The AI demand cycle is still in its early stages, with strong growth in GPU shipments, server shipments, and data center builds expected to continue [6][7][8]
- Many large companies are adopting AI but are struggling with use cases and with understanding the true economics of implementation [8][9]
- The current excitement around AI may lead to a "trough of disillusionment", where valuations drop as reality catches up with expectations [12][14]
- Nvidia's valuation is considered reasonable given its growth and margins, but there are concerns that competition could erode its market share [15][26]

Semiconductor Industry
- The semiconductor sector has seen a significant run-up in prices, with the SOXX ETF moving from 148 to 290 over six months [56][58]
- The sector may become a source of funds as investors take profits and rotate capital into other sectors [57][64]
- Intel is positioned to benefit from government support and the reshoring of semiconductor manufacturing, but its fundamentals remain weak [65][69]

Tesla
- Tesla's stock is viewed positively due to potential synergies with xAI, despite challenges in its core automotive business [34][38]
- Market perception of Tesla is driven more by Elon Musk's leadership than by traditional automotive fundamentals [41][42]

Gold Market
- Gold prices are seen as having risen too quickly, driven by fear rather than fundamental economic indicators [43][45]
- Current demand for gold reflects global uncertainty, but there is skepticism about its sustainability at current price levels [48][50]

Quantum Computing
- The quantum computing sector has seen significant momentum, but the long-term viability of smaller companies in the space remains uncertain [30][32]
- Government investment may provide temporary support, but stock prices are viewed as overvalued relative to fundamentals [32][33]

Cryptocurrency
- The cryptocurrency market remains highly volatile; Bitcoin and Ether are seen as having potential upside, while lower-order coins are viewed with caution [74][84]
- ETFs are recommended as a safer alternative to direct cryptocurrency holdings [86]
AI Chips: A Big Bubble?
半导体行业观察· 2025-10-21 00:51
Core Viewpoint
- The article assesses the current state of the AI industry against the internet bubble of 1999-2000, highlighting the rapid rise in valuations and the potential risks around companies like CoreWeave [3][5]

Valuation and Market Trends
- As of September, the Nasdaq Composite carried a P/E ratio of 33, with major companies such as Amazon, Apple, Google, Microsoft, Meta, and TSMC ranging from 27 to 39 [6]
- Nvidia's P/E is notably high at 52, reflecting its leadership in the AI sector, while AMD's P/E has surged to 140 on the back of its deal with OpenAI [6][7]
- GenAI revenue is growing rapidly, with AI data center investment predicted to reach $5 trillion by 2030, funded primarily by large, profitable companies [6][7]

Adoption Rates and Consumer Behavior
- GenAI adoption is accelerating: ChatGPT reached 100 million users in just two months, far faster than platforms like TikTok and Facebook [6][11]
- A consumer AI market worth $12 billion has emerged within two and a half years, with 60% of U.S. adults having used AI in the past six months [11][12]

Enterprise Use Cases and Productivity
- GenAI is expected to be the largest market, with significant productivity applications, particularly in programming and financial analysis [13][14]
- Companies like Walmart and Salesforce are leveraging AI to grow without hiring additional staff [14][15]

Competitive Landscape and Future Outlook
- The cost of training frontier models is projected to reach billions of dollars, limiting participation to companies with substantial resources [16]
- Major players such as Anthropic, AWS, Google, and Microsoft are expected to dominate, while smaller companies may need to specialize in niche markets [30][31]
- Multiple winners may emerge in the GenAI space as differentiation and ecosystem bundling take hold [40]

Hardware and Infrastructure Challenges
- Demand for data center capacity is surging, with data center scale predicted to grow significantly by 2026 [32]
- There are concerns about whether power supply can keep up with AI data centers, with projections indicating that AI could consume a substantial portion of the U.S. electricity supply [38][39]
Will Tool-Integrated RL Be the Key for Agent Applications to Break Through Base-Model Capability Limits?
机器之心· 2025-09-21 01:30
Core Insights
- The article traces the evolution of AI agents, emphasizing the need to extend reasoning capabilities through Tool-Integrated Reasoning (TIR) and Reinforcement Learning (RL) to overcome the limits of current AI models [7][8][10]

Group 1: AI Agent Development
- The meaning of "Agent" has evolved, with consensus forming that stronger agents must interact with the external world and take actions, moving beyond reliance on pre-trained knowledge [8][9]
- AI systems are categorized into LLM, AI Assistant, and AI Agent, with the last gaining proactive execution capabilities [9][10]
- The shift from simple tool use to TIR is crucial for agents handling complex tasks that require multi-step reasoning and real-time interaction [10][12]

Group 2: Tool-Integrated Reasoning (TIR)
- TIR is identified as a significant research direction, allowing agents to understand goals, plan autonomously, and use tools effectively [10][12]
- The transition from supervised fine-tuning (SFT) to RL in TIR is driven by the need for agents to actively learn when and how to call external APIs [12][14]
- TIR extends LLM capabilities by integrating external tools, enabling tasks that were previously impossible, such as exact complex calculations (a schematic loop is sketched below) [12][13]

Group 3: Practical Implications of TIR
- TIR enables empirical support expansion, letting LLMs generate problem-solving trajectories that were previously unreachable [12][14]
- Feasible support expansion through TIR makes complex strategies practically executable within token limits, turning theoretical solutions into efficient ones [14][15]
- Integrating tool use into the reasoning process lets the agent optimize multi-step decision-making through feedback from tool outcomes [15]
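The TIR loop the article describes (generate, detect a tool call, execute it, feed the result back, continue) can be sketched schematically. Nothing here is a specific model API: `generate` is a placeholder for an LLM call, and the JSON tool-call convention is an assumption for illustration.

```python
# Schematic tool-integrated reasoning (TIR) loop: the model interleaves
# generation with tool calls, feeding each tool result back into the
# context before continuing.
import json

def calculator(expression: str) -> str:
    # Example external tool: exact arithmetic an LLM might get wrong.
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def generate(context: str) -> str:
    # Placeholder for an LLM call. A real model would decide here
    # whether to emit a tool call or a final answer.
    if "result:" not in context:
        return json.dumps({"tool": "calculator", "args": "12345 * 6789"})
    return "final answer: 83810205"

def tir_loop(task: str, max_steps: int = 8) -> str:
    context = task
    for _ in range(max_steps):
        out = generate(context)
        try:
            call = json.loads(out)  # model chose to call a tool
        except json.JSONDecodeError:
            return out              # model produced a final answer
        result = TOOLS[call["tool"]](call["args"])
        context += f"\n[tool {call['tool']} result: {result}]"
    return "max steps exceeded"

print(tir_loop("Compute 12345 * 6789."))
```

In the RL setting the article advocates, whole trajectories of this loop would be sampled and scored with a reward, so the policy learns when to call a tool and what to do with the result, not just how to format the call.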