Core Viewpoint
- The release of OpenAI's GPT-5 has not met expectations, leading to disappointment and raising questions about the current limits of generative AI technology, despite ongoing enthusiasm in capital markets for practical applications of AI [1][2][3].

Group 1: Performance and Expectations
- Users have reported basic errors in GPT-5, such as incorrect labeling of the U.S. map, and expressed dissatisfaction with its performance compared to previous models [2][3].
- CEO Sam Altman acknowledged the release was "bumpy," attributing issues to a malfunctioning "automatic switcher" that caused the system to call a weaker model [3][4].
- The optimism surrounding AGI has not materialized with GPT-5, prompting a reassessment of its capabilities and of the competitive landscape, as rivals like Google and Anthropic have narrowed the gap with OpenAI [4][6].

Group 2: Scaling Laws and Limitations
- The core logic supporting large language models, known as "scaling laws," is approaching its limits, with data exhaustion and physical and economic constraints on computational power posing significant challenges [6][8].
- The training of GPT-5 reportedly utilized hundreds of thousands of next-generation Nvidia processors, highlighting the immense energy consumption required for such models [6].

Group 3: Market Dynamics and Investment Trends
- Despite concerns about technological stagnation, investment in AI startups and infrastructure remains robust, with AI accounting for 33% of global venture capital this year [7][10].
- The focus of the AI race is shifting from achieving AGI to practical productization, with companies like OpenAI deploying engineers to help clients integrate AI models [8][9].
- Investors increasingly value the strong growth of products like ChatGPT, which has generated $12 billion in annual recurring revenue for OpenAI, over the distant promise of AGI [10][11].
GPT-5 "disappoints": has AI "hit a wall"?
华尔街见闻 (Wall Street CN) · 2025-08-18 10:44