Is Your AI Getting Dumber? It Has Learned to Size You Up and Ration Its Effort
创业邦 (Cyzone) · 2025-09-12 03:14

Core Viewpoint
The article discusses the perceived decline in the performance of AI models, particularly OpenAI's ChatGPT, highlighting a trend where AI models are designed to conserve resources by reducing their computational effort when possible [6][13][18].

Group 1: AI Model Performance
- OpenAI's ChatGPT was found to struggle with basic arithmetic, raising concerns about its current capabilities compared with earlier versions [6][7].
- The introduction of models like LongCat and DeepSeek signals an industry shift toward efficiency, with these models employing mechanisms to optimize token usage and processing [10][15][24].

Group 2: Cost Efficiency and Token Management
- AI companies are implementing strategies to reduce token consumption; OpenAI's GPT-5 reportedly saves 50%-80% in output tokens, which translates into significant cost savings for large organizations [13][18].
- The concept of a "perceptual router" has been introduced, allowing models to determine when to engage in complex processing versus simpler handling, thereby improving efficiency [22][24].

Group 3: User Experience and Model Limitations
- The new routing mechanisms have led to instances where models fail to engage deeply with user prompts, resulting in less nuanced responses [30][34].
- Users have expressed frustration over the perceived loss of control and depth in interactions with AI models, particularly under the one-size-fits-all approach [29][30].
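The "perceptual router" idea above can be sketched as a toy heuristic: classify each prompt as needing either a cheap fast path or an expensive deep-reasoning path before spending tokens on it. This is a minimal illustration under assumed rules; the keyword list, the `route` function, and the length threshold are all hypothetical and do not reflect any vendor's actual routing logic.

```python
# Toy "perceptual router": pick a cheap or expensive processing path per
# prompt to save output tokens. All names and heuristics here are
# hypothetical illustrations, not OpenAI's or any vendor's mechanism.

REASONING_HINTS = ("prove", "step by step", "derive", "explain why", "debug")

def route(prompt: str, word_budget: int = 64) -> str:
    """Return 'deep' for prompts that look like they need heavy reasoning,
    'fast' otherwise, so trivial queries consume fewer tokens."""
    lowered = prompt.lower()
    needs_reasoning = any(hint in lowered for hint in REASONING_HINTS)
    is_long = len(prompt.split()) > word_budget
    return "deep" if (needs_reasoning or is_long) else "fast"

print(route("What is 2 + 2?"))                     # trivial query -> fast path
print(route("Prove that sqrt(2) is irrational."))  # reasoning task -> deep path
```

The failure mode the article describes in Group 3 is visible even in this sketch: any prompt the classifier misjudges as "fast" gets a shallow answer, and the user has no knob to force the deep path.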