Core Viewpoint
- The article reviews recent bugs in large AI models, focusing on DeepSeek and Gemini, and their impact on automated coding and testing workflows.

Group 1: DeepSeek Issues
- DeepSeek has been reported to insert a stray character, "极", at random positions in generated code, corrupting output and potentially causing downstream system failures (a minimal output-screening sketch follows this summary) [2][3][10].
- The bug is not confined to third-party platforms; it also appears in the official full-precision version, pointing to a problem within the model itself [2][10].
- Earlier DeepSeek updates also shipped with bugs, including mixed-language output in writing tasks and overfitting in coding tasks [2].

Group 2: Gemini Issues
- Gemini has faced its own failures, including a self-referential loop that produced nonsensical output, drawing both amusement and frustration from users [5][8].
- The Gemini failure has been characterized as a cyclical bug arising from interactions among the safety, alignment, and decoding layers [8].

Group 3: General Stability Concerns
- Stability has been a persistent problem for large models such as DeepSeek and Gemini, with user reports of symptoms like loss of conversation history [12].
- Bugs can be triggered by seemingly minor maintenance, such as a system prompt change or a tokenizer update, which can silently break previously stable behavior (see the round-trip test sketch below) [18].
- The growing complexity of AI systems, especially multi-agent setups with long toolchains, adds fragility, and many failures originate in the surrounding infrastructure rather than in the models themselves [20].

Group 4: Importance of Stability
- The DeepSeek and Gemini incidents underscore the need for engineering stability: predictable behavior and controlled error handling are prerequisites for dependable AI systems (see the retry-with-validation sketch below) [21].
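On the stray-character bug in Group 1: before accepting model-generated code, a pipeline can screen output for characters outside the expected set. The following is a minimal sketch; the ASCII-only policy, function name, and sample input are illustrative assumptions, not anything from the article, and a project that legitimately generates non-English comments would need a looser policy.

```python
import re

# Characters we expect in generated source code: printable ASCII plus
# common whitespace. Anything outside this set (e.g. a stray "极") is
# flagged before the code is committed or executed.
ALLOWED = re.compile(r"^[\x09\x0a\x0d\x20-\x7e]*$")

def screen_generated_code(code: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs containing unexpected characters."""
    return [
        (i, line)
        for i, line in enumerate(code.splitlines(), start=1)
        if not ALLOWED.match(line)
    ]

if __name__ == "__main__":
    sample = "def add(a, b):\n    return a + b极  # stray token\n"
    for lineno, line in screen_generated_code(sample):
        print(f"line {lineno}: unexpected character(s): {line!r}")
```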
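On the tokenizer-update risk in Group 3: a round-trip regression test can catch a tokenizer change that silently alters text. This sketch assumes a Hugging Face-style tokenizer; the model name and sample corpus are placeholders, and exact round-trip equality is itself an assumption (some tokenizers normalize text, in which case the assertion should compare against the tokenizer's normalized form).

```python
from transformers import AutoTokenizer

# Representative inputs the pipeline depends on; extend with real samples.
SAMPLES = [
    "def add(a, b):\n    return a + b",
    "SELECT * FROM users WHERE id = 42;",
    "混合中英文 mixed-language text",  # non-ASCII must survive the round trip
]

def test_round_trip(model_name: str = "deepseek-ai/DeepSeek-V3") -> None:
    tok = AutoTokenizer.from_pretrained(model_name)
    for text in SAMPLES:
        ids = tok.encode(text, add_special_tokens=False)
        decoded = tok.decode(ids)
        assert decoded == text, f"round trip changed text: {text!r} -> {decoded!r}"

if __name__ == "__main__":
    test_round_trip()
    print("tokenizer round trip OK")
```

Running such a test before and after any tokenizer or prompt-template update gives a cheap signal that previously stable behavior has not been disturbed.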
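On the error-management point in Group 4: one common way to make failures predictable is to bound retries and validate every response before it flows downstream. This is a minimal sketch under those assumptions; generate() and validate() are hypothetical stand-ins for an actual model client and output check.

```python
import time
from typing import Callable

class GenerationError(RuntimeError):
    """Raised when no valid output is produced within the retry budget."""

def generate_with_validation(
    generate: Callable[[str], str],   # hypothetical model client
    validate: Callable[[str], bool],  # hypothetical output check
    prompt: str,
    max_attempts: int = 3,
    backoff_seconds: float = 1.0,
) -> str:
    last = ""
    for attempt in range(1, max_attempts + 1):
        last = generate(prompt)
        if validate(last):
            return last
        time.sleep(backoff_seconds * attempt)  # linear backoff between retries
    # Fail loudly instead of silently passing corrupted output downstream.
    raise GenerationError(f"no valid output after {max_attempts} attempts: {last!r}")
```

A validator here could be as simple as the screen_generated_code check above combined with a syntax parse of the returned code.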
DeepSeek V3.1 Suddenly Hits a Bizarre Bug: The Character "极" Floods the Screen, Leaving Developers Baffled