Core Viewpoint
- The release of GPT-5 has led to user dissatisfaction, particularly over OpenAI's removal of the model selector in ChatGPT, which has sparked online petitions demanding the return of the GPT-4o model [1][3].

User Reactions
- Users have voiced their frustration on platforms like Reddit; one user said OpenAI's actions led him to cancel his subscription, arguing that removing model options stripped users of control [1].
- Another user revealed that emotional content sent to GPT-4o is rerouted to a hidden model called GPT-5-Chat-Safety, which has not been disclosed to users [3][4].

Model Functionality and Concerns
- The GPT-5-Chat-Safety model is activated when messages are deemed "risky," so messages with emotional context are silently diverted away from the model the user selected, raising concerns about transparency and user rights [4][5].
- Users have criticized the performance of GPT-5-Chat-Safety, describing it as inferior to GPT-5 and noting that it alters the nature of conversations, making them less personal [4][11].

Ethical Implications
- Rerouting user messages to a model designed for crisis response without the user's knowledge has been labeled a form of fraud and may violate consumer rights in some jurisdictions [5][11].
- The situation has ignited discussion about the need for ethical audits of OpenAI's practices, with calls for greater transparency and accountability [16][17][27].

Company Response
- As of the latest updates, OpenAI has not commented directly on the situation, although a representative indicated that users would be told which model is in use if they ask directly (see the sketch at the end of this note) [23][24].
OpenAI accused of fraud: user inputs may be secretly routed to the new model GPT-5-Chat-Safety
36Kr·2025-09-28 08:05
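For readers who want to check which model a request is actually attributed to, the following is a minimal sketch, assuming access to OpenAI's public Chat Completions API through the official Python SDK. This is an illustration only: the reports above concern the ChatGPT web app, whose internal routing to GPT-5-Chat-Safety would not necessarily be visible through this API, and the comparison shown here simply contrasts the model you requested with the model identifier the service reports back.

```python
# Hedged sketch: compare the requested model with the model the API attributes
# the response to. Assumes the official openai Python SDK (>=1.0) and an
# OPENAI_API_KEY set in the environment.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REQUESTED_MODEL = "gpt-4o"

response = client.chat.completions.create(
    model=REQUESTED_MODEL,
    messages=[{"role": "user", "content": "I've been feeling really down lately."}],
)

# The response carries the model identifier the service says handled the request,
# e.g. a dated snapshot such as "gpt-4o-2024-08-06".
served_model = response.model
print(f"requested: {REQUESTED_MODEL}, served: {served_model}")

if not served_model.startswith(REQUESTED_MODEL):
    print("Warning: the response was attributed to a different model than requested.")
```

Note that this only surfaces what the service chooses to report in the response metadata; any routing that happens behind a single reported model name would still be invisible to the caller.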