Newsflash | OpenAI's new model is priced at roughly 1,000x DeepSeek; the o1-pro API is its most expensive model yet

Z Potentials·2025-03-20 02:56

Core Insights

- OpenAI has launched o1-pro, a more powerful version of its o1 reasoning model, which consumes more computational resources to deliver consistently better responses [1][2]
- o1-pro is priced far above its predecessors: $150 per million input tokens and $600 per million output tokens, twice the GPT-4.5 input price and ten times the price of standard o1 [1]
- Initial impressions of o1-pro in ChatGPT have been mixed, with users reporting that it fails on certain problems, suggesting the model may not yet meet expectations [2]

Pricing and Comparison

- OpenAI's o1-pro charges $150 per million input tokens and $600 per million output tokens; by comparison, DeepSeek-V3 charges $0.07 (cache hit) or $0.27 (cache miss) per million input tokens and $1.10 per million output tokens [1]
- DeepSeek-R1's input tokens are priced at $0.14 (cache hit) or $0.55 (cache miss) per million, with output tokens at $2.19 per million, underscoring the competitive pricing landscape [2]

Performance Insights

- Despite the extra compute, o1-pro shows only slight improvements over the standard o1 model in coding and mathematical problem-solving, though it is noted to be more reliable in its responses [2]
- User feedback indicates that o1-pro struggles with certain tasks, such as solving Sudoku puzzles, raising questions about its practical effectiveness [2]
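To make the price gap concrete, here is a minimal sketch that turns the per-million-token prices quoted above into a per-request cost. The `request_cost` helper and the model keys are illustrative names, not any official SDK; the cache-miss input prices are used for the DeepSeek models.

```python
# Per-million-token prices in USD, as quoted in the article.
# DeepSeek entries use the cache-miss input tier.
PRICES = {
    "o1-pro":      {"input": 150.00, "output": 600.00},
    "deepseek-v3": {"input": 0.27,   "output": 1.10},
    "deepseek-r1": {"input": 0.55,   "output": 2.19},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one API call, given token counts for that call."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10k-token prompt that produces a 2k-token answer.
o1 = request_cost("o1-pro", 10_000, 2_000)
v3 = request_cost("deepseek-v3", 10_000, 2_000)
print(f"o1-pro: ${o1:.2f}  DeepSeek-V3: ${v3:.4f}  ratio: {o1 / v3:.0f}x")
```

For this request mix the gap works out to roughly 550x against DeepSeek-V3's cache-miss rate; measured against the $0.07 cache-hit input tier the input-side gap alone exceeds 2,000x, which is where headline figures on the order of 1,000x come from.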