Taking Aim at OpenAI: Google Fires Off a Volley of Releases in a Single Month

Core Insights
- Google has announced Gemini 3 Flash, the fastest and most cost-effective model in the Gemini 3 series, which outperforms the previous flagship models in both speed and quality while being cheaper to run [3][6][11]

Performance Metrics
- Gemini 3 Flash scored 78% on the SWE-bench Verified benchmark, surpassing Gemini 3 Pro and Claude Sonnet 4.5, and 81.2% on MMMU-Pro, beating GPT-5.2 by several percentage points [6][7]
- The model also uses roughly 30% fewer tokens than the previous generation while improving accuracy on everyday tasks [9][10]

Cost Efficiency
- Gemini 3 Flash costs $0.50 per million input tokens and $3 per million output tokens, making it significantly cheaper than competitors such as Claude Sonnet 4.5 and GPT-5.2, which charge $15 and $14 per million tokens respectively [8][9]
- Developers report that switching to Gemini 3 Flash can cut operating costs by 50%-70% compared with models such as GPT-4o or Gemini 3 Pro [10]; a rough per-request cost sketch based on these rates appears at the end of this summary

Market Position
- Following the release of Gemini 3 Pro and Gemini 3 Deep Think, Google has gained significant market recognition, processing over 1 trillion tokens daily through its internal API since launch [11]
- Gemini 3 Flash is expected to further cement Google's position among the leading large-model providers by giving developers a model that balances speed and intelligence without compromise [11]
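To make the pricing arithmetic concrete, the sketch below estimates the cost of a hypothetical daily workload from the per-million-token rates quoted above for Gemini 3 Flash ($0.50 input, $3 output). The request volume and token counts are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope cost estimate using the Gemini 3 Flash rates quoted
# in the article ($0.50 per 1M input tokens, $3 per 1M output tokens).
# The workload numbers below are hypothetical and only illustrate the arithmetic.

FLASH_INPUT_USD_PER_M = 0.50   # USD per 1M input tokens (from the article)
FLASH_OUTPUT_USD_PER_M = 3.00  # USD per 1M output tokens (from the article)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at the Flash rates above."""
    return (input_tokens / 1_000_000) * FLASH_INPUT_USD_PER_M \
         + (output_tokens / 1_000_000) * FLASH_OUTPUT_USD_PER_M

if __name__ == "__main__":
    # Hypothetical workload: 10,000 requests/day, 2,000 input + 500 output tokens each.
    per_request = request_cost(2_000, 500)          # ~$0.0025 per request
    print(f"Per request: ${per_request:.4f}")
    print(f"Per day (10k requests): ${per_request * 10_000:.2f}")  # ~$25/day
```

How such per-request figures translate into the 50%-70% savings reported by developers depends on each provider's input/output split and actual token mix, so the comparison should be rerun with the exact rates of whichever models are being replaced.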