Core Insights
- UBS highlights the recent wave of new large language models (LLMs) from Google, Anthropic, and DeepSeek, which is intensifying competition across the industry [1]
- The report stresses the continued relevance of the "scaling laws" governing model performance, arguing that computational power will remain a decisive factor in the AI competitive landscape [1]

Model Performance
- The latest generation of models has delivered significant breakthroughs: Gemini 3 Deep Think and Claude Opus 4.5 scored 45% and 38%, respectively, on multi-step reasoning tasks, versus roughly 10% to 20% for previous models [2]
- This performance is consistent with the AI pre-training scaling laws, under which increased computational investment yields non-linear improvements in model capabilities [2] (see the illustrative scaling-law sketch below)

Chip Technology Competition
- Google's Gemini 3 Pro was trained entirely on self-developed TPU chips, reviving the debate over GPUs versus AI-specific ASIC chips [2]
- ASIC chips offer higher efficiency on specific AI tasks, while GPUs retain roughly 90% of the data-center chip market thanks to their flexible architecture and extensive software ecosystem [2]
- Collaborations such as OpenAI with Broadcom and Anthropic with Google are expected to sharpen the focus on ASIC chips, with both chip types anticipated to coexist going forward [2]

Market Trends
- Next-generation chips such as NVIDIA's Blackwell and Rubin are expected to sustain the race for computational expansion, prompting UBS to revise its AI capital expenditure forecasts upward [3]
- Advances from Google, Anthropic, and DeepSeek are raising competitive pressure on companies such as OpenAI, pushing the AI industry toward a multi-model, multi-vendor landscape, a trend expected to persist at least through 2026 [3]
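As a rough illustration of the "non-linear improvements" referenced under Model Performance, the widely cited power-law form of the pre-training scaling laws (after Kaplan et al., 2020) relates pre-training loss to training compute. This is background from the public literature, not a formula from the UBS report, and the exponent value shown is approximate and illustrative only:

```latex
% Simplified power-law scaling of pre-training loss L with training compute C
% (after Kaplan et al., 2020). C_0 and the exponent \alpha are empirical fits;
% the \alpha ~ 0.05 value is illustrative and not taken from the UBS report.
% Because \alpha is small, loss falls predictably but slowly with compute:
% each fixed reduction in loss requires a large multiplicative increase in C.
L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}, \qquad \alpha \approx 0.05
```

Read this way, the report's argument is that capability gains remain purchasable, but only at multiplicatively higher compute cost, which is consistent with its upward revision of AI capital expenditure forecasts.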
Models can keep one-upping each other, but the compute cash must keep burning! UBS: AI giants are rolling out new models in quick succession, and compute investment will continue to ramp up