AI Inference Server SR650i
Gathering Jensen Huang, Lisa Su, and Intel in one go: it's still about AI, and it's still Lenovo
量子位· 2026-01-09 04:09
Core Viewpoint
- The article discusses the emerging trend of AI hardware and the concept of the "super entrance" in the tech industry, emphasizing that all devices will evolve into AI devices, marking a significant shift in technology at CES 2026 [1][6].

Group 1: AI Hardware Evolution
- CES 2026 showcased a consensus among manufacturers that all devices, including traditional smartphones and PCs, will adopt more intelligent forms [1].
- The emergence of new intelligent hardware species is increasingly diverse, indicating a shift in how AI is integrated into everyday devices [3].

Group 2: Super Entrance Concept
- The "super entrance" concept refers to platforms that aggregate user traffic and connect various digital scenarios, similar to the role of super apps in the mobile internet era [7].
- The competition for the "super entrance" is shifting from foundational technology to application layers and broader ecosystems, as seen in the AI landscape [9].

Group 3: Hybrid AI as the Ultimate Path
- Lenovo's CEO proposed that integrating personal, enterprise, and public intelligence into a hybrid AI model is essential for creating personalized and diverse AI solutions [14][17].
- The hybrid AI model emphasizes the deep integration of cloud-based large models with locally customized small models to better meet user needs (a minimal routing sketch follows this summary) [18].

Group 4: Lenovo's Innovations
- Lenovo introduced the world's first personal AI super intelligent agent, Lenovo Qira, which connects various devices and enhances task execution through cross-platform capabilities [20].
- The Qira agent can remember user preferences and interact in a personalized manner while ensuring privacy protection [22].

Group 5: Enterprise AI Solutions
- Lenovo launched a series of AI inference servers aimed at improving efficiency and reducing operational costs for enterprises, adapting to diverse AI deployment needs [24].
- A collaboration with NVIDIA to establish an AI cloud super factory aims to expedite AI deployment for cloud service providers [25].

Group 6: Market Position and Future Outlook
- Lenovo's AI-related business accounted for 30% of its total revenue and grew 13% year on year, indicating a strong market position in both consumer and enterprise segments [34][35].
- The company aims to quadruple its business cooperation scale with NVIDIA over the next three to four years, highlighting its commitment to expanding its AI ecosystem [38].
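The hybrid AI model described in Group 3 pairs a locally customized small model with a cloud-based large model. Below is a minimal sketch, in Python, of one way such routing could work; the function names, the confidence heuristic, and the cloud endpoint URL are hypothetical illustrations and are not taken from Lenovo's announcement.

```python
# Hypothetical sketch of hybrid AI routing: answer locally with a small model
# when it is confident, otherwise escalate to a cloud-hosted large model.
# None of these names or endpoints come from Lenovo; they are placeholders.
import requests  # third-party HTTP client (pip install requests)

CLOUD_URL = "https://example.com/v1/generate"  # placeholder cloud endpoint


def query_local_model(prompt: str) -> tuple[str, float]:
    """Stand-in for an on-device small model; returns an answer and a confidence score."""
    # A real deployment would invoke a local inference runtime here.
    return f"local answer for: {prompt}", 0.9


def query_cloud_model(prompt: str) -> str:
    """Stand-in for a cloud-hosted large model reached over HTTP."""
    resp = requests.post(CLOUD_URL, json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    return resp.json()["text"]


def hybrid_answer(prompt: str, min_confidence: float = 0.7) -> str:
    """Prefer the local model; fall back to the cloud model when confidence is low."""
    answer, confidence = query_local_model(prompt)
    if confidence >= min_confidence:
        return answer  # personal data stays on the device
    return query_cloud_model(prompt)


if __name__ == "__main__":
    print(hybrid_answer("Summarize my meeting notes from yesterday."))
```

The point of the sketch is the routing decision, not the models themselves: requests that a local model can handle never leave the device, which is the privacy and personalization argument the article attributes to the hybrid approach.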
Yang Yuanqing: The New Wave of Computing Power Will Stem from the Explosion of AI Inference | Live from CES
Xin Lang Cai Jing· 2026-01-07 02:35
Core Viewpoint
- The new wave of computing power will be driven by the explosion of AI inference, as stated by Lenovo Chairman and CEO Yang Yuanqing during his CES keynote speech [6].

Group 1: Evolution of Computing Power Infrastructure
- The global computing power infrastructure market has undergone four waves of innovation: the first focused on traditional computing for enterprise informatization and digital transformation; the second was driven by cloud services and applications, leading to the rapid rise of cloud computing; the third was characterized by large-scale computing clusters for training large language models, primarily in the cloud [3][8].
- The fourth and current wave is the shift from "training" to "inference," with broad consensus in the global AI industry that local deployment of AI inference is becoming a true competitive advantage for enterprises [3][8].

Group 2: Local AI Inference Deployment
- Local deployment of AI inference allows faster response times because inference occurs closer to where data is generated, which calls for a hybrid computing infrastructure composed of public cloud, private cloud, local data centers, and edge computing (a latency-routing sketch follows this summary) [3][8].
- AMD Chair and CEO Lisa Su agrees with this perspective, emphasizing the need for global enterprises to bring AI closer to their data while maintaining flexibility and the ability to evolve over time [3][8].

Group 3: New Product Launches
- Lenovo launched a comprehensive suite of inference-optimized server products, including the AI inference servers SR675i and SR650i and the edge computing server SE455i, aimed at enhancing inference efficiency, reducing operational costs, and strengthening data security to meet diverse, real-time AI deployment needs [4][9].
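Group 2 argues that inference close to the data source responds faster, across a mix of public cloud, private cloud, local data centers, and edge servers. The sketch below illustrates that idea only in the abstract: it measures round-trip latency to two hypothetical endpoints and routes requests to the faster one. The URLs and helper names are assumptions for illustration, not real Lenovo or cloud-provider services.

```python
# Hypothetical sketch of latency-aware routing between an on-prem edge endpoint
# and a remote cloud endpoint. The URLs are placeholders, not real services.
import time
import requests  # third-party HTTP client (pip install requests)

ENDPOINTS = {
    "edge": "http://edge.local:8000/health",  # hypothetical on-prem server
    "cloud": "https://example.com/health",    # hypothetical cloud region
}


def measure_latency(url: str, attempts: int = 3) -> float:
    """Average round-trip time, in seconds, of a lightweight health check."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        requests.get(url, timeout=5)
        samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples)


def pick_inference_target() -> str:
    """Return the name of the endpoint with the lowest measured latency."""
    latencies = {name: measure_latency(url) for name, url in ENDPOINTS.items()}
    return min(latencies, key=latencies.get)
```

In practice the choice would also weigh data residency and cost, which is why the article describes the target as a hybrid of cloud, on-prem, and edge tiers rather than any single one.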