QuestMobile 2025 H2 Report on AI Application Interaction Innovation and Ecosystem Implementation: Head-Tier Players Shift Rapidly, Vertical-Track Newcomers Emerge, Three-Layer Penetration Enables Group-Wide Reuse
QuestMobile·2025-12-23 02:02

Core Insights
- The article discusses the latest developments in the AI application industry, highlighting significant changes in active user rankings and investment trends in the sector [4][10].

Investment Trends
- From July to November 2025, the AIGC industry completed 186 financing events totaling 33.67 billion yuan, a 20.8% increase over the first half of the year [8][10].
- Nearly 50% of the investment events during this period targeted downstream application layers [8].

AI Model Development
- As of November 2025, large models from eight major manufacturers were distributed as follows: single-modal (61.4%), multi-modal (36.7%), and full-modal (1.9%) [8][19].
- Multi-modal interaction has become mainstream, with multi-modal input paired with single-modal output accounting for 73.3% [8][23].

Application Landscape
- Over 200 AI applications were launched from July to November 2025, with plugins making up 81.5% of new applications [5][34].
- Key application areas include AI image processing (24.9%), AI professional consulting (18.5%), and AI efficiency tools (6.8%) [5][36].

Competitive Dynamics
- Major internet companies are leveraging multi-modal interaction to enhance user engagement and retention [41].
- Tencent, Baidu, and Alibaba lead the industry by embedding AI applications into their ecosystems, maximizing user engagement and data utilization [9][55].

User Engagement
- Ant Financial's newly launched "Afu" and "Lingguang" apps achieved significant user engagement, with weekly active users reaching 10.25 million and 2.95 million, respectively [46][49].
- The "Lingguang" app has seen a sevenfold increase in daily active users since launch [49].

Technological Innovations
- GUI intelligent agents are becoming a mainstream direction for mobile manufacturers, addressing long-tail operational pain points and enhancing user experience [60][67].
- Full-modal models emphasize a native unified architecture, integrating input, alignment, reasoning, and output processes for a more natural user interaction experience [27][29].