Gemini 3 Drives Significant Business Growth; Requests for Google's AI Models Double in Five Months
Hua Er Jie Jian Wen· 2026-01-20 00:34
People familiar with the matter said that Gemini 3, released last November, has sparked a new wave of usage and won widespread acclaim. As model quality has improved, sales of Google's Gemini AI models have grown explosively over the past year.

On January 19 local time, The Information, citing internal Google data, reported that the volume of Gemini API (application programming interface) call requests made through Google Cloud has surged, from roughly 35 billion calls last March, when Gemini 2.5 was released, to roughly 85 billion in August, more than doubling.

The growth shows up not only in volume but also in quality and margins. The early Gemini 1.0 and 1.5 models reportedly had negative margins because of steep discounts; with the launch of Gemini 2.5 and subsequent versions, improved model quality enabled Google to shift from a pure "price war" to a "quality war," achieving positive marginal profit.

A market test amid massive capital expenditure

Although the business data look good, the market remains focused on the high cost of these investments relative to their returns. Investors are closely watching the upcoming Q4 earnings report for signs that these massive investments are generating ...
Alibaba's Tongyi Qianwen Makes Another Big Move
21世纪经济报道· 2025-08-20 01:45
Core Viewpoint
- The article discusses the rapid advancements in multimodal AI models, particularly focusing on Alibaba's Qwen series and the competitive landscape among various domestic companies in China, highlighting the shift from single-language models to multimodal integration as a pathway to achieving Artificial General Intelligence (AGI) [1][3][7].

Group 1: Multimodal AI Developments
- Alibaba's Qwen-Image-Edit, based on the 20B parameter Qwen-Image model, enhances semantic and visual editing capabilities, supporting bilingual text modification and style transfer [1][4].
- The global multimodal AI market is projected to reach $2.4 billion by 2025 and $98.9 billion by the end of 2037, indicating significant growth potential in this sector [1][3].
- Major companies, including Alibaba, are intensifying their focus on multimodal capabilities, with Alibaba's Qwen2.5 series demonstrating superior visual understanding compared to competitors like GPT-4o and Claude3.5 [3][5].

Group 2: Competitive Landscape
- Other domestic firms, such as Step and SenseTime, are also launching new multimodal models, with Step's latest model supporting multimodal reasoning and complex inference capabilities [5][6].
- The rapid release of various multimodal models by companies like Kunlun Wanwei and Zhiyuan reflects a strategic push to capture developer interest and establish influence in the multimodal domain [5][6].
- The competition in the multimodal space is still in its early stages, providing opportunities for companies to innovate and differentiate their offerings [6][9].

Group 3: Challenges and Future Directions
- Despite advancements, the multimodal field faces significant challenges, including the complexity of visual data representation and the need for effective cross-modal mapping [7][8].
- Current multimodal models primarily rely on logical reasoning, lacking strong spatial perception abilities, which poses a barrier to achieving true AGI [9].
- The industry is expected to explore how to convert multimodal capabilities into practical productivity and social value as technology matures [9].
Interview with Zhang Xiangyu: Multimodal Reasoning and Autonomous Learning Are the Next Two "GPT-4" Moments
海外独角兽· 2025-06-08 04:51
This episode features a conversation between Shixiang CEO Li Guangmi and Zhang Xiangyu, chief scientist of the large-model company StepFun (阶跃星辰).

Zhang Xiangyu focuses on the multimodal field. He proposed the DreamLLM multimodal large-model framework, one of the industry's earliest multimodal architectures to unify image-text generation and understanding; building on this framework, StepFun released Step-1V, China's first natively multimodal large model with over a hundred billion parameters. His academic influence is also remarkable: his papers have been cited more than 370,000 times in total.

The industry has long anticipated a multimodal model that unifies understanding and generation, but to this day such a model has not appeared. How can the multimodal field reach its GPT-4 moment? In this conversation, Xiangyu draws on his research and hands-on experience in multimodal AI to share, from a purely technical perspective, fresh thinking on the field's key problems. In his view, although progress in language models has been extremely fast, the difficulty of multimodal generation and understanding has been underestimated:

• In the next 2-3 years, the multimodal field will see two GPT-4 moments: multimodal reasoning and autonomous learning;
• The technical essence of the o1 paradigm lies in eliciting a Meta CoT chain of thought: allowing the model to backtrack, retry, and choose different branches at key nodes, turning the reasoning process from a single line into a graph structure.

Table of Contents

01 Research Main Line: Returning to Large Models

• Unified multimodal generation and understanding is hard to achieve because language has weak control over vision, and image-text alignment is imprecise, ...
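The Meta CoT idea described above, letting reasoning backtrack, retry, and branch at key nodes instead of committing to a single line, can be illustrated with a toy search. This is only an illustrative sketch, not StepFun's or OpenAI's actual method: the problem (reaching a target number with `*2` and `+3` steps) and both function names are invented for this example.

```python
# Toy contrast: single-line chain of thought vs. graph-structured
# "Meta CoT" search that can backtrack and try alternative branches.

def linear_chain(start, target, steps):
    # Single-line reasoning: greedily apply the first step every time;
    # once a choice is made there is no way to revisit it.
    value, trace = start, [start]
    for _ in range(10):
        if value == target:
            return trace
        value = steps[0](value)  # always the same branch -- may dead-end
        trace.append(value)
    return None  # failed: no backtracking possible

def meta_cot_search(start, target, steps, depth=10):
    # Graph-structured reasoning: at every node the search may branch,
    # and a failed branch is abandoned (backtrack, then retry another).
    def dfs(value, trace, depth):
        if value == target:
            return trace
        if depth == 0 or value > target:
            return None  # prune this branch and backtrack
        for op in steps:  # try each alternative at this key node
            result = dfs(op(value), trace + [op(value)], depth - 1)
            if result is not None:
                return result
        return None
    return dfs(start, [start], depth)

steps = [lambda v: v * 2, lambda v: v + 3]
print(linear_chain(2, 13, steps))     # → None (doubling overshoots 13)
print(meta_cot_search(2, 13, steps))  # → [2, 4, 7, 10, 13]
```

The linear searcher commits to one branch and fails, while the graph search recovers by abandoning the dead-end `2 → 4 → 8 → ...` path and retrying `+3` from an earlier node, which is the structural point of the Meta CoT description above.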