21对话 | Lianhui Technology CEO Zhao Tiancheng: An "Unconventional Answer" on the Evolution of Embodied Intelligence
Sou Hu Cai Jing· 2025-07-28 04:37
Core Insights
- The 2025 World Artificial Intelligence Conference (WAIC), held in Shanghai, showcased significant interest in AI applications, particularly in embodied intelligence and multimodal models [1][2]
- Lianhui Technology, a pioneer in multimodal models, has launched the world's first "OmAgent" platform, which targets physical-world applications rather than digital spaces [1][2]

Company Developments
- Lianhui Technology has iterated its multimodal model from the first generation in 2021 to the fifth generation, at a pace of roughly one generation per year [2]
- The company has established its international headquarters in Zhangjiang, Shanghai, to leverage the local concentration of intelligent terminals and embodied robots, as well as rich application scenarios in logistics, ports, and industrial manufacturing [2]

Industry Trends
- The current wave of AI applications is characterized by the integration of multiple technologies, with embodied intelligence a major focus for 2025 [1]
- The evolution of embodied intelligence is expected to progress through stages, with different hardware carriers at different maturity levels, pointing to a phased approach to deployment [2]
A senior labmate published his own paper on large models for autonomous driving and got into a TOP2 PhD program...
自动驾驶之心· 2025-07-09 12:56
Core Viewpoint
- The article discusses advances in large models (LLMs) for autonomous driving, highlighting the need for optimization in efficiency, knowledge expansion, and reasoning capability as the technology matures [2][3]

Group 1: Development of Large Models
- Companies such as Li Auto and Huawei are deploying their own VLA and VLM solutions, indicating a trend toward the practical application of large models in autonomous driving [2]
- The focus for the next generation of large models includes lightweight design, hardware adaptation, knowledge distillation, quantization acceleration, and efficient fine-tuning [2][3]

Group 2: Course Introduction
- A course is being offered to explore cutting-edge optimization methods for large models, focusing on parameter-efficient computation, dynamic knowledge expansion, and complex reasoning [3]
- The course addresses core challenges in model optimization, including pruning, quantization, retrieval-augmented generation (RAG), and advanced reasoning paradigms such as Chain-of-Thought (CoT) and reinforcement learning [3][4]; a minimal illustrative sketch of one such technique appears after this summary

Group 3: Enrollment and Requirements
- Each session accepts at most 8 students, targeting applicants with a background in deep learning or machine learning who are familiar with Python and PyTorch [5][10]
- Participants gain a systematic understanding of large model optimization, practical coding skills, and insight into academic writing and the publication process [8][10]

Group 4: Course Outcomes
- Students learn to combine theory with hands-on coding, develop their own research ideas, and produce a draft of a research paper [8][9]
- The course follows a structured weekly timeline covering model pruning, quantization, efficient fine-tuning, and advanced reasoning techniques [20]
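To ground the optimization techniques listed above, here is a minimal, hedged sketch of one of them, parameter-efficient fine-tuning in a LoRA style, written in PyTorch (the framework the course assumes its students know). The class name LoRALinear and the rank and alpha values are illustrative assumptions, not material from the course itself.

```python
# Minimal LoRA-style parameter-efficient fine-tuning sketch (illustrative assumption,
# not the course's actual code). A frozen nn.Linear is augmented with a trainable
# low-rank update, so only a small fraction of parameters receives gradients.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank trainable correction.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512))
    out = layer(torch.randn(4, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"output {tuple(out.shape)}, trainable params {trainable}/{total}")
```

In practice the low-rank update is usually merged back into the frozen weight before deployment; the point of the sketch is only that the trainable parameter count stays a small fraction of the full layer.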
[Large Model Practice] Deep Learning Lessons for the Era When GPUs Cost More Than People
自动驾驶之心· 2025-06-20 14:06
Core Viewpoint
- The article emphasizes the need for new methodologies for large-model experiments: focusing on key indicators, identifying the true bottlenecks, balancing large and small experiments, and strengthening team collaboration [1]

Group 1: Key Indicators
- Identifying key indicators is crucial; they should clearly separate state-of-the-art (SoTA) models from the rest and guide the direction of model iteration [4]
- Good indicators must objectively reflect performance and accurately point toward model improvements, avoiding the trap of chasing misleading metrics [4]

Group 2: Experimentation Methodology
- The cost of experiments has risen sharply, making it essential to run meaningful experiments rather than low-value ones [5]
- It is advised to use large experiments to surface significant issues and small experiments to filter out incorrect ideas cheaply [6]; a minimal sketch of this two-stage workflow appears after this summary

Group 3: Team Collaboration
- Given the complexity of large-model experiments, team members should understand their comparative advantages and roles within the team [8]
- Collaboration improves when the team finds ways to observe and document experiments together and raises its communication frequency [8]
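To make the balance between large and small experiments concrete, here is a minimal, hedged Python sketch of a two-stage screening loop. The function name run_experiments, the runner callable, and the threshold are assumptions introduced for illustration; they are not code from the article.

```python
# Illustrative two-stage experiment screening (an assumption, not from the article):
# cheap small-scale runs filter out weak ideas before committing expensive
# large-scale GPU budget to the survivors.
from typing import Callable, Dict, List


def run_experiments(ideas: List[str],
                    runner: Callable[[str, str], float],
                    small_threshold: float) -> Dict[str, float]:
    # Stage 1: small, cheap runs discard ideas that clearly do not move the key indicator.
    survivors = [idea for idea in ideas if runner(idea, "small") >= small_threshold]

    # Stage 2: spend the large-scale budget only on survivors, where scale-dependent
    # issues (data, stability, infrastructure) actually surface.
    return {idea: runner(idea, "large") for idea in survivors}


if __name__ == "__main__":
    # Toy runner so the sketch executes end to end; replace with a real training/eval pipeline.
    fake_scores = {"idea_a": 0.62, "idea_b": 0.41, "idea_c": 0.58}
    toy_runner = lambda idea, scale: fake_scores[idea] + (0.05 if scale == "large" else 0.0)
    print(run_experiments(["idea_a", "idea_b", "idea_c"], toy_runner, small_threshold=0.5))
```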
Qiming Venture Partners' Zhou Zhifeng: AI Performance and Cost Have Reached a Tipping Point; AI Applications Will Take Off This Year
IPO早知道· 2025-04-29 03:01
This article is an original piece by IPO早知道. Author | Stone Jin

According to IPO早知道, Zhou Zhifeng, Managing Partner at Qiming Venture Partners, recently delivered a keynote speech titled "2025,AI照进现实之旅" (2025: AI's Journey into Reality), sharing his views on AI investment and his projections for how the AI market will evolve. Selected excerpts from the speech follow.

2025 will be the year AI applications land at scale

Over the past few years, Qiming Venture Partners has divided its AI investments into three layers. In the past two years, the most active part of the AI market has been large models: we have invested in 14 leading companies building large language models, multimodal models, embodied intelligence models, or end-to-end autonomous driving models, a count that ranks among the highest in Asia. We also help manage the 10-billion-yuan Beijing Artificial Intelligence Industry Investment Fund. These are all "touchpoints" that give us more data for reading how the AI industry is developing and for better training our investment thinking models.

Every technology wave begins with the cultivation of foundational technology

Why not last year, or the year before? Because any technology wave starts with years of work on foundational technology, and there are two core technical metrics: performance, moving from barely usable to genuinely good, and cost, moving from "out of reach" to "easily affordable." When both metrics reach the tipping point, applications will ...