Video Generation Models
ByteDance's Big Conference Arrives, a Tailwind for AI Applications! The STAR AI ETF (589520), with 33% ByteDance Supply-Chain Exposure, Rallies Against the Market, Drawing RMB 13.46 Million in Inflows over the Past 3 Days
Xin Lang Cai Jing· 2025-12-18 02:55
Today (December 18), the STAR AI ETF (589520), whose underlying index has over 33% exposure to the ByteDance supply chain, traded actively against the broader market, with its on-exchange price up 0.53%. Shanghai Stock Exchange data show the ETF recorded net inflows for three consecutive sessions, totaling RMB 13.46 million, suggesting that capital is optimistic about the domestic AI supply chain and is gradually building positions.

On the news front, ByteDance's Volcengine FORCE conference runs December 18-19. Kaiyuan Securities expects the event to unveil new members of the Doubao large-model family, likely delivering higher performance at lower cost, with the video generation model in particular expected to receive a further upgrade. The conference will also focus on upgrades to Agent development tools (such as TRAE and MCP services) and the expansion of the Agent ecosystem, pushing AI adoption in enterprise production scenarios. Investors should watch the ByteDance supply chain and AI-application opportunities.

Industry observers note that the AI investment thesis is iterating from "selling shovels" (hardware and infrastructure) toward "digging for gold" (real applications and commercialization). Relative to compute, AI applications appear to offer more room for imagination: the compute layer's "value foundation" has largely stabilized and its valuations have become traceable, while the application layer's breakout is only beginning, and its potential to reshape industries and create entirely new business models is clearly larger.

Notably, the ETF's underlying index spans popular themes: as of the end of November, AI-application stocks accounted for 30.94% of the index weight, and ByteDance supply-chain stocks for 33.6 ...
PhysicalAgent: A Foundational World-Model Framework Toward General-Purpose Cognitive Robots
具身智能之心· 2025-09-22 00:03
Core Viewpoint
- The article introduces PhysicalAgent, a robotic control framework designed to overcome two key limitations of current robot manipulation: the poor robustness and generalizability of vision-language-action (VLA) models and of world-model-based methods [2][3].

Group 1: Key Bottlenecks and Solutions
- Current VLA models require task-specific fine-tuning, so robustness drops sharply when the robot or environment changes [2].
- World-model-based methods depend on specially trained predictive models, and the need for carefully curated training data limits their generalizability [2].
- PhysicalAgent integrates iterative reasoning, diffusion-based video generation, and closed-loop execution to achieve general manipulation across modalities and tasks [2].

Group 2: Framework Design Principles
- Perception and reasoning modules are independent of the specific robot embodiment; adapting to a new robot requires only a lightweight skeletal detection model [3].
- Video generation models bring an inherent advantage: pre-trained on vast multimodal datasets, they can be integrated quickly without local training [5].
- The framework mirrors human-like reasoning, generating visual representations of actions from textual instructions alone [5].
- The architecture demonstrates cross-embodiment adaptability, generating videos for different manipulation tasks on different robot forms without retraining [5].

Group 3: VLM as the Cognitive Core
- A vision-language model (VLM) serves as the cognitive core of the framework, driving the multi-step loop of instruction interpretation, environment interaction, and execution [6].
- The key innovation is to recast action generation as conditional video synthesis rather than direct control-policy learning [6].
- The robot adaptation layer, which converts generated action videos into motor commands, is the only component that needs robot-specific tuning [6]. (A minimal sketch of this loop follows the summary.)

Group 4: Experimental Validation
- Two types of experiments validated the framework's cross-embodiment generalization and the robustness of iterative execution [8].
- The first compared the framework's performance against task-specific baselines and tested its ability to generalize across robot embodiments [9].
- The second assessed iterative execution on physical robots, demonstrating the effectiveness of the "Perceive → Plan → Reason → Act" pipeline [12].

Group 5: Key Results
- The framework reached an 80% final success rate across tasks on both the bimanual UR3 and the humanoid G1 [13][16].
- First-attempt success rates were 30% for the UR3 and 20% for the G1, with successful runs needing 2.25 and 2.75 iterations on average, respectively [16].
- Iterative correction substantially improved completion: the share of unfinished tasks fell sharply within the first few iterations [16].
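The closed-loop pipeline the summary describes can be condensed into code. The following is a minimal sketch, not the authors' implementation: every object and method name here (`vlm.plan`, `video_model.generate`, `adapter.video_to_commands`, and so on) is a hypothetical placeholder standing in for the components the summary names.

```python
# Hypothetical sketch of PhysicalAgent's "Perceive -> Plan -> Reason -> Act" loop.
# All component interfaces are illustrative assumptions, not the paper's API.
from dataclasses import dataclass


@dataclass
class StepResult:
    success: bool
    feedback: str  # the VLM's textual critique of the last attempt


def run_task(instruction: str, vlm, video_model, adapter, camera,
             max_iters: int = 5) -> bool:
    """Iteratively generate, execute, and verify a manipulation plan."""
    feedback = ""
    for _ in range(max_iters):
        # Perceive: capture the current scene.
        frame = camera.capture()
        # Plan/Reason: the VLM turns instruction + scene + past feedback
        # into a textual plan that conditions the video generator.
        plan = vlm.plan(instruction=instruction, image=frame, feedback=feedback)
        # Action generation as conditional video synthesis (not direct
        # policy learning): render a video of the robot performing the plan.
        action_video = video_model.generate(prompt=plan, first_frame=frame)
        # Robot adaptation layer: the only embodiment-specific component,
        # converting the generated video into motor commands via
        # lightweight skeletal detection.
        commands = adapter.video_to_commands(action_video)
        adapter.execute(commands)
        # Verify: the VLM inspects the outcome and either stops or critiques.
        result: StepResult = vlm.verify(instruction=instruction,
                                        image=camera.capture())
        if result.success:
            return True
        feedback = result.feedback  # feed the critique into the next iteration
    return False
```

The structure reflects the summary's central design claim: only `adapter` touches the robot's embodiment, while perception, planning, and video generation stay robot-agnostic.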
Unitree Robotics' Wang Xingxing Drops a "Hot Take": What Lessons Does It Hold for Intelligent Driving?
36Kr· 2025-08-11 23:58
Core Viewpoint
- The current state of embodied-intelligence AI models, the VLA model in particular, is seen as inadequate for large-scale deployment in robotics; more advanced model architectures are needed, along with a shift toward video generation models for better efficiency [1][10][13].

Summary by Sections

Key Bottlenecks
- The primary obstacle to large-scale robot deployment is not hardware performance or cost, but the immaturity of embodied-intelligence AI models [4].
- Current robot hardware is sufficient for basic functions, but the AI models have not yet crossed the critical threshold for advancement [6].
- The industry is overly focused on data, neglecting the fundamental limitations of the models themselves [6][8].

New Technology Directions
- Video generation models are proposed as a more promising direction than the VLA model: they can simulate robot actions in video form, which can then be translated into control signals for real robots [13].
- One caveat: current video generation models optimize for visual quality, driving heavy GPU consumption that robotic applications may not need [15]. (A back-of-envelope sketch of this trade-off follows the summary.)

Future Technological Focus
- Over the next 2-5 years, embodied-intelligence robotics will concentrate on three areas: 1. Unified end-to-end robot models to raise capability [16]. 2. Lower-cost, longer-lasting hardware with mass-manufacturing capability [16]. 3. Low-cost, large-scale distributed computing networks to support robot operations [16].

Market Expectations
- Once robots achieve large-scale operational capability, they could potentially be free to users, since the value they create could be taxed instead [17].
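To make the GPU-consumption argument concrete, here is a back-of-envelope sketch. It assumes a transformer-based video diffusion model whose self-attention cost grows quadratically with the token count; the resolutions, frame count, and patch size are illustrative assumptions, not measured figures from the article.

```python
# Rough illustration of why photorealistic video generation is wasteful for
# robot control: attention cost scales with (tokens per frame x frames)^2,
# so control-grade resolution is orders of magnitude cheaper. Illustrative only.

def attention_cost(height: int, width: int, frames: int, patch: int = 16) -> float:
    """Relative self-attention cost for a video of the given size (arbitrary units)."""
    tokens = (height // patch) * (width // patch) * frames
    return float(tokens) ** 2  # self-attention is quadratic in token count

photoreal = attention_cost(1080, 1920, frames=48)  # quality-oriented generation
control = attention_cost(128, 128, frames=48)      # enough to recover a trajectory

# Under these assumptions the ratio is roughly 16,000x.
print(f"relative savings: {photoreal / control:,.0f}x")
```

The exact numbers depend on the architecture, but the quadratic scaling is the point: if the video only needs to encode a recoverable action trajectory rather than photorealistic pixels, most of the compute spent on visual quality is unnecessary for control.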