WorldVLA

New research from Alibaba unifies VLA and world models
 具身智能之心· 2025-10-31 00:04
If vision lets AI see the world and action lets AI change the world, then WorldVLA is letting AI understand the world.

As the name suggests, WorldVLA is a unified framework that fuses a vision-language-action (VLA) model with a world model, jointly proposed by Alibaba DAMO Academy, Hupan Laboratory, and Zhejiang University.

Under this framework, experimental results show that WorldVLA significantly outperforms a standalone action model and a standalone world model, demonstrating the mutual enhancement between the two.

| Model Type | Discrete | Continuous | Input Output |
| --- | --- | --- | --- |
| Action Model | OpenVLA (Kim et al., 2024) | π0 (Black et al., 2024) | T + V |
| Video Prediction Model | MAGVIT (Yu et al., 2023) | SVD (Bla ... | |
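As a schematic of the unification just described, the sketch below lays out the two directions the framework trains on as token sequences over one shared vocabulary: the action-model direction (text + image → action) and the world-model direction (image + action → next image). The special-token names and token counts are illustrative assumptions, not the model's actual vocabulary.

```python
# Schematic of the two directions unified by WorldVLA, expressed as token
# sequences over a shared vocabulary. Special-token names and counts are
# illustrative assumptions, not the model's actual vocabulary.
from typing import List

def action_model_sequence(text: List[str], image: List[str], action: List[str]) -> List[str]:
    """Action-model direction: predict action tokens from text + image."""
    return ["<bot>"] + text + ["<boi>"] + image + ["<eoi>", "<boa>"] + action + ["<eoa>"]

def world_model_sequence(image: List[str], action: List[str], next_image: List[str]) -> List[str]:
    """World-model direction: predict next-image tokens from image + action."""
    return ["<boi>"] + image + ["<eoi>", "<boa>"] + action + ["<eoa>", "<boi>"] + next_image + ["<eoi>"]

# Toy example with tiny stand-in token lists.
txt = ["pick", "up", "the", "cup"]
img_t = [f"<img{i}>" for i in range(4)]
act_t = [f"<act{i}>" for i in range(7)]
img_next = [f"<img_next{i}>" for i in range(4)]
print(len(action_model_sequence(txt, img_t, act_t)),
      len(world_model_sequence(img_t, act_t, img_next)))
```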
New research from Alibaba: VLA and world models, unified
36Ke · 2025-10-29 10:32
Core Insights
- WorldVLA is a unified framework that integrates vision-language-action (VLA) models with world models, developed jointly by Alibaba DAMO Academy, Hupan Laboratory, and Zhejiang University [1][4].

Group 1: Framework Overview
- The world model predicts future images by understanding actions and images, aiming to learn the underlying physical laws of the environment and thereby improve action generation accuracy [2].
- The action model generates subsequent actions from image observations; this not only aids visual understanding but also strengthens the visual generation capability of the world model [2].
- Experimental results indicate that WorldVLA significantly outperforms independent action and world models, showing a mutual enhancement effect between the two [2][12].

Group 2: Model Architecture
- WorldVLA uses three independent tokenizers to encode images, text, and actions, initialized from the Chameleon model [6].
- The image tokenizer is a VQ-GAN with a compression ratio of 16 and a codebook size of 8192, producing 256 tokens for 256×256 images and 1024 tokens for 512×512 images [6].
- The action tokenizer discretizes continuous robot actions into 256 bins, representing each action with 7 tokens that encode relative positions and angles (see the sketch after this summary) [6].

Group 3: Training and Performance
- WorldVLA is trained autoregressively: all text, action, and image tokens are modeled causally in a single sequence [8].
- A novel attention mask for action generation ensures that each action is conditioned only on text and visual inputs, preventing errors in earlier actions from propagating to later ones [10].
- Benchmark results show that even without pre-training, WorldVLA outperforms the discrete OpenVLA model, validating its architectural design [12].

Group 4: Mutual Benefits of the Two Models
- Introducing the world model significantly improves the action model by letting it learn the underlying physical laws of the system, which is crucial for tasks requiring precision [15].
- The world model's predictive capability informs decision-making, optimizing action-selection strategies and improving task success rates [18].
- Conversely, the action model improves the quality of the world model's output, particularly when generating longer video sequences [21].

Group 5: Expert Opinions
- Chen Long, Senior Research Director at Xiaomi Auto, emphasizes that VLA and world models need not be mutually exclusive; combining them lets each reinforce the other, leading to advances in embodied intelligence (AGI) [24].
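As an illustration of the action tokenizer described in Group 2, here is a minimal sketch of per-dimension discretization into 256 bins, producing 7 tokens per action. The dimension layout (3 relative positions, 3 relative angles, 1 gripper value), the value ranges, and the vocabulary offset are assumptions for illustration, not the exact WorldVLA implementation.

```python
import numpy as np

# Minimal sketch of an action tokenizer: each of 7 continuous action dimensions
# (assumed: 3 relative positions, 3 relative angles, 1 gripper) is discretized
# into one of 256 bins, yielding 7 discrete tokens per action. Ranges and the
# vocabulary offset are illustrative assumptions.

NUM_BINS = 256
ACTION_LOW = np.array([-0.05, -0.05, -0.05, -0.5, -0.5, -0.5, 0.0])   # assumed per-dim minimum
ACTION_HIGH = np.array([0.05, 0.05, 0.05, 0.5, 0.5, 0.5, 1.0])        # assumed per-dim maximum
ACTION_TOKEN_OFFSET = 65536  # assumed offset into the shared vocabulary

def tokenize_action(action: np.ndarray) -> np.ndarray:
    """Map a 7-D continuous action to 7 discrete token ids."""
    normalized = (action - ACTION_LOW) / (ACTION_HIGH - ACTION_LOW)
    bins = np.clip((normalized * NUM_BINS).astype(int), 0, NUM_BINS - 1)
    return bins + ACTION_TOKEN_OFFSET

def detokenize_action(tokens: np.ndarray) -> np.ndarray:
    """Map 7 token ids back to the bin-center continuous action."""
    bins = tokens - ACTION_TOKEN_OFFSET
    normalized = (bins + 0.5) / NUM_BINS
    return normalized * (ACTION_HIGH - ACTION_LOW) + ACTION_LOW

action = np.array([0.01, -0.02, 0.0, 0.1, 0.0, -0.1, 1.0])
tokens = tokenize_action(action)
print(tokens, detokenize_action(tokens))
```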
New research from Alibaba: VLA and world models, unified
 量子位· 2025-10-29 09:30
Core Insights
- WorldVLA is a unified framework that integrates vision-language-action (VLA) models with world models, proposed by Alibaba DAMO Academy, Hupan Laboratory, and Zhejiang University [1][4]
- Experimental results indicate that WorldVLA significantly outperforms independent action models and world models, showing a mutual enhancement effect [2]

Model Overview
- The framework combines three independent tokenizers for encoding images, text, and actions, using a VQ-GAN for image tokenization with a compression ratio of 16 and a codebook size of 8192 (the resulting token counts are sketched below) [8]
- The action tokenizer discretizes continuous robot actions into 256 bins, representing each action with 7 tokens [8]

Model Design
- WorldVLA employs an autoregressive action world model to unify action and image understanding and generation [4]
- It addresses the limitations of existing VLA and world models by grounding action generation in an understanding of environmental physics [5][14]

Training and Performance
- WorldVLA is jointly trained on a mixture of action-model and world-model data, which strengthens action generation [13]
- Performance correlates positively with image resolution; 512×512 inputs show significant improvements over 256×256 [21][23]

Benchmark Results
- WorldVLA outperforms the discrete OpenVLA model even without pre-training, validating its architectural design [19]
- The model generates coherent, physically plausible states across a range of scenarios, outperforming pure world models [31][32]

Mutual Enhancement
- The world model improves the action model by predicting how the environment changes in response to the current action, which is crucial for precision tasks [25]
- Conversely, the action model improves the world model's visual understanding, supporting better visual generation [17][30]
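The token counts quoted in these summaries follow directly from the VQ-GAN's spatial compression ratio: an H×W image becomes an (H/16)×(W/16) grid of codebook indices. A minimal sketch of that arithmetic:

```python
# Minimal sketch of how the image tokenizer's token count follows from its
# spatial compression ratio of 16: an H x W image maps to an (H/16) x (W/16)
# grid of indices into the 8192-entry codebook.

CODEBOOK_SIZE = 8192
COMPRESSION_RATIO = 16

def num_image_tokens(height: int, width: int) -> int:
    """Number of discrete tokens a stride-16 VQ-GAN produces for one image."""
    return (height // COMPRESSION_RATIO) * (width // COMPRESSION_RATIO)

assert num_image_tokens(256, 256) == 256    # matches the 256-token figure above
assert num_image_tokens(512, 512) == 1024   # matches the 1024-token figure above
print(num_image_tokens(256, 256), num_image_tokens(512, 512))
```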
VLA vs. World Model: which autonomous driving route will win out?
 自动驾驶之心· 2025-09-04 23:33
Core Viewpoint
- The article compares vision-language-action (VLA) models and world models for autonomous driving, arguing that while VLA is currently dominant, world models have inherent advantages in understanding and predicting physical reality [3][4][30].

Group 1: VLA vs. World Models
- VLA currently dominates in practice; more than 95% of world models worldwide are used to generate videos for autonomous-driving training rather than being applied directly [3].
- World models are seen as having a significant theoretical advantage: they enable end-to-end learning without relying on language, linking perception directly to action [3][4].
- Proponents argue that world models can understand the physical world and infer causal relationships, whereas VLA primarily imitates learned patterns [4][6].

Group 2: Development and Architecture
- The classic world-model framework consists of three modules: a Vision model (V), a Memory RNN (M), and a Controller (C), which together learn visual representations and predict future states (a minimal rollout sketch follows this summary) [11].
- The architecture has since evolved, with notable developments such as RSSM and JEPA that combine deterministic and stochastic components to improve performance [15][17].
- JEPA, introduced in 2023, predicts abstract representations rather than pixel-level details, significantly reducing computational requirements [17][19].

Group 3: Advantages and Challenges
- World models have two main advantages: they require less compute than VLA and can be trained on unlabeled data from the internet [19].
- Challenges remain, including the need for diverse, high-quality data to capture physical environments accurately and the limits of current sensors in recording all the necessary information [19][20].
- Representation collapse and error accumulation in long-horizon prediction are significant hurdles for deploying world models effectively [21][22].

Group 4: Future Directions
- Integrating VLA with world models is seen as promising; frameworks such as IRL-VLA combine the strengths of both approaches for autonomous driving [22][28].
- The article suggests that VLA is likely to prevail in the near term, but VLA enhanced with world-model components could deliver superior results in the long run [30].
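To make the V-M-C decomposition in Group 2 concrete, here is a minimal sketch of the perceive-act-remember loop in the spirit of the original world-model setup. The module bodies are random stand-ins; only the data flow between Vision (V), Memory (M), and Controller (C) is the point, and all dimensions are assumptions.

```python
import numpy as np

# Minimal sketch of the classic V-M-C world-model loop: Vision encoder V,
# Memory RNN M, Controller C. Module internals are random placeholders; only
# the data flow between the three modules is illustrated.

rng = np.random.default_rng(0)
LATENT_DIM, HIDDEN_DIM, ACTION_DIM = 32, 64, 3

def vision_encode(frame: np.ndarray) -> np.ndarray:
    """V: compress a camera frame into a low-dimensional latent z (stand-in)."""
    return rng.standard_normal(LATENT_DIM)

def memory_step(z: np.ndarray, action: np.ndarray, h: np.ndarray) -> np.ndarray:
    """M: recurrently update a hidden state that summarizes history and is
    used to predict the next latent (stand-in for an MDN-RNN)."""
    return np.tanh(0.5 * h + 0.1 * np.concatenate([z, action]).sum())

def controller_act(z: np.ndarray, h: np.ndarray) -> np.ndarray:
    """C: map the current latent and memory state to an action (stand-in)."""
    return np.tanh(rng.standard_normal(ACTION_DIM))

h = np.zeros(HIDDEN_DIM)
for t in range(5):                       # a short perceive -> act -> remember rollout
    frame = rng.standard_normal((64, 64, 3))
    z = vision_encode(frame)             # V: perceive
    a = controller_act(z, h)             # C: decide
    h = memory_step(z, a, h)             # M: remember / predict for the next step
print("final hidden-state norm:", np.linalg.norm(h))
```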
FlowVLA: cracking the "physical distortion" problem of VLA models, another upgrade for robot world modeling
 具身智能之心· 2025-08-29 00:03
Core Viewpoint
- The article discusses the limitations of traditional vision-language-action (VLA) models and introduces FlowVLA, a framework that applies a Visual Chain-of-Thought (Visual CoT) principle so the model predicts future frames through structured physical reasoning rather than mere pixel replication [5][8][36].

Group 1: Background and Current State
- VLA models pre-trained as world models show significant potential for general-purpose robotics, typically relying on large autoregressive Transformers that learn environment dynamics from vast amounts of video data [6][7].
- Existing models have critical flaws, including task confusion that leads to prediction failures, inefficient knowledge transfer between passive observation and active control, and entangled learning of dynamics and appearance [7].

Group 2: Contributions of FlowVLA
- FlowVLA introduces a learning framework that enforces structured physical reasoning by requiring the model to infer motion dynamics before predicting future frames (see the sketch after this summary) [8][10].
- Appearance and motion reasoning are unified within a single autoregressive Transformer, preserving parameter efficiency and architectural simplicity [9][10].
- Experiments validate FlowVLA's superior performance across robotic manipulation benchmarks, demonstrating better sample efficiency and bridging the gap between pre-training and policy fine-tuning [10][20].

Group 3: Research Content
- The Visual CoT reasoning process decomposes frame prediction into a causal chain of "current frame → optical flow → future frame," allowing the model to separate dynamics from appearance learning [12][14].
- A two-phase training paradigm consists of a pre-training phase focused on world-model learning and a fine-tuning phase that adapts the model to control tasks [15][16].

Group 4: Experimental Analysis
- FlowVLA outperforms existing methods on all LIBERO task suites, excelling in particular on long-horizon tasks, which indicates a robust understanding of physical dynamics [20][21].
- On SimplerEnv, FlowVLA adapts well to visual domain shifts, achieving significant gains on tasks where other models struggle [22][23].
- Its sample efficiency is validated: it needs only one third of the training steps to reach peak performance compared with baselines, with a 55% higher peak success rate in low-data settings [30][32].

Group 5: Key Component Validation
- Ablation studies on the LIBERO-10 benchmark confirm the critical roles of the Visual CoT structure, the flow loss, and the interleaved sequence format [33][34].

Group 6: Comparison with Related Work
- FlowVLA differs from traditional VLA models by prioritizing dynamics understanding and establishing a robust world model before adapting to control tasks, laying a solid foundation of physical knowledge [35].
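To illustrate the "current frame → optical flow → future frame" chain from Group 3, the sketch below interleaves frame tokens and flow tokens so the model must emit a motion (flow) representation before each future frame. Token counts, token names, and the tokenizer stubs are illustrative assumptions rather than FlowVLA's actual format.

```python
# Schematic sketch of a Visual CoT sequence: the model reasons about motion
# (optical flow f_t) before predicting the next frame x_{t+1}. Token counts
# and tokenizer stubs are illustrative assumptions.
from typing import List

FRAME_TOKENS = 256   # assumed tokens per frame
FLOW_TOKENS = 256    # assumed tokens per flow field

def tokenize_frame(t: int) -> List[str]:
    return [f"<img_{t}_{i}>" for i in range(FRAME_TOKENS)]

def tokenize_flow(t: int) -> List[str]:
    return [f"<flow_{t}_{i}>" for i in range(FLOW_TOKENS)]

def visual_cot_sequence(num_steps: int) -> List[str]:
    """Interleave frames and flow so that predicting x_{t+1} is always
    preceded by predicting the flow f_t that explains the motion."""
    seq: List[str] = ["<task_instruction>"]
    for t in range(num_steps):
        seq += tokenize_frame(t)      # x_t : current frame (appearance)
        seq += tokenize_flow(t)       # f_t : motion reasoning step
    seq += tokenize_frame(num_steps)  # final predicted frame
    return seq

print(len(visual_cot_sequence(2)))  # 1 + 2*(256+256) + 256 tokens
```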
A first: world model and action model fused, the fully autoregressive WorldVLA arrives
 机器之心· 2025-07-03 08:01
Core Viewpoint
- Alibaba's DAMO Academy has introduced WorldVLA, a model that integrates a world model and an action model into a unified autoregressive framework, enhancing understanding and generation across text, images, and actions [1][4].

Summary by Sections

Research Overview
- Vision-language-action (VLA) models have become a major focus of robotic action modeling, typically built on large-scale pretrained multimodal language models (MLLMs) with added action-output capabilities [4].
- Existing VLA models often lack a deep understanding of actions, treating them merely as outputs rather than analyzing them as inputs [5].

Model Description
- WorldVLA addresses the limitations of both VLA models and world models through a unified autoregressive mechanism for action and image understanding and generation [5][10].
- It employs three independent encoders for images, text, and actions, which share the same vocabulary to enable cross-modal tasks [12].

Mechanism and Strategy
- The world-model component generates visual representations conditioned on input actions, learning the physical dynamics of the environment, while the action model strengthens visual understanding [7].
- An action attention-masking strategy is introduced to mitigate error accumulation when generating multiple actions, significantly improving performance on action-chunking tasks (a minimal sketch of the mask follows this summary) [8][14].

Experimental Results
- On the LIBERO benchmark, WorldVLA improved grasp success rate by 4% over traditional action models and reduced Fréchet Video Distance (FVD) by 10% compared with traditional world models [8].
- The attention-mask strategy improved grasp success rates by 4% to 23% on action-chunking tasks [8].

Comparative Analysis
- WorldVLA outperformed other models across multiple metrics, demonstrating the effectiveness of integrating action and world modeling [18].
- Its ability to generate the next frame from actions and images showcases its advanced visual prediction capability [24].
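A minimal sketch of the action attention-masking idea described above: within a chunk, each action's tokens may attend to the text/image prefix and to their own earlier tokens, but not to previously generated actions, so an early mistake cannot contaminate later actions. Token counts and layout are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of an action attention mask: each action in a chunk attends
# to the text/image prefix (and itself), but not to earlier actions, so errors
# in one action do not propagate to the next. Token counts are assumptions.

def action_chunk_mask(prefix_len: int, num_actions: int, action_len: int) -> np.ndarray:
    """Return a boolean mask M where M[i, j] = True means token i may attend to token j."""
    total = prefix_len + num_actions * action_len
    mask = np.tril(np.ones((total, total), dtype=bool))      # standard causal mask
    for a in range(num_actions):
        start = prefix_len + a * action_len
        # block attention from this action's tokens to all earlier action tokens
        mask[start:start + action_len, prefix_len:start] = False
    return mask

# Example: 8 prefix (text + image) tokens, 3 actions of 7 tokens each.
mask = action_chunk_mask(prefix_len=8, num_actions=3, action_len=7)
print(mask.shape)                    # (29, 29)
print(mask[20, 5], mask[20, 10])     # True (prefix visible), False (earlier action hidden)
```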
WorldVLA: a world model enables bidirectional vision-action enhancement, significantly improving grasping accuracy
 自动驾驶之心· 2025-07-01 04:04
Core Viewpoint
- WorldVLA is introduced as an autoregressive action world model that unifies action and image understanding and generation, outperforming independent action and world models through mutual enhancement [4][7][9].

Group 1: Model Definition and Components
- WorldVLA combines a vision-language-action (VLA) model with a world model that predicts future images from actions and visual context [4][6].
- The model employs three independent tokenizers for images, text, and actions, sharing the same vocabulary for unified cross-modal understanding [7][14].
- The action model generates subsequent actions from image observations, while the world model predicts future visual states, improving the action model's decision-making [6][29].

Group 2: Performance and Evaluation
- Experiments show that WorldVLA achieves a 4% higher success rate on grasping tasks than traditional action models and reduces Fréchet Video Distance (FVD) by 10% compared with standard world models [8][27].
- The attention-mask strategy significantly mitigates performance degradation in action-sequence generation, improving grasping success rates by 4% to 23% [8][32].
- Performance correlates positively with image resolution, indicating that higher resolution provides richer visual information for robotic tasks [27].

Group 3: Training Strategy and Data
- WorldVLA is trained on a mixture of action-model data and world-model data, with the world-model data grounding action generation in an understanding of environmental physics [16][22].
- Training alternates between generating actions from text instructions and image observations, and predicting the next image frame from the current observation and action [17][18].
- The loss function balances the contributions of action-model and world-model data, compensating for the large disparity in token counts between the two (a sketch of this weighting follows this summary) [22].

Group 4: Contributions and Innovations
- The attention-mask strategy allows actions to be generated independently of earlier actions, reducing error propagation in sequential action generation [19][20].
- WorldVLA generates longer video sequences better than a pure world model, highlighting the benefit of integrating the action model [31].
- The architecture and training strategy point to the potential of pre-training with world-model data to enhance downstream task performance [36].
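A minimal sketch of the loss balancing mentioned in Group 3, assuming the two data types are mixed in one batch and the action term is re-weighted because an action contributes only 7 tokens while an image contributes hundreds. The vocabulary size, weight value, and tensor shapes are assumptions, not WorldVLA's actual configuration.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of balancing the action-model and world-model cross-entropy
# terms: action targets contribute far fewer tokens than image targets, so the
# action term is up-weighted before summation. Weight, vocab size, and shapes
# are illustrative assumptions.

VOCAB_SIZE = 65536         # assumed shared vocabulary size
ACTION_LOSS_WEIGHT = 5.0   # assumed weight compensating for the token-count disparity

def mixed_loss(action_logits: torch.Tensor, action_targets: torch.Tensor,
               image_logits: torch.Tensor, image_targets: torch.Tensor) -> torch.Tensor:
    """Weighted sum of the action-model and world-model cross-entropy terms."""
    loss_action = F.cross_entropy(action_logits.reshape(-1, VOCAB_SIZE),
                                  action_targets.reshape(-1))
    loss_world = F.cross_entropy(image_logits.reshape(-1, VOCAB_SIZE),
                                 image_targets.reshape(-1))
    return loss_world + ACTION_LOSS_WEIGHT * loss_action

# Dummy batch: 7 action tokens vs. 256 image tokens per sample.
action_logits = torch.randn(2, 7, VOCAB_SIZE)
action_targets = torch.randint(0, VOCAB_SIZE, (2, 7))
image_logits = torch.randn(2, 256, VOCAB_SIZE)
image_targets = torch.randint(0, VOCAB_SIZE, (2, 256))
print(mixed_loss(action_logits, action_targets, image_logits, image_targets).item())
```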
WorldVLA: a world model enables bidirectional vision-action enhancement, significantly improving grasping accuracy
 具身智能之心· 2025-06-30 12:17
Core Insights
- The article introduces WorldVLA, an autoregressive action world model that unifies action and image understanding and generation, outperforming independent action and world models [3][6][8].

Group 1: WorldVLA Overview
- WorldVLA combines a vision-language-action (VLA) model and a world model in a single framework, with each component reinforcing the other [3][6].
- The model uses three independent tokenizers for images, text, and actions, sharing the same vocabulary to unify cross-modal understanding and generation [6][14].
- An attention-mask strategy is proposed to mitigate error propagation in action-sequence generation, significantly improving performance on action-chunk generation tasks [7][31].

Group 2: Model Architecture and Training
- The architecture consists of an action model and a world model: the action model generates actions from image observations and language instructions, while the world model predicts future states from observed sequences and actions [11][13].
- Training mixes action-model data with world-model data to improve action generation, with the world model contributing a better understanding of environmental physics [15][20].
- The loss function combines the cross-entropy losses of both models, with weights that balance their contributions given the disparity in token counts [20].

Group 3: Experimental Results
- WorldVLA achieves a 4% higher success rate on grasping tasks than comparable action models and a 10% lower Fréchet Video Distance (FVD) than standard world models (a sketch of the FVD computation follows this summary) [7][26].
- Performance improves with higher image resolution, which matters most for tasks requiring high operational precision [26].
- Integrating the world model significantly improves the action model by giving it a better grasp of the underlying physical dynamics [28].

Group 4: Attention Mask and Performance
- The proposed attention mask allows multiple actions to be generated in parallel, removing the dependency on previous actions and alleviating error accumulation [19][31].
- Using two historical image frames as input offers the best balance between task success rate and computational cost [32].

Group 5: Pre-training and Future Potential
- Pre-training the action model on world-model data significantly improves grasping performance, highlighting the potential of general world knowledge to enhance specific robotic tasks [35].
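Fréchet Video Distance (FVD), cited above as the video-generation metric, is the Fréchet distance between Gaussians fitted to features of real and generated videos (the features typically come from a pretrained I3D network). The sketch below computes the distance itself; the feature extractor is omitted and random features stand in for it.

```python
import numpy as np
from scipy.linalg import sqrtm

# Minimal sketch of the Fréchet Video Distance computation: fit a Gaussian
# (mean, covariance) to video features from real and generated clips, then
# take the Fréchet distance between the two Gaussians. The feature extractor
# (typically a pretrained I3D network) is omitted; random features stand in.

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):          # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.standard_normal((256, 64))         # stand-in features of real clips
fake = rng.standard_normal((256, 64)) + 0.1   # stand-in features of generated clips
print("FVD (stand-in features):", frechet_distance(real, fake))
```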








