Doubao Video Generation Model Seedance 2.0 Released; Integrated into Doubao and Jimeng
Zhitong Finance · 2026-02-12 07:31

Core Insights
- The official release of Seedance 2.0 on February 12 enhances video generation capabilities, addressing challenges related to physical laws and long-term consistency while providing unprecedented creative freedom for users [1][2]
- The model currently restricts the use of real human images/videos as reference subjects, requiring verification or authorization for such use [1]

Group 1: Video Generation Capabilities
- Seedance 2.0 achieves state-of-the-art (SOTA) levels in generating complex interactions and movements, with high fidelity in modeling character actions that adhere to real-world physics [1][2]
- The model supports multi-modal input, including text, images, audio, and video, significantly increasing creative flexibility by allowing reference to various elements such as composition, actions, and effects [1][2]

Group 2: Control and Editing Features
- The model enhances instruction adherence and controllability, accurately reproducing complex scripts while maintaining stable subject consistency [2]
- New features include video editing and extension capabilities, enabling users to direct their projects like filmmakers [2]

Group 3: Audio and Production Adaptability
- Seedance 2.0 integrates dual-channel stereo technology for high-fidelity, immersive sound generation, supporting multi-track audio outputs that align precisely with visual content [2]
- The model is adaptable to various production scenarios, including commercial ads, film effects, game animations, and commentary videos, with an API service expected to launch in mid-February [2]

Group 4: Performance Evaluation
- Comprehensive assessments indicate that Seedance 2.0 performs at an industry-leading level across multi-modal scenarios, although there is room for improvement in detail stability, multi-character matching, subject consistency, text accuracy, and complex editing effects [6]