ByteDance Releases Seedance 2.0; Doubao and Jimeng Announce Integration
Huan Qiu Wang·2026-02-12 08:45

Core Insights
- ByteDance has launched its latest video generation model, Seedance 2.0, which has been integrated into its AI products Doubao and Jimeng, allowing users to create AI videos featuring their digital avatars [1][2]
- The model supports four modalities (image, video, audio, and text), letting users specify styles, actions, and atmospheres more intuitively during the creative process [1][5]

Group 1
- Seedance 2.0 has been tested in a limited release and has drawn global attention for its multi-modal capabilities and precision [2]
- An overseas creator compared Seedance 2.0's output with that of previous models, noting a marked improvement in realism and richness that even caught the attention of Elon Musk [2]
- Users abroad are reportedly seeking Chinese phone numbers in order to access Seedance 2.0, indicating high demand [2]

Group 2
- Feng Ji, CEO of Game Science, praised Seedance 2.0 as the "strongest video generation model" currently available, highlighting its advances in multi-modal information understanding and integration [5]
- The official technical report states that Seedance 2.0 employs a sparse architecture to improve training and inference efficiency, and that it shows strong generalization capabilities [5]
- The model excels at generating high-quality audio-visual content, supporting complex functions such as multi-modal references, video editing, and motion stabilization, with significant improvements in visual aesthetics and narrative control [5]
