
AI News Roundup: Zhiyuan (AgiBot) Launches Robot World Model Platform Genie Envisioner; Zhipu Releases GLM-4.5V Visual Reasoning Model
China Post Securities· 2025-08-25 11:47
- The Genie Envisioner platform introduces a video-centric world modeling paradigm, directly modeling robot-environment interactions in visual space, which preserves spatial structure and temporal-evolution information. This approach enhances cross-domain generalization and long-sequence task execution, achieving a 76% success rate on long-step tasks such as folding cardboard boxes, outperforming the π0 model's 48%[12][13][16]
- The Genie Envisioner platform comprises three core components: GE-Base, a multi-view video world foundation model trained on 3,000 hours of real robot data; GE-Act, a lightweight 160M-parameter action decoder enabling real-time control; and GE-Sim, a hierarchical action-conditioned simulator for closed-loop strategy evaluation and large-scale data generation[16][17][19]
- The GLM-4.5V visual reasoning model, with 106B total parameters and 12B active parameters, achieves state-of-the-art (SOTA) performance across 41 multimodal benchmarks covering image, video, and document understanding as well as GUI agent tasks. It incorporates 3D-RoPE and bicubic-interpolation mechanisms to enhance 3D spatial-relationship perception and high-resolution adaptability[20][21][22]
- GLM-4.5V employs a three-stage training strategy: pretraining on large-scale multimodal corpora, supervised fine-tuning with chain-of-thought samples, and reinforcement learning with RLVR and RLHF techniques. This layered training enables superior document processing and emergent abilities such as generating structured HTML/CSS/JavaScript code from screenshots or videos[23][24][26]
- VeOmni, a fully modular multimodal training framework, decouples model definition from distributed parallel logic, enabling flexible parallel strategies such as FSDP, HSDP+SP, and EP. It achieves 43.98% MFU for 64K-sequence training and supports sequence lengths up to 192K, reducing engineering complexity and improving efficiency by over 90%[27][28][31]
- VeOmni introduces asynchronous sequence parallelism (Async-Ulysses) and COMET technology for MoE models, achieving linear scalability in training throughput for 30B-parameter models at sequence lengths up to 160K. It also integrates dynamic batching and FlashAttention to minimize memory waste and optimize operator-level recomputation[31][32][34]
- Skywork UniPic 2.0, a unified multimodal framework, integrates image understanding, text-to-image (T2I) generation, and image-to-image (I2I) editing within a single model. It employs a progressive dual-task reinforcement strategy (Flow-GRPO) that optimizes image editing and T2I tasks sequentially, achieving superior performance on benchmarks such as GenEval and GEdit-EN[35][38][39]
- UniPic 2.0 leverages Skywork-EditReward, an image-editing-specific reward model, to provide pixel-level quality scores. This design enables precise recognition of image elements and generation of corresponding textual descriptions, reaching 83.5 points on MMBench, comparable to 19B-parameter models[38][42][43]
- FlowReasoner, a query-level meta-agent framework, dynamically generates a personalized multi-agent system for each individual query. It employs GRPO reinforcement learning with a multi-objective reward mechanism, achieving 92.15% accuracy on the MBPP dataset and outperforming baselines such as Aflow and LLM-Blender[63][64][68]
- FlowReasoner uses a multi-stage training process: synthesizing training data, supervised fine-tuning (SFT) for workflow generation, and RL with external feedback for capability enhancement. It demonstrates robust generalization, maintaining high accuracy even when the base worker model is replaced[66][68][69]
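Both Flow-GRPO in UniPic 2.0 and the GRPO training in FlowReasoner build on group-relative policy optimization, which scores each sampled response against the mean and spread of its own sampling group instead of using a learned critic. A minimal sketch of the group-relative advantage computation (illustrative only; the papers' exact normalization and clipping may differ):

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages, GRPO-style: normalize each sampled
    response's reward by the mean and std of its own group, so no
    separate value network (critic) is needed."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four candidate workflows for one query, scored by a reward model.
advantages = grpo_advantages([0.2, 0.9, 0.5, 0.4])
```

Responses scoring above the group mean get positive advantages (their tokens are reinforced); below-mean responses get negative ones, which is what lets a single reward signal rank whole workflows.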
Tencent Research Institute AI Express 20250818
Tencent Research Institute · 2025-08-17 16:01
Group 1
- Google has released the lightweight model Gemma 3 270M, with 270 million parameters and a download size of only 241MB, designed specifically for on-device use [1]
- The model is energy-efficient, consuming only 0.75% of battery after 25 conversations on the Pixel 9 Pro, and runs efficiently on resource-constrained devices after INT4 quantization [1]
- Gemma 3 270M outperforms the Qwen 2.5 model in the IFEval benchmark, has surpassed 200 million downloads, and is tailored for task-specific fine-tuning [1]
Group 2
- Meta has open-sourced the DINOv3 visual foundation model, which surpasses weakly supervised models on multiple dense prediction tasks using self-supervised learning [2]
- The model features an innovative Gram Anchoring strategy and RoPE, with a parameter scale of 7 billion and training data expanded to 1.7 billion images [2]
- DINOv3 is commercially licensed and offered in various sizes, including ViT-B and ViT-L, with a specially trained satellite-imagery backbone already applied in environmental monitoring [2]
Group 3
- Tencent has launched a Lite version of its 3D world model, cutting memory requirements to below 17GB, a 35% reduction, so it runs efficiently on consumer-grade graphics cards [3]
- Technical breakthroughs include dynamic FP8 quantization, SageAttention quantization, and caching algorithms that boost inference speed by more than 3x with less than 1% accuracy loss [3]
- Users can generate a complete navigable 3D world from a sentence or an uploaded image, with support for 360-degree panoramic generation and Mesh file export for seamless integration with games and physics engines [3]
Group 4
- Kunlun Wanwei released six models from August 11 to 15, covering video generation, world models, unified multimodal models, agents, and AI music creation [4]
- The latest music model Mureka V7.5 significantly improves the tonal quality and articulation of Chinese songs, enhancing vocal authenticity and emotional depth through optimized ASR technology and surpassing leading foreign music models [4]
- MoE-TTS, an MoE-based voice synthesis framework driven by character descriptions, was also released, letting users precisely control voice features and styles through natural language and outperforming closed-source commercial products under open-data conditions [4]
Group 5
- OpenAI has released a programming prompt guide for GPT-5, emphasizing clear, non-conflicting instructions to avoid confusing the model [5][6]
- It suggests matching reasoning effort to task complexity, using XML-like structured rules for complex tasks, and planning with self-reflection before execution for zero-to-one tasks [6]
Group 6
- The first humanoid robot sports event featured competitions including running, soccer, boxing, dance, and martial arts, with the Unitree (Yushu) robot winning the 1500m race [7]
- The 5v5 soccer group matches demonstrated the real-time computation and collaboration capabilities of robot players, with standout performances from specific players [7]
- The event featured commentary focused on AI knowledge, with humorous moments such as robots colliding and falling over mid-game [7]
Group 7
- DeepMind's Genie 3 model can generate 720p HD visuals at 24 frames per second and create interactive worlds from a single sentence, showcasing advanced memory capabilities [8]
- The model's representation of physical laws improves as training data scale and depth increase, marking a significant step toward AGI [8]
- Future development will focus on realism and interactivity, potentially providing unlimited training scenarios for robots to overcome data limitations [8]
Group 8
- OpenAI's CEO hinted at plans to invest trillions in building data centers and suggested that an AI might become CEO within three years [9]
- He confirmed the development of AI devices in collaboration with Jony Ive and acknowledged the increasing value of human-created content [9]
- He believes the current "AI bubble" resembles the internet bubble but emphasizes that AI is a crucial long-term technological revolution [9]
Group 9
- OpenAI's chief scientist discussed how definitions of AGI have evolved from abstract concepts to multidimensional capabilities, highlighting the need to assess practical application value [10]
- Researchers noted that AI progress has exceeded expectations, with models excelling in competitions and demonstrating strong reasoning and creative thinking [10]
- Experts recommend not abandoning programming education but treating AI as a supportive tool, emphasizing structured and critical thinking [11]
Group 10
- Sierra AI's founder predicts the AI market will split into three main tracks: frontier foundation models, AI toolchains, and application-layer agents, with the last presenting the greatest opportunities [12]
- Agents can significantly boost productivity, shifting from "software enhancing human efficiency" to "software completing tasks independently," akin to the early impact of computers [12]
- Many long-tail agent companies will emerge, mirroring the evolution of the software market, with pricing based on business outcomes rather than technical details [12]
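Both Gemma 3 270M's on-device deployment and Tencent's Lite 3D world model lean on low-bit weight quantization (INT4 and FP8 respectively). A minimal pure-Python sketch of symmetric per-tensor INT4 quantization, illustrative only and not either vendor's actual scheme:

```python
def int4_quantize(weights):
    """Symmetric per-tensor INT4 quantization: map floats onto the
    signed 4-bit integer range [-8, 7] using one shared scale factor."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0  # guard all-zero input
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def int4_dequantize(q, scale):
    """Recover approximate float weights from the quantized integers."""
    return [v * scale for v in q]

# Each weight now needs 4 bits plus one shared scale, instead of 32 bits.
q, s = int4_quantize([1.75, -0.5, 0.25, 0.1])
approx = int4_dequantize(q, s)
```

The memory saving (4 bits vs. 32 per weight) is what shrinks a model like Gemma 3 270M to a 241MB download; the cost is the small rounding error visible when dequantizing.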
Six Releases in One Week: Kunlun Wanwei Pushes Multimodal AI to New Heights
QbitAI (量子位) · 2025-08-17 09:00
Core Viewpoint
- Kunlun Wanwei has launched six new models in one week, showcasing its advancements in multimodal AI applications, including video generation, world models, and AI music creation, indicating a strategic push in the AI sector [2][5][63].
Group 1: Model Launches
- The company released the SkyReels-A3 model, designed for digital-human live-streaming, which can generate realistic videos driven by audio input, enhancing the e-commerce landscape [9][10][16].
- Matrix-Game 2.0, an upgraded interactive world model, was introduced, boasting real-time generation and long-sequence capabilities, positioning it as a competitor to Google's Genie 3 [19][20][22].
- The Matrix-3D model was launched, integrating panoramic video generation and 3D reconstruction, breaking barriers between content generation and interaction [25][27].
- Skywork UniPic 2.0 was unveiled as a unified multimodal model capable of image understanding, generation, and editing, demonstrating a new training paradigm that reduces hardware requirements [29][31][33].
- The Skywork Deep Research Agent v2 was released, enhancing multimodal capabilities for deep research and content generation [37][38].
- Mureka V7.5, a music generation model focused on Chinese music, was launched, showcasing significant improvements in emotional expression and musicality [53][54][56].
Group 2: Strategic Insights
- Kunlun Wanwei's strategy emphasizes vertical integration in AI, focusing on high-frequency application scenarios rather than general-purpose agents, which is seen as a more viable approach for future development [70][72][76].
- The company has committed substantial resources to R&D, with a projected R&D expenditure of 1.54 billion yuan in 2024, a 59.5% year-on-year increase, and a workforce of 1,554 dedicated to AI research [73][74].
- The open-source approach adopted by Kunlun Wanwei has positioned it as a leader in the AI ecosystem, contributing to its recognition as one of the "Top 16 AI Open Source Companies in China" [5][78].