"AI Godmother" Unveils Latest World Model
Cailian Press · 2025-10-17 12:28

Group 1
- The article discusses the launch of RTFM (Real-Time Frame Model), a new real-time interactive 3D world model developed by World Labs, the company founded by AI pioneer Fei-Fei Li. The model is designed around three key principles: efficiency, scalability, and persistence, allowing it to run on a single H100 GPU while rendering persistent, consistent 3D worlds [2]
- World Labs emphasizes that as world-model technology advances, demand for computing power will grow sharply, surpassing the current requirements of large language models (LLMs). To deliver a 4K, 60FPS interactive video stream, a traditional video architecture would need to generate over 100,000 tokens per second, which is economically infeasible on today's computing infrastructure (a rough order-of-magnitude check is sketched after this summary) [2]
- The article also highlights the strategic partnership between OpenAI and Broadcom to deploy 10 gigawatts of AI accelerators, which is expected to give OpenAI a diversified compute supply, reducing reliance on any single supplier and driving down computing costs through competition [3]

Group 2
- The article notes the "Jevons Paradox": advances in AI model technology that improve computing efficiency can raise total consumption of computing resources. For instance, the DeepSeek R1 model released earlier this year delivers strong performance with greater computing efficiency, yet is expected to increase overall demand for computing resources [4]
- World Labs previously released the Marble model, which generates 3D worlds from a single image or text prompt and shows improved geometric structure and more diverse styles than its predecessor. Fei-Fei Li has said that the significance of world models lies in their ability to understand and reason about both textual information and the laws governing the physical world [4]
- Companies across the AI and end-device sectors are increasingly investing in world models: xAI has hired experts from NVIDIA, and competitors such as Meta and Google are also focusing on this area. In China, robotics firms such as Yushu and Zhiyuan have open-sourced their world models [4]

Group 3
- Dongwu Securities notes that as computing power becomes cheaper and more accessible, developers will set more complex models and systems as new benchmarks, increasing parameter counts, context length, and parallelism. While iterations in model architecture may reduce the computing power required for a single inference or training run, video-generating world models such as Genie 3 may demand significantly more computing power overall [5]
- The higher ceiling for AI computing power and an improved competitive landscape are expected to support a higher valuation framework for AI computing than 4G/5G enjoyed, along with a stronger beta [5]
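
As a sanity check on the 100,000-tokens-per-second figure cited in Group 1, the short Python sketch below estimates the per-second token budget for a 4K, 60FPS stream under a few hypothetical spatial-downsampling factors for a frame tokenizer. The downsampling factors are illustrative assumptions, not World Labs' published tokenizer settings; the sketch only shows the order of magnitude involved.

```python
# Back-of-the-envelope estimate of the token rate needed for 4K@60FPS frame
# generation. The downsampling factors are hypothetical, chosen only to show
# how the per-second token count scales with tokenizer compression.

WIDTH, HEIGHT, FPS = 3840, 2160, 60  # 4K resolution at 60 frames per second

def tokens_per_second(spatial_downsample: int) -> int:
    """Tokens/sec if each frame is tokenized on a grid downsampled by the given factor."""
    tokens_per_frame = (WIDTH // spatial_downsample) * (HEIGHT // spatial_downsample)
    return tokens_per_frame * FPS

for factor in (8, 16, 32, 64):
    print(f"{factor:>2}x downsampling: {tokens_per_second(factor):,} tokens/sec")

# Approximate output:
#  8x downsampling: 7,776,000 tokens/sec
# 16x downsampling: 1,944,000 tokens/sec
# 32x downsampling:   482,400 tokens/sec
# 64x downsampling:   118,800 tokens/sec
# Even under very aggressive compression the rate stays above 100,000 tokens/sec,
# far beyond what a typical LLM serving stack sustains for a single user today.
```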