Google Genie 3 - The Most Advanced World Simulator Ever...
Matthew Berman · 2025-08-05 14:02
Model Overview
- Google announced Genie 3, a general-purpose world model for generating diverse interactive environments [1][8]
- Genie 3 allows real-time interaction with improved consistency and realism compared to Genie 2 [12]
- The model generates high-quality environments at 720p resolution [3]

Technical Aspects
- For autoregressive generation, Genie 3 conditions on the entire previously generated trajectory, not just the previous frame [15] (see the rollout sketch after this summary)
- Consistency in Genie 3 is an emergent capability arising from training scale, not pre-programming [19]
- Genie 3 generates dynamic, rich worlds frame by frame from a world description and user actions, unlike methods that rely on an explicit 3D representation [20]

Potential Applications
- World models like Genie 3 can be used to train robots and agents [9]
- The technology has potential applications in creating video games, movies, and television shows [9]
- Google positions world models as a key step toward AGI, since they give AI agents unlimited simulation environments to train in [9][10]

Comparison with Previous Models
- Genie 3 demonstrates significant improvements in consistency, detail, and generation length over Genie 2 [22][23]
- Genie 3 allows deeper world exploration than Genie 2 [23]

Interactive Features
- Users can prompt events in real time, adding elements to the scene [21]
- The model demonstrates realistic interactions, such as water moving out of the way of a jet ski and reflections appearing in mirrors [6]
- The model can simulate actions like painting, with paint applied only when the brush touches the wall [29][30]
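The trajectory-conditioning point is the core technical claim, so here is a minimal Python sketch of what "condition on the whole trajectory, not just the last frame" means for an autoregressive world model. All names (`Rollout`, `model.predict`, the `event` parameter) are hypothetical illustrations; Genie 3's actual architecture and API are not public.

```python
# Minimal sketch of trajectory-conditioned autoregressive frame generation.
# The model interface is a hypothetical stand-in, not Genie 3's real API.
from dataclasses import dataclass, field

@dataclass
class Rollout:
    description: str                              # text prompt describing the world
    frames: list = field(default_factory=list)    # the full generated trajectory
    actions: list = field(default_factory=list)   # one user action per step

def generate_frame(model, rollout: Rollout, action, event=None):
    """Generate the next frame conditioned on the ENTIRE trajectory so far,
    not just the previous frame -- the property attributed to Genie 3 [15]."""
    frame = model.predict(
        description=rollout.description,
        past_frames=rollout.frames,    # whole history -> long-range consistency
        past_actions=rollout.actions,
        action=action,
        event=event,                   # optional promptable world event [21]
    )
    rollout.frames.append(frame)
    rollout.actions.append(action)
    return frame
```

Conditioning on the whole history rather than the last frame alone is what lets previously seen regions stay consistent when the user looks away and back.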
China Went HARD...
Matthew Berman · 2025-07-24 00:30
Model Performance & Capabilities
- Qwen3-Coder rivals Anthropic's Claude family in coding performance, scoring 69.6% on SWE-bench Verified versus Claude Sonnet 4's 70.4% [1]
- The most powerful variant, Qwen3-Coder-480B, is a mixture-of-experts model with 480 billion total parameters and 35 billion active parameters per token [2][3] (see the routing sketch below)
- The model supports a native context length of 256K tokens, extendable to 1 million tokens with extrapolation methods, which strengthens tool calling and agentic use [4]

Training Data & Methodology
- The model was pre-trained on 7.5 trillion tokens with a 70% code ratio, improving coding ability while preserving general and math skills [5]
- Qwen2.5-Coder was used to clean and rewrite noisy data, significantly improving overall data quality [6]
- Code RL training was scaled across a broader, more diverse set of real-world coding tasks to unlock the full potential of reinforcement learning [7][8]

Tooling & Infrastructure
- Qwen launched Qwen Code, a command-line tool adapted from Gemini CLI, enabling agentic, multi-turn execution with planning [2][5][9]
- A scalable system was built to run 20,000 independent environments in parallel on Alibaba Cloud infrastructure for self-play [10] (see the parallel-rollout sketch below)

Open Source & Accessibility
- The model is hosted on Hugging Face, making it free to use and try out [11] (a hedged loading snippet follows)
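To make the 480B-total / 35B-active distinction concrete, below is a toy mixture-of-experts layer in PyTorch: every expert holds parameters, but only the top-k routed experts execute per token. Expert count, k, and dimensions are illustrative placeholders, not Qwen3-Coder's actual configuration.

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Toy mixture-of-experts layer. All experts hold parameters, but only
    the top-k routed experts run per token -- which is how a 480B-parameter
    model can activate only ~35B parameters per forward pass. Sizes here
    are illustrative, not Qwen3-Coder's real configuration."""
    def __init__(self, dim=64, num_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.k = k

    def forward(self, x):                                # x: (tokens, dim)
        scores = self.router(x)                          # (tokens, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)       # pick k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out
```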
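The 20,000-environment figure describes RL rollout infrastructure. Below is a minimal single-machine analogue using Python's standard process pool; `run_episode` is a hypothetical stand-in for an agentic coding episode, and the real system distributes work across Alibaba Cloud at far larger scale.

```python
# Minimal single-machine analogue of running many independent RL
# environments in parallel; the real system runs ~20,000 on Alibaba Cloud.
from concurrent.futures import ProcessPoolExecutor
import random

def run_episode(env_id: int) -> float:
    """Hypothetical stand-in for one coding episode; returns a scalar reward."""
    random.seed(env_id)
    return random.random()  # placeholder for real task execution + scoring

if __name__ == "__main__":
    num_envs = 64  # scaled down from 20,000 for illustration
    with ProcessPoolExecutor() as pool:
        rewards = list(pool.map(run_episode, range(num_envs)))
    print(f"mean reward across {num_envs} envs: {sum(rewards) / num_envs:.3f}")
```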
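Since the weights are on Hugging Face, the standard `transformers` loading pattern should apply. The repo ID below follows Qwen's published naming but is an assumption to verify on the Hub, and the 480B variant realistically requires a multi-GPU cluster.

```python
# Hedged sketch: loading Qwen3-Coder with Hugging Face transformers.
# The repo ID is assumed from Qwen's naming convention; verify on the Hub.
# Running the full 480B model needs multiple GPUs -- this is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-Coder-480B-A35B-Instruct"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```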