AI Is No Longer Confined to the Screen: Google Unveils Project Aura, Opening a New Paradigm for Intelligent Interaction with the Physical World

Core Insights
- Google officially launched Project Aura, an XR hardware reference design developed in collaboration with Chinese AR hardware company XREAL, marking a significant advancement in the Android XR operating system architecture [1][3]
- Project Aura integrates Gemini AI with spatial-awareness capabilities, shifting AI from "screen-based intelligence" to "physical-world intelligence" [1][3]

Group 1: Project Aura Overview
- Project Aura is positioned as the closest realization yet of the ideal Android XR hardware, enabling Gemini AI to construct environmental semantic maps and understand user intent in real time [3][5]
- The strategic goal of Android XR is to create an open, unified extended-reality platform that embeds AI into the real world [3][5]

Group 2: Technical Innovations
- Project Aura features a 70° optical field of view (FOV), approaching the practical upper limit for consumer AR, allowing digital content to blend more naturally with the physical environment [5][6]
- The X1S spatial computing chip, developed by XREAL, is optimized for spatial AI and provides a high-efficiency inference path from sensor input to semantic output [6][8]
- Gemini AI is deeply integrated into the Android XR operating system as an OS-level service rather than a standalone application, enabling continuous, context-aware interaction [6][8]

Group 3: Manufacturing and Future Outlook
- Project Aura's core technology relies heavily on Chinese manufacturing; key components such as the X-Prism optical module and the X1S chip are developed and produced in China [8]
- Project Aura's global supply chain is centered in the Yangtze River Delta, enabling rapid hardware iteration cycles [8]
- Project Aura is slated for commercial availability in 2026 and could shift the XR industry from "display devices" to "spatial intelligent terminals" [8]
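To put the 70° FOV figure in perspective, the visible width w of the field of view at viewing distance d follows from basic geometry: w = 2·d·tan(FOV/2). The sketch below compares a 70° FOV against a narrower display at an assumed 2 m viewing distance (the 2 m distance and the 30° comparison figure are illustrative assumptions, not values from the article):

```python
import math

def visible_width(fov_deg: float, distance_m: float) -> float:
    """Width covered by a horizontal FOV at a given distance: w = 2*d*tan(fov/2)."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

# Illustrative comparison (distances/FOVs are assumptions, not from the article):
for fov in (70, 30):
    print(f"{fov}° FOV at 2 m spans {visible_width(fov, 2.0):.2f} m")
# A 70° FOV covers roughly 2.8 m of virtual canvas at 2 m,
# versus about 1.1 m for a 30° display at the same distance.
```

The rough 2.6x difference in usable canvas width is why a wider optical FOV matters for overlaying digital content on a room-scale physical environment rather than a floating window.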