腾讯研究院
The Sixth Breakthrough
腾讯研究院· 2025-09-25 08:33
Core Insights
- The article outlines five major breakthroughs in the evolution of intelligence, from the development of basic navigation in early organisms to the potential emergence of superintelligence in artificial entities [2][3][5][11]

Breakthroughs in Intelligence
- **First Breakthrough: Turning** - Approximately 600 million years ago, early bilateral animals evolved a simple nervous system that allowed basic navigation by distinguishing between positive and negative stimuli [2]
- **Second Breakthrough: Reinforcement** - Around 500 million years ago, the first vertebrates developed a brain structure that enabled learning from past experiences, establishing a foundation for emotional and cognitive traits [3]
- **Third Breakthrough: Simulation** - About 100 million years ago, early mammals developed the ability to mentally simulate actions and events, leading to advanced planning and fine motor skills [4]
- **Fourth Breakthrough: Mentalization** - Between 10 and 30 million years ago, early primates evolved the capacity to understand their own and others' mental states, enhancing social interactions and learning [5]
- **Fifth Breakthrough: Language** - Language emerged as a means of connecting internal simulations across individuals, allowing knowledge to accumulate across generations [5]

Evolutionary Context
- Human history can be divided into two main chapters: the evolutionary chapter, covering the biological development of modern humans, and the cultural chapter, which encompasses the rapid advances of civilization over the last 100,000 years [6][7]
- The article emphasizes the significance of the last 100,000 years in shaping human civilization, contrasting it with the far longer evolutionary timeline [6]

Future of Intelligence
- The article posits that the next breakthrough may be the emergence of superintelligence, in which artificial entities surpass biological limitations and attain unprecedented cognitive capabilities [9][10]
- It discusses the implications of this potential shift, including the redefinition of individuality and the evolution of intelligence beyond biological constraints [10][11]

Philosophical Considerations
- The article raises critical questions about humanity's goals as it approaches the sixth breakthrough, emphasizing the importance of values and choices in shaping the future of intelligence [11][12]
Tencent Research Institute AI Express 20250925
腾讯研究院· 2025-09-24 16:01
Group 1: AI Tools and Applications
- Google has launched Mixboard, an AI drawing tool powered by Nano Banana, allowing users to visualize ideas instantly using natural language [1]
- Alibaba introduced the Wan2.5 Preview model, which can generate synchronized audio-visual videos, supporting 1080P HD video at 24 frames per second [2]
- Kuaishou's Keling 2.5 Turbo model has reduced costs by nearly 30% while improving the quality of generated sports-action videos [3]
- Mita AI has unveiled an "Agentic Search" mode, enabling users to perform multiple tasks simultaneously through a new search paradigm [4]
- Suno has released its V5 model, claiming it is the most powerful music generation model to date, offering studio-quality sound [5][6]

Group 2: Robotics and AI Development
- Wang Xingxing of Yushu Technology highlighted the challenges in general-purpose robotics, including cabling issues and AI chip power limitations [8]
- The Google Cloud AI entrepreneur report emphasizes speed and innovation as core competitive advantages in the AI era [9]

Group 3: AI Chip Market Dynamics
- NVIDIA's $5 billion investment in Intel is expected to reshape the PC and data center markets, posing a significant threat to AMD and ARM [10]
- Huawei is emerging as a strong competitor in the AI chip sector despite U.S. sanctions, making progress on 7nm chips and custom HBM [10]
- AI computing expenditure is projected to rise from $360 billion to approximately $500 billion, with Oracle capitalizing on major clients such as OpenAI [10]

Group 4: Future of AI Infrastructure
- Sam Altman envisions a future in which AI becomes a fundamental economic driver and a basic human right, proposing factories dedicated to producing AI infrastructure [12]
- He emphasizes that increasing computing power is key to generating revenue and plans to build substantial AI infrastructure in the U.S. [12]
Chinese Public Attitudes Toward and Use of Generative AI | Annual Survey
腾讯研究院· 2025-09-24 07:03
Core Insights
- Generative AI has achieved near-total penetration among Chinese adults and has become deeply integrated into their daily work and study routines [2][4]
- The public holds a complex mindset, combining high expectations for technological progress with deep anxieties about employment prospects, information authenticity, and social equity [2][5]

Public Participation in AI
- The survey indicates an impressive penetration rate for generative AI, with 96.2% of respondents having used AI-generated content (AIGC) products or features [4]
- Over two-thirds (67.7%) of users engage with AIGC products daily, and 30% are heavy users who turn to these tools multiple times a day [5]
- The primary motivations for using AI are text processing (72%) and information retrieval (70.9%), with learning (75.7%) and work (70.6%) as the main usage scenarios [5][26]

Market Dynamics and User Willingness to Pay
- Approximately 75% of users are either already paying (16.1%) or willing to pay for quality services (59%), with most paid subscriptions under 100 RMB per month [5][25]
- A preference for domestic AI products is evident, with applications such as Doubao, DeepSeek, and Tencent Yuanbao leading the market [17]
- Users show a strong preference for monthly subscriptions (30.2%) and one-time payments (28.4%), indicating demand for flexible payment options [22][23]

Social Impact and Concerns
- There is significant anxiety about AI's impact on professional skills and job security, with 77% concerned about skill devaluation and 70% about job replacement [31][32]
- The public perceives the primary risks of generative AI to be the spread of misinformation (60.4%), job displacement (59.7%), and privacy breaches (46.7%) [50][53]
- Sentiment toward AI is mixed: 50.2% express excitement about its application, while 46.3% feel both hopeful and anxious [42][45]

Age and Education Correlation
- Younger individuals show higher engagement with generative AI: 73.9% of those aged 30-39 and 69.4% of those aged 20-29 use it daily [8]
- Higher education levels correlate with greater usage frequency, with 81.8% of respondents holding a graduate degree using AI daily [11]

Future Outlook
- The public's cautious optimism reflects a recognition of AI's potential benefits, with 71.9% believing its overall impact will be positive [45]
- Demand for lifelong learning is expected to rise as individuals seek to upgrade their skills in response to AI's influence on the job market [40]
Tencent Research Institute AI Express 20250924
腾讯研究院· 2025-09-23 16:01
Group 1: Nvidia and OpenAI Partnership
- Nvidia announced a strategic partnership with OpenAI, planning to invest up to $100 billion, with OpenAI deploying up to 10 gigawatts of Nvidia systems, equivalent to 4-5 million GPUs [1]
- The first phase of the system is set to come online in the second half of 2026, built on Nvidia's Vera Rubin platform [1]
- The two companies will jointly optimize the technical roadmap for models and for infrastructure software and hardware in support of OpenAI's pursuit of artificial general intelligence; Nvidia's stock rose nearly 4% following the announcement [1]

Group 2: Wuwen Xinqun's Agentic Infra
- Wuwen Xinqun launched an infrastructure agent swarm that uses a multi-agent collaborative architecture to cover modules such as model selection, resource operation, troubleshooting, and cluster operation and maintenance [2]
- The solution transforms the traditional production model from IaaS to PaaS to MaaS to Agent applications, building a highly collaborative system centered on intelligent agents and significantly improving resource utilization and operational efficiency [2]
- Collaborations with clients such as Nia TA and Soul have delivered a fivefold increase in iteration speed and a hundredfold expansion in operational capacity, promoting the shift from the "AI infrastructure paradigm" to "Agentic Infra" [2]

Group 3: Alibaba's Qwen3-Omni Model
- Alibaba's Tongyi team has open-sourced the Qwen3-Omni multimodal model, which processes text, image, audio, and video inputs, supports real-time streaming responses, and outputs text and speech simultaneously [3]
- The model achieved state-of-the-art (SOTA) results on 32 of 36 audio and audio-video benchmarks, surpassing strong closed-source models such as Gemini-2.5-Pro, and supports 119 text languages, 19 speech-understanding languages, and 10 speech-generation languages [3]
- Alibaba also open-sourced the Qwen3-TTS-Flash speech synthesis model and the Qwen-Image-Edit-2509 image editing model; the former supports 17 voice timbres and 10 languages, while the latter adds multi-image editing and single-image consistency enhancements [3]

Group 4: Kimi's Agent Membership Service
- Kimi introduced an Agent membership service, allowing users to receive a full refund of previous tipping amounts with their first subscription [4]
- The membership tiers are named after musical tempos: the free tier is Adagio, the paid tiers are Andante (49 yuan) and Moderato (99 yuan), and an overseas tier, Vivace, costs $199 [4]
- The main difference between paid and free users is the number of Agent usage instances; mid- and high-tier subscriptions include equivalent API exchange vouchers, and higher-tier members receive priority access during peak times [4]

Group 5: MiniCPM-V 4.5 Model Release
- Tsinghua University's NLP lab and Mianbi Intelligence released the MiniCPM-V 4.5 technical report; with 8 billion parameters, the model surpasses much larger models such as GPT-4o-latest and Qwen2.5-VL-72B [5]
- The model employs three innovations: a unified 3D-Resampler architecture for high-density video compression, a document-oriented unified OCR knowledge-learning paradigm, and controllable hybrid fast/deep-thinking multimodal reinforcement learning [6]
- MiniCPM-V 4.5 scored an average of 77.0 on the OpenCompass comprehensive evaluation and shows high inference efficiency, with time costs on VideoMME only one-tenth those of comparable models; it has been downloaded over 220,000 times on HuggingFace and ModelScope (a minimal loading sketch follows this digest) [6]

Group 6: ZhiYuan Robot's GO-1 Model
- ZhiYuan Robot open-sourced the GO-1 general-purpose embodied foundation model, built on the world's first Vision-Language-Latent-Action (ViLLA) architecture, which bridges the semantic gap between image-text input and robot action execution [8]
- The model features a three-layer collaborative design: a multimodal understanding layer based on InternVL-2B, an implicit planner, and a diffusion-based action expert, validated across a variety of robots and simulation environments [8]
- ZhiYuan Robot also launched Genie Studio, a one-stop development platform offering a full-stack solution covering data collection, data management, model training, fine-tuning, evaluation, and deployment, and supporting the LeRobot universal data format for compatibility with other robot platforms [8]

Group 7: OpenAI's Future AI Development
- Lukasz Kaiser, a co-author of the original Transformer paper now at OpenAI, is involved in developing GPT-5 and related reasoning models and emphasizes the potential of large models for cross-domain learning [9]
- Kaiser proposed the concept of "One Model To Learn Them All" in 2017 and predicts that the next phase of AI will focus on teaching models to "think" [9]
- He forecasts a paradigm shift in AI computation from large-scale pre-training to massive reasoning-time computation over small amounts of high-quality, task-specific data, aligning more closely with human intelligence [9]
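Several of the releases above, including Qwen3-Omni and MiniCPM-V 4.5, are distributed as open weights on Hugging Face and ModelScope. The snippet below is a minimal sketch of pulling such a checkpoint with the `transformers` AutoModel API; the repo id and the `chat` helper shown here are assumptions for illustration and should be checked against the official model card before use.

```python
# Minimal sketch: loading an open-weight multimodal checkpoint from Hugging Face.
# The repo id below is assumed for illustration; consult the model card for the
# exact id, processor class, and input format.
from transformers import AutoModel, AutoTokenizer

repo_id = "openbmb/MiniCPM-V-4_5"  # hypothetical id; substitute the published one

model = AutoModel.from_pretrained(
    repo_id,
    trust_remote_code=True,   # these releases typically ship custom modeling code
    torch_dtype="auto",
    device_map="auto",        # requires the accelerate package
)
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)

# Custom multimodal models usually expose a chat-style helper; the exact method
# name and argument layout vary per release and are assumptions here.
# response = model.chat(image=..., msgs=[{"role": "user", "content": "Describe this."}],
#                       tokenizer=tokenizer)
```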
The Game Economy Is on the Rise
腾讯研究院· 2025-09-23 08:43
Core Viewpoint
- The article frames the game economy as a dual engine of the cultural and digital economies, highlighting its role in driving innovation, economic growth, and cultural integration [6][9]

Group 1: Game Economy Definition and Characteristics
- The game economy is defined as a new economic form centered on the gaming industry that integrates software and hardware technology, IP content, and user experience to transform and utilize cultural resources [9]
- It exhibits four main characteristics: cultural expressiveness, technological innovation, industrial connectivity, and sustained consumption [19][20][22][24]

Group 2: Economic Impact and Growth
- The Chinese gaming industry has grown continuously into one of the largest gaming markets in the world, with domestic game sales expected to exceed 450 billion yuan in 2024 [11][12]
- The industry contributes significantly to employment, with over 2.74 million people working in the sector as of 2020, a number expected to grow as new business models emerge [12][13]

Group 3: Cultural Integration and Innovation
- Games serve as a vital medium for cultural transmission and innovation, bringing traditional culture into modern contexts; 81.6% of respondents affirm that traditional culture enhances their gaming experience [21]
- The gaming industry fosters cross-industry collaboration, enhancing cultural and economic value through IP development and the integration of cultural content [14][25]

Group 4: Future Development and Recommendations
- The article suggests establishing a comprehensive research and measurement system for the game economy, including standardized statistical methods and ongoing monitoring of the gaming market [30]
- It advocates strengthening the synergy between gaming and other cultural industries, promoting cross-industry cooperation, and investing in gaming-related technology to drive economic vitality [31][33]
Tencent Research Institute AI Express 20250923
腾讯研究院· 2025-09-22 16:01
Group 1
- MediaTek launched its new flagship 5G AI chip, the Dimensity 9500, which uses a third-generation 3nm process and an all-big-core architecture, integrates over 30 billion transistors, and delivers a 111% improvement in NPU performance with 56% lower power consumption [1]
- The chip features a dual NPU architecture for peak performance and efficiency, introduces an in-memory computing design and a BitNet 1.58-bit quantized inference framework, and supports on-device model training (a quantization sketch follows this digest) [1]
- In practical applications it supports 128K long-text processing and 4K image generation, and flagship devices from manufacturers such as vivo and OPPO are set to use the chip for personalized AI scenarios [1]

Group 2
- OpenAI has invested $16 billion in computing resources and plans to spend $350 billion on leased services from 2024 to 2030, with annual expenditure expected to reach $100 billion by 2030 [2]
- The company signed a 5-year, $300 billion computing power contract with Oracle, plus an extra $100 billion for backup servers, breaking the traditional tech-giant model of spending 10%-20% of revenue on R&D [2]
- OpenAI announced that a compute-intensive new product will launch in the coming weeks, but Pro users will need to pay extra, which has led to user dissatisfaction [2]

Group 3
- Google has introduced a new research paradigm for Agents that moves beyond the traditional "plan-retrieve-generate" model, allowing an Agent to draft first and then iteratively learn and self-correct [3]
- The new framework employs a "diffusion denoising" process in which the Agent identifies information gaps in its draft, searches for external evidence, and repeatedly refines the research content [3]
- Google has also added multi-version intelligent self-critique and report-level denoising, outperforming OpenAI's DeepResearch on tasks such as GAIA; the feature is available for trial in Google Agentspace [3]

Group 4
- DeepSeek released the final "Terminus" version of its DeepSeek-V3.1 model, addressing user feedback with improvements in two main areas [4][5]
- The new version alleviates language-consistency issues such as mixed Chinese and English output and further optimizes the performance of the Code Agent and Search Agent [5]
- DeepSeek-V3.1-Terminus is now available across the official app, web platform, mini-program, and DeepSeek API, and the open-source weights can be downloaded from Hugging Face and ModelScope [5]

Group 5
- The Keling 2.5 video model has achieved significant breakthroughs in motion and expression, accurately depicting subtle facial changes and complex emotions while maintaining character consistency across scenes [6]
- The model seamlessly connects actions such as falling, running, and riding a motorcycle, preserving realistic environmental-interaction details and understanding complex causal relationships [6]
- Keling 2.5 excels in action scenes, generating high-quality parkour, jumping, combat, and explosion sequences with greatly enhanced continuity and physical realism; it is currently in gray-release testing for super creators [6]

Group 6
- Meituan's LongCat team has released the efficient reasoning model LongCat-Flash-Thinking, which reaches advanced levels in logic, mathematics, coding, and agent capabilities while maintaining extreme speed [7]
- The new model introduces a pioneering domain-parallel reinforcement learning training method, achieving a threefold speedup through an asynchronous elastic GPU-sharing system, and features a dual-path reasoning framework to strengthen agent capabilities [7]
- In reasoning benchmarks it outperforms open-source models and performs comparably to top closed-source models such as GPT-5 on tests like AIME and LiveCodeBench, and its formal reasoning capability leads all participating models on the MiniF2F-test benchmark [7]

Group 7
- Baidu's Qianfan-VL visual understanding model has been fully open-sourced in three sizes (3B, 8B, and 70B), supporting OCR recognition and educational applications [8]
- The model was developed by Baidu's team on top of open-source models, with all computation completed on Baidu's self-developed Kunlun P800 chips and single-task parallel training at a scale of 5,000 cards [8]
- The Qianfan-VL series demonstrates chain-of-thought capability, full-scene OCR recognition, and complex document understanding, performs well on multiple benchmarks, and is available to try for free on Baidu Smart Cloud [8]

Group 8
- MIT Technology Review has released the 2025 "35 Innovators Under 35" Asia-Pacific list, featuring 35 innovators from fields such as AI, robotics, and materials [10]
- Innovators such as Xia Fei and Min Shiyuan have made breakthroughs in artificial intelligence, including embodied intelligence and non-parametric large language models [10]
- China has the highest number of honorees, with 82 individuals selected across 11 editions as of 2024, surpassing Singapore's 76, reflecting the Asia-Pacific region's shift from technology follower to innovation leader [10]

Group 9
- The core team behind Nano Banana suggests that the quality of image generation models is nearing its ceiling, and the next challenge is to bring the "world knowledge" of LLMs into image models so they can understand user intent [11]
- While the quality ceiling of existing image models is close to being reached, there is still substantial room to raise the "lower limit," and future work will focus on improving model expressiveness and performance in complex scenarios [11]
- Future interfaces will integrate text, images, and voice, but user expectations of instant "finished product" generation are unrealistic; AI models and traditional tools will coexist in professional workflows for a long time [11]
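For context on the BitNet 1.58-bit inference support mentioned in Group 1, the sketch below shows the generic "absmean" ternary quantization that 1.58-bit schemes are built on: every weight is mapped to one of {-1, 0, +1} plus a single per-tensor scale, so each weight carries about log2(3) ≈ 1.58 bits. This is an illustrative reference implementation of the published BitNet b1.58 recipe, not MediaTek's on-chip pipeline.

```python
# Illustrative absmean ternary quantization (BitNet b1.58 style).
import numpy as np

def absmean_ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Quantize a weight tensor to {-1, 0, +1} plus a per-tensor scale."""
    scale = np.abs(w).mean() + eps           # absmean scaling factor
    q = np.clip(np.round(w / scale), -1, 1)  # ternary codes
    return q.astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate full-precision tensor for comparison."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.02, size=(256, 256)).astype(np.float32)
    q, s = absmean_ternary_quantize(w)
    err = np.abs(w - dequantize(q, s)).mean()
    print(f"scale={s:.5f}, mean abs error={err:.5f}, codes={np.unique(q)}")
```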
Qiu Zeqi: The So-Called "Intelligence Gap" May Stem from Our Arrogance
腾讯研究院· 2025-09-22 08:48
Core Viewpoints
- Whether AI leads to a decline in human intelligence is not a binary question; framing it that way reflects a misunderstanding similar to questions asked in the industrial era [3][10]
- Human cognition is still in the early stages of being understood; human thought is characterized by leaps and sudden changes that are not yet fully explained [3][8]
- Current AI systems primarily absorb human knowledge and function more like a talking encyclopedia, lacking the ability to interpret non-verbal cues and emotional contexts [6][8]

Group 1: AI and Human Cognition
- AI's learning is based on vast amounts of human-generated data, but the implications of that data's background and embedded values remain uncertain [4][12]
- Interaction with AI should be seen as a collaborative process that enhances human thinking, rather than as a simple tool for information retrieval [11][15]
- Questioning and challenging AI outputs is emphasized as a way to foster deeper cognitive engagement [11][12]

Group 2: The Role of AI in Education and Development
- Foundational skills such as language, logic, and cognitive ability are increasingly important in the AI era [13][14]
- The concept of "companionship" in human development is paralleled by a potential market for private AI applications such as AI companions and toys [4][14]
- Education should shift toward cognitive enhancement rather than mere knowledge transmission, encouraging discussion with AI to deepen understanding [14][15]

Group 3: The Digital Divide and Social Diversity
- AI has the potential to equalize access to knowledge, but disparities in how it is used can widen the gap between different user groups [16]
- The notion of an "intelligence gap" may stem from a misperception of one's own position in society, highlighting the need for diverse perspectives [16]
- The subjective experience of life and happiness varies greatly among individuals, underscoring the importance of embracing social diversity [16]
Tencent Research Institute AI Express 20250922
腾讯研究院· 2025-09-21 16:01
Group 1: Chrome Update
- Chrome has undergone its largest update since its 2008 launch, integrating the Gemini AI assistant into the browser for enhanced functionality [1]
- The address bar has been upgraded to the "Omnibox," which intelligently recommends questions based on page content and allows complex queries to be entered directly [1]
- The new version uses Gemini Nano for enhanced security, identifying harmful websites and managing notifications, and is currently available to US users [1]

Group 2: Notion 3.0 Launch
- Notion 3.0 has been officially launched, introducing an Agent feature that can autonomously perform all Notion operations [2]
- The Agent can work independently for up to 20 minutes, completing complex cross-tool tasks such as integrating customer feedback and updating knowledge bases [2]
- The new version includes a highly personalized "memory bank" and will soon support custom Agents for automated tasks and team sharing [2]

Group 3: Tencent's Hunyuan 3D Studio
- Tencent has released Hunyuan 3D Studio, aimed at 3D design professionals, which integrates AI technology to streamline the entire 3D asset production pipeline [3]
- The platform reduces production time from days to minutes and offers a comprehensive pipeline for a wide range of 3D creative tasks [3]
- It features the industry-leading Hunyuan 3D 3.0 model, with innovative capabilities such as segmentation generation and material editing [3]

Group 4: Alibaba's Wan2.2-Animate Model
- Alibaba Cloud has open-sourced the Wan2.2-Animate model, which generates animations for characters and animals and is applicable to short-video creation [4]
- The model improves character consistency and generation quality and offers both character-imitation and role-replacement modes [4]
- The development team built a large dataset for training, and the model surpasses closed-source models in subjective evaluations [4]

Group 5: Luma AI's Ray3 Model
- Luma AI has launched Ray3, billed as the world's first reasoning video model, advancing AI video from experimental to professional use [5][6]
- Ray3 allows fine control over actions and camera movements, generating previews in about 20 seconds at a fraction of the final rendering cost [6]
- The model supports high-fidelity motion and lighting interactions and integrates seamlessly into professional post-production workflows [6]

Group 6: ElevenLabs Studio 3.0
- ElevenLabs has introduced Studio 3.0, a comprehensive AI audio-video editor that consolidates narration, music, sound effects, subtitles, and video editing into a single timeline [7]
- The new version offers over 10,000 AI voices, automatic music generation, and multi-language subtitle capabilities [7]
- The tool is designed for video creators, podcasters, and audiobook authors, with API support for large-scale workflows [7]

Group 7: Xiaomi's Xiaomi-MiMo-Audio Model
- Xiaomi has open-sourced its first native end-to-end speech model, Xiaomi-MiMo-Audio, with 7 billion parameters and over 100 million hours of pre-training data [8]
- The model excels at natural dialogue, audio captioning, and long-audio comprehension, demonstrating voice conversion and style transfer capabilities [8]
- The development team also introduced a lossless compression model and achieved state-of-the-art results on various benchmarks [8]

Group 8: Retro Biosciences' RTR242 Drug Trial
- Retro Biosciences has begun human trials of the RTR242 drug in Australia, which aims to activate the autophagy system in aging cells [9]
- The company's mission is to clear accumulated proteins in the brain to extend healthy human lifespan by 10 years, an approach that differs from traditional Alzheimer's treatments [9]
- OpenAI helped optimize protein interactions for the drug, and the company plans to raise $1 billion to compete with other longevity research firms [9]

Group 9: AI-Generated Genome by Evo
- The Arc Institute and Stanford University used the Evo model to create the world's first AI-generated functional bacteriophage genomes, marking a new era of generative genome design [10][11]
- The research team developed a specialized annotation pipeline to identify all genes in the bacteriophage, producing genomes that carry numerous new mutations [10]
- Experimental validation confirmed that the AI-designed genomes could infect specific host strains, demonstrating the model's ability to coordinate complex mutations [11]

Group 10: OpenAI Codex Applications
- OpenAI has publicly shared seven core internal uses of Codex, including code understanding, refactoring, and performance optimization [12]
- The technical team uses Codex to improve efficiency and code quality through tasks such as generating unit tests and editing multiple files at once [12]
- Six best practices for using Codex were also disclosed, focused on analysis before code generation and maintaining context to improve output quality [12]
Tencent Research Institute AI Weekly Top 50 Keywords
腾讯研究院· 2025-09-20 02:33
Group 1: Key Trends in AI
- The article highlights the top 50 AI keywords for September 15-19, showcasing the week's most dynamic developments in the industry [2][3]
- Major companies such as Huawei, OpenAI, and Tencent are leading various AI initiatives, from chip development to application innovation [3][4]

Group 2: Notable AI Applications
- Huawei's Ascend AI chip plan is a significant development in the chip category [3]
- OpenAI's GPT-5-Codex and xAI's Grok 4 Fast are notable advances in AI models [3]
- Tencent's Hunyuan 3.0 and Meituan's "Lazy Ordering" are examples of innovative AI applications entering the market [3][4]

Group 3: Industry Insights and Opinions
- The article discusses the new landscape of the AI industry as described by Sequoia Capital [4]
- Insights from Anthropic's "AI Economy Index" and Huawei's "Intelligent World 2035" vision reflect the strategic outlook for the future of AI [4]
The Tanyuan Plan and Its Co-Creation Projects Selected for the World Internet Conference Case Collection: Empowering High-Quality Cultural Heritage Preservation with Digital Technology
腾讯研究院· 2025-09-19 07:48
Core Viewpoint
- The article highlights the inclusion of the "Tanyuan Plan 2024" in the World Internet Conference Cultural Heritage Digitalization Case Collection (2025), showcasing innovative projects that integrate digital technology with cultural heritage protection [1][7]

Summary by Sections

Cultural Heritage Digitalization Case Collection
- The World Internet Conference Cultural Heritage Digitalization Case Collection (2025) features 40 exemplary cases selected from hundreds of global submissions, emphasizing innovation and promotional value [1]

Tanyuan Plan 2024
- The Tanyuan Plan 2024 is guided by the National Cultural Heritage Administration and involves collaboration among multiple institutions, focusing on the common needs of cultural heritage work through advanced digital technologies [7]
- The plan aims to address challenges in cultural heritage protection and utilization by leveraging technologies such as high-precision 3D scanning and artificial intelligence [7]

Selected Projects
- Three notable projects under the Tanyuan Plan 2024 include:
  1. "3D Modeling and Automatic Understanding of Micro-Reliefs at the Longmen Grottoes," which addresses technical challenges in traditional 3D modeling [8]
  2. "Value Excavation and Multi-Scenario Interpretation of Great Wall Heritage," which uses drones to gather over 2 million high-definition images for data management [10]
  3. "Natural Muon Imaging Technology for the Protection of the Yungang Grottoes," which offers a non-invasive method for detecting the internal structure of cultural relics [12]

Systematic Exploration and Innovation
- The plan promotes a collaborative ecosystem by integrating stakeholders, breaking down barriers between fields, and creating a sustainable model for cross-domain cooperation [14]
- It has achieved breakthroughs in key technologies, yielding standardized digital protection solutions that raise the technological level of cultural heritage protection [15]

Societal Impact and Value Expansion
- The projects' outcomes contribute to cultural dissemination, public education, and industrial innovation, strengthening cultural awareness and confidence [16]
- The digital cultural achievements have been integrated into educational systems, transforming cultural relics into interactive knowledge carriers [16]