Fudan University's 漆远 (Qi Yuan): Open Source, Value Delivery, and Safety and Trustworthiness Are the Trends in AI Development
Xin Lang Ke Ji · 2025-09-11 06:22
Core Insights
- The director of Fudan University's AI Innovation and Industry Research Institute argues that the development of artificial intelligence (AI) is characterized by three main trends: open source, value delivery, and safety and trustworthiness [1][5].

Group 1: Open Source
- The most significant change in the AI field in 2025 is the shift of open source from concept to reality, reshaping the entire industry ecosystem [1].
- The emergence of DeepSeek has transformed the generative AI landscape, achieving "tenfold growth and efficiency improvement" through its open-source architecture and strong capabilities [1].
- Major industry players such as OpenAI are recognizing the value of open source, as shown by OpenAI's first open-source release in six years, signaling a shift in industry perspective [1].

Group 2: Value Delivery
- AI is evolving from "selling tools" to "selling results," moving from auxiliary tools toward deliverable value systems such as "Copilot" and "Auto Pilot," which depend on deep integration with industry-specific knowledge [1].
- In medicine, the "Renewal Intelligent Agent" has been deployed at Zhongshan Hospital, demonstrating the advantages of deeper contextual understanding and higher-quality data, and enabling comprehensive interpretation of multimodal data [2].

Group 3: Safety and Trustworthiness
- Safety and trustworthiness are the foundational requirements for AI development, given concerns about fabrication and "hallucination" in large models [2].
- Model accuracy in the medical field remains notably low, with some models achieving only 55% accuracy, a significant concern [2].
- Several risk cases illustrate the difficulty of distinguishing true from false information, such as AI-generated doctoral theses and deepfake scams [2].
- Key technological pathways proposed to enhance safety include explainable AI, retrieval-augmented generation (RAG) combined with neural-symbolic systems, high-quality data governance, adversarial techniques, and self-awareness in models [3][4][5].
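Of the pathways listed above, retrieval-augmented generation (RAG) is the most directly illustrable: the model's answer is grounded in retrieved documents rather than generated from parametric memory alone, which curbs hallucination. The sketch below is a minimal, hypothetical illustration; the corpus, query, and word-overlap scorer are stand-ins for a real vector database and LLM call.

```python
# Minimal RAG sketch (hypothetical example, not from the article).
# A real system would embed documents, search a vector index, and
# send the assembled prompt to an LLM; here a toy word-overlap
# scorer and a printed prompt stand in for those components.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy scorer)."""
    q_words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the answer in retrieved text to reduce fabrication."""
    context = "\n".join(retrieve(query, corpus))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "Patient records must be interpreted alongside imaging data.",
    "Open-source models reduce deployment cost.",
    "Hospital pilot programs report multimodal data gains.",
]
print(build_prompt("How is patient imaging data interpreted?", corpus))
```

The key design point is that the generator only sees retrieved evidence, so its claims can be traced back to source documents, which is what makes RAG attractive for high-stakes domains like the medical use case described above.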