Trustworthy AI
Building an AI "Virtual Development Zone": Guangzhou Huangpu Launches the Nation's First Trustworthy-AI Enablement Platform
Nan Fang Du Shi Bao· 2025-09-12 03:19
Guangzhou Development District (Huangpu District) plans to build a "virtual development zone" for the AI era, deepening the coordinated development of industrial digitalization and digital industrialization. On September 11, the district held the 2025 Artificial Intelligence Innovation Ecosystem Conference and officially unveiled "Bay Area Smart City" (湾区智城), the nation's first trustworthy-AI advanced enablement platform, which will serve as the solid digital foundation of the "virtual development zone."

Huangpu is home to a large number of enterprises, ranking first citywide in the number of enterprises above designated size, and its major industrial clusters generate massive volumes of data every day. "Bay Area Smart City" is positioned as an inclusive AI enablement platform for enterprises in the district: companies using it gain lower computing costs and more trustworthy data support, and can safely and freely develop and trade AI agents and data products.

Through the platform, Huangpu also intends to consolidate and apply the data resources of its major industrial chains, break through barriers to data application and circulation, further guide and cluster emerging industries in the district, and build from Huangpu a common AI industry platform radiating across the Greater Bay Area.

The trustworthy-AI advanced enablement platform "Bay Area Smart City" was officially launched.

Enterprises Can Get Customized AI Solutions and "Earn While They Use"

AI agents are evolving at an astonishing pace, and many small and medium-sized enterprises have fallen into "AI anxiety": they fear missing the technology dividend while struggling with high costs and steep R&D barriers. As large models proliferate across every industry, new risks such as data noise, corpus poisoning, and model hallucination have emerged alongside them.

Riding this momentum, Huangpu District has had the Digital Technology Group under Science City Group build the nation's first trustworthy-AI advanced enablement ...
Opinion | Dr. Du Yu Interviewed by Wu Xiaobo Channel: Interpreting the Mandatory Labeling Policy for AI-Generated Content
Core Viewpoint
- The implementation of the "Artificial Intelligence Generated Synthetic Content Identification Measures" and the "Cybersecurity Technology: Artificial Intelligence Generated Synthetic Content Identification Methods" marks a new phase in the regulation of AI-generated content (AIGC) in China, addressing the risks associated with its rapid development and widespread use [1][2][3].

Policy Implementation
- The new regulations are seen as a timely and necessary upgrade in supervision, establishing a foundation of trust within the industry [2][3].
- The policies move AIGC governance from "industry self-regulation" to "national regulation," signaling a maturing governance system [3][5].

Risk Prevention
- The core objectives of the policy target three key risks:
  1. Preventing fraud and the spread of false information by enabling quick verification of content authenticity [6][7].
  2. Clarifying copyright and content ownership to reduce legal disputes and protect the original-content ecosystem [7].
  3. Preventing internet data pollution by ensuring that low-quality AI-generated content does not degrade model performance [7].

Impact on AI Technology and Industry Applications
- The policy is expected to benefit the industry by shifting the focus of content creation from speed and quantity to quality and credibility, thereby purifying the training-data pool [8][9].
- It aims to serve as a "license for entry" in high-trust sectors such as news, finance, healthcare, and education, alleviating societal concerns and accelerating value realization [8][9].

Long-term Governance Measures
- Four supporting measures are proposed for healthy AIGC development:
  1. Strengthening responsibility-tracing technology to ensure accountability [9][11].
  2. Controlling data quality at the source to enhance content reliability [11].
  3. Establishing a "human + AI" collaborative review mechanism for content verification [11].
  4. Improving public AI literacy through education and outreach [11].

International Comparison
- The regulatory landscape for AIGC varies globally: the U.S. favors self-regulation, the EU has implemented strict preemptive measures, and Japan is taking a cautious approach [12][15].
- China's path combines explicit and implicit identification measures, emphasizing source and process management to curb misinformation [16] (see the sketch below).

Corporate Impact
- The new regulations present both challenges and opportunities for companies, including higher costs for technology upgrades and extended responsibility chains [17][20].
- They also open new business opportunities in "trustworthy AI" and compliance technology, and raise the value of high-quality content [20].

Societal Value
- The policy aims to reshape the content ecosystem and protect the public's cognitive space by preventing the spread of misinformation [21][26].
- The Unknown Artificial Intelligence Research Institute will continue to promote "technology for good" through standard-setting, technological development, and public education [22].
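To make the dual-channel labeling concrete, here is a minimal Python sketch of what explicit plus implicit identification could look like for a generated image. It is an illustration only, not the regulation's technical specification: the metadata keys `AIGC` and `AIGC-Provider` are hypothetical placeholders, and real implementations would follow the national standard's mandated fields.

```python
# Minimal sketch: mark an AI-generated image in both channels the policy
# describes: an explicit, human-visible notice and an implicit,
# machine-readable label in file metadata. Field names are hypothetical.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str, provider: str) -> None:
    img = Image.open(src_path).convert("RGB")

    # Explicit label: visible text rendered onto the image itself.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated", fill=(255, 255, 255))

    # Implicit label: metadata embedded in the PNG container.
    meta = PngInfo()
    meta.add_text("AIGC", "true")              # hypothetical key
    meta.add_text("AIGC-Provider", provider)   # hypothetical key
    img.save(dst_path, "PNG", pnginfo=meta)

label_ai_image("generated.png", "labeled.png", provider="example-model")
```

The design point is that the two channels fail independently: cropping can remove a visible notice but leaves metadata intact, while format re-encoding strips metadata but leaves the visible notice.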
More Than Just "Better at Drawing": Google Releases Gemini 2.5 Flash Image. Why Did Adobe Embrace It First? A Must-Read for Investors
36Ke· 2025-08-28 10:07
Core Insights
- The release of Gemini 2.5 Flash Image-preview by Google marks a significant advancement in AI image generation, transitioning the technology from a "toy" to an "industrial-grade productivity tool" [1][10].
- The model addresses three major pain points in AI-generated content: character consistency, modification difficulty, and style coherence, thus enhancing efficiency and controllability [3][10].

Technological Breakthroughs
- Gemini 2.5 Flash Image-preview enables a "controllable, iterative" creative process, integrating multimodal understanding and world knowledge, allowing the AI to function more like a junior designer [5].
- The model can seamlessly merge multiple images while maintaining character consistency across various scenes and styles, facilitating the creation of cohesive marketing materials [6].
- Users can interact with the model in natural language for precise modifications, leveraging a vast knowledge base to understand complex instructions [6].

Economic Implications
- The cost of generating an image via the API is approximately $0.039, which supports widespread commercial application [7] (see the usage sketch below).
- The integration of Gemini 2.5 into Adobe's products signifies a major industry shift, allowing millions of designers and marketers to use advanced AI capabilities within their existing workflows [11][13].

Market Dynamics
- The demand for high-quality AI image generation is expected to drive significant growth in cloud computing services, particularly for companies like Google Cloud [14].
- The rise of "model as a service" (MaaS) will encourage more SaaS platforms to integrate third-party AI models, fostering a robust API economy [14].

Compliance and Trust
- Google has introduced SynthID, an invisible digital watermark embedded in AI-generated images, enhancing transparency and trust in AI content [15][17].
- This feature is particularly crucial for enterprises focused on brand safety and compliance, allowing them to manage legal and reputational risks effectively [17].

Investment Opportunities
- The emergence of Gemini 2.5 Flash Image-preview presents new investment coordinates, particularly in sectors reliant on visual content such as advertising, film production, and e-commerce [19].
- Companies that effectively adopt AI tools are likely to see improved profit margins and market responsiveness, making "AI adoption rate" a key metric for assessing long-term competitiveness [19].
- AI infrastructure, including AI chips and data centers, will benefit from the increasing demand for computational power [20].
- Companies that successfully integrate top-tier AI models into their ecosystems, like Adobe, are expected to see enhanced user engagement and revenue metrics [20].

Competitive Landscape
- Competition in the AIGC space is intensifying, with Google's release serving as a strong response to rivals like OpenAI and Meta [21].
- Investors should monitor advancements in model performance, ecosystem development, and commercialization efforts among leading tech companies [21].

Ethical Considerations
- The integration of SynthID highlights the growing importance of compliance and trust as competitive advantages in the AI industry [22].
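For a sense of the developer workflow behind the $0.039 figure, below is a minimal sketch using Google's google-genai Python SDK. Treat it as a sketch under assumptions rather than official sample code: the model identifier, parameter names, and response handling should be verified against the current SDK documentation, and the API key is a placeholder.

```python
# Minimal sketch: request one image from Gemini 2.5 Flash Image-preview
# via the google-genai SDK, then save any returned image bytes to disk.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key
response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents="A product photo of a ceramic mug on a wooden table",
)
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:  # image parts carry inline bytes
        with open("mug.png", "wb") as f:
            f.write(part.inline_data.data)

# At roughly $0.039 per image, a 10,000-image campaign costs about $390.
print(f"Estimated cost for 10,000 images: ${10_000 * 0.039:,.0f}")
```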
From "Hallucination" to "Trustworthy": Qi Yuan on How AI Can Cross the "Dare-to-Use" Threshold
Tai Mei Ti APP· 2025-08-05 07:35
Core Insights
- The global AI landscape is transitioning from a phase of technological exploration to one focused on creating tangible value through practical applications of AI technology [2].
- There is a significant issue of homogeneity among current large-model products, leading to market saturation [2].
- The founder of Infinite Light Year, Qi Yuan, emphasizes that while the foundational large-model market appears to be converging, industry applications are on the verge of an explosion, with unpredictable technological breakthroughs still possible [2].

Industry Applications
- Infinite Light Year has developed four major solutions for the financial sector, significantly expanding the coverage of index component stocks from 600 to 2,600 and reducing the rebalancing cycle from quarterly to real-time responses in minutes [4][5].
- The AI investment-research assistant can complete a comprehensive analysis of a financial report within 5 minutes, improving efficiency by over 90% compared to manual analysis [10].

Technological Innovations
- The "Gray Box Large Model" concept proposed by Infinite Light Year aims to combine the probabilistic predictions of large language models with the logical reasoning of symbolic inference to address the issue of AI "hallucinations" [2] (see the sketch below).
- The dual-engine technology system integrates neural-symbolic computing with large models, enabling precise handling of complex logical relationships and accurate predictions based on extensive data [9].

Trust and Compliance
- Trustworthiness is identified as a key factor for the successful implementation of AI in industries, particularly in finance, where compliance with regulations is critical [8].
- Infinite Light Year has introduced a "transparent reasoning mechanism" to enhance user trust by making the AI decision-making process clear and understandable [8].

Future Outlook
- The company is focusing on a dual-domain strategy for 2025, with horizontal development of reusable AI infrastructure and vertical deepening in the financial and scientific intelligence sectors [3].
- The future of AI competition is expected to shift from a focus on computational power to the ability to create value, with a strong emphasis on practical applications that address real-world problems [12].
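The gray-box idea can be pictured as a two-stage pipeline: a probabilistic model proposes, and a deterministic rule layer verifies before anything is accepted. The sketch below is hypothetical and not Infinite Light Year's implementation; the model call, the financial figures, and the tolerance are all illustrative stand-ins.

```python
# Hypothetical gray-box sketch: a large model proposes an extracted figure,
# and a symbolic rule recomputes it from source data before acceptance.
from dataclasses import dataclass

@dataclass
class Proposal:
    answer: float    # e.g., a net margin extracted from a report
    rationale: str   # model-produced reasoning trace, kept for audit

def llm_propose(report_text: str) -> Proposal:
    """Stand-in for a large-model extraction call."""
    return Proposal(answer=0.42, rationale="net_income / revenue")

def symbolic_check(p: Proposal, revenue: float, net_income: float,
                   tol: float = 1e-6) -> bool:
    """Deterministic rule: the proposed ratio must match recomputation."""
    return abs(p.answer - net_income / revenue) < tol

proposal = llm_propose("...annual report text...")
if symbolic_check(proposal, revenue=100.0, net_income=42.0):
    print("accepted:", proposal.answer)
else:
    print("rejected: route to human review")
```

The transparency claim maps onto the kept rationale and the explicit rule: when the check fails, the auditor sees both what the model thought and which rule it violated.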
On the Ground at WAIC 2025 | Infinite Light Year Founder Qi Yuan: Deep Understanding and Intensive Cultivation of Scenarios Is the Endpoint of Releasing Trustworthy AI's Value
Mei Ri Jing Ji Xin Wen· 2025-07-29 13:56
Core Insights
- The 2025 World Artificial Intelligence Conference (WAIC), held in Shanghai, showcased over 800 companies and more than 3,000 cutting-edge exhibits, the largest scale in its history [1].
- The focus of discussions shifted toward embodied intelligence, intelligent agents, and AI hardware terminals, indicating a more practical approach to AI applications [1].
- Infinite Light Year founder Qi Yuan emphasized that the AI industry is transitioning from a phase of technological worship to a focus on value creation, with credibility the central theme of this transformation [1].

Industry Trends
- The emergence of vertical large models is seen as a new phase in AI development, with companies now focusing on intelligent agents and specific industry applications [3][4].
- Differentiated product value is crucial for vertical large models, as they must effectively address user pain points to stand out in the market [4].
- Product-market fit (PMF) is paramount: companies need to deeply understand industry-specific challenges to succeed [5].

Trustworthy AI
- The concept of trustworthy AI is gaining traction; models need to transition from merely usable to reliable and effective in real-world applications [6][7].
- The development of trustworthy AI involves a three-tiered approach: enhancing retrieval-augmented generation (RAG, sketched below), implementing reinforcement learning with well-defined reward functions, and integrating knowledge with rules for open-domain problems [6].
- AI companies must not only possess technical expertise but also understand the specific language, rules, and pain points of the industries they serve [7].
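As a concrete picture of the RAG tier, here is a minimal sketch. The embedding function is a deterministic stand-in (any sentence-embedding model fills that role), and the final prompt would be sent to an LLM; none of this is the speaker's actual stack.

```python
# Minimal RAG sketch: retrieve the passages most similar to the query and
# ground the prompt in them, narrowing the space in which the model can
# hallucinate. The embedding is a hash-seeded placeholder for demo only.
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: deterministic random vector per string."""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(384)

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the query; keep the top k."""
    q = embed(query)
    def sim(d: str) -> float:
        v = embed(d)
        return float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q)))
    return sorted(corpus, key=sim, reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQ: {query}"

docs = ["WAIC 2025 was held in Shanghai.",
        "Over 800 companies exhibited at WAIC 2025.",
        "Embodied intelligence was a major theme."]
print(build_prompt("Where was WAIC 2025 held?", docs))
```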
WAIC UP! Night: A Starlit Debate on AI and the Future of Humanity
Guan Cha Zhe Wang· 2025-07-29 07:07
On the evening of July 27, during the 2025 World Artificial Intelligence Conference and the High-Level Meeting on Global AI Governance (WAIC 2025), the "WAIC UP! Night" event, hosted by 威客引力, was held at the sunken plaza of the Expo Exhibition Hall.

From dusk until the stars sank, this deep reflection on technology, civilization, and the future of humanity, under the theme "What's the Big Deal About AI," gathered pioneering thinkers from the AI field and from the humanities and social sciences. In the clash of ideas on stage and the collisions of inspiration offstage, it continued the midsummer night's dream begun in 1956.

Looking back at 2025, the global AI field has been turbulent: the rise of Chinese large models, the explosion of embodied intelligence, the headlong rush of AI applications. All this news seems to proclaim that AI is reshaping the world. Yet while technology and capital race ahead, ordinary people's understanding of AI lags far behind the speed of this transformation.

Have we already accepted that it is only a matter of time before AI takes over the world? In this technological revelry, a more essential question has been overlooked: if AI truly becomes all-capable, where exactly does human value lie?

Today, as AI develops at breakneck speed, artificial intelligence has moved from the laboratory to the core of industry. AI's creators are enjoying the technology dividend and becoming the "super-individuals" of a new era, directing dozens of intelligent agents working in concert and setting off a productivity revolution.

This is precisely the core question that "WAIC UP! Night" set out to explore. We did not want to repeat the clichés about how many jobs AI will replace, nor ...
AI Hallucination Becomes the First Keyword at WAIC as Hinton Sounds the Alarm, While iFlytek's Upgraded Spark X1 Shows New Breakthroughs in Governance
Liang Zi Wei· 2025-07-28 02:26
Core Viewpoint
- The term "hallucination" became a hot topic at WAIC this year, highlighting the challenges and risks associated with AI models, particularly in their reliability and practical applications [1][12][20].

Group 1: AI and Hallucination
- Nobel laureate Hinton emphasized the complex coexistence of humans and large models, suggesting that humans may also experience hallucinations similar to AI's [2][3][15].
- Hinton warned about the potential dangers of AI, advocating the development of AI that does not seek to harm humanity [4][20].
- Hallucination, in which AI generates coherent but factually incorrect information, is a significant barrier to the reliability and usability of large models [5][18].

Group 2: Technological Developments
- The upgraded version of iFlytek's large model, Spark X1, focuses on addressing hallucination, achieving notable improvements in governing both factuality and faithfulness hallucinations [7][30].
- Performance comparisons show that Spark X1 outperforms other models on text generation and logical reasoning tasks, with a hallucination rate significantly lower than its competitors [8][30] (a sketch of how such a rate can be estimated follows below).
- iFlytek's advances include a new reinforcement-learning framework that provides fine-grained feedback, improving training efficiency and reducing hallucination rates [27][29].

Group 3: Industry Implications
- Collaboration among major AI companies such as Google, OpenAI, and Anthropic on hallucination-related research indicates a collective effort to ensure AI safety and reliability [9][21].
- The ongoing evolution of AI capabilities raises concerns about AI exceeding human control, necessitating a focus on safety measures and governance frameworks [19][24].
- "Trustworthy AI" is emerging as a critical factor for the successful integration of AI across industries, ensuring that AI applications are reliable and effective [25][44].
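The article does not publish iFlytek's evaluation protocol, but the sketch below shows one simple way a hallucination rate can be estimated: split an output into atomic claims, check each against a reference set, and report the unsupported fraction. Exact-match checking is a deliberate simplification; real evaluations use entailment models or human judges.

```python
# Minimal sketch of a hallucination-rate estimate: the unsupported share
# of a model's claims. Exact string matching is a simplification here.
def hallucination_rate(claims: list[str], reference: set[str]) -> float:
    if not claims:
        return 0.0
    unsupported = [c for c in claims if c not in reference]
    return len(unsupported) / len(claims)

reference_facts = {"WAIC 2025 was held in Shanghai"}
model_claims = [
    "WAIC 2025 was held in Shanghai",
    "WAIC 2025 had exactly 12 exhibitors",  # fabricated claim
]
print(hallucination_rate(model_claims, reference_facts))  # prints 0.5
```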
CVPR 2025 Highlight | A New Method from UCAS and Others Deciphers the Multimodal "Black Box" and Precisely Pinpoints the Culprits Behind Errors
Ji Qi Zhi Xin· 2025-06-15 04:40
Core Viewpoint
- The article discusses the importance of reliability and safety in AI decision-making, emphasizing the urgent need for improved model interpretability so that decision processes can be understood and verified, especially in critical scenarios [1][2].

Group 1: Research Background
- A joint research effort by institutions including the Chinese Academy of Sciences and Huawei has achieved significant breakthroughs in explainable attribution techniques for multimodal object-level foundation models, enhancing human understanding of model predictions and identifying the input factors that lead to errors [2][4].
- Existing explanation methods, such as Shapley Value and Grad-CAM, have limitations when applied to large-scale models or multimodal tasks, highlighting the need for efficient attribution methods adaptable to both large and small models [1][8].

Group 2: Methodology
- The proposed Visual Precision Search (VPS) method aims to generate high-precision attribution maps with fewer regions, addressing the challenges posed by growing model parameters and multimodal interactions [9][12].
- VPS models the attribution problem as a search problem based on subset selection, optimizing the selection of sub-regions to maximize interpretability [12][14] (a greedy-selection sketch follows below).
- Key scores, such as clue scores and collaboration scores, evaluate the importance of sub-regions in the decision-making process and together form a submodular objective for attribution [15][17].

Group 3: Experimental Results
- VPS has demonstrated superior performance across object-level tasks, surpassing existing methods like D-RISE on metrics such as Insertion and Deletion rates on datasets including MS COCO and RefCOCO [22][23].
- The method highlights important sub-regions more cleanly than existing techniques, which often produce noisy or diffuse saliency maps [22][24].

Group 4: Error Explanation
- VPS excels at explaining the reasons behind model prediction errors, a capability not present in other existing methods [24][30].
- Visualizations reveal how input perturbations and background interference contribute to classification errors, providing insight into model limitations and potential directions for improvement [27][30].

Group 5: Conclusion and Future Directions
- VPS enhances interpretability for object-level foundation models and effectively explains failures in visual grounding and object detection tasks [32].
- Future applications may include improving the rationality of model decisions during training, monitoring decisions for safety during inference, and identifying key defects for cost-effective model repairs [32].
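To make the search formulation tangible, here is a minimal greedy subset-selection sketch in the spirit of VPS. The scoring function is a toy stand-in: in VPS the objective is built from model-derived clue and collaboration scores, and the greedy loop relies on the objective's (approximate) submodularity to make each locally best choice nearly optimal.

```python
# Minimal sketch of attribution as greedy subset selection: repeatedly add
# the sub-region with the largest marginal gain to the set objective.
from typing import Callable, List

def greedy_attribution(n_regions: int,
                       score: Callable[[List[int]], float],
                       budget: int) -> List[int]:
    selected: List[int] = []
    for _ in range(budget):
        base = score(selected)
        best_idx, best_gain = None, 0.0
        for i in range(n_regions):
            if i in selected:
                continue
            gain = score(selected + [i]) - base  # marginal gain of region i
            if gain > best_gain:
                best_idx, best_gain = i, gain
        if best_idx is None:  # no remaining region improves the objective
            break
        selected.append(best_idx)
    return selected

# Toy objective: region importances with diminishing returns on set size.
weights = [0.1, 0.7, 0.05, 0.4]
toy_score = lambda S: sum(weights[i] for i in S) / (1 + 0.1 * len(S))
print(greedy_attribution(len(weights), toy_score, budget=2))  # [1, 3]
```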
Yang Xiaofang, Director of Large-Model Data Security at Ant Group: Rein In the "Horse" of Large Models with the "Bridle" of Trustworthy AI
Mei Ri Jing Ji Xin Wen· 2025-06-09 14:42
Core Viewpoint
- The rapid development of AI technology presents significant application potential in data analysis, intelligent interaction, and efficiency enhancement, while also raising serious security concerns [1][2].

Group 1: Current AI Security Risks
- Data-privacy risks are increasing due to insufficient transparency in training data, which may lead to copyright issues and unauthorized access to user data by AI agents [3][4].
- The lowering of security-attack thresholds allows individuals to execute attacks through natural-language commands, complicating the defense against AI security threats [3][4].
- The misuse of generative AI (AIGC) can lead to social issues such as deepfakes, fake news, and the creation of tools for cyberattacks, which can disrupt social order [3][4].
- The long-standing challenge of insufficient inherent security in AI affects the reliability and credibility of AI technologies, potentially leading to misinformation and decision-making biases in critical sectors like healthcare and finance [3][4].

Group 2: Protective Strategies
- The core strategy for preventing data leakage in both AI and non-AI fields is comprehensive data protection throughout its lifecycle, from collection to destruction [4][5].
- Specific measures include scanning training data to remove sensitive information (see the sketch below), conducting supply-chain vulnerability assessments, and performing security testing before deploying AI agents [5][6].

Group 3: Governance and Responsibility
- Platform providers play a crucial role in governance by scanning and managing the AI agents developed on their platforms, but broader regulatory oversight is necessary to ensure effective governance across multiple platforms [7][8].
- The establishment of national standards and regulatory policies is essential for monitoring and constraining platform development, similar to the regulation of mini-programs [7][8].

Group 4: Future Trends in AI Security
- Future AI security development may focus on embedding security capabilities into AI infrastructure, achieving "security by design" to reduce the costs associated with security measures [15][16].
- Breakthroughs in specific security technologies could provide ready-to-use solutions for small and medium enterprises facing AI-related security risks [15][16].
- Industry standards are emphasized because they provide a foundational framework for building a secure ecosystem, guiding technical practices, and promoting compliance and innovation [17][18].
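As an illustration of the first protective measure, scanning training data for sensitive fields, here is a minimal sketch. The two regex patterns are illustrative only; production pipelines combine many detectors, allow lists, and human review rather than a pair of regular expressions.

```python
# Minimal sketch of lifecycle step one: scan training text for sensitive
# fields and redact them before the data reaches a model. Patterns are
# illustrative, not a complete PII taxonomy.
import re

PATTERNS = {
    "phone_cn": re.compile(r"\b1[3-9]\d{9}\b"),     # mainland mobile number
    "id_card_cn": re.compile(r"\b\d{17}[\dXx]\b"),  # resident ID number
}

def scrub(text: str) -> str:
    """Replace each detected sensitive span with a labeled placeholder."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[{name} removed]", text)
    return text

sample = "联系 13800138000,身份证 11010519491231002X"
print(scrub(sample))  # both fields replaced with labeled placeholders
```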
A Jiangxi Entrepreneur's Comeback in AI: From 95 Rejections to a 1-Billion-Yuan Valuation
Sou Hu Cai Jing· 2025-05-26 06:27
Core Insights
- The article narrates the entrepreneurial journey of Yu Zhicheng, founder of Turing Robot, who transformed a simple dream of making machines understand humans into a significant AI empire serving 600,000 developers and responding to 146.2 billion dialogues over 17 years [2][25].

Company Development
- Turing Robot was founded in 2008 with a modest startup capital of 2,500 yuan, initially developing a voice assistant called "Wormhole" in a cramped office [4][5].
- The team faced significant challenges, including a lack of funding and initial technological limitations, which they overcame by improving their algorithm's accuracy from 30% to 65% through intense dedication [4][6].
- A pivotal moment occurred in 2010 when Microsoft Ventures provided funding and resources, leading to a user-base increase from a few thousand to 38 million and an accuracy rate exceeding 80% [6][10].

Technological Advancements
- Turing Robot developed a comprehensive Chinese dialogue corpus of 15 billion entries and a deep-learning-based semantic parsing model, achieving a 90% accuracy rate in Chinese semantic understanding, comparable to the cognitive level of a 6-to-7-year-old child [9][10].
- In 2015, the company launched Turing OS, billed as the world's first AI-level operating system, and later ventured into the industrial sector to challenge foreign monopolies in high-end industrial robotics [11][12].

Market Strategy
- Turing Robot adopted a dual strategy of continuous R&D investment while also launching industry-specific solutions for quick monetization, addressing the pressure from investors for profitability [16][20].
- The company has engaged in both collaboration and competition with major players like Microsoft and Lenovo, focusing on niche markets such as Chinese semantics and vertical industries [17][18].

Future Outlook
- Turing Robot aims to expand into Southeast Asia, targeting a market with a population of 600 million and an AI penetration rate below 10% [18].
- The company is committed to social responsibility, developing tools like the "AI Anti-Fraud Assistant" and the "Rural Revitalization AI Platform" to address real-world issues [21][22].
- Future plans include investing 100 million yuan in developing AI companion robots for the elderly, emphasizing the goal of making technology accessible to everyone [22][26].