Core Viewpoint
- The article discusses KRIS-Bench, a benchmark for evaluating the reasoning capabilities of image editing models, modeled on the structured knowledge acquisition process of human learning [2][3][4].

Group 1: Knowledge Structure
- KRIS-Bench assesses AI's knowledge structure through three categories: Factual Knowledge, Conceptual Knowledge, and Procedural Knowledge, enabling progressively harder image editing challenges [4][8].
- The benchmark spans 7 reasoning dimensions and 22 typical editing tasks, ranging from basic to advanced difficulty [6].

Group 2: Evaluation Metrics
- KRIS-Bench introduces a four-dimensional automated evaluation system that scores editing outputs on Visual Consistency, Visual Quality, Instruction Following, and Knowledge Plausibility [11][13].
- The evaluation set comprises 1,267 image-instruction pairs, curated by an expert team from diverse data sources to prevent model exploitation [12].

Group 3: Model Performance
- The benchmark evaluates 10 models (3 closed-source, 7 open-source), finding that closed-source models such as GPT-Image-1 outperform open-source counterparts on knowledge plausibility [14][18].
- Although some models improve on factual knowledge tasks, many still struggle with procedural reasoning and complex scientific tasks, indicating a significant gap in deep reasoning capability [18].
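The four-dimensional scoring described above can be sketched as a simple per-sample record plus an aggregate. This is an illustrative assumption only: the `EditScore` class, the 0-100 scale, and the unweighted-mean aggregation are not from the article, which does not publish KRIS-Bench's exact formula.

```python
from dataclasses import dataclass

@dataclass
class EditScore:
    # The four KRIS-Bench dimensions, each assumed here to be on a
    # 0-100 scale (scale and aggregation are illustrative assumptions).
    visual_consistency: float
    visual_quality: float
    instruction_following: float
    knowledge_plausibility: float

    def overall(self) -> float:
        """Unweighted mean of the four dimensions (assumed aggregation)."""
        return (self.visual_consistency + self.visual_quality
                + self.instruction_following + self.knowledge_plausibility) / 4

# Example: a model that follows the instruction but edits implausibly
sample = EditScore(visual_consistency=80, visual_quality=90,
                   instruction_following=70, knowledge_plausibility=60)
print(sample.overall())  # 75.0
```

Separating the four raw dimensions from the aggregate mirrors the article's point that a model can look strong overall while lagging on a single axis such as knowledge plausibility.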
A knowledge-type perspective on comprehensively evaluating the reasoning abilities of image editing models: all models perform poorly on "procedural reasoning"
量子位 (QbitAI) · 2025-06-13 05:07