We Edited 100 Images with 21 AI Image Editing Tools: Which One Is the Real "Retouching Powerhouse"? | Jinqiu Scan
锦秋集 · 2025-11-10 11:38
Core Viewpoint
- The article evaluates 21 AI image editing tools across six real-life scenarios to determine how well they understand and execute user requests for image modifications [4][11][141].

Group 1: Evaluation Methodology
- The evaluation consists of six rounds; in each round, every tool receives the same editing prompt, with all models set to their latest default configurations [11][12].
- Three general evaluation dimensions are used: visual consistency, local quality, and content consistency [12][13][14] (see the scoring sketch at the end of this summary).

Group 2: Performance Results
- Top performers Tencent Yuanbao, Meitu Xiu Xiu, and Qwen Image Edit scored 15 points, meeting the user prompts without noticeable discrepancies [23].
- Nano Banana, Sora, Lovart, Manus, and Runway scored 14 points, with minor issues in image retrieval capabilities [28].
- Dedicated image editing tools such as Jiemeng 4.0, Wake Map, and Pixel Cake scored only around 10 points and showed significant errors [30].

Group 3: Specific Findings
- In the first round, Tencent Yuanbao and Meitu Xiu Xiu excelled at removing unwanted elements while enhancing image clarity [23].
- In the second round, Qwen Image Edit and Genspark were the top performers in foreground extraction, preserving the original details [41].
- In the third round, Jiemeng 4.0 and Tencent Yuanbao scored highly for replacing elements while preserving the integrity of the original image [65].

Group 4: Future Directions
- The article indicates plans for future evaluations of AI tools in areas such as game development, knowledge bases, and companionship products [7].
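Scoring sketch: the article reports per-round totals against what appears to be a 15-point ceiling but does not publish its exact scoring formula. The snippet below is a minimal illustration, assuming each of the three dimensions (visual consistency, local quality, content consistency) is rated 0-5 and summed; the per-dimension breakdowns are hypothetical, and only the totals echo figures reported above.

```python
from dataclasses import dataclass

# Assumed rubric: each dimension rated 0-5 and summed to a 15-point total.
# Dimension names follow the article; the per-dimension scale is an assumption.
DIMENSIONS = ("visual_consistency", "local_quality", "content_consistency")


@dataclass
class RoundScore:
    tool: str
    scores: dict  # dimension name -> 0-5 rating

    def total(self) -> int:
        # Sum the three dimension ratings for this round.
        return sum(self.scores[d] for d in DIMENSIONS)


def rank_tools(round_scores: list) -> list:
    """Sort tools by total score, highest first."""
    return sorted(round_scores, key=lambda r: r.total(), reverse=True)


if __name__ == "__main__":
    # Illustrative numbers only: the breakdowns are hypothetical,
    # chosen so the totals match the 15 / 14 / ~10 figures in the summary.
    results = [
        RoundScore("Tencent Yuanbao",
                   {"visual_consistency": 5, "local_quality": 5, "content_consistency": 5}),
        RoundScore("Nano Banana",
                   {"visual_consistency": 5, "local_quality": 4, "content_consistency": 5}),
        RoundScore("Jiemeng 4.0",
                   {"visual_consistency": 4, "local_quality": 3, "content_consistency": 3}),
    ]
    for r in rank_tools(results):
        print(f"{r.tool}: {r.total()}/15")
```

A rubric like this keeps per-dimension ratings separate, so a tool that preserves content but degrades local quality can be distinguished from one that fails in the opposite way, even when their totals are close.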