Alibaba's "Tongyi Qianwen" (Qwen) Becomes a Foundation for AI Development in Japan

Core Insights
- Alibaba Cloud's AI model "Qwen" ranks 6th among 113 models in the "AI Model Scoring" list published by Nikkei, surpassing China's DeepSeek model [1][3]
- Qwen's open-source availability has driven its adoption by emerging companies in Japan, including ABEJA, which developed the "QwQ-32B Reasoning Model" based on Qwen [3][4]
- Qwen's strength in logical reasoning and mathematics has been highlighted, showcasing capabilities beyond basic language skills [3]

Group 1: Model Performance and Adoption
- Qwen's "Qwen2.5-Max" model ranks 6th in a comprehensive performance evaluation conducted by NIKKEI Digital Governance, demonstrating strong performance in grammar, logical reasoning, and mathematics [3]
- The open-source model "Qwen2.5-32B" ranks 26th, outperforming Google's "Gemma-3-27B" and Meta's "Llama-3-70B-Instruct" [3]
- Japanese companies are increasingly adopting Qwen, with ABEJA's Qwen-based model ranking 21st overall [3][4]

Group 2: Global Recognition and Future Plans
- Qwen has gained significant attention outside Japan, with over 100,000 derivative models developed on the "Hugging Face" platform [5]
- Alibaba Cloud is considering offering debugging and customization services for Japanese companies, allowing them to use Qwen without transferring data overseas [5]
- Alibaba Cloud aims to grow the number of projects using Qwen in Japan to over 1,000 within three years [6]

Group 3: Research and Evaluation Methodology
- The AI model scoring evaluation covered over 6,000 questions across 15 categories, assessing language ability and ethical considerations [7]
- The evaluation was conducted in collaboration with Weights & Biases, focusing on models' performance in Japanese [7]