Core Insights
- Alibaba's Marco-MT-Algharb translation model achieved significant success at the 2025 WMT competition, taking 6 first places, 4 second places, and 2 third places; it excelled in particular on the English-to-Chinese track, where it surpassed top closed-source AI systems such as Gemini 2.5 Pro and GPT-4.1 [1][2][3]

Group 1: Competition Overview
- The WMT competition is regarded as the "gold standard" of machine translation, combining automatic metrics such as COMET and LLM-as-judge scores with extensive human evaluation to determine rankings [3]
- Marco-MT competed in the more demanding constrained track, which requires models to handle diverse content while using only open-source data and models of at most 20 billion parameters [2]

Group 2: Model Performance and Methodology
- Marco-MT's success is attributed to combining extensive e-commerce translation experience with an original training method called M2PO (Multi-stage Preference Optimization), which applies reinforcement learning to improve translation quality (see the illustrative sketch after Group 3 below) [2]
- Training proceeds in three steps: broadening knowledge through supervised fine-tuning, applying reinforcement learning with translation-quality assessments as the reward signal, and incorporating word alignment and reordering techniques during decoding to improve accuracy and fidelity [2]

Group 3: Market Position and Future Prospects
- Marco-MT, first launched in 2024 for e-commerce translation, has expanded to cover search, product-information, dialogue, and image translation scenarios, laying a strong foundation for its move into general translation [3]
- The model has already shown its competitive edge in multimodal translation, taking 2 first places and 2 second places at the 2025 IWSLT international competition [3]
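The article describes M2PO only at a high level and does not publish its training objective. As a rough illustration of the preference-optimization idea it builds on, here is a minimal sketch of a standard DPO-style loss in PyTorch; the function name `dpo_loss`, the `beta` value, and the toy tensors are assumptions for illustration, not Alibaba's actual implementation.

```python
# Illustrative DPO-style preference-optimization step (hypothetical;
# M2PO's actual multi-stage procedure is not disclosed in the article).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Preference-optimization loss over one batch of translation pairs.

    Each argument is a tensor of per-sequence log-probabilities: the
    trained policy and a frozen reference model scored on the preferred
    ("chosen") and dispreferred ("rejected") translations.
    """
    # Log-ratio of policy to reference for each side of the preference pair.
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # -logsigmoid of the beta-scaled margin pushes the policy to assign
    # relatively higher probability to the preferred translation.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy usage with random log-probabilities for a batch of 4 pairs.
torch.manual_seed(0)
pc, pr = torch.randn(4), torch.randn(4)
rc, rr = torch.randn(4), torch.randn(4)
print(dpo_loss(pc, pr, rc, rr))
```

In this formulation the frozen reference model keeps the policy close to its supervised starting point while the loss widens the probability gap between preferred and dispreferred translations; a multi-stage scheme such as M2PO would presumably repeat comparable preference updates across successive training stages.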
Alibaba International's Marco wins six first places at the WMT machine translation competition, surpassing GPT-4.1, Gemini 2.5 Pro, and other giants on the English-to-Chinese track
Cai Jing Wang·2025-10-23 05:56