Large Language Model Fine-tuning
Developers rejoice: Thinking Machines releases its first product, Tinker, which takes care of all the post-training hassle
机器之心 · 2025-10-02 03:12
Core Insights
- Tinker, the first product launched by Thinking Machines, is an API designed to simplify the fine-tuning of language models for developers and researchers, letting them focus on training data and algorithms while Tinker manages the infrastructure [2][4][16].

Product Features
- Tinker supports a range of advanced models, including Qwen3-235B-A22B, and switching from a small model to a large one is as simple as changing a string in Python code [6][8].
- The API provides low-level primitives such as forward_backward and sample, which are sufficient to implement most common post-training methods; an open-source library, Tinker Cookbook, offers modern implementations of those methods built on the same primitives (a sketch of such a loop follows this summary) [9][11].

Use Cases and Adoption
- Teams from Princeton, Stanford, and UC Berkeley are already using Tinker, which supports both supervised fine-tuning and experimental reinforcement learning pipelines [13].
- The Goedel team at Princeton matched the performance of full-parameter training using only 20% of the data, while Stanford's chemistry group improved accuracy on a specific task from 15% to 50% using Tinker [14].

Market Position and Future Outlook
- Tinker aims to democratize access to fine-tuning capabilities, potentially leading to more diverse product innovation in the AI space [16].
- Tinker launches free of charge, with a usage-based pricing model to be introduced in the coming weeks [15].
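To make concrete what a training loop built on just forward_backward and sample looks like, here is a minimal, self-contained sketch in PyTorch. It is not Tinker's actual API: the MockClient class, the toy model, and every signature below are assumptions invented for illustration; only the two primitive names come from the article.

```python
# Illustrative only: a local mock of the two primitives the article names
# (forward_backward and sample). All names and signatures are hypothetical
# and do NOT reflect Tinker's real API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MockClient:
    """Stands in for a hosted training client; wraps a toy language model."""
    def __init__(self, vocab_size=100, dim=32):
        self.model = nn.Sequential(nn.Embedding(vocab_size, dim),
                                   nn.Linear(dim, vocab_size))
        self.opt = torch.optim.AdamW(self.model.parameters(), lr=1e-3)

    def forward_backward(self, tokens, targets):
        """Forward pass + cross-entropy loss + gradient accumulation."""
        logits = self.model(tokens)
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)),
                               targets.view(-1))
        loss.backward()
        return loss.item()

    def optim_step(self):
        """Apply accumulated gradients, then clear them."""
        self.opt.step()
        self.opt.zero_grad()

    def sample(self, prompt_tokens, max_new_tokens=8):
        """Greedy decoding from the toy model, one token at a time."""
        out = prompt_tokens.clone()
        with torch.no_grad():
            for _ in range(max_new_tokens):
                logits = self.model(out)
                next_tok = logits[:, -1].argmax(dim=-1, keepdim=True)
                out = torch.cat([out, next_tok], dim=1)
        return out

client = MockClient()
tokens = torch.randint(0, 100, (4, 16))          # fake batch of token ids
targets = torch.roll(tokens, shifts=-1, dims=1)  # next-token targets
for step in range(3):
    loss = client.forward_backward(tokens, targets)
    client.optim_step()
    print(f"step {step}: loss={loss:.3f}")
print(client.sample(tokens[:1, :4]))
```

The point of the abstraction is that supervised fine-tuning, RLHF-style methods, and custom objectives can all be expressed as different orchestrations of these few calls, while the "change a string" model-switching the article describes would amount to passing a different base-model identifier when creating the client.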
Fine-tuning a VLM for autonomous driving with the open-source Qwen2.5-VL
自动驾驶之心 · 2025-09-29 23:33
Core Viewpoint
- The article discusses the development and application of LLaMA Factory, an open-source low-code framework for fine-tuning large models, in the context of autonomous driving and vision-language models (VLM) [1][2].

Group 1: LLaMA Factory Overview
- LLaMA Factory integrates widely used fine-tuning techniques and has become one of the most popular frameworks in the open-source community, with over 40,000 stars on GitHub [1].
- The framework is used here to train Qwen2.5-VL-7B-Instruct, which can provide traffic-situation assessments through natural-language interaction [1].

Group 2: Qwen2.5-VL Model
- Qwen2.5-VL is the flagship model of the Qwen vision-language series, with significant advances in visual recognition, object localization, document parsing, and long-video understanding [2].
- The model supports dynamic-resolution processing and absolute time encoding, allowing it to handle images of varying sizes and videos lasting several hours [2].
- It comes in three model sizes; the flagship Qwen2.5-VL-72B performs comparably to advanced models such as GPT-4o and Claude 3.5 Sonnet [2].

Group 3: CoVLA Dataset
- CoVLA (Comprehensive Vision-Language-Action) is a dataset designed for autonomous driving, containing 10,000 real driving scenes and over 80 hours of video [3].
- The dataset uses scalable methods to generate precise driving trajectories from raw sensor data, paired with detailed natural-language descriptions [3].
- CoVLA surpasses existing datasets in scale and annotation richness, providing a comprehensive platform for training and evaluating vision-language-action models [3].

Group 4: Model and Dataset Installation
- Instructions are provided for downloading and installing LLaMA Factory and the Qwen2.5-VL model, including commands for cloning the repository and installing the required packages [4][5][6].
- The article emphasizes configuring local paths for images and datasets so the pipeline functions correctly (a hypothetical dataset-registration sketch follows this summary) [7][13].

Group 5: Fine-tuning Process
- The fine-tuning run is tracked with SwanLab, an open-source tool for visualizing AI model training (see the second sketch below) [11].
- After fine-tuning, the model's performance is evaluated through a web UI, where users can interact with the model and assess its responses to autonomous-driving queries [20][21].
- The fine-tuned model gives more relevant answers than the original model, which tends to produce less focused responses [22].
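The path configuration in Group 4 hinges on registering the CoVLA samples so LLaMA Factory can find them. As a hedged illustration, the Python below writes one ShareGPT-style multimodal sample and prints a matching dataset_info.json entry; the dataset name covla_sft and all file paths are invented for this example, and the exact schema keys should be verified against data/README.md in the LLaMA Factory repository.

```python
# A minimal sketch of registering a CoVLA-style sample with LLaMA Factory.
# The dataset name "covla_sft" and all paths are hypothetical; verify the
# schema against data/README.md in the LLaMA Factory repo before use.
import json
import os

os.makedirs("data", exist_ok=True)

# One ShareGPT-format multimodal sample: <image> marks where the frame goes.
sample = {
    "messages": [
        {"role": "user",
         "content": "<image>Assess the current traffic situation "
                    "for the ego vehicle."},
        {"role": "assistant",
         "content": "The ego vehicle is following a sedan in the right lane; "
                    "traffic is slowing, so keep distance and prepare to brake."},
    ],
    # Local image path: this is what must be adapted per machine.
    "images": ["data/covla/frames/scene_0001.jpg"],
}
with open("data/covla_sft.json", "w", encoding="utf-8") as f:
    json.dump([sample], f, ensure_ascii=False, indent=2)

# Entry to merge into data/dataset_info.json so that a training config
# can reference the dataset by the name "covla_sft".
entry = {
    "covla_sft": {
        "file_name": "covla_sft.json",
        "formatting": "sharegpt",
        "columns": {"messages": "messages", "images": "images"},
    }
}
print(json.dumps(entry, indent=2))
```

Once registered, a training config would reference the dataset by name and be launched with LLaMA Factory's documented llamafactory-cli train entry point; those specifics follow the framework's general workflow rather than commands quoted in this summary.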
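For the SwanLab tracking mentioned in Group 5, here is a minimal sketch assuming SwanLab's wandb-style init/log/finish API; the project and experiment names are arbitrary examples, not values from the article, and the loss curve is synthetic. (LLaMA Factory can also emit these metrics itself when SwanLab support is enabled in its training config.)

```python
# A minimal sketch of experiment tracking with SwanLab. Project and
# experiment names are made up; the logged loss is synthetic stand-in data.
import math
import swanlab

swanlab.init(project="qwen25vl-covla", experiment_name="lora-sft-demo")

# Stand-in training loop: log a decaying loss curve the way a real
# fine-tuning run would log its per-step loss and learning rate.
for step in range(100):
    fake_loss = 2.0 * math.exp(-step / 40) + 0.1
    swanlab.log({"train/loss": fake_loss, "train/lr": 1e-4}, step=step)

swanlab.finish()
```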