Next-Generation Efficient Large-Model Computing: A Paper Mentorship Program on Parameter Compression, Hardware Adaptation, Multimodal Reasoning, CoT, and More!
自动驾驶之心·2025-07-04 07:13

Core Insights

- The article discusses the rapid development of large language models (LLMs) and multimodal models, identifying model efficiency, knowledge expansion, and reasoning performance as core issues in current AI research [1][2].

Course Overview

- The course systematically explores cutting-edge optimization methods for large models, emphasizing three key areas: parameter-efficient computation, dynamic knowledge expansion, and complex reasoning [1].
- It addresses the core challenges of model optimization: lightweight methods such as pruning, sparsification, and quantization for parameter compression; dynamic knowledge-injection techniques such as retrieval-augmented generation (RAG) and parameter-efficient fine-tuning (PEFT) for knowledge expansion; and advanced reasoning paradigms such as chain-of-thought (CoT) prompting and reinforcement-learning optimization with GRPO (Group Relative Policy Optimization) for reasoning enhancement [1]. Hedged sketches of these techniques appear at the end of this summary.

Course Objectives

- The course aims to help students systematically master the key theory in their chosen direction and develop a clearer understanding of the field [5].
- It bridges the gap for students who lack research direction and practical skills, enabling them to combine theory with coding practice and laying the groundwork for developing new models [5].
- It also develops students' academic writing skills, with guidance on manuscript preparation and submission [5].

Target Audience

- The course is designed for master's and doctoral students working on large models, applicants seeking to strengthen their profiles for graduate study abroad, and AI professionals looking to systematically improve their algorithmic theory and writing skills [6].

Admission Requirements

- Basic requirements are a foundational understanding of deep learning/machine learning, familiarity with Python syntax, and experience with PyTorch [7].

Course Structure

- The course consists of 12 weeks of online group research, followed by 2 weeks of paper-writing guidance and a 10-week paper maintenance period [11].
- Students analyze classic and cutting-edge papers, learn the key algorithms and principles, and develop their own research ideas [11].

Weekly Breakdown

- Weekly topics include model pruning, quantization, dynamic knowledge expansion, advanced reasoning techniques, and multimodal understanding [16][18].
- Each week has a specific theme and deliverable, such as settling on a research idea, optimizing model size and performance, or strengthening coding skills [16][18].

Additional Resources

- The course provides datasets from public sources and baseline code tailored to specific applications [13][14].
- Essential papers and resources are recommended, covering both foundational knowledge and advanced model-optimization techniques [15][17].
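As a concrete illustration of the parameter-compression methods named in the Course Overview, here is a minimal sketch of L1-magnitude pruning followed by post-training dynamic quantization, using PyTorch's built-in utilities. The two-layer toy model and the 30% sparsity level are illustrative assumptions, not course material.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for a large-model sublayer (hypothetical sizes).
model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))

# Unstructured L1 pruning: zero out the 30% of weights with the
# smallest absolute value in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the sparsity into the weights

# Post-training dynamic quantization: weights are stored in int8 and
# activations are quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 512])
```

Pruning reduces the number of effective parameters while quantization reduces the bits per parameter; the two are complementary, which is presumably why the overview groups them under one heading.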
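Retrieval-augmented generation, also named above, reduces at its core to three steps: embed the query, retrieve the nearest passages, and prepend them to the prompt. In this sketch the embed function is a hypothetical random stand-in (so the ranking it produces is arbitrary); a real system would use a trained sentence-embedding model.

```python
import torch
import torch.nn.functional as F

def embed(texts):
    # Hypothetical stand-in: random unit vectors. A real RAG pipeline
    # would call a trained sentence-embedding model here.
    return F.normalize(torch.randn(len(texts), 64), dim=-1)

corpus = ["Pruning removes low-magnitude weights.",
          "Quantization stores weights in fewer bits.",
          "LoRA trains low-rank adapter matrices."]
doc_vecs = embed(corpus)

query = "How does quantization compress a model?"
q_vec = embed([query])

# Cosine similarity (vectors are L2-normalized, so a dot product
# suffices), then keep the top-k most relevant passages.
scores = q_vec @ doc_vecs.T
topk = scores.topk(k=2, dim=-1).indices[0]
context = "\n".join(corpus[i] for i in topk.tolist())

# The retrieved passages are prepended to the prompt before generation.
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```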
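For PEFT, the best-known instance is LoRA (low-rank adaptation): freeze the pretrained weight W and train only a low-rank update BA, so the forward pass computes Wx + (alpha/r)·BAx. A minimal sketch, assuming illustrative layer sizes and rank:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update.

    Forward pass: W x + (alpha / r) * B A x, where only the rank-r
    factors A and B receive gradients -- the essence of LoRA-style PEFT.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at step 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(nn.Linear(512, 512), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 8192 trainable parameters vs. 262656 frozen
```

The appeal for the "dynamic knowledge expansion" theme is that each new task or knowledge domain costs only the small A and B matrices, while the pretrained backbone stays untouched.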
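Chain-of-thought prompting needs no special machinery: the prompt simply elicits intermediate reasoning steps before the final answer. In this sketch the generate function is a hypothetical placeholder for an LLM call.

```python
def generate(prompt: str) -> str:
    # Hypothetical placeholder for an LLM call (local model or API).
    return "<model output>"

question = "A tray holds 3 rows of 8 cookies. 5 are eaten. How many remain?"

# Zero-shot CoT: appending a reasoning cue encourages the model to emit
# intermediate steps (3 * 8 = 24; 24 - 5 = 19) before the final answer.
cot_prompt = f"Q: {question}\nA: Let's think step by step."
print(generate(cot_prompt))
```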
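Finally, the step that distinguishes GRPO from critic-based methods such as PPO is computing advantages relative to a group of sampled responses for the same prompt, with no learned value network. A sketch of that group-relative advantage computation, using made-up reward values:

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """GRPO-style advantages: standardize each reward against the mean
    and std of its own group of sampled responses."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Two prompts, four sampled responses each (illustrative reward values).
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.2, 0.9, 0.4, 0.5]])
print(group_relative_advantages(rewards))
# Responses scoring above their group's average get positive advantage.
```

These advantages then weight a PPO-style clipped policy-gradient objective; dropping the value critic is what makes the approach attractive for reasoning-focused fine-tuning at scale.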