Large Language Models
Current autonomous driving VLA still has many modules that need optimization...
自动驾驶之心· 2025-09-18 11:00
Core Viewpoint
- VLA (Vision-Language-Action) is emerging as a mainstream keyword in autonomous driving, advancing rapidly in both academia and industry with the aim of overcoming the limitations of traditional modular architectures and enhancing the capabilities of autonomous systems [1][5]

Summary by Sections

VLA Research and Development
- The transition from traditional modular architectures to end-to-end models is marked by the introduction of VLA, which maps sensor inputs directly to driving commands and addresses earlier bottlenecks in the development of autonomous driving systems [2][5]
- The VLA model leverages large language models (LLMs) to add reasoning, explanation, and interaction capabilities, making it a significant advancement in the field [5]

Traditional Modular Architecture
- Early autonomous driving systems (L2-L4) used a modular design in which each module (e.g., object detection, trajectory prediction) was developed independently, leading to error accumulation and information loss across module boundaries [3]
- Traditional architectures also rely on manually designed rules, making it difficult to handle complex traffic scenarios [3][4]

Emergence of Pure Vision End-to-End Models
- Pure vision end-to-end models, exemplified by NVIDIA's DAVE-2 and Wayve, simplified the system architecture through imitation learning but struggled with transparency and generalization to unseen scenarios [4][5]

VLA Paradigm
- The VLA paradigm uses language as a bridge between perception and action, improving the model's interpretability and trustworthiness; a minimal architectural sketch follows this entry [5]
- VLA models can draw on the pre-trained knowledge of LLMs to better understand complex traffic situations and make logical decisions, improving generalization to novel scenarios [5]

Course Objectives and Structure
- The course aims to provide a systematic understanding of VLA, addressing gaps in knowledge and practical skills, with a curriculum covering the main strands of VLA research [6][12]
- The program consists of 12 weeks of online group research, followed by 2 weeks of paper guidance and an additional 10 weeks of paper maintenance, covering both theoretical and practical applications [7][30]

Enrollment and Requirements
- The course targets individuals with a background in deep learning and basic knowledge of autonomous driving algorithms, and requires familiarity with Python and PyTorch [16][19]
- Class size is limited to 6-8 participants to ensure personalized attention and effective learning [11]

Course Highlights
- Participants will study classic and cutting-edge papers, build coding skills, and learn methodologies for writing and submitting research papers, strengthening their academic and professional profiles [12][15][30]
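For readers new to the paradigm, below is a minimal PyTorch-style sketch of the generic VLA pattern this entry describes: a vision encoder produces tokens, a language-model-style backbone fuses them with text tokens, and an action head decodes driving waypoints. Every module, dimension, and name here is an illustrative assumption; it is not the architecture of any specific system mentioned above, and a real VLA would use a pretrained LLM backbone and tokenizer.

```python
import torch
import torch.nn as nn

class ToyVLA(nn.Module):
    """Illustrative VLA skeleton: camera frames -> vision tokens ->
    language-model backbone -> waypoint (action) head."""

    def __init__(self, d_model=512, n_waypoints=8):
        super().__init__()
        # Vision encoder: a single conv stands in for a pretrained ViT/CNN.
        self.vision_encoder = nn.Conv2d(3, 32, kernel_size=8, stride=8)
        self.patch_proj = nn.Linear(32, d_model)
        # Backbone: a small Transformer stands in for a pretrained LLM
        # that fuses vision tokens with instruction/text tokens.
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        # Action head: regresses n_waypoints future (x, y) trajectory points.
        self.action_head = nn.Linear(d_model, n_waypoints * 2)
        self.n_waypoints = n_waypoints

    def forward(self, images, text_embeds):
        # images: (B, 3, 224, 224); text_embeds: (B, T, d_model)
        feats = self.vision_encoder(images)            # (B, 32, 28, 28)
        patches = feats.flatten(2).transpose(1, 2)     # (B, 784, 32)
        vision_tokens = self.patch_proj(patches)       # (B, 784, d_model)
        fused = self.backbone(torch.cat([vision_tokens, text_embeds], dim=1))
        waypoints = self.action_head(fused.mean(dim=1))
        return waypoints.view(-1, self.n_waypoints, 2)

model = ToyVLA()
actions = model(torch.randn(2, 3, 224, 224), torch.randn(2, 16, 512))
print(actions.shape)  # torch.Size([2, 8, 2])
```

The point of the skeleton is only to show where language sits: text tokens enter the same sequence as vision tokens, so the backbone can condition driving actions on instructions and explanations.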
DeepSeek Makes the Cover of Nature for the First Time: A Chinese Large Model Makes New History, Doing What OpenAI Dared Not Do
36Kr· 2025-09-18 09:56
Core Insights
- DeepSeek's AI model, R1, has gained significant recognition by being featured on the cover of Nature, a prestigious scientific journal, highlighting its impact on the AI industry [2][10][12]
- The training cost for R1 was notably low at $294,000, in sharp contrast to the multi-million-dollar investments typical of models from companies like OpenAI [7][48]
- The model's development went through rigorous peer review, setting a new standard for transparency and scientific validation in AI [11][15][16]

Group 1: Model Development and Training
- DeepSeek R1's training process was first detailed in a paper published on arXiv and later expanded in the Nature article, which presents the full methodology [6][7]
- The model was trained with a pure reinforcement learning framework, allowing it to develop reasoning capabilities without relying on human-annotated reasoning data [19][41]
- R1 achieved 77.9% accuracy on the AIME 2024 math competition, surpassing the human average and outperforming GPT-4 on certain tasks [23][31]

Group 2: Peer Review and Industry Impact
- The peer review of R1 involved independent experts scrutinizing the model, a departure from the practice of major AI companies, which typically do not submit their models for academic evaluation [10][11][15]
- Nature's editorial team has called on other companies to submit their models for peer review, emphasizing transparency and accountability in AI development [15][16]
- The recognition from Nature validates R1's scientific contributions and positions DeepSeek as a leader in the push for more rigorous standards in AI research [12][50]

Group 3: Technical Innovations
- R1's architecture is a mixture-of-experts (MoE) model with 671 billion parameters, pre-trained on a vast dataset of web pages and e-books [25]
- Training rewarded the model solely on the correctness of its final answers, fostering self-reflection and dynamic adjustment during problem-solving; a minimal sketch of this reward pattern follows this entry [29][38]
- The final version of R1 was produced by a multi-stage process combining reinforcement learning with supervised fine-tuning, improving both reasoning and general capabilities [39][47]
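The "reward solely on answer correctness" idea in Group 3 lends itself to a small illustration. The sketch below assumes a math-style task where the final answer appears in a \boxed{} span; DeepSeek's actual reward implementation is described in their paper, and the function names and answer format here are our assumptions.

```python
import re

def extract_boxed_answer(completion: str) -> str | None:
    """Pull the final answer out of a \\boxed{...} span, a common
    convention on math benchmarks. Returns None if none is found."""
    match = re.search(r"\\boxed\{([^{}]*)\}", completion)
    return match.group(1).strip() if match else None

def correctness_reward(completion: str, reference: str) -> float:
    """Binary outcome reward: 1.0 iff the extracted final answer matches
    the reference. The reasoning text itself is never scored, which is
    what lets long chains of thought emerge on their own."""
    answer = extract_boxed_answer(completion)
    return 1.0 if answer is not None and answer == reference else 0.0

# The intermediate reasoning earns nothing; only the boxed answer counts.
sample = r"First, 12 * 12 = 144, so the answer is \boxed{144}"
print(correctness_reward(sample, "144"))  # 1.0
print(correctness_reward(sample, "143"))  # 0.0
```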
DeepSeek Issues a Solemn Statement!
Zhong Guo Ji Jin Bao· 2025-09-18 08:37
Core Viewpoint
- DeepSeek has issued a statement about fraud in which criminals impersonate the company or its employees to scam users, seriously harming user rights and the company's reputation [1][2]

Group 1: Fraudulent Activities
- Criminals have used forged materials to solicit payments from users under the guise of "computing power leasing" and "equity financing" [1]
- DeepSeek emphasizes that it has never asked users to make payments to personal or unofficial accounts; any such request is fraudulent [2]
- The company urges users to verify information through its official website and certified accounts, as all official services are currently free [2]

Group 2: Company Background
- DeepSeek was established in 2023 and was incubated by the well-known quantitative investment firm High-Flyer (Huanfang Quantitative) [3]
- The founding team is led by quantitative expert Liang Wenfeng and includes top research talent from prestigious universities and experienced technical experts from international institutions [3]
- DeepSeek's research paper on DeepSeek-R1 recently appeared on the cover of the prestigious journal Nature, making it the first major language model to undergo peer review [3]
From ChatGPT to Marble: Is 3D World Generation the Next Breakout Li Feifei Is Betting On?
锦秋集· 2025-09-18 07:33
Core Viewpoint
- The article covers the launch of World Labs' latest spatial intelligence model, Marble, which lets users generate persistent, navigable 3D worlds from images or text prompts, a significant advance in spatial intelligence technology [1][2]

Summary by Sections

Marble's Features and Comparison
- Marble improves markedly over comparable products in geometric consistency, style diversity, world scale, and cross-device support, allowing users to truly "walk into" AI-generated spaces [2]

Li Feifei's Vision and World Model Narrative
- Li Feifei's approach emphasizes a transition from language understanding to world understanding, with spatial intelligence as a pathway to AGI (Artificial General Intelligence) [3][6]

Limitations of LLMs
- While acknowledging the achievements of large language models (LLMs), Li Feifei highlights their limited grasp of the three-dimensional world, arguing that true intelligence requires spatial awareness [5][7]

The Necessity of Spatial Intelligence for AGI
- Spatial intelligence is deemed essential for AGI: the real world is inherently three-dimensional, and understanding it requires more than two-dimensional observations [16]

Evolution of AI Learning Paradigms
- The article outlines three phases of AI learning: supervised learning, generative modeling, and the current focus on three-dimensional world models, each driven by data, computation, and algorithms [21][24]

Data Strategy for World Models
- Training world models requires a mixed data strategy combining real-data acquisition, reconstruction, and simulation to overcome the scarcity of high-quality three-dimensional data [26]

Practical Applications and Development Path
- Marble's initial application focus is content production, later extending to robotics and AR/VR, with an emphasis on interactive 3D worlds for various industries [29][30]
DeepSeek Makes History! Chinese AI's "Nature Moment"
Zheng Quan Shi Bao· 2025-09-18 07:29
Core Insights
- The DeepSeek-R1 reasoning model paper has made history as the first research on a Chinese large model to be published in the prestigious journal Nature, a significant recognition of China's AI technology on the global scientific stage [1][2]
- Nature's editorial noted that DeepSeek has broken the industry-wide absence of independent peer review for mainstream large models [2]

Group 1: Research and Development
- The DeepSeek-R1 paper underwent a rigorous peer review process involving eight external experts over six months, underscoring the importance of transparency and reproducibility in AI model development [2]
- The paper disclosed significant details of the training costs and methodology, including a total training cost for R1 of $294,000 (approximately 2.09 million RMB) achieved on 512 H800 GPUs [3]

Group 2: Model Performance and Criticism
- DeepSeek addressed early criticism of the alleged "distillation" behind R1, clarifying that all training data was sourced from the internet, with no intentional use of outputs from proprietary models such as OpenAI's [3]
- Training took 198 hours for R1-Zero and 80 hours for R1, a cost-effective run compared with models that often cost tens of millions of dollars; a back-of-the-envelope check of these figures follows this entry [3]

Group 3: Future Developments
- Anticipation is high for the R2 model, with speculation that delays stem from computational limitations [4]
- The recent release of DeepSeek-V3.1 points toward the "Agent" era, featuring a hybrid inference architecture and improved efficiency, which has heightened interest in the upcoming R2 [4][5]

Group 4: Industry Impact
- DeepSeek's adoption of UE8M0 FP8 Scale parameter precision in V3.1 suggests a shift toward domestic AI chips, potentially accelerating the development of China's computing ecosystem [5]
- The software-hardware co-design in DeepSeek's models is seen as a new paradigm in the AI wave, with expectations of significant performance gains for domestic computing chips [5]
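As a sanity check on the figures quoted above (our arithmetic only, not from the article): 512 GPUs running 198 + 80 hours is about 142,000 GPU-hours, so a $294,000 total implies a rate of roughly $2 per H800 GPU-hour.

```python
# Back-of-the-envelope check using only the numbers reported above.
gpus = 512                    # H800 GPUs
hours = 198 + 80              # R1-Zero + R1 wall-clock training hours
gpu_hours = gpus * hours      # 142,336 GPU-hours
total_cost_usd = 294_000
print(gpu_hours, round(total_cost_usd / gpu_hours, 2))  # 142336 2.07
```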
DeepSeek Responds for the First Time to the "Distilling OpenAI" Accusation
第一财经· 2025-09-18 05:34
Core Viewpoint
- DeepSeek's R1 model drew significant attention after publication in the prestigious journal Nature, demonstrating that reasoning capability can be enhanced through reinforcement learning without heavy reliance on supervised data [3][11]

Group 1: Model Development and Training
- The training cost of DeepSeek-R1 was approximately $294,000, broken down as follows: $202,000 for R1-Zero training, $10,000 for SFT dataset creation, and $82,000 for R1 training; the components sum to the quoted total, as checked below [10]
- DeepSeek-R1 was trained on 64×8 H800 GPUs, taking about 198 hours for R1-Zero and around 80 hours for R1 [10]
- Even including the earlier V3 model (around $6 million) plus $294,000 for R1, the total training cost remains far lower than competitors' [10]

Group 2: Model Performance and Validation
- DeepSeek's approach delivers significant gains in reasoning capability through large-scale reinforcement learning, even without supervised fine-tuning [13]
- The model's ability to self-validate and reflect on its answers improves performance on complex programming and scientific problems [13]
- R1 has become the most popular open-source reasoning model globally, with over 10.9 million downloads on Hugging Face [10]

Group 3: Industry Impact and Peer Review
- The publication of R1 in Nature sets a precedent for transparency in AI research, addressing concerns about the reliability of benchmark tests and the potential for manipulation [11]
- The research underscores the importance of independent peer review in validating AI capabilities, crucial in an industry facing scrutiny over performance claims [11]
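A quick check (ours) that the reported cost components are internally consistent with the quoted total:

```python
# $202K (R1-Zero) + $10K (SFT dataset) + $82K (R1) = $294K as reported.
r1_zero_usd, sft_data_usd, r1_usd = 202_000, 10_000, 82_000
assert r1_zero_usd + sft_data_usd + r1_usd == 294_000
```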
DeepSeek Makes History! Chinese AI's "Nature Moment"
Zheng Quan Shi Bao· 2025-09-18 04:51
Core Viewpoint
- The article highlights the achievement of the DeepSeek-R1 reasoning model, the first research on a Chinese large model to be published in the prestigious journal Nature, a milestone for China's AI technology on the global stage [1][2]

Group 1: Publication and Recognition
- DeepSeek-R1's paper was published in Nature after a rigorous peer review involving eight external experts, breaking with the pattern in which major models such as OpenAI's and Google's were released without independent validation [2][3]
- Nature's editorial praised DeepSeek for filling the gap in independent peer review of mainstream large models, emphasizing the importance of transparency and reproducibility in AI research [3]

Group 2: Model Training and Cost
- Training used 512 H800 GPUs for 198 hours (R1-Zero) and 80 hours (R1), for a total training cost of $294,000 (approximately 2.09 million RMB), far lower than models that can cost tens of millions [3][4]
- The paper disclosed detailed training costs and methodology, addressing earlier criticism about data sourcing and the alleged "distillation" process, and asserting that all data came from the internet without intentional use of proprietary models' outputs [4]

Group 3: Future Developments and Innovations
- Speculation continues about the R2 release, with delays attributed to computational limitations, while the recent DeepSeek-V3.1 release has stoked interest in the advances leading to R2 [5][6]
- DeepSeek-V3.1 introduces a hybrid inference architecture and improved efficiency, signaling a shift toward the "Agent" era, and uses UE8M0 FP8 Scale parameter precision designed for upcoming domestic chips [6][7]
- Adopting FP8 parameter precision is seen as a strategic move to boost the performance of domestic AI chips, with the potential to reshape AI model training and inference in China [7]
"This Gap Has Finally Been Filled": Liang Wenfeng's Paper Makes the Cover of Nature
Guan Cha Zhe Wang· 2025-09-18 03:27
Science and Technology Daily reported that the research, in which Liang Wenfeng participated, shows that the reasoning ability of large language models can be improved through pure reinforcement learning, reducing the amount of human input needed to boost performance. The resulting model outperforms conventionally trained large language models on tasks such as mathematics and graduate-level STEM problems.

DeepSeek-R1 includes an in-depth training phase under human supervision to optimize the reasoning process. Liang Wenfeng's team reports that the model used reinforcement learning rather than human-provided examples to develop its reasoning steps, reducing training cost and complexity. After being shown high-quality problem-solving cases, DeepSeek-R1 receives a template for producing its reasoning process; that is, the model is rewarded for solving problems, which reinforces the learning effect. In the various tests used to evaluate AI performance, both DeepSeek-R1-Zero and DeepSeek-R1 performed exceptionally well.

According to a Zhitong Finance report on September 18, the DeepSeek-R1 reasoning model research paper, completed jointly by the DeepSeek team with Liang Wenfeng as corresponding author, appeared on the cover of the authoritative international journal Nature.

Compared with the initial DeepSeek-R1 paper released in January this year, this paper discloses more details of model training and directly responds to the distillation accusations raised when the model was first released. DeepSeek-R1 is also the world's first mainstream large language model to undergo peer review. Nature commented: at present, few ...
The R1 Paper Penned by Liang Wenfeng Makes the Nature Cover! A First Response to Three Major Outside Doubts
AI前线· 2025-09-18 02:28
Core Viewpoint
- The article highlights the breakthrough of DeepSeek's AI model, DeepSeek-R1, which passed peer review and is recognized as the first large language model to do so, a notable advance for domestic AI research on the global stage [3][8]

Summary by Sections

Model Development and Features
- DeepSeek-R1 uses reinforcement learning (RL) to develop reasoning capabilities without relying on extensive human-annotated data, a novel approach to AI model training [3][12]
- The model was built on DeepSeek-V3 Base, rewarding correct predictions to encourage longer, more logical responses [3][12]
- The training cost of DeepSeek-R1 was approximately $294,000, far below competitors that often spend tens of millions [6][12]

Peer Review Process
- The peer review involved eight external experts over five months and produced a review document three times the length of the original paper [9][12]
- The review covered originality, methodology, and robustness, leading to improvements in the final published version [9][12]

Data and Safety Measures
- The pre-training data for DeepSeek-V3 Base came entirely from the internet, with substantial effort spent cleaning it to avoid benchmark contamination, removing around 6 million potentially polluted samples; a sketch of the kind of filtering this involves follows this entry [6][12]
- DeepSeek-R1 implements external risk-control mechanisms and real-time audits, and shows stronger safety performance than other mainstream models such as Claude-3.7-Sonnet and GPT-4o [6][12]

Impact and Future Directions
- The innovative use of pure reinforcement learning in DeepSeek-R1 is expected to influence future research on large language models, with many researchers looking to apply similar methods to strengthen reasoning across domains [12][14]
- Despite some concerns about the transparency of the training-data composition, the model is competitive in balancing accuracy and cost on scientific task challenges [14][12]
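The summary does not specify how the 6 million polluted samples were identified; a common decontamination technique, sketched here purely as an assumption, is word-level n-gram overlap filtering against benchmark test sets.

```python
def ngrams(text: str, n: int = 10) -> set[tuple[str, ...]]:
    """Word-level n-grams; 10-grams are a typical decontamination unit."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(sample: str, bench_grams: set, n: int = 10) -> bool:
    """Flag a training sample that shares any n-gram with a benchmark."""
    return not ngrams(sample, n).isdisjoint(bench_grams)

# Build the filter from benchmark problems, then drop overlapping samples.
benchmark = ["what is the smallest positive integer n such that the sum "
             "of its digits equals 20"]
bench_grams = set().union(*(ngrams(b) for b in benchmark))
corpus = [
    "an unrelated web page about cooking pasta",
    "q: what is the smallest positive integer n such that the sum of "
    "its digits equals 20 a: 299",
]
clean = [s for s in corpus if not is_contaminated(s, bench_grams)]
print(len(clean))  # 1 -- the benchmark-overlapping sample is dropped
```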
Liang Wenfeng Publishes a Nature Cover Paper Unveiling the Science Behind DeepSeek-R1: Reinforcement Learning Incentivizes Reasoning in Large Models
生物世界· 2025-09-18 01:44
Core Viewpoint
- The article discusses the development and capabilities of DeepSeek-R1, a reasoning model that significantly reduces computational cost while enhancing the reasoning abilities of large language models (LLMs) through pure reinforcement learning [1][2]

Group 1: Model Development and Training
- DeepSeek-R1 was launched by a startup in Hangzhou, China, on January 20, 2025, and has gained global attention for its strong reasoning capabilities and low computational requirements [1]
- The training cost of DeepSeek-R1 was only $294,000, significantly lower than similar models that often cost tens of millions [2]
- The model employs a pure reinforcement learning approach that minimizes reliance on human-annotated reasoning paths, allowing more autonomous exploration of reasoning capabilities [6][10]

Group 2: Performance and Capabilities
- DeepSeek-R1-Zero, a precursor to DeepSeek-R1, showed remarkable gains on reasoning tasks, raising its average pass@1 score on the 2024 American Invitational Mathematics Examination (AIME) from 15.6% to 77.9%; a note on how pass@1 is computed follows this entry [17]
- The model also excelled at programming competitions and graduate-level problems in biology, physics, and chemistry, demonstrating its versatility [19]
- The research indicates that advanced reasoning behaviors, such as self-validation and reflection, emerged organically during the reinforcement learning process [29]

Group 3: Challenges and Limitations
- Despite its strengths, DeepSeek-R1-Zero suffers from poor readability and language-mixing issues, particularly when responding in a mix of English and Chinese [21]
- The model's performance in broader domains such as writing and open-domain Q&A remains limited because training focused on reasoning tasks [22]
- The article highlights potential ethical risks of enhanced reasoning capabilities, including vulnerability to jailbreak attacks and the generation of dangerous content [27][28]
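For context on the metric: pass@1 is the probability that a single sampled completion solves the problem, typically estimated by averaging correctness over several samples per problem and then over problems. A minimal estimator (our sketch, not DeepSeek's evaluation harness):

```python
def pass_at_1(results: list[list[bool]]) -> float:
    """results[i][j] = whether sample j on problem i was correct.
    Per-problem correct fraction, then the mean across problems."""
    per_problem = [sum(r) / len(r) for r in results]
    return sum(per_problem) / len(per_problem)

# Two problems, four samples each: 3/4 and 2/4 correct -> 0.625.
print(pass_at_1([[True, True, True, False], [True, False, True, False]]))
```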