How Much Computing Power Do Humanoid Robots Need?
创业邦· 2025-08-30 10:08
Core Viewpoint
- The article discusses advances in humanoid robots, focusing on the significant increase in edge computing power delivered by NVIDIA's Jetson T5000, which offers 2070 TFLOPS and enables more efficient AI inference and real-time processing of multimodal sensor data [6][10][15]

Group 1: Technological Advancements
- NVIDIA's Jetson series has evolved from the initial Jetson TK1, with less than 1 TFLOPS, to the current Jetson AGX Thor with 2070 TFLOPS, a major leap in computational capability for robotics [11][13]
- The Orin series, with 100 TFLOPS, has become a foundational AI computing platform for many humanoid robots in China, showing the growing importance of computing power in the robotics sector [15][19]

Group 2: Industry Leaders and Their Impact
- Elon Musk and Jensen Huang are highlighted as key figures in the resurgence of humanoid robots, with Musk announcing his entry into the humanoid robot space and Huang continuously enhancing NVIDIA's computing platforms [7][10]
- Huang's vision for NVIDIA extends beyond traditional AI into "Physical AI," indicating a broader ambition for the company's role in robotics [15][19]

Group 3: Current Capabilities and Future Directions
- Most humanoid robots currently use edge computing power in the range of 100-200 TFLOPS, which is sufficient for basic tasks such as grasping and sorting [17][19]
- The article points to a shift toward smaller AI models, as demonstrated by Boston Dynamics' Atlas robot, which uses a 450-million-parameter model to handle tasks efficiently while reducing computational load [21][22]
How Much Computing Power Do Humanoid Robots Need?
36Kr· 2025-08-28 07:02
Core Insights
- Jensen Huang has launched the Jetson T5000, a powerful edge computing platform with 2070 TFLOPS of computing power, designed specifically for humanoid robots [1][2]
- This advance lets humanoid robots perform more AI inference and real-time processing of multimodal sensor data locally, without relying on cloud computing [2][4]
- The development of humanoid robots has drawn significant attention from tech leaders such as Elon Musk and Jensen Huang, elevating the status of humanoid robots in the tech industry [4][6]

Industry Trends
- The Jetson series has evolved significantly since the first model, the Jetson TK1, launched in 2014, and now reaches 2070 TFLOPS with the latest Jetson AGX Thor [8][10]
- Major companies such as JD.com and Meituan have used the Jetson AGX Xavier in their logistics robots, showing the technology's practical applications in industry [8]
- Jensen Huang's focus on robotics and AI has positioned NVIDIA as a leader in the field, with the Jetson platform a cornerstone of this strategy [6][12]

Technological Developments
- Current humanoid robots typically require 100-200 TFLOPS of computing power, which is sufficient for basic tasks such as grasping and sorting [14][16]
- More complex tasks involving multi-sensor data processing require higher computing power, prompting the exploration of smaller models to optimize performance [16][19]
- Boston Dynamics' Atlas robot has successfully deployed a small model with 450 million parameters, demonstrating that smaller models can reduce computational load while improving real-time data processing [19][21]

Future Directions
- The industry is moving toward smaller models for specific tasks rather than relying on large models for all operations, which can be inefficient [21][23]
- This approach aligns with the ongoing trend of optimizing hardware resources and task execution in humanoid robots, pointing to a path for future advances in the field [23]
Nvidia's New Models Arrive: 4B Inference Soars 53x, New Attention Architecture Surpasses Mamba 2
36Kr· 2025-08-27 02:03
Core Insights
- Nvidia has launched a new series of small models called Jet-Nemotron, developed by an all-Chinese team, featuring innovations such as Post Neural Architecture Search (PostNAS) and a new linear attention module called JetBlock [1][2][8]
- The Jet-Nemotron models (2B and 4B) outperform leading open-source models such as Qwen3, Gemma3, and Llama3.2 across dimensions including math, code, commonsense, retrieval, and long-context accuracy [2][20]
- Inference throughput on H100 GPUs has been significantly enhanced, achieving up to a 53.6x increase [4][20]

Model Performance
- Jet-Nemotron-2B and Jet-Nemotron-4B demonstrate superior performance in benchmark tests, with Jet-Nemotron-4B achieving 65.2% accuracy on MMLU versus Qwen3's 60.3% [21]
- In long-context scenarios, Jet-Nemotron shows a dramatic throughput increase, up to 50x over Qwen3-1.7B [5][20]
- The models are also faster overall, with Jet-Nemotron-2B running 21x and Jet-Nemotron-4B 47x faster than Qwen3-1.7B-Base [20]

Innovations
- PostNAS enables efficient architecture exploration and adaptation starting from pre-trained Transformer models, significantly reducing the cost and risk of developing new language model architectures [9][10][14]
- JetBlock, a new linear attention module, combines dynamic convolution with hardware-aware architecture search, yielding substantial accuracy improvements while maintaining training and inference throughput similar to previous designs [18][20]

Technical Specifications
- The Jet-Nemotron models have been optimized across parameters including cache size and throughput, with configurations reaching a maximum throughput of 2,885 tokens per second [21]
- The models use a flexible design for attention blocks, improving performance on long-context and complex reasoning tasks [16][18]
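The throughput gains above come from the linear-attention family JetBlock belongs to: softmax attention keeps a KV cache that grows with sequence length, while linear attention folds keys and values into a fixed-size running state, so per-token cost and memory stay constant. The sketch below shows that mechanism in its generic form; the feature map, dimensions, and plain-list math are illustrative assumptions, not Nvidia's actual JetBlock design (which additionally uses dynamic convolution found via hardware-aware search).

```python
import math

def phi(x):
    """Positive feature map (elu(x) + 1), applied elementwise. Illustrative choice."""
    return [math.exp(v) if v < 0 else v + 1.0 for v in x]

def linear_attention(queries, keys, values):
    """Causal linear attention over lists of vectors, O(n) in sequence length."""
    d_k, d_v = len(keys[0]), len(values[0])
    S = [[0.0] * d_v for _ in range(d_k)]   # running sum of outer(phi(k), v)
    z = [0.0] * d_k                         # running sum of phi(k), for normalization
    outputs = []
    for q, k, v in zip(queries, keys, values):
        fq, fk = phi(q), phi(k)
        for i in range(d_k):                # state update: constant size per token
            z[i] += fk[i]
            for j in range(d_v):
                S[i][j] += fk[i] * v[j]
        denom = sum(fq[i] * z[i] for i in range(d_k)) or 1.0
        outputs.append([sum(fq[i] * S[i][j] for i in range(d_k)) / denom
                        for j in range(d_v)])
    return outputs
```

Because the state (S, z) never grows, a long-context decode step costs the same at token 100,000 as at token 10, which is where the reported 50x long-context throughput advantage over softmax-attention models comes from.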
Pazhou "Model Magic" Show Interview: Large Models Need Not Be "Big and Complete"; They Can Also Be "Small and Beautiful"
Nanfang Dushi Bao· 2025-08-22 03:30
Core Insights
- The article highlights the rapid development and application of large AI models, focusing on the innovations of Lingju Information Technology Co., Ltd. and its CEO Zhang Sheng [1][3][4]

Company Overview
- Lingju Information was founded in 2013 with a focus on AI, specifically natural language processing (NLP) and service robots [3]
- The company's core product, the "Lingju Artificial Brain," integrates semantic analysis, knowledge graphs, and cognitive computing [3][4]

Product Development
- Lingju has launched its own large AI model, the "Lingju Lingna Xunling Model," which emphasizes flexible deployment and quick response for specific application scenarios [1][5]
- The company focuses on "small models" that need far fewer parameters (around tens of millions) than general large models with hundreds of billions of parameters, achieving cost control and efficiency [5][8]

Market Position and Strategy
- Lingju's technology is widely used in enterprise conversational AI, personal AI applications, digital humans, service robots, and AIoT products, serving major companies such as Huawei, Alibaba, and Xiaomi [4][5]
- The company aims to leverage its proprietary technology and open-source advances to build tailored solutions for specific industry needs, emphasizing the importance of application scenarios in AI development [4][10]

Future Plans
- Lingju plans to deepen its focus from industry-level applications to specific scenarios, expanding from B2B to B2C markets to explore more possibilities for AI [10][11]
Nvidia Open-Sources a 9B-Parameter Small Model, 6x Faster Than Qwen3
量子位· 2025-08-19 05:25
Core Insights
- The article discusses the emergence of small AI models, highlighting the launch of NVIDIA's new small language model, Nemotron Nano v2, which is designed to perform complex reasoning tasks efficiently [1][3][7]

Group 1: Model Features and Performance
- Nemotron Nano v2 is a 9-billion-parameter model that matches or exceeds the accuracy of the leading open-source model Qwen3-8B on complex reasoning benchmarks while running 6x faster [1][7]
- The model supports a "reasoning trace" feature, generating its reasoning process before the final answer, which improves response quality, especially on complex tasks [8][11]
- Users can control a "thinking budget," specifying how many tokens the model may spend on reasoning, which helps manage the model's performance [10][12]

Group 2: Training and Data
- The model was pre-trained on over 20 trillion tokens, using FP8 precision and a Warmup-Stable-Decay learning rate schedule [19]
- Post-training combined techniques including supervised fine-tuning and reinforcement learning from human feedback, with about 5% of the data containing intentionally truncated reasoning traces [21]
- NVIDIA has also released a large portion of the training data, including a diverse pre-training dataset of 66 trillion tokens across multiple categories [23][26]

Group 3: Open Source Strategy
- NVIDIA's approach contrasts with other tech giants moving toward closed-source models, emphasizing an open-source strategy built around the Nemotron ecosystem [30][32]
- The company has made significant strides in open-sourcing its models, which may influence the competitive landscape of AI development [29][33]
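The "thinking budget" described above can be enforced client-side: cap how many tokens the model spends inside its reasoning trace, then force the transition to the final answer (the ~5% of training data with intentionally truncated traces is what makes a model degrade gracefully under such cuts). The tag names and the token-level generate() interface below are assumptions for illustration, not the actual Nemotron Nano v2 API.

```python
def generate_with_budget(generate_token, prompt, budget, max_answer_tokens=256):
    """Run a reasoning model with a hard cap on reasoning tokens.

    generate_token(context) -> next token string (hypothetical interface).
    Returns (reasoning_trace, answer) as lists of tokens.
    """
    context = prompt + "<think>"
    trace = []
    for _ in range(budget):                 # spend at most `budget` tokens thinking
        tok = generate_token(context)
        if tok == "</think>":               # model closed its own trace early
            break
        trace.append(tok)
        context += tok
    context += "</think>"                   # budget exhausted: force the close
    answer = []
    for _ in range(max_answer_tokens):
        tok = generate_token(context)
        if tok == "<eos>":
            break
        answer.append(tok)
        context += tok
    return trace, answer
```

The same loop serves both modes: a large budget lets the model reason to completion, while a small one trades answer quality for latency and cost in a controlled way.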
4o-mini's Chinese Team Lead Has Also Left, and This Time It's Not Zuckerberg's Fault
量子位· 2025-08-19 01:17
Core Viewpoint
- OpenAI's former key researcher Kevin Lu has left to join Thinking Machine Lab, a new AI startup co-founded by former OpenAI CTO Mira Murati that has reached a valuation of $12 billion [3][19]

Group 1: Kevin Lu's Background and Contributions
- Kevin Lu has a strong background in reinforcement learning and small-model development, having previously worked at Hudson River Trading, Meta, and OpenAI [5][6]
- At OpenAI he led the development of 4o-mini, a multimodal reasoning small model that supports text and image input and is designed for complex tasks at higher speed and lower cost [7][9]
- His most cited paper, "Decision Transformer: Reinforcement Learning via Sequence Modeling," has been cited 2,254 times and presents a framework for treating reinforcement learning as conditional sequence modeling [10][11]

Group 2: Thinking Machine Lab
- Thinking Machine Lab has attracted several former core OpenAI researchers, including John Schulman and Barrett Zoph, and recently closed a record-breaking $2 billion seed round [4][17]
- The startup has not yet publicly disclosed any results, which has generated significant anticipation in the AI community [21]
- Despite competitive offers from other tech giants, team members have chosen to stay, signaling strong confidence in the startup's potential [20]
New Nvidia Research: Small Models Are the Future of AI Agents
量子位· 2025-08-18 09:16
Core Viewpoint
- The article argues that small language models (SLMs) are the future of agentic AI, being more efficient and cost-effective than large language models (LLMs) for specific tasks [1][2][36]

Group 1: Performance Comparison
- Small models can outperform large models on specific tasks, as shown by a 6.7-billion-parameter Toolformer surpassing the 175-billion-parameter GPT-3 [3]
- A 7-billion-parameter DeepSeek-R1-Distill model has likewise shown better inference performance than Claude 3.5 and GPT-4o [4]

Group 2: Resource Optimization
- Small models optimize hardware resources and task design, allowing more efficient execution of agent tasks [6]
- They can efficiently share GPU resources, enabling parallel execution of multiple workloads while maintaining performance isolation [8]
- Their smaller size means lower memory usage and higher concurrency [9]
- GPU resources can be allocated flexibly according to operational needs, improving overall resource utilization [10]

Group 3: Task-Specific Deployment
- Traditional agent stacks rely on large models for all operations, but many agent tasks are repetitive and predictable, making small models the better fit [14][15]
- Using a specialized small model for each sub-task avoids the resource waste of large models and sharply reduces inference costs, with small models 10-30x cheaper to run [20]

Group 4: Flexibility and Adaptability
- Small models can be fine-tuned quickly and cheaply, allowing rapid adaptation to new requirements or rules, unlike the more rigid large models [20][24]
- Advanced agent systems decompose complex problems into simpler sub-tasks, reducing the importance of a large model's general understanding [24]

Group 5: Challenges and Considerations
- Despite these advantages, small models face challenges such as lower market recognition and the need for better evaluation standards [27][29]
- The transition from large to small models may not automatically save money, given industry inertia favoring large models [27]
- A hybrid approach combining models of different scales may serve varied tasks best [28]

Group 6: Community Perspectives
- Some users report that small models are more cost-effective for simple tasks, in line with the article's viewpoint [36]
- Others raise concerns that small models are less robust than large ones when handling unexpected situations [37]
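The hybrid approach above amounts to a routing layer: send routine, predictable sub-tasks to a cheap specialized small model and reserve the large model for open-ended work. A minimal sketch, assuming a keyword-based complexity heuristic and a 1:20 relative cost per call (an illustrative figure inside the article's 10-30x range), neither of which comes from the paper:

```python
SMALL_COST, LARGE_COST = 1.0, 20.0   # illustrative relative cost per call

# Sub-task kinds treated as routine and predictable (assumed taxonomy).
ROUTINE_KINDS = {"extract_json", "classify_intent", "fill_template"}

def route(task):
    """Pick a model tier for one sub-task based on its kind."""
    return "small" if task["kind"] in ROUTINE_KINDS else "large"

def dispatch(tasks):
    """Assign each sub-task to a tier and tally total relative cost."""
    cost = 0.0
    assignments = []
    for t in tasks:
        tier = route(t)
        cost += SMALL_COST if tier == "small" else LARGE_COST
        assignments.append((t["kind"], tier))
    return assignments, cost
```

In a real system the heuristic would be replaced by a learned or rule-based classifier, but the economics are the same: every sub-task diverted to the small tier saves roughly the full large-model call cost.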
SJTU Research Lands in a Top Nature Sub-Journal: Differentiable Physics Achieves the First End-to-End High-Speed Drone Obstacle Avoidance
机器之心· 2025-07-08 00:04
Core Viewpoint
- The article discusses a groundbreaking research achievement from Shanghai Jiao Tong University and the University of Zurich that integrates drone physical modeling with deep learning for autonomous navigation in complex environments, without relying on maps or communication [2][3]

Group 1: Research Background and Authors
- The research team consists of authors from Shanghai Jiao Tong University and the University of Zurich, focusing on areas such as differentiable-physics robotics, multi-target tracking, and AIGC [1]
- The research has been published in "Nature Machine Intelligence," highlighting the authors' contributions [3]

Group 2: Methodology and Innovations
- The proposed method allows training once and sharing weights among multiple drones, enabling zero-communication collaborative flight [7]
- The system achieves a navigation success rate of 90% in unknown complex environments, significantly outperforming existing methods in robustness [9]
- Drones can fly at up to 20 meters per second in real-world forest environments, double the speed of current imitation-learning-based solutions [10]

Group 3: Technical Details
- The approach uses a simple particle dynamics model instead of full drone dynamics, with a lightweight neural network of only three layers [12][21]
- The network takes low-resolution depth images as input and outputs control commands, with a total parameter size of only 2MB, making it deployable on inexpensive embedded computing platforms [12][21]
- Training is efficient, converging in only 2 hours on an RTX 4090 GPU [21]

Group 4: Comparison with Existing Methods
- The research contrasts with traditional reinforcement learning and imitation learning, demonstrating that the differentiable physics model effectively combines physical priors with the advantages of end-to-end learning [30]
- The method achieves superior performance with only 10% of the training data required by existing methods, with faster convergence and lower variance [38][39]

Group 5: Interpretability and Insights
- The study uses Grad-CAM activation maps to visualize the policy network's attention during flight, showing that the network learns to focus on potential collision areas [36][37]
- The findings suggest that understanding the physical world matters more for navigation capability than sensor precision [50]

Group 6: Future Directions
- The team plans to extend the work to an end-to-end monocular FPV (first-person-view) drone navigation system, already reaching speeds of up to 6 m/s in real outdoor environments without mapping [52]
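The core recipe above, rolling a simple particle-dynamics model forward, scoring the trajectory, and updating the policy with gradients that flow through the physics, can be sketched in miniature. The actual system backpropagates analytically through the simulator into a three-layer depth-image policy; here a one-parameter pursuit controller and finite-difference gradients stand in for the network and autodiff, so the sketch runs on the standard library alone. All dynamics constants are illustrative.

```python
def rollout(gain, start, goal, steps=50, dt=0.05):
    """Simulate a damped particle chasing `goal`; return terminal squared error."""
    pos, vel = start, 0.0
    for _ in range(steps):
        accel = gain * (goal - pos) - 0.5 * vel   # policy: damped pursuit
        vel += accel * dt                          # semi-implicit Euler step
        pos += vel * dt
    return (pos - goal) ** 2

def train(gain=0.1, lr=0.1, iters=60, eps=1e-4):
    """Gradient descent on the policy parameter through the simulated physics."""
    best_gain, best_loss = gain, rollout(gain, 0.0, 1.0)
    for _ in range(iters):
        # Finite differences stand in for backprop through the simulator.
        grad = (rollout(gain + eps, 0.0, 1.0)
                - rollout(gain - eps, 0.0, 1.0)) / (2 * eps)
        gain -= lr * grad
        loss = rollout(gain, 0.0, 1.0)
        if loss < best_loss:                       # keep the best iterate
            best_gain, best_loss = gain, loss
    return best_gain, best_loss
```

The point the paper makes survives even at this scale: because the loss is differentiated through the dynamics, every simulation step supplies gradient signal, which is why the method converges faster and with less data than reinforcement learning's sampled rewards.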
As AI Rolls Out Across Industry, Nvidia's "AI Factory" Is Not the Only Answer
第一财经· 2025-06-19 13:47
Core Viewpoint
- Nvidia is increasingly emphasizing the concept of AI factories, designed to leverage AI for value creation, in contrast with traditional data centers focused on general computing [1][2]

Group 1: Nvidia's AI Factory Concept
- Nvidia CEO Jensen Huang announced collaborations to build AI factories in Taiwan and Germany, featuring supercomputers equipped with 10,000 Blackwell GPUs [1]
- The AI factory concept includes a computational center and a platform for upgrading factories into AI factories, with a focus on simulation and digital twin technologies [4]
- The Omniverse platform is integral to Nvidia's strategy, allowing manufacturers to use AI for simulation and digital twin applications [2][3]

Group 2: Industry Applications and Collaborations
- Various manufacturers integrate Nvidia's AI technology through software from companies such as Siemens and Ansys, enhancing applications in autonomous vehicle simulation and digital factory planning [3]
- Companies such as Schaeffler and BMW use Nvidia's technology for real-time collaboration and optimization of manufacturing systems [3]

Group 3: AI Model Utilization
- The industrial sector was using small models for AI applications well before large models emerged, focusing on data intelligence and visual intelligence [6][10]
- Small models are expected to continue dominating industrial AI spending, with estimates putting their share at 60-70% of the market [10][11]

Group 4: Cloud and Computational Needs
- Nvidia's large-scale AI clouds are one option, but many companies prefer private cloud solutions out of data security concerns [13][14]
- Demand for computational power is expected to grow as AI applications spread, although current infrastructure is not yet a bottleneck [15]
The Future of On-Device AI: Can Apple Stage a Comeback with "Small Models"?
36Kr· 2025-06-10 06:26
Core Insights
- Apple's WWDC this year lacked the excitement of previous years, with developers giving a lukewarm response to the anticipated AI features [1]
- The company's strategy of using "small models" for on-device AI has raised concerns among developers about performance and customization [2][3]
- Tensions between Apple and developers have increased, particularly after legal rulings challenging Apple's App Store commission model [4]

Group 1: AI Strategy
- Apple is expected to showcase advances in on-device AI, focusing on "small models" that run directly on devices such as the iPhone and require less data and fewer computational resources [1]
- Developers are skeptical that these small models can match cloud-based large models, particularly on complex AI tasks [2]
- Some developers see potential in small models for specific use cases, while others remain unconvinced of their effectiveness [3]

Group 2: Developer Relations
- The recent Epic Games lawsuit ruling allows developers to direct users to payment options outside the App Store, threatening Apple's revenue model [4]
- Apple maintains that the App Store offers developers significant opportunities, but growing dissatisfaction among developers may undermine this narrative [4]
- Regulatory pressures in the U.S. may force changes to how Apple operates the App Store, potentially eroding its current dominance [4]

Group 3: Innovation Challenges
- Apple has struggled to meet market expectations with recent launches such as the Vision Pro headset, which has not seen wide adoption [6]
- The company's AI initiatives, including enhanced Siri features, have been perceived as reactive rather than innovative [6]
- The core question is whether Apple is experiencing an innovation slowdown or undergoing a strategic transformation [5][8]

Group 4: Future Opportunities
- Despite these challenges, Apple can leverage the iPhone as the gateway through which users reach mainstream AI technologies [8]
- Its strength in proprietary chip design underpins the on-device AI strategy [8]
- To regain momentum, Apple must improve its AI tools for developers and rebuild trust with its developer community [8]