Autonomous Flight Without GPS: Are Educational Drones at National Competitions Really This Competitive Now?
机器人大讲堂· 2025-06-18 12:29
Blindfold yourself and get dropped into an unfamiliar room: how would you work out where you are? That is the first problem an indoor drone faces. Traditional optical-flow positioning navigates the way an optical mouse does, judging movement from changes in the ground texture it photographs. It sounds fine, but on a solid-colored floor it is completely lost. At one trade show I watched a company bring a roll of patterned carpet and lay it out just to make its demo work, which made for an awkward scene. Visual SLAM is more sophisticated: a camera "memorizes" feature points in the surroundings. But it has a fatal weakness: "lights on, I recognize you; lights off, I don't." Change the lighting and positioning accuracy is cut in half. As for UWB positioning, its accuracy is decent, but base stations must be deployed in advance; four base stations plus tags cost over 10,000 yuan, and the drone loses its "land anywhere, fly anywhere" flexibility. The 光子 RC-L1 recommended for this year's competition takes what is currently the most reliable technical route: lidar. Lidar's biggest advantage is that it does not care about the environment. Marble floor or patterned carpet, day or night, it scans hundreds of thousands of points per second and builds a precise map of its surroundings. The challenge that follows, however, is a data explosion. The traditional approach streams the data to a ground station for processing, but here is the problem: at a flight speed of 3 m/s, 100 ms of communication latency means a 30 cm position error. During obstacle avoidance, that can be the difference between a near miss and a head-on collision. Friends, if I told you that …
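The latency figure quoted above (3 m/s with 100 ms of delay giving a 30 cm error) is simple arithmetic: position error is speed times latency. A minimal check:

```python
# Back-of-envelope check of the latency figure in the article:
# at 3 m/s, a 100 ms communication delay shifts the drone's true
# position by speed * latency before the ground station can respond.

def position_error_cm(speed_m_s: float, latency_ms: float) -> float:
    """Worst-case drift accumulated during one latency window.

    speed [m/s] * latency [ms] yields millimetres; divide by 10 for cm.
    """
    return speed_m_s * latency_ms / 10.0

print(position_error_cm(3.0, 100.0))  # 30.0 cm, matching the article
```

This is why the article frames onboard processing as an avoidance-critical requirement: the error scales linearly with both speed and round-trip delay.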
MiniMax Takes the Fight to DeepSeek
Jing Ji Guan Cha Wang· 2025-06-18 11:32
Core Viewpoint - MiniMax has launched its self-developed MiniMax M1 model, which competes directly with DeepSeek R1 and Google's Gemini 2.5 Pro in terms of key technical specifications, architecture design, context processing capabilities, and training costs [1][2]. Group 1: Model Specifications - MiniMax M1 supports a context length of 1 million tokens, which is 8 times larger than DeepSeek R1's 128,000 tokens and only slightly behind Google's Gemini 2.5 Pro [1]. - The total parameter count for MiniMax M1 is 456 billion, with 45.9 billion parameters activated per token, while DeepSeek R1 has a total of 671 billion parameters but activates only 37 billion per token [1]. Group 2: Cost Efficiency - MiniMax M1 consumes only 25% of the floating-point operations compared to DeepSeek R1 when generating 100,000 tokens, and requires less than half the computational power for inference tasks of 64,000 tokens [2]. - The training cost for MiniMax M1 was only $535,000, significantly lower than the initial expectations and much less than the $5-6 million GPU cost for training DeepSeek R1 [2]. Group 3: Pricing Strategy - MiniMax M1 has a tiered pricing model for its API services based on the number of input or output tokens, with the first tier charging 0.8 yuan per million input tokens and 8 yuan per million output tokens, which is lower than DeepSeek R1's pricing [3]. - The pricing for the first two tiers of MiniMax M1 is lower than that of DeepSeek R1, and the third tier for long text is currently not covered by DeepSeek [3]. Group 4: Technology Innovations - MiniMax M1's capabilities are supported by two core technologies: the linear attention mechanism (Lightning Attention) and the reinforcement learning algorithm CISPO, which enhances efficiency and stability in training [2].
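As a rough sketch of how the first pricing tier quoted above works out in practice (the article gives only the tier-1 rates of 0.8 yuan per million input tokens and 8 yuan per million output tokens; higher-tier rates and boundaries are not reproduced here):

```python
# Tier-1 API billing as quoted in the article for MiniMax M1:
# 0.8 yuan per million input tokens, 8 yuan per million output tokens.
# Higher tiers have different rates, so this sketch covers tier 1 only.

RATE_INPUT_YUAN_PER_M = 0.8
RATE_OUTPUT_YUAN_PER_M = 8.0

def first_tier_cost_yuan(input_tokens: int, output_tokens: int) -> float:
    """Cost in yuan of one request billed entirely at tier-1 rates."""
    return (input_tokens * RATE_INPUT_YUAN_PER_M
            + output_tokens * RATE_OUTPUT_YUAN_PER_M) / 1_000_000

# e.g. a 20K-token prompt producing a 2K-token answer:
print(round(first_tier_cost_yuan(20_000, 2_000), 4))  # 0.032 yuan
```

Note the 10x asymmetry between input and output rates, which is typical of reasoning-model APIs where output tokens carry the inference cost.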
From Space Provider to Ecosystem Connector: WeWork China Upgrades Its Flexible Office Solutions
Xin Hua Cai Jing· 2025-06-18 09:46
Core Insights - WeWork China has launched a flexible office intelligent solution called "悠座 FLEXJOY," marking its strategic shift from a space operator to an office ecosystem builder [1][2] - The initiative aims to enhance Shanghai's innovation and entrepreneurship ecosystem by leveraging AI and technology to foster collaboration between universities, research institutions, and enterprises [1] - WeWork China has evolved from serving large corporations to also catering to numerous Chinese unicorns, transitioning from a 1.0 to a 2.0 era focused on flexibility, innovation, and technology [2] Company Developments - WeWork China operates nearly 70 communities across 12 cities, offering a diverse range of flexible products to meet user demands for flexible office solutions [2] - The business model has expanded beyond traditional leasing to include light asset operation and system cooperation models, with nearly 100 collaborative office spaces connected through partnerships with various property owners [2][3] Technological Innovations - "悠座 FLEXJOY" serves as a technology-driven solution that effectively matches idle office spaces with market demand, addressing the current market pain points of oversupply and unmet flexible office needs [3][4] - The solution features a dynamic national office network that allows members to book flexible workspaces and meeting rooms via a mobile app, breaking physical space limitations [4] - It includes advanced technological support such as AI smart management, remote temperature control, and indoor navigation, enhancing the overall user experience [4] Strategic Partnerships - WeWork China has announced a deep collaboration with 互影科技 to launch an interactive content ecosystem platform, aimed at helping content creators realize their creative ideas [4]
More Striking Than We Imagined: "Godfather of Silicon Valley Venture Capital" Hoffman's Deep Dive into Current Silicon Valley Investment and Tech Trends
聪明投资者· 2025-06-18 08:33
Core Viewpoint - The article discusses the transformative impact of AI and robotics on the future of work and wealth distribution, emphasizing the need for investors to adapt to these changes and identify valuable investment opportunities in the AI sector [6][89]. Group 1: AI Trends and Investment Opportunities - The current AI wave is just beginning, with rapid growth and the emergence of thousands of new companies daily, although many may not survive beyond five years [8][13]. - Investment in AI is heavily concentrated in a few hot startups, with a stark divide in funding availability [3][24]. - The strategies of "open source" and "distillation" are reshaping the competitive landscape in AI, allowing smaller companies to innovate at lower costs [31][33]. - Investors should focus on small models and vertical AI that cater to specific industry needs, as these areas present significant growth potential [40][43]. Group 2: Evaluating AI Companies - Six key factors for assessing the investment value of AI companies include team quality, proprietary data, innovative business models, patent technology, network effects, and brand strength [36][39]. - Companies that can leverage proprietary data to create competitive advantages are more likely to attract investment [36][39]. Group 3: Robotics and AI Integration - The future direction of society is towards the integration of AI and robotics, with the potential for robots to perform traditional jobs at lower costs [81][89]. - As AI technology advances, the cost of humanoid robots may eventually match that of hiring human workers, leading to widespread adoption in various sectors [83][89]. - The development of AI agents capable of executing complex tasks will redefine job roles and the nature of work [48][50]. Group 4: Market Dynamics and Challenges - The venture capital landscape has changed significantly, with a 60% reduction in funding compared to 2021, making it harder for new funds to raise capital [15][16]. 
- Many unicorn companies are experiencing valuation declines, and the exit timelines for investments are lengthening [16][17]. - Investors must be cautious of overvalued companies in the AI space, as not all will achieve the expected profitability [12][20]. Group 5: Future Implications - The article highlights the potential for AI to replace many traditional jobs, raising questions about the future of work and human identity [90][91]. - The ongoing advancements in AI and robotics will likely lead to a significant shift in wealth distribution, with those controlling these technologies gaining substantial economic power [6][89].
20-Billion-RMB AI Unicorn Fights Back: MiniMax's First Reasoning Model Benchmarks Against DeepSeek, with Compute Costs of Only $530,000
Hua Er Jie Jian Wen· 2025-06-17 11:57
Core Insights - MiniMax, a Chinese AI startup valued at 20 billion RMB, has launched its first inference model, M1, which challenges leading models like DeepSeek and others with significantly lower training costs and superior efficiency [1][6]. Performance and Efficiency - M1 outperforms domestic closed-source models and approaches the performance of the best overseas models, surpassing DeepSeek, Alibaba, ByteDance, OpenAI, Google, and Anthropic in certain tasks [1]. - In terms of efficiency, M1 consumes less than 50% of the computational power of DeepSeek R1 when generating 64K tokens, and only 25% for 100K tokens [7]. - The model has a total of 456 billion parameters and supports context inputs of up to 1 million tokens, which is eight times that of DeepSeek R1 [3]. Cost Efficiency - The entire training process for M1 utilized 512 NVIDIA H800 GPUs over three weeks, with a rental cost of approximately 537,400 USD (around 3.8 million RMB), which is an order of magnitude lower than initially expected [6]. - MiniMax has developed a new reinforcement learning algorithm named CISPO, which achieved double the speed of ByteDance's recent DAPO algorithm, requiring only 50% of the training steps to reach similar performance [6]. Market Positioning - MiniMax has adopted a tiered pricing strategy for its API, making M1 more cost-effective compared to DeepSeek R1, especially in the input length ranges of 0-32K and 32K-128K tokens [8]. - M1 is positioned as a "price killer" in the market, receiving positive feedback from developers for its cost-performance ratio [8]. Future Developments - M1 is just the first product in a series of releases planned by MiniMax, which aims to introduce intelligent agent applications and further updates in video and music model capabilities [9]. 
- The company believes that M1's efficient architecture will provide unique advantages in future intelligent agent applications that require extensive reasoning and integration of long-context information [9].
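The training-cost figures above (512 NVIDIA H800 GPUs, three weeks, roughly $537,400 in rental) can be sanity-checked with a quick back-calculation; the implied per-GPU-hour rate below is my arithmetic, not a figure from the article:

```python
# Back-calculating the implied GPU rental rate from the quoted figures:
# 512 H800 GPUs, 3 weeks of training, ~$537,400 total rental cost.

gpus = 512
weeks = 3
total_cost_usd = 537_400

gpu_hours = gpus * weeks * 7 * 24      # total GPU-hours consumed
rate = total_cost_usd / gpu_hours      # implied $ per GPU-hour

print(gpu_hours)       # 258048 GPU-hours
print(round(rate, 2))  # ~2.08 $/GPU-hour
```

A rate of about $2 per GPU-hour is plausible for rented H800 capacity, which supports the article's claim that the total is an order of magnitude below initial expectations.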
VC Watch: Is Primary-Market Investment Warming Up Again?
Sou Hu Cai Jing· 2025-06-17 11:23
Group 1 - The investment market is experiencing a recovery, with some first-tier market practitioners feeling a significant increase in activity since the second half of 2024, with project numbers in the first half of 2025 reaching nearly 80% of the total from the previous year [1] - Investors are showing a higher level of enthusiasm for projects compared to last year, with many actively seeking opportunities and some projects securing multiple funding rounds within a year [1] - The sentiment among investors has shifted, with a focus on supporting companies to develop rather than pushing for immediate exits, especially in the biopharmaceutical sector where there are new systematic exit opportunities [1] Group 2 - The current recovery in the primary market is attributed to increased policy support, valuation recovery in the secondary market, and improved exit expectations, alongside the emergence of new investment trends in AI and humanoid robotics [2] - Despite the heightened enthusiasm among investors, actual investment activity remains cautious, with no significant year-on-year growth in the number and amount of investment events in the first half of 2025, although the number of new funds established has increased [2] - A clear divide exists in the primary market, with a strong interest in AI sectors contrasted by ongoing challenges such as fundraising difficulties and limited exit channels [2] Group 3 - Positive signals are emerging, including continuous policy support, the gradual entry of long-term funds from banks and insurance, relaxed requirements for government-guided fund reinvestment, and enhanced IPO exit expectations, indicating potential structural breakthroughs in the primary market [3]
"U.S. Export Controls May Buy Time, but Not Necessarily Victory"
Guan Cha Zhe Wang· 2025-06-17 10:10
Core Viewpoint - The article discusses the counterproductive effects of U.S. export controls on AI chip development in China, as highlighted by Nvidia CEO Jensen Huang, suggesting that these restrictions may weaken the U.S. competitive advantage in the AI sector [1][2]. Group 1: U.S. Export Controls and Their Impact - Jensen Huang warns that U.S. efforts to block China's development of advanced AI chips and software are backfiring, undermining the U.S. position in the global tech landscape [2]. - Huang argues that the assumption that China cannot manufacture AI chips is fundamentally flawed, as China is now capable of developing its own AI tools [2][3]. - The "fortress" strategy employed by Washington is accelerating innovation in China and escalating geopolitical competition [2]. Group 2: Investment Trends and Market Dynamics - From 2000 to 2023, Chinese venture capital invested $184 billion in AI startups, with the industry projected to reach a value of $1.4 trillion by 2030, including related sectors [2]. - Among approximately 4,300 AI companies, six major players dominate the market, indicating a concentrated competitive landscape [2]. - The cost of developing high-performance AI models has been significantly reduced, as demonstrated by Chinese startup DeepSeek, which achieved comparable quality to OpenAI at a fraction of the cost [3]. Group 3: Comparative Analysis of U.S. and Chinese Tech Companies - The U.S. has 690 private tech companies valued over $1 billion, totaling $2.53 trillion, while China has only 162 companies valued at $702.46 billion, highlighting a disparity in market capitalization [3]. - Despite the U.S. having a lead in AI models, assessments indicate that this advantage is diminishing as competitors in China advance in education, capital markets, and technology [3]. 
Group 4: Nvidia's Business Implications - Nvidia's quarterly report acknowledges that restrictions on China will harm its business, with $8 billion in planned H20 chip orders needing to be canceled due to tightened export licenses [4]. - The interdependence between U.S. chip companies and China is significant, with 40% of revenues for companies like Qualcomm, Intel, and Broadcom coming from the Chinese market [4]. - The Chinese semiconductor market is expected to reach $204.03 billion this year, growing at a compound annual growth rate of 8.24% [4].
Crowdsourced Web-Programming Rankings: DeepSeek-R1 Surpasses Claude 4 to Claim the Global Top Spot
量子位· 2025-06-17 07:41
Core Viewpoint - The article discusses the competitive landscape of coding models, highlighting that DeepSeek's new version R1 has surpassed Claude Opus 4 in web programming capabilities, indicating a shift in the dominance of coding models in the AI space [1][2]. Group 1: Model Performance - DeepSeek-R1-0528 achieved a score of 73.4 in coding tasks, ranking fourth overall, while Claude Opus 4 scored 1418, ranking sixth [4][27]. - In specific categories, DeepSeek-R1 ranked fourth in difficult prompts and fifth in mathematics among open-source models, showcasing its competitive edge [27][28]. Group 2: User Experience - DeepSeek-R1 is noted for being more user-friendly for domestic users compared to Claude, as it is free and easily accessible [23][24]. - The model demonstrated significant improvements in coding capabilities, although it still has room for enhancement [23]. Group 3: Additional Achievements - DeepSeek-R1 was recognized as the best open-source text model under the MIT license, ranking sixth overall in the coding model arena [25][26]. - The article mentions a new model, Kimi-Dev, which has achieved a state-of-the-art score of 60.4% in open-source coding benchmarks, outperforming DeepSeek-R1 [29][30].
MiniMax Releases Reasoning Model Benchmarked Against DeepSeek, with Compute Costs of Only About $530,000
Di Yi Cai Jing· 2025-06-17 07:26
Core Insights - MiniMax, one of the "Six Little Dragons," has announced significant updates, starting with the release of its first open-source inference model, MiniMax-M1 [1] - MiniMax-M1 has shown competitive performance in benchmark tests, comparable to leading overseas models like DeepSeek-R1 and Qwen3 [3] - The model's training was completed in just three weeks using 512 H800 GPUs, with a total computing cost of only $534,700, which is an order of magnitude lower than initially expected [3][8] Performance Metrics - MiniMax-M1's context window length is 1 million tokens, which is eight times that of DeepSeek R1 and matches Google's Gemini 2.5 Pro, allowing superior performance in long-context understanding tasks [5] - In the TAU-bench evaluation, MiniMax-M1 outperformed DeepSeek-R1-0528 and Google's Gemini 2.5 Pro, ranking just below OpenAI o3 and Claude 4 Opus globally [7] - The model excels in coding capabilities, significantly surpassing most open-source models, with only a slight gap behind the latest DeepSeek R1 [7] Innovations and Cost Efficiency - MiniMax-M1 utilizes a hybrid architecture based on a lightning attention mechanism, enhancing efficiency in long-text input and deep reasoning tasks [7] - The introduction of the CISPO reinforcement learning algorithm has resulted in faster convergence performance compared to ByteDance's recent DAPO algorithm, contributing to the low training cost [8] - MiniMax's pricing strategy is tiered based on input length, with costs ranging from 0.8 to 2.4 yuan per million tokens for input and 8 to 24 yuan for output, offering competitive pricing against DeepSeek [8] Competitive Landscape - Concurrently, another competitor, Moonshot AI, has released its programming model Kimi-Dev-72B, which reportedly achieved the highest open-source model level in SWE-bench tests, surpassing the new DeepSeek-R1 [8] - However, Kimi-Dev-72B faced scrutiny for potential overfitting, as it generated less code than required for certain tasks, raising questions about its performance reliability [9] - The AI industry is witnessing renewed competition among the "Six Little Dragons," with MiniMax expected to release further updates in the coming days, potentially impacting the multi-modal AI landscape [9]
End of the Claude Era? LMArena Tests Show DeepSeek R1's Coding Score Beats Opus 4, but Moonshot Claims Its New Model Is Even Better
AI前线· 2025-06-17 06:56
Core Viewpoint - The article highlights the significant advancements of the open-source AI model DeepSeek-R1 (0528), which has demonstrated competitive performance against leading proprietary models like Claude Opus 4 and GPT-4.1 in various benchmarks, marking a notable milestone in the open-source AI landscape [1][14]. Performance in Benchmarks - DeepSeek-R1 (0528) achieved a score of 1408.84 in the WebDev Arena, surpassing Claude Opus 4's score of 1405.51, and tying with Gemini-2.5-Pro-Preview-06-05 for the top position [4][5]. - In the LMArena public benchmark tests, R1 (0528) outperformed several top closed models, showcasing its coding capabilities [3][4]. - The model ranks sixth in the Text Arena, indicating strong performance in language understanding and reasoning tasks [6]. Technical Specifications - DeepSeek-R1 (0528) utilizes a mixture of experts (MoE) architecture with a total parameter count of 685 billion, activating approximately 37 billion parameters during inference for efficient computation [9]. - It supports a long context window of 128K tokens, enhancing its performance in long text understanding and complex logical reasoning tasks [9]. Community Reactions - The release of DeepSeek-R1 (0528) has sparked discussions in developer communities, with some users expressing skepticism about its performance compared to proprietary models [10][11][16]. - Users have noted the impressive coding capabilities of R1, suggesting that developers using this model could outperform those using closed models [16]. Competitive Landscape - The article mentions the recent release of Kimi-Dev-72B, another open-source model that has achieved high scores in programming benchmarks, indicating a competitive environment in the open-source AI space [22][23]. - Kimi-Dev-72B scored 60.4% in the SWE-bench Verified programming benchmark, surpassing DeepSeek-R1 (0528) in specific coding tasks [23]. 
Conclusion - The advancements of DeepSeek-R1 (0528) signify a critical moment for open-source AI, demonstrating that open models can compete with proprietary systems in terms of performance and capabilities [14].
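The sparsity figures in the technical specifications above (685 billion total parameters, roughly 37 billion active per token) amount to a small activation ratio. The routing sketch below is a generic illustration of mixture-of-experts top-k selection, not DeepSeek's actual implementation; the expert scores are made up for the example:

```python
# The MoE figures quoted above: only a small fraction of the 685B
# total parameters are activated for any single token.
total_params_b = 685   # billions, total
active_params_b = 37   # billions, active per token

print(round(active_params_b / total_params_b * 100, 1))  # ~5.4% active

# Generic MoE top-k routing: for each token, a router scores every
# expert and only the k best-scoring experts run. This is a sketch
# of the technique, not DeepSeek's code.
def top_k_experts(scores: list[float], k: int) -> list[int]:
    """Indices of the k highest-scoring experts for one token."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# A router scoring 8 hypothetical experts, keeping the top 2:
print(top_k_experts([0.1, 0.9, 0.3, 0.8, 0.2, 0.05, 0.4, 0.7], 2))  # [1, 3]
```

This activation sparsity is what lets a 685B-parameter model run inference at roughly the compute cost of a much smaller dense model.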