Robotics
"Little Giants" Show Off Hard Tech: A "Cyber" Wave Sweeps the China International SME Expo
Core Viewpoint
- The 20th China International Small and Medium Enterprises Expo showcased the significant role of small and medium enterprises (SMEs) in technological innovation, highlighting their potential and opportunities in the current economic landscape [1][2].

Group 1: Development Trends of SMEs
- SMEs in China are experiencing a "quantity and quality rise," with over 60 million SMEs expected by the end of 2024, and revenue from large-scale industrial SMEs reaching 81 trillion yuan [3].
- The number of technology- and innovation-oriented SMEs has surpassed 600,000, with over 140,000 specialized and innovative SMEs and 14,600 "little giant" enterprises [3].
- Guangdong province leads with over 7.74 million SMEs, accounting for about one-eighth of the national total, and has 2,089 specialized and innovative "little giant" enterprises [3].

Group 2: Focus on Robotics
- The robotics sector, particularly specialized and innovative enterprises, has become a focal point at the expo, showcasing strong capabilities [2][4].
- The launch of the CR 30H collaborative robot by Yujian Technology represents a breakthrough in balancing high load and speed, achieving a load capacity of 30 kg and a joint speed of 300°/second [4].
- The introduction of a wall-cleaning robot by Guangdong Lingdu Intelligent Technology addresses safety and water conservation issues, achieving a water usage ratio of 20:1 compared to manual cleaning [5].

Group 3: AI and Digital Transformation
- AI is playing a crucial role in enabling SMEs to achieve technological breakthroughs and enhance competitiveness through digital transformation [2][7].
- The Shoushi Taihe Pharmaceutical Research Group has developed the first intelligent traditional Chinese medicine ring, integrating health monitoring with traditional practices [7].
- AI-driven systems in cosmetics development are transforming the traditional reliance on individual experience into a data-driven approach, enhancing efficiency and reducing resource waste [8].

Group 4: Digitalization as a Competitive Advantage
- Digital transformation is essential for SMEs in Guangdong, with platforms like "Master Craftsman" industrial models aiding operational efficiency [9][10].
- The integration of AI and 5G technologies is facilitating the development of smart factories, enhancing production capabilities and operational management [9][10].
- SMEs now emphasize their R&D and manufacturing capabilities during client visits, showcasing advanced technologies and systems that enhance their competitiveness [10].
Backed by "DJI Godfather" Li Zexiang! Woan Robotics Files for Hong Kong IPO
Nan Fang Du Shi Bao· 2025-06-30 10:01
Core Viewpoint
- The company, Woan Robotics, is focused on the AI embodied home robot sector and aims to create a smart home ecosystem centered around intelligent home robot products, having submitted its listing application to the Hong Kong Stock Exchange [2][5].

Group 1: Company Overview
- Woan Robotics was established in 2018; its predecessor, Woan Technology, was founded in January 2015 by Harbin Institute of Technology alumni Li Zhicheng and Pan Yang [3].
- The company has received investments from notable institutions such as Source Code Capital, Hillhouse Capital, and Guotai Junan Innovation, with significant backing from Li Zexiang, a prominent figure in robotics [3][4].
- Li Zexiang serves as a non-executive director, providing professional insights on product positioning and industry trends, raising expectations that Woan could replicate DJI's success [4].

Group 2: Market Position and Products
- According to a report by Frost & Sullivan, Woan Robotics is the largest AI embodied home robot system provider globally, holding an 11.9% market share as of 2024 [5].
- The company offers a range of products that add smart functionality to ordinary household items, including door lock robots and curtain robots [6].
- Major markets for Woan Robotics include Japan, Europe, and North America, which contributed 57.7%, 21.4%, and 15.9% of revenue respectively in 2024 [6].

Group 3: Financial Performance
- Woan Robotics has experienced rapid growth, with revenue increasing from 275 million yuan in 2022 to 610 million yuan in 2024 [6].
- The company's gross margin improved significantly from 37.3% in 2022 to 53.5% in 2024, indicating a trend towards profitability [7].
- Adjusted net profit turned positive in 2024, reaching 1.107 million yuan, while losses narrowed from 86.983 million yuan in 2022 to 3.074 million yuan in 2024 [7].

Group 4: Fundraising and Future Plans
- The funds raised from the IPO will be allocated to enhancing R&D capabilities, expanding sales channels, and repaying bank loans, among other operational needs [7].
Chinese Robot Startup’s Sales Leap After Beijing Marathon
Bloomberg Television· 2025-06-30 09:32
Welcome back. You're watching Insight and a really painfully slow soccer game, but bear with the clumsy footwork, as robotics tech in China continues to grow at a blistering pace. Now, these humanoid robots are still far from taking on Lionel Messi. This game, played in Beijing over the weekend, was seen as a breakthrough for robotkind and artificial intelligence. And that's because their movements weren't remote controlled by anyone on the sidelines but by built-in algorithms. And for the record, the winning t ...
IEEE T-ASE | Soft-Contact Simulation and Manipulation Learning Based on Visuotactile Sensors
机器人大讲堂· 2025-06-30 07:22
Recently, Professor Fang Bin's team at Beijing University of Posts and Telecommunications, together with Tsinghua University, the Sant'Anna School of Advanced Studies in Pisa (Italy), King's College London (UK), and the University of Hamburg (Germany), released work on soft-contact simulation and manipulation learning based on a palm-shaped visuotactile sensor, offering a new approach to deformable manipulation with visuotactile sensors. The work was published in IEEE Transactions on Automation Science and Engineering, a JCR Q1 journal in robotics and automation.

Research background: Deformable object manipulation is a classic and highly challenging task in robotics. Compared with rigid objects, deformable objects exhibit complex deformation behaviors (including elastic, plastic, and elastoplastic deformation), and their large number of degrees of freedom (DOF) requires sophisticated modeling, which makes the problem even harder. At the same time, deformable objects are ubiquitous in hospital, industrial, and household settings, so deformable object manipulation plays a vital role in the development of robotics.

To this end, the paper develops a soft-contact simulator between deformable objects and vision-based tactile sensors, capable of simulating the contact deformation between visuotactile sensors and elastic, plastic, and elastoplastic objects. Building on this simulator, the paper proposes a benchmark for visuotactile deformable object manipulation, including transferable observations, tasks, and an expert demonstration system. Finally, the paper builds the corresponding experimental platform and, for the relevant tasks, completes the Sim-to-rea ...
A master's student from a "double non" (non-985/211) university, feeling a bit lost in this year's job hunt...
自动驾驶之心· 2025-06-30 05:51
Core Viewpoint
- The article emphasizes the importance of advanced skills and knowledge in the fields of autonomous driving and embodied intelligence, highlighting the need for candidates with strong backgrounds to meet industry demands.

Group 1: Industry Trends
- The demand for talent in autonomous driving and embodied intelligence is increasing, with a focus on cutting-edge technologies such as SLAM, ROS, and large models [3][4].
- Many companies are transitioning from traditional methods to more advanced techniques, indicating a shift in the required skill sets for job seekers [3][4].
- The article notes that while there is a saturation of talent in certain areas, the growth of robotics startups presents new opportunities for learning and development [3][4].

Group 2: Learning and Development
- The article encourages individuals to enhance their technical skills, particularly in areas related to robotics and embodied intelligence, which are seen as the forefront of technology [3][4].
- It mentions the availability of resources and community support for learning, including access to courses, hardware, and job information through platforms like Knowledge Planet [5][6].
- The community aims to create a comprehensive ecosystem for knowledge sharing and recruitment in the fields of intelligent driving and embodied intelligence [5][6].

Group 3: Technical Directions
- The article outlines four major technical directions in the industry: visual large language models, world models, diffusion models, and end-to-end autonomous driving [7].
- It highlights the importance of staying updated with the latest research and developments in these areas, providing links to various resources and papers for further exploration [8][9].
X @Bloomberg
Bloomberg· 2025-06-30 05:29
RT Saritha Rai (@SarithaRai): Fist-pumping humanoid robots tottered along in a robot soccer game, sometimes colliding or falling on each other. The weekend game at a Beijing industrial zone was a breakthrough for humanoid robots & the AI powering them @business @technology https://t.co/GANvN47Xcy #AI https://t.co/XT2CMKyua3 ...
The Essential Tech Stack for Getting Started with Embodied Intelligence: From Zero Basics to Reinforcement Learning and Sim2Real
具身智能之心· 2025-06-30 03:47
Along the trajectory of the last 20 years of AI development, we are standing at an unprecedented turning point. From early symbolic reasoning to the breakthroughs of deep learning, and now to the stunning performance of large language models, each leap in AI technology has redefined the relationship between humans and machines. Today, embodied intelligence is rising in full force.

Imagine this scene: a robot that not only understands your spoken instructions, but can also move flexibly through complex real-world environments, precisely manipulate all kinds of objects, and even make intelligent decisions when the unexpected happens. This is no longer a fantasy from science fiction films, but a technological revolution that is rapidly becoming reality. From Tesla's Optimus humanoid robot to Boston Dynamics' Atlas, from OpenAI's robotic hand to Google's RT-X project, the world's top technology companies are racing to stake out this disruptive field. The core idea of embodied intelligence is to give AI systems not only a "brain" but also a "body" that can perceive and change the physical world. Such AI is no longer confined to virtual digital space; it can genuinely understand physical laws, master motor skills, and adapt to complex environments. These systems can perform precision assembly in factories, assist with surgical procedures in hospitals, provide attentive service in homes, and carry out rescue missions in hazardous environments. The potential impact of this technology is revolutionary: it will fundamentally transform manufacturing, services, healthcare, space exploration, and almost every other industry.

From top conferences such as ICRA and IROS to Neu ...
2025 Humanoid Robot Mid-Year Strategy
2025-06-30 01:02
What is the current state of the robotics industry, and what should we watch for going forward?

The robotics industry is currently transitioning from laboratory iteration to commercialization in vertical scenarios. Since the second half of 2024, robots have entered a rapid cost and technology iteration cycle. On one hand, supply-chain supplier designations and the scaling of mass production have accelerated cost reduction, with core components such as lead screws, motors, and reducers seeing clear cost declines, lead screws in particular. On the other hand, the iteration cycle for domestic robots is only about two months, and companies such as Tesla have already iterated to Optimus GEM Three, bringing products ever closer to maturity for large-scale mass production. In the second half of the year, the whole industry is entering the vertical-scenario commercialization stage, with humanoid robots being deployed in verticals such as inspection, elderly care, interaction, logistics, and factories. From a product and technology perspective, the commercialization of humanoid robots is mainly bottlenecked by the "big brain and small brain" (high-level planning and low-level motion control) and by upper-limb collaboration. Upper-limb collaboration capability is the foundation of work efficiency, so upper-limb capability should be the fastest-iterating area for humanoid robots and is currently the most critical field. Upper-limb collaboration requires collecting large-scale, high-quality data; large models such as Figure Helix, Zhiyuan's JOY, and the industry-general Grasp VLA all target upper-limb collaboration. In the second half of the year, focus on seven major tracks: dexterous hands, lead screws, pick-and-place, reducers, key assemblies, electronic skin, and six-axis force sensors, and watch for supply-chain opportunities around companies such as Tesla and Huawei.

Q&A ...
[Early-Bird Tickets: 1 Day Left] CCRS2025 | Sneak Peek! Conference Schedule and Forums Revealed for the First Time!
机器人圈· 2025-06-29 13:04
Core Points
- The 6th China Robotics Academic Annual Conference (CCRS2025) will be held from August 1 to 3, 2025, in Changsha, Hunan Province, with the theme "Human-Machine Integration, Intelligent Future" [14][15]
- The conference aims to gather over 200 experts and academicians in the field of robotics and artificial intelligence to discuss trends and exchange technological achievements, expecting more than 3,000 attendees [14][15]

Conference Overview
- CCRS2025 is one of the largest and most influential academic events in China's robotics field, focusing on cutting-edge technologies, industry development, and innovative achievements [13][14]
- The conference is co-hosted by multiple professional committees and societies related to robotics and automation in China [14]

Agenda Highlights
- The conference will feature various forums, including the Main Forum, Youth Scholar Forums, and specialized forums on industrial robots, service robots, special robots, and more [8][9][11]
- Specific sessions will include keynote speeches, poster exhibitions, and discussions on embodied intelligence and large models [8][9]

Registration Information
- Registration fees are 2,300 RMB for non-students and 1,300 RMB for students if registered by June 30, 2025 [41]
- Payment methods include WeChat Pay and bank transfer, with specific instructions provided for registration [41][42]

Organizing Committee
- The conference is chaired by prominent professors from leading universities and research institutes, ensuring high academic standards and collaboration opportunities [15][18][20][22]
HKUST | End-to-End LiDAR Omnidirectional Obstacle Avoidance System for Quadruped Robots (Unitree G1/Go2 + PPO)
具身智能之心· 2025-06-29 09:51
Core Viewpoint
- The article discusses the Omni-Perception framework developed by a team from the Hong Kong University of Science and Technology, which enables quadruped robots to navigate complex dynamic environments by directly processing raw LiDAR point cloud data for omnidirectional obstacle avoidance [2][4].

Group 1: Omni-Perception Framework Overview
- The Omni-Perception framework consists of three main modules: the PD-RiskNet perception network, a high-fidelity LiDAR simulation tool, and a risk-aware reinforcement learning strategy [4].
- The system takes raw LiDAR point clouds as input, extracts environmental risk features using PD-RiskNet, and outputs joint control signals, forming a complete closed control loop [5].

Group 2: Advantages of the Framework
- Direct use of spatiotemporal information avoids the information loss incurred when converting point clouds to grids or maps, preserving the precise geometric relationships in the raw data [7].
- Dynamic adaptability is achieved through reinforcement learning, allowing the robot to optimize obstacle avoidance strategies for previously unseen obstacle shapes [7].
- Computational efficiency is improved by reducing intermediate processing steps compared to traditional SLAM and planning pipelines [7].

Group 3: PD-RiskNet Architecture
- PD-RiskNet employs a hierarchical risk perception network that processes near-field and far-field point clouds differently to capture local and global environmental features [8].
- Near-field processing uses farthest point sampling (FPS) to reduce data density while retaining key geometric features, and employs gated recurrent units (GRUs) to capture local dynamic changes (a hedged code sketch of this hierarchical encoder follows at the end of this entry) [8].
- Far-field processing uses average down-sampling to reduce noise and extract spatiotemporal features from the distant environment [8].

Group 4: Reinforcement Learning Strategy
- The obstacle avoidance task is modeled as an infinite-horizon discounted Markov decision process, with a state space including the robot's kinematic information and historical LiDAR point cloud sequences [10].
- The action space directly outputs target joint positions, allowing the policy to learn the mapping from raw sensor inputs to control signals without complex inverse kinematics [11].
- The reward function combines obstacle avoidance and distance maximization terms to encourage the robot to seek open paths, while penalizing deviations from target velocities (a hedged sketch of such a reward also follows at the end of this entry) [13][14].

Group 5: Simulation and Real-World Testing
- The framework was validated against real LiDAR data collected with the Unitree G1 robot, showing high consistency in point cloud distribution and structural integrity between simulated and real data [21].
- The Omni-Perception LiDAR simulation tool showed significant advantages in rendering efficiency, maintaining linear growth in rendering time as the number of environments increased, whereas traditional methods exhibited exponential growth [22].
- In various tests, the framework achieved a 100% success rate in static obstacle scenarios and demonstrated superior performance in dynamic environments compared to traditional methods [26][27].
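To make the PD-RiskNet description above concrete, here is a minimal PyTorch-style sketch of a hierarchical point-cloud policy of the kind described: near-field points are reduced with farthest point sampling and summarized over time with a GRU, far-field points are average-pooled, and a small head maps the fused features plus proprioception to target joint positions. The class name, module sizes, split into near/far inputs, proprioception dimension, and 12-joint output are all illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a PD-RiskNet-style policy (not the paper's implementation).
# Assumed shapes: near-field cloud history (B, T, N, 3), far-field cloud (B, M, 3),
# proprioceptive state (B, 48), 12 actuated joints.
import torch
import torch.nn as nn


def farthest_point_sampling(points: torch.Tensor, k: int) -> torch.Tensor:
    """Naive FPS over an (N, 3) cloud; returns (k, 3). O(k*N), fine for a sketch."""
    n = points.shape[0]
    selected = torch.zeros(k, dtype=torch.long, device=points.device)
    dists = torch.full((n,), float("inf"), device=points.device)
    selected[0] = torch.randint(0, n, (1,), device=points.device)
    for i in range(1, k):
        dists = torch.minimum(dists, torch.norm(points - points[selected[i - 1]], dim=1))
        selected[i] = torch.argmax(dists)
    return points[selected]


class PDRiskNetSketch(nn.Module):
    def __init__(self, near_k=256, hidden=128, proprio_dim=48, num_joints=12):
        super().__init__()
        self.near_k = near_k
        # Per-point MLP (PointNet-style) for the FPS-reduced near-field cloud.
        self.near_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, hidden))
        # GRU over the time axis captures local dynamic changes in the near field.
        self.near_gru = nn.GRU(hidden, hidden, batch_first=True)
        # Far field: feature extraction followed by average pooling to suppress noise.
        self.far_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, hidden))
        # Policy head maps fused risk features + proprioception to joint targets.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden + proprio_dim, 256), nn.ELU(), nn.Linear(256, num_joints)
        )

    def forward(self, near_pts, far_pts, proprio):
        # near_pts: (B, T, N, 3), far_pts: (B, M, 3), proprio: (B, proprio_dim)
        B, T = near_pts.shape[:2]
        frames = []
        for t in range(T):
            per_frame = []
            for b in range(B):
                sampled = farthest_point_sampling(near_pts[b, t], self.near_k)  # (k, 3)
                per_frame.append(self.near_mlp(sampled).max(dim=0).values)  # symmetric pool
            frames.append(torch.stack(per_frame))        # (B, hidden)
        near_seq = torch.stack(frames, dim=1)             # (B, T, hidden)
        _, h_near = self.near_gru(near_seq)               # (1, B, hidden)
        far_feat = self.far_mlp(far_pts).mean(dim=1)      # average pooling over far points
        fused = torch.cat([h_near.squeeze(0), far_feat, proprio], dim=-1)
        return self.head(fused)  # target joint positions, one per actuated joint


if __name__ == "__main__":
    policy = PDRiskNetSketch()
    act = policy(torch.randn(2, 4, 1024, 3), torch.randn(2, 2048, 3), torch.randn(2, 48))
    print(act.shape)  # torch.Size([2, 12])
```

In a training setup of the kind summarized above, this network would serve as the PPO actor, with the critic sharing the same point-cloud encoder.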
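And here is a similarly hedged sketch of the kind of risk-aware reward described in Group 4: a term penalizing intrusions into a safety margin around the robot, a term rewarding clearance toward open space, and a term penalizing deviation from the commanded velocity. The function name, weights, safety margin, and exponential shaping are illustrative choices, not the paper's exact formulation.

```python
# Hypothetical reward shaping for LiDAR-based omnidirectional avoidance.
# Weights, the safety margin, and the kernel widths are assumptions for illustration.
import torch


def avoidance_reward(
    ranges: torch.Tensor,       # (B, N) per-beam LiDAR ranges in meters
    base_vel: torch.Tensor,     # (B, 3) measured base linear velocity
    cmd_vel: torch.Tensor,      # (B, 3) commanded base linear velocity
    safe_dist: float = 0.35,    # assumed safety margin around the robot body
    w_avoid: float = 2.0,
    w_open: float = 0.5,
    w_track: float = 1.0,
) -> torch.Tensor:
    min_range = ranges.min(dim=1).values                       # closest obstacle per env
    # Penalize intrusion into the safety margin (deeper intrusion, larger penalty).
    r_avoid = -w_avoid * torch.clamp(safe_dist - min_range, min=0.0)
    # Encourage heading toward open space by rewarding mean clearance (saturating).
    r_open = w_open * (1.0 - torch.exp(-ranges.mean(dim=1)))
    # Penalize deviation from the commanded velocity with an exponential kernel.
    vel_err = torch.norm(base_vel - cmd_vel, dim=1)
    r_track = w_track * torch.exp(-vel_err ** 2 / 0.25)
    return r_avoid + r_open + r_track


if __name__ == "__main__":
    r = avoidance_reward(torch.rand(4, 64) * 5.0, torch.zeros(4, 3), torch.zeros(4, 3))
    print(r.shape)  # torch.Size([4])
```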