Tactile Perception
孚腾资本 Leads, Li Auto Follows On: 「千觉机器人」 Raises Another Hundred-Million-Yuan Round, Riding the 53-Billion-Yuan Tactile Sensing Wave
Sou Hu Cai Jing · 2025-10-18 04:32
Another star robotics startup has been minted. 千觉机器人科技(上海)有限公司 ("千觉机器人", Qianjue Robotics), a leading embodied-tactile company, recently closed a Pre-A round worth over 100 million yuan. The round was led by 孚腾资本 (the Shanghai embodied-intelligence fund), joined by strategic investor Li Auto (理想汽车) and leading market-oriented institutions such as 彬复资本, with existing shareholders 高瓴创投, 元禾原点, and 戈壁创投 following on. The proceeds will mainly go toward R&D, faster product iteration, team expansion, and market-channel development, completing the company's full-chain business layout. Since the start of the year, robotics financing in China has stayed hot, with capital continuing to tilt toward companies that pair innovative technology with deployment capability. Financial investors are active at the early stage, while industrial capital's participation rises in later rounds as companies mature, pushing the industry from technology validation toward large-scale deployment. A highlight of this round is the industrial capital involved, especially investors from industrial manufacturing. Analysts argue that the entry of 孚腾资本 and Li Auto signals recognition of Qianjue's prospects by both state-backed institutions and industrial capital, and should help its tactile sensing technology land and spread in real industrial scenarios. The previous round already included industrial players such as 智元机器人; repeated bets by multiple robot OEMs further confirm the scarcity of Qianjue's tactile sensing technology and its value as a deployment partner. According to 机器人大讲堂, Qianjue is already active in embodied dexterous manipulation, industrial precision assembly, tactile inspection, flexible ...
Three Funding Rounds in One Year: How Does 千觉机器人 Use "Tactile Sensing" to Pry Loose Hundred-Million-Yuan Capital?
Sou Hu Cai Jing · 2025-10-17 05:48
When Li Auto and 孚腾资本, backed by Shanghai state capital, write checks to the same robotics company, the capital market's spotlight swings back to the embodied-intelligence track. A Sequoia Capital analyst wrote in an internal memo: "Vision and speech have matured; touch is the next battleground." A hot track also means white-hot competition: Boston Dynamics' latest robot can reportedly solve a Rubik's cube by touch alone, and Amazon is testing similar technology in its logistics robots. The answer lies in the sensors at the robot's fingertips. Qianjue's multimodal tactile sensing technology effectively gives robots human-like tactile nerves: whether assembling precision parts or sorting fragile goods, the robotic fingers automatically adjust their grip through micrometre-scale pressure feedback. Attractive as the prospects are, risks loom. An investor involved in due diligence revealed that production yield for tactile sensors remains an industry-wide problem, and standardization is thornier still: precision requirements differ across industries by a factor of a hundred — medical robots must sense tissue elasticity, while industrial scenarios only need to judge gripping force. It is like asking one chef to both slice tofu and chop ribs. The breakthrough has even drawn an olive branch from Google DeepMind, with the two sides jointly exploring how to let robots truly "understand" the physical world. As one investor quipped: "When a robot can feel the fragility of an eggshell and the hardness of a screw, industrial automation enters a new dimension." Qianjue's reach now extends to five application scenarios, from precision assembly lines in car factories to L'Oréal production ...
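The grip-adjustment behaviour described above — a fingertip pressure reading driving small force corrections — can be sketched as a simple proportional feedback loop. This is an illustrative toy, not Qianjue's implementation; the function names, gains, and force values are all assumptions.

```python
# Hypothetical sketch of tactile grip-force feedback: a proportional
# controller steps the commanded grip force toward a target, with the
# step size clamped so fragile objects are never jerked.

def adjust_grip(current_force_n: float, target_force_n: float,
                gain: float = 0.5, max_step_n: float = 0.2) -> float:
    """Return the next grip-force command from the measured force."""
    error = target_force_n - current_force_n
    step = max(-max_step_n, min(max_step_n, gain * error))
    return current_force_n + step

# Simulate converging on a 1.0 N target, e.g. for an egg-like object.
force = 0.0
for _ in range(20):
    force = adjust_grip(force, target_force_n=1.0)
print(round(force, 3))  # converges to 1.0 N without overshoot
```

The clamp on the per-step change is what makes the same loop usable for both fragile and rigid objects: only the target force and gain need to change.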
Two Rounds in Two Months: 「模量科技」 (Modulus Technology) Takes the Lead in the Tactile Sensor Race
机器人大讲堂 · 2025-09-17 04:15
Core Viewpoint - The article highlights the advancements in tactile sensing technology by Modulus Technology, emphasizing its successful completion of a multi-million Pre-A funding round to enhance production capacity and deepen market penetration across various sectors such as robotics, smart vehicles, and healthcare [1][2]. Group 1: Company Overview - Modulus Technology, founded in 2024, is one of the few companies globally capable of mass-producing industrial-grade multimodal tactile sensing technology, aiming to integrate this technology into both industrial and consumer applications [2]. - The founding team consists of elite talents from Xiamen University and Hong Kong University, along with experienced executives from well-known hard-tech companies, all with over ten years of experience in tactile sensor research and engineering [2]. Group 2: Market Potential - According to VMR's latest report, the global tactile sensor market is expected to exceed $26 billion by 2028, with the electronic skin market for humanoid robots in China projected to reach 9 billion yuan by 2030, reflecting a compound annual growth rate of 64.3% [3]. - Mastery of tactile perception is seen as crucial for the next generation of smart devices across various sectors, including industrial inspection, embodied intelligence, smart vehicles, and wearable technology [3]. Group 3: Technological Advancements - Modulus Technology focuses on material innovation and algorithm integration to create a comprehensive product matrix for tactile sensing, ensuring both sensitivity and manufacturability [3]. - The company has developed various tactile sensing products, including flexible electronic skin, industrial pressure distribution detection systems, high-sensitivity temperature and pressure sensors, and fabric-based flexible sensors for smart automotive interiors [5][7][9]. 
Group 4: Commercialization and Applications - Modulus Technology has successfully transitioned from laboratory research to commercial production, providing innovative solutions across multiple industries, including consumer electronics, healthcare, and smart home applications [11][12]. - The company collaborates with leading manufacturers in the intelligent manufacturing sector and has established partnerships in the humanoid robotics and smart automotive industries to enhance operational precision and user experience [14][15].
商道创投网 · Member News | PaXini (帕西尼) Completes 1-Billion-Yuan Series A Financing
Sou Hu Cai Jing · 2025-08-05 16:05
Core Insights - The article highlights that PaXini has successfully completed a 1 billion yuan Series A financing round led by JD.com, with participation from various investors, indicating strong market interest and confidence in the company's technology [2][4]. Company Overview - PaXini, established in 2021, focuses on "multi-dimensional touch + embodied intelligence" and has developed a unique 6D Hall array sensor chip, an ITPU touch processing unit, and flexible touch sensors, creating the world's only Super EID Factory capable of producing 200 million full-modal data samples annually [3]. Financing Purpose - The funds from this financing round will be allocated to three main areas: enhancing the R&D of Hall array chips and touch sensors for higher integration and lower power consumption, expanding the Super EID Factory to improve data collection scale and quality, and collaborating with industry partners like JD.com and BYD to accelerate the deployment of touch perception technology in smart logistics, automotive manufacturing, and healthcare [4]. Investment Rationale - JD.com's Vice President and Head of Strategic Investment, Hu Ningfeng, emphasized that JD.com is committed to future physical intelligence, recognizing PaXini's leading touch perception technology and unique full-modal data loop, which has already demonstrated significant cost reduction and efficiency improvements in warehouse picking scenarios [4]. Market and Policy Context - The article notes that the recent release of the Ministry of Industry and Information Technology's "Guiding Opinions on the Innovative Development of Humanoid Robots" and the corresponding management measures for robot industry funds in Beijing create a favorable policy environment, aligning with market needs for cost reduction and efficiency improvements [4].
Tactile Sensing Company PaXini Rakes In 1 Billion Yuan in 4 Months
Just before the official opening of the 2025 World Robot Conference, the embodied-intelligence track drew capital yet again, with another Shenzhen startup announcing a successful financing. Today, 帕西尼感知科技(深圳)有限公司 (PaXini) announced on its official WeChat account that it has completed a new Series A financing, meaning PaXini has raked in 1 billion yuan within just four months, backed by a star-studded investor lineup. The round was strategically led by JD.com, with 浦耀信晔, 宏兆基金, 张科垚坤, listed company 新国都 (300130), 北京昌平先进能源制造产业直投基金, 财鑫资本, and 复琢投资 following on; existing shareholders 湖南财信产业基金, 钧犀资本, TCL创投, and 毅达资本 increased their positions, and 庚辛资本中国 continued as exclusive strategic financial advisor. An investment institution involved in the deal told Securities Times that winning support from so many institutions and industrial investors in such a short time rests on the company's leading technology: PaXini's core product is tactile sensors, and it is also building a tactile database. Tactile sensing and embodied intelligence have reached an inflection point, and the company has accumulated many leading domestic and overseas manufacturers as customers. Particularly notable is PaXini's deep cooperation with NVIDIA: relying on NVIDIA's hardware and software solutions, especially high-performance AI-accelerated cluster computing, it has achieved high-speed processing and algorithm optimization over massive tactile data, keeping PaXini at the forefront of tactile sensing technology. Building the core ecosystem loop of "embodied intelligence" — today PaXini has already become ...
TACTILE-VLA: Activating the Physical Knowledge of VLA Models for Tactile Generalization (Latest from Tsinghua University)
自动驾驶之心 · 2025-07-16 04:05
Core Insights - The article discusses the development of Tactile-VLA, a model that integrates visual, language, action, and tactile perception to enhance robotic capabilities in contact-intensive tasks [2][6][20]. Group 1: Background and Core Issues - Visual-language-action (VLA) models are crucial for general-purpose robotic agents, but they struggle in contact-intensive scenarios due to a lack of tactile perception [2]. - Tactile perception provides essential feedback for physical interactions, which is often missing in existing models [2]. Group 2: Key Innovations and Research Goals - The core finding is that VLA models contain prior knowledge of physical interactions, which can be activated through tactile sensors for zero-shot generalization in contact tasks [6]. - Tactile-VLA framework introduces tactile perception as a primary modality, allowing direct mapping from abstract semantics to physical force control [6]. - The mixed position-force controller innovatively converts force targets into position adjustment commands, addressing the challenge of coordinating position and force control [6][10]. - Tactile-VLA-CoT variant incorporates a chain of thought (CoT) reasoning mechanism, enabling robots to analyze failure causes and autonomously adjust strategies [6][14]. Group 3: Overall Architecture - Tactile-VLA's architecture features four key modules, emphasizing token-level fusion through a non-causal attention mechanism for true semantic representation rooted in physical reality [9]. Group 4: Mixed Position-Force Control Mechanism - The mixed control strategy prioritizes position control while introducing force feedback adjustments when necessary, ensuring precision in movement and force control [10][12]. - The design separates external net force from internal grasping force, allowing for refined force adjustments suitable for contact-intensive tasks [13]. 
Group 5: Chain of Thought Reasoning Mechanism - Tactile-VLA-CoT enhances adaptive capabilities by transforming the adjustment process into an interpretable reasoning process, improving robustness in complex tasks [14][15]. Group 6: Data Collection Methods - A specialized data collection system was developed to obtain high-quality tactile-language aligned data, addressing the issue of missing force feedback in traditional remote operations [16][19]. Group 7: Experimental Validation and Results Analysis - Three experimental groups were designed to validate Tactile-VLA's capabilities in instruction following, common sense application, and adaptive reasoning [20]. - In the instruction following experiment, Tactile-VLA demonstrated the ability to learn the semantic meaning of force-related language, achieving a success rate of 35% in USB tasks and 90% in charger tasks [23]. - The model effectively utilized common sense knowledge to adjust interaction forces based on object properties, achieving significant performance improvements over baseline models [24][30]. - In the adaptive reasoning experiment, Tactile-VLA-CoT achieved an 80% success rate in a blackboard task, showcasing its ability to diagnose and correct failures autonomously [28][32].
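The mixed position-force controller summarized above converts a force target into a position adjustment. A minimal sketch of that idea: position control by default, and, when a force target is active, an offset along the contact normal proportional to the force error. The function name, stiffness constant, and units are illustrative assumptions, not the paper's implementation.

```python
# Sketch of hybrid position-force control: the force error, divided by an
# assumed contact stiffness, becomes a position correction on top of the
# nominal position command.

def hybrid_command(nominal_pos_mm: float, measured_force_n: float,
                   force_target_n=None,
                   stiffness_n_per_mm: float = 2.0) -> float:
    """Pure position control unless a force target is set; otherwise
    offset the position to drive the measured force toward the target."""
    if force_target_n is None:
        return nominal_pos_mm                      # free-motion mode
    force_error = force_target_n - measured_force_n
    return nominal_pos_mm + force_error / stiffness_n_per_mm

# Free motion: the command equals the nominal position.
print(hybrid_command(10.0, 0.0, None))             # 10.0
# In contact: to push 1 N harder, advance 0.5 mm along the normal.
print(hybrid_command(10.0, 2.0, 3.0))              # 10.5
```

Keeping position as the primary command and treating force as a correction is what lets one controller handle both free motion and contact phases of a task.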
TACTILE-VLA: Activating the Physical Knowledge of VLA Models for Tactile Generalization (Latest from Tsinghua University)
具身智能之心 · 2025-07-15 07:55
Core Insights - The article discusses the development of Tactile-VLA, a model that integrates visual, language, action, and tactile perception to enhance robotic capabilities in contact-intensive tasks [2][6]. Group 1: Background and Core Issues - Visual-language-action (VLA) models have strong semantic understanding and cross-modal generalization capabilities, but they struggle in contact-intensive scenarios due to a lack of tactile perception [2][6]. - Tactile perception provides critical feedback in physical interactions, such as friction and material properties, which are essential for tasks requiring fine motor control [2][6]. Group 2: Key Innovations and Research Goals - The core finding is that VLA models contain prior knowledge of physical interactions, which can be activated by connecting this knowledge with tactile sensors, enabling zero-shot generalization in contact-intensive tasks [6][7]. - Tactile-VLA framework introduces tactile perception as a primary modality, allowing for direct mapping from abstract semantics to physical force control [7]. - The mixed position-force controller innovatively converts force targets into position adjustment commands, addressing the challenge of coordinating position and force control [7]. Group 3: Architecture and Mechanisms - Tactile-VLA's architecture includes four key modules: instruction adherence to tactile cues, application of tactile-related common sense, adaptive reasoning through tactile feedback, and a multi-modal encoder for unified token representation [12][13]. - The mixed position-force control mechanism ensures precision in position while allowing for fine-tuned force adjustments during contact tasks [13]. - The Tactile-VLA-CoT variant incorporates a chain of thought (CoT) reasoning mechanism, enabling robots to analyze failure causes based on tactile feedback and autonomously adjust strategies [13][14]. 
Group 4: Experimental Validation and Results - Three experimental setups were designed to validate Tactile-VLA's capabilities in instruction adherence, common sense application, and adaptive reasoning [17]. - In the instruction adherence experiment, Tactile-VLA achieved a success rate of 35% in USB tasks and 90% in charger tasks, significantly outperforming baseline models [21][22]. - The common sense application experiment demonstrated Tactile-VLA's ability to adjust interaction forces based on object properties, achieving success rates of 90%-100% for known objects and 80%-100% for unknown objects [27]. - The adaptive reasoning experiment showed that Tactile-VLA-CoT could successfully complete a blackboard task with an 80% success rate, demonstrating its problem-solving capabilities through reasoning [33].
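The adaptive-reasoning loop described above — diagnose a failure from tactile feedback, then retry with an adjusted strategy — can be caricatured as a diagnose/adapt/retry loop. This is an illustrative stand-in for the model's chain-of-thought step, not the paper's code; the thresholds and multipliers are assumptions.

```python
# Toy sketch of tactile failure diagnosis and retry: map tactile symptoms
# to a cause, adjust the applied force accordingly, and attempt again.

def diagnose(slip_detected: bool, contact_force_n: float) -> str:
    """Stand-in for the chain-of-thought step: name the failure cause."""
    if slip_detected and contact_force_n < 1.0:
        return "insufficient_force"
    if contact_force_n > 5.0:
        return "excessive_force"
    return "ok"

def adapt_force(force_n: float, cause: str) -> float:
    """Adjust the next attempt's force based on the diagnosed cause."""
    if cause == "insufficient_force":
        return force_n * 1.5    # press harder next time
    if cause == "excessive_force":
        return force_n * 0.7    # back off to avoid damage
    return force_n

# Start too gently; the loop raises the force until the slip disappears.
force = 0.6
for attempt in range(3):
    cause = diagnose(slip_detected=(force < 1.0), contact_force_n=force)
    if cause == "ok":
        break
    force = adapt_force(force, cause)
print(round(force, 2))  # 1.35 after two corrective attempts
```

The point of making the diagnosis an explicit, named step (rather than folding it into one policy) is exactly the interpretability the CoT variant is credited with: each retry carries a stated reason.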
Published in Information Fusion: Touch100k Uses Language to Unlock a New Dimension of Tactile Perception
机器人大讲堂 · 2025-06-08 08:47
Core Insights - The article discusses the significance of touch in enhancing the perception and interaction capabilities of robots, highlighting the development of the Touch100k dataset and the TLV-Link pre-training method [1][11]. Group 1: Touch100k Dataset - Touch100k is the first large-scale dataset that integrates tactile, multi-granular language, and visual modalities, aiming to expand tactile perception from "seeing" and "touching" to "expressing" through language [2][11]. - The dataset consists of tactile images, visual images, and multi-granular language descriptions, with tactile and visual images sourced from publicly available datasets and language descriptions generated through human-machine collaboration [2][11]. Group 2: TLV-Link Method - TLV-Link is a multi-modal pre-training method designed for tactile representation using the Touch100k dataset, consisting of two phases: curriculum representation and modality alignment [6][11]. - The curriculum representation phase employs a "teacher-student" paradigm in which a well-trained visual encoder transfers knowledge to a tactile encoder, gradually reducing the teacher model's influence as the student model improves [6][11]. Group 3: Experiments and Analysis - Experiments evaluate TLV-Link from the perspectives of tactile representation and zero-shot tactile understanding, demonstrating its effectiveness in material property recognition and robot grasping prediction tasks [8][11]. - Results indicate that the Touch100k dataset is practical, and TLV-Link shows significant advantages over other models in both linear probing and zero-shot evaluations [9][11]. Group 4: Summary - The research establishes a foundational dataset and method for tactile representation learning, enhancing the modeling capabilities of tactile information and paving the way for applications in robotic perception and human-robot interaction [11].
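The teacher-student schedule described above — a trained visual encoder guiding a tactile encoder, with the teacher's influence decaying as the student matures — can be sketched as a weighted loss with a decaying mixing coefficient. The shapes, cosine schedule, and loss terms here are illustrative assumptions, not the TLV-Link code.

```python
# Toy sketch of distillation with a decaying teacher weight: early in
# training the student imitates the (frozen) teacher's embeddings; later,
# the task loss dominates.

import math

def teacher_weight(step: int, total_steps: int) -> float:
    """Cosine decay of the teacher's influence from 1.0 down to 0.0."""
    return 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

def distill_loss(student_emb, teacher_emb, task_loss: float,
                 step: int, total_steps: int) -> float:
    """Weighted sum of imitation loss (MSE to teacher) and task loss."""
    w = teacher_weight(step, total_steps)
    mse = sum((s - t) ** 2
              for s, t in zip(student_emb, teacher_emb)) / len(student_emb)
    return w * mse + (1.0 - w) * task_loss

# The teacher dominates at step 0 and contributes nothing at the end.
print(teacher_weight(0, 100))    # 1.0
print(teacher_weight(100, 100))  # 0.0
```

Any monotone decay (linear, cosine, step) realizes the same idea; the essential property is only that the imitation term's weight shrinks as the student's own signal becomes reliable.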
PaXini Secures Several Hundred Million Yuan from BYD as Embodied-Intelligence Financing Continues to Heat Up
Nan Fang Du Shi Bao · 2025-04-28 09:55
Group 1 - The core point of the news is that PaXini Sensory Technology has received a strategic investment of several hundred million yuan from BYD, marking BYD's largest single investment in the field of embodied intelligence to date [1][3] - This funding will be used to advance the research and mass production of PaXini's multi-dimensional tactile sensing technology and humanoid robot product matrix [1][3] - PaXini, established in 2021, focuses on the independent research and industrialization of high-precision multi-dimensional tactile sensors, breaking the overseas technology monopoly with its 6D Hall array tactile sensor [3] Group 2 - Tactile sensing technology is considered a key component of the embodied-intelligence industry and is ranked fourth among China's 35 critical technologies [3][4] - Investment in humanoid robots has risen sharply in 2025, with 37 financing events in the first quarter alone, totaling approximately 3.5 billion yuan [4] - Major cities like Beijing and Shenzhen, along with the Yangtze River Delta, remain the primary hubs for entrepreneurship and investment in this sector, with many companies established in 2023 and 2024 [4] Group 3 - Despite the rising interest and investment in humanoid robots, challenges remain in commercializing these technologies, with uncertainties regarding application scenarios and profit models [4][5] - The industry faces the important challenge of balancing technological innovation with sustainable commercial development as it moves forward [5]