Robotics
INVESTOR NOTICE: Richtech Robotics Inc. Investors with Substantial Losses Have Opportunity to Lead Securities Class Action - RGRD Law
Globenewswire· 2026-03-19 01:30
Core Viewpoint
- Richtech Robotics Inc. is facing a class action lawsuit for allegedly misleading investors about its relationship with Microsoft, which has resulted in significant stock price declines [3][4].

Group 1: Lawsuit Details
- The class action lawsuit, titled Diez v. Richtech Robotics Inc., allows purchasers of Richtech Robotics securities from January 27, 2026, to January 29, 2026, to seek lead plaintiff status by April 3, 2026 [1].
- The lawsuit alleges that Richtech Robotics falsely claimed a commercial relationship with Microsoft during the class period [3].
- Following the publication of an article by Hunterbrook Media on January 29, 2026, which denied any partnership with Microsoft, Richtech Robotics' Class B stock price dropped over 29% within two trading days [4].

Group 2: Legal Process
- The Private Securities Litigation Reform Act of 1995 allows any investor who acquired Richtech Robotics securities during the class period to apply for lead plaintiff status, representing the interests of all class members [5].
- The lead plaintiff can choose a law firm to litigate the case, and serving as lead plaintiff does not affect an investor's ability to share in any potential recovery [5].

Group 3: Law Firm Background
- Robbins Geller Rudman & Dowd LLP is a leading law firm specializing in securities fraud and shareholder rights litigation, having recovered over $916 million for investors in 2025 alone [6].
- The firm has held the top ranking in securities class action recoveries in four of the last five years, with $8.4 billion recovered for investors over that period [6].
A $450 Billion Opportunity: Is This Physical Artificial Intelligence (AI) Stock a Buy Right Now?
The Motley Fool· 2026-03-18 23:30
Serve Robotics (SERV 1.77%) believes existing last-mile logistics solutions are inefficient because they rely on humans and cars to deliver relatively small orders from restaurants and retailers. The company says robots and drones are better suited for these tasks because they are significantly more cost-effective and more scalable. Serve predicts the shift from humans to robots in last-mile logistics will create a $450 billion opportunity by 2030. Thousands of the company's latest Gen 3 autonom ...
A robot "folded from paper" runs 17 body lengths per second! NUDT's 1.2-gram "off-road roach" lands in a top-tier journal
机器人大讲堂· 2026-03-18 15:03
It weighs just 1.2 grams, lighter than a one-yuan coin, yet it can sprint across grass, traverse sand, climb over rock piles, even squeeze through an L-shaped bend while carrying a 1.4-gram payload, and it can swim. This is not a sci-fi prop but PLioBot, a parallel-legged, insect-scale origami robot from a National University of Defense Technology (NUDT) team, just published in the Nature family journal Microsystems & Nanoengineering.

01. Not assembled, but "grown"

The hardest part of building a micro-robot is not design but fabrication. Actuators, transmissions, legs, and joints must all be packed into centimeter-scale dimensions and still work together. The traditional approach is to fabricate each part separately and then assemble them by hand under a microscope: complex procedures, difficult alignment, and precision that depends entirely on feel. Harvard's HAMR series and the University of Colorado's CLARI were both built this way, but the approach is hard to scale and hard to keep consistent.

This time, the NUDT team took a different route: integrate everything, form it in one shot, then fold it up. They used a five-layer composite laminate to press all of the robot's components together:

- Piezoelectric ceramic: actuation, serving as the muscles;
- Carbon-fiber prepreg: structure, with longitudinal and transverse cross-plied layers, serving as the skeleton;

PLioBot's most impressive trait is not just that it is fast, although it is absurdly fast: 44.6 centimeters per second, or 17.8 body lengths per second. At that ratio, Usain Bolt would have to run ...
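The body-lengths-per-second figure quoted above can be sanity-checked in a few lines. A minimal sketch: the ~2.5 cm body length is implied by the reported numbers rather than stated, and the 1.95 m sprinter height used for the Bolt comparison is an assumption.

```python
# Sanity check of PLioBot's reported speed figures (from the article);
# the body length and sprinter height are inferred/assumed, not stated.
speed_cm_s = 44.6        # reported top speed, cm/s
rel_speed_bl_s = 17.8    # reported speed, body lengths per second

body_length_cm = speed_cm_s / rel_speed_bl_s  # implied body length
print(f"implied body length: {body_length_cm:.2f} cm")  # ~2.51 cm

# The Bolt comparison: a 1.95 m sprinter moving at 17.8 body lengths
# per second would need roughly 35 m/s, far beyond an elite
# sprinter's ~12 m/s peak.
sprinter_height_m = 1.95
equiv_speed_m_s = rel_speed_bl_s * sprinter_height_m
print(f"human-equivalent speed: {equiv_speed_m_s:.1f} m/s")
```

Body-length-normalized speed is the standard way to compare legged robots of very different scales, which is why the paper leads with it.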
Top Nature sub-journal: one million bending cycles without failure! And after the robot "dies," it can serve as fertilizer!
机器人大讲堂· 2026-03-18 15:03
As the robotics industry grows at unprecedented speed, a stark question follows: when these tireless mechanical partners reach the end of their "lives," where will they go?

Electronic waste, an ever-heavier environmental burden, is swelling rapidly with the spread of robotics. Global e-waste is projected to grow by a staggering 2 million tons per year. Future soft robots, with their complex structures and heavy reliance on soft electronics, will only make this challenge thornier.

Recently, scientists from Seoul National University, Yale University, Johannes Kepler University Linz in Austria, and other leading institutions offered a distinctive answer in the top journal Nature Sustainability. They developed a robotic finger that is both biodegradable and exceptionally durable, pointing a way toward the development and application of "zero-waste" soft electronics.

01. A unity of opposites: one million bending cycles and complete degradation

How can a material be both rugged and able to "vanish into thin air" on demand? This is one of the biggest challenges in sustainable robotics. Traditional biodegradable materials, such as gelatin or polyvinyl alcohol, tend to lose performance quickly and cannot handle long-term, high-intensity robotic work. The international team's first breakthrough was finding a material that strikes an excellent balance between durability ...
MTR Lab and ZGC Science City Ltd Establish Ecosystem Partnership
BusinessLine· 2026-03-18 13:54
BEIJING and HONG KONG, March 18, 2026 /PRNewswire/ -- MTR Lab Company Limited ("MTR Lab", a wholly owned subsidiary of MTR Corporation) and Beijing Zhongguancun Science City Innovation Development Co., Ltd. ("ZGC Science City Ltd") are forming an ecosystem partnership. Focused on smart cities and sustainable development, the collaboration will accelerate investment in, and the global expansion of, frontier tech enterprises across AI, robotics, smart mobility, rail, retail, property and construction. This cros ...
CLASS ACTION REMINDER: Berger Montague Advises Richtech Robotics Inc. (RR) Investors to Inquire About a Securities Fraud Lawsuit by April 3, 2026
TMX Newsfile· 2026-03-18 13:36
Core Viewpoint
- A class action lawsuit has been filed against Richtech Robotics Inc. for allegedly misleading investors about its relationship with Microsoft, leading to significant stock losses when the truth was revealed [1][3].

Company Overview
- Richtech Robotics Inc. is headquartered in Las Vegas, Nevada, and specializes in the design and manufacture of AI-driven service robots, providing automation solutions for industries such as hospitality, healthcare, and manufacturing [2].

Lawsuit Details
- The lawsuit claims that during the class period from January 27, 2026, to January 29, 2026, Richtech misrepresented its relationship with Microsoft as a "hands-on collaboration" and "joint engineering effort," when it was actually a standard customer relationship [3].
- Following the revelation of the true nature of the relationship on January 29, 2026, Richtech's shares declined substantially, inflicting heavy losses on investors [3].

Investor Information
- Investors who purchased Richtech securities during the class period have until April 3, 2026, to seek appointment as lead plaintiff [2].
After the Spring Festival Gala with the highest "robot density" yet, how far is embodied intelligence from entering ordinary households?
AI前线· 2026-03-18 08:33
Author | QCon Global Software Development Conference; Planning | Kitty; Editor | Yuqi

Embodied intelligence, AI's core leap from the digital world into physical reality, is a key path toward AGI, yet it remains stuck on deep problems: insufficient model generalization, difficult data collection, and hard-to-close feedback loops, so real industrial deployment is still an uphill struggle. So where exactly is embodied intelligence stuck?

Recently, the InfoQ "Geek Chat" x QCon livestream invited Dr. Sui Wei, VP of Algorithms at D-Robotics (地瓜机器人), as host, joined by Dr. He Yonghao, head of embodied intelligence at D-Robotics, Li Yuanqing, CTO of Lexiang Technology, and Dr. Peng Junran, associate professor at the University of Science and Technology Beijing, to discuss the sticking points of deploying embodied intelligence in practice, ahead of the 2026 QCon Global Software Development Conference (Beijing).

Selected highlights:

At QCon Beijing, held April 16-18, a dedicated track on "Embodied Intelligence and Interaction with the Physical World" will dissect the embodied-intelligence technology stack, examining the state of models, core challenges, and opportunities, to accelerate R&D transfer and industrial-scale deployment. Full schedule: https://qcon.infoq.cn/2026/beijing/schedule

Industrial scenarios do not need to chase generality; if a specific ...
A more comprehensive real-robot benchmark for embodied intelligence is here: the CVPR 2026 ManipArena challenge invites you to the leaderboard
机器之心· 2026-03-18 07:39
Core Insights
- The embodied intelligence sector has experienced explosive growth over the past year, with various impressive robot demonstrations emerging. However, the industry faces a critical question: how to assess whether an embodied intelligence model has genuinely improved its generalization capabilities or is merely optimized for specific tasks and scenarios [1][2].

Group 1: Industry Challenges
- The lack of a unified, high-standard evaluation system for real-world performance has become a core pain point for the embodied intelligence industry, hindering model iteration efficiency and potentially leading to a misallocation of research resources [1].
- Establishing a scientific, quantifiable, reproducible, and high-fidelity evaluation metric for real-world performance is an urgent industry consensus at this pivotal moment for scaling embodied intelligence [2].

Group 2: ManipArena Initiative
- Sun Yat-sen University, in collaboration with various institutions, launched the official competition ManipArena at the CVPR 2026 Embodied AI Workshop to address the evaluation challenges in the industry [3].
- ManipArena offers 20 real-world tasks, including 5 preliminary and 15 final tasks, with a framework designed to diagnose model generalization capabilities through controlled environments and layered out-of-distribution (OOD) assessments [5][8].

Group 3: Evaluation Framework
- The layered OOD assessment allows precise diagnosis of generalization bottlenecks, moving beyond traditional single-score evaluations to a more nuanced understanding of model capabilities [10][11].
- Each task in ManipArena is tested 10 times, with difficulty levels stratified to reflect the model's performance across scenarios, including in-domain and OOD challenges [11][12].

Group 4: Initial Findings
- Preliminary evaluation results indicate that current mainstream visual-language-action (VLA) models exhibit significant generalization weaknesses, particularly when faced with compound out-of-distribution tests [13][14].
- The evaluation data reveal that object-shape similarity matters more than semantic category affiliation for current models, highlighting their fragile generalization capabilities [15].

Group 5: Controlled Environment and Diversity
- ManipArena employs a green-screen controlled environment to eliminate visual disturbances, ensuring that performance differences reflect true policy capabilities [16].
- The platform incorporates three levels of systematic diversity parameters to maintain uniform distribution across all dimensions, preventing models from taking shortcuts based on frequency biases [19][20].

Group 6: Task Complexity and Scoring
- The tasks are deliberately challenging, with no simple grab-and-go tests; reasoning is the core consideration [25].
- Scoring is based on a sub-task partial-credit system, allowing a more detailed understanding of where models succeed or fail within task pipelines [46].

Group 7: Model Performance Insights
- Initial tests of various models, including π₀.₅-Single, π₀.₅-OneModel, and DreamZero, reveal distinct performance boundaries, with π₀.₅-OneModel leading in scores but showing signs of procedural-knowledge forgetting in specific tasks [48][51].
- The results indicate that VLA models excel in precision control and semantic understanding, while world models show advantages in spatial generalization and coarse-grained planning [52].

Group 8: Future Implications
- ManipArena serves not only as a competition but also as a high-standard open research platform, encouraging researchers to publish high-level academic papers based on authoritative evaluation results [52].
- The initiative aims to empower the continuous iteration of visual-language-action models and world models, accelerating the industry's transition to large-scale deployment in the real world [52].
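As an illustration of the sub-task partial-credit scoring idea described above, here is a minimal sketch. The sub-task names, weights, and trial outcomes are hypothetical, not the official ManipArena rubric; only the 10-trials-per-task count comes from the article.

```python
# Sketch of sub-task partial-credit scoring for a manipulation task.
# Weights and sub-task names below are illustrative assumptions.
from statistics import mean

def score_trial(completed_subtasks, subtask_weights):
    """Partial credit: sum the weights of the sub-tasks completed."""
    return sum(subtask_weights[s] for s in completed_subtasks)

# Hypothetical pick-and-place pipeline split into weighted stages.
weights = {"approach": 0.2, "grasp": 0.3, "transport": 0.2, "place": 0.3}

# Ten trials per task, as in the benchmark; each entry lists the
# sub-tasks the policy completed before stopping or failing.
trials = [
    ["approach", "grasp", "transport", "place"],  # full success
    ["approach", "grasp"],                        # dropped the object
    ["approach"],                                 # grasp failed
] + [["approach", "grasp", "transport", "place"]] * 7

task_score = mean(score_trial(t, weights) for t in trials)
print(f"task score: {task_score:.2f}")  # 0.87 for this example
```

Compared with a binary success rate (0.8 here), partial credit distinguishes a policy that fails at grasping from one that fails only at final placement, which is the diagnostic value the benchmark emphasizes.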
February VC/PE Report: Active Fundraising in AI and Robotics
投中网· 2026-03-18 07:11
This article originally appeared on 超越 J Curve (Beyond J Curve), by the same author.

This issue presents the February 2026 China VC/PE market report. Amid the Spring Festival holiday disruption, fundraising and investment fell month over month but maintained large year-over-year growth, with artificial intelligence the focal point on both the fundraising and investment sides.

Author | 投中嘉川; Source | 超越 J Curve

Key Findings

Part 1: VC/PE Fundraising Analysis

Number of newly established funds: In February 2026, 591 new funds were established in China's VC/PE market, 287 fewer than the previous month (a 32.69% month-over-month decline) and 285 more than the same period last year (a 93.14% year-over-year increase).

A total of 473 institutions participated in setting up funds: 83.3% established one fund, 12% established two, and 4.7% established three or more. In the same period last year, institutions establishing two or more funds accounted for 7.3%, so institutional activity continued to rise year over year.

[Chart: Newly established funds, February 2025 - February 2026]
[Chart: Distribution of institutions by number of funds established, February 2026]

Fund establishment by region: Under the holiday's influence, both fundraising and investment counts fell more than 30% month over month while continuing to grow sharply year over year; 16.7% of managers established multiple funds, and funds raised this period focused mainly on artificial intelligence ...
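The fund-count percentages in the report can be cross-checked with simple arithmetic. A minimal sketch; the headline figures come from the report, and the prior-period counts are derived from them rather than quoted directly.

```python
# Cross-check the month-over-month and year-over-year changes
# reported for newly established funds in February 2026.
current = 591
prior_month = current + 287   # derived: 878 funds in January 2026
prior_year = current - 285    # derived: 306 funds in February 2025

mom = (current - prior_month) / prior_month * 100
yoy = (current - prior_year) / prior_year * 100
print(f"MoM: {mom:.2f}%")   # -32.69%, matching the report
print(f"YoY: {yoy:+.2f}%")  # +93.14%, matching the report

# Institution concentration: 12% set up two funds and 4.7% set up
# three or more, so 16.7% of institutions set up multiple funds.
multi_fund_share = 12.0 + 4.7
print(f"multi-fund institutions: {multi_fund_share:.1f}%")
```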
Wang Xingxing on Seedance 2.0: Far Ahead of the Rest of the World
经济观察报· 2026-03-18 06:55
Core Viewpoint
- The article discusses advancements in robotics and AI, focusing on the potential for embodied intelligence to reach a "ChatGPT moment" in the near future, driven by technologies like Seedance 2.0 [2][3][4].

Group 1: Technological Advancements
- Wang Xingxing emphasizes the importance of Seedance 2.0, a video generation model that could allow robots to perform tasks by aligning generated videos with robotic actions, addressing a significant global challenge [4].
- The development of a full-body teleoperation system by Unitree (Yushu Technology) aims to synchronize human and robot actions, enabling large-scale data collection and remote control [2][3].
- The predicted "ChatGPT moment" in embodied intelligence would mean AI models performing 80% of tasks in unfamiliar scenarios from language and text instructions alone, without prior mapping [2][3].

Group 2: Challenges in the Industry
- The industry's main obstacle to embodied intelligence is the insufficient generalization ability of AI models [3].
- Improving robot generalization requires better expression of robotic movements and higher data utilization, since data in robotics remains scarce compared with other domains [3].
- The two main model approaches in embodied intelligence are VLA models, which integrate language models with robot control, and world models, which let robots imagine actions before executing them [3].

Group 3: Product Developments and Market Potential
- Unitree plans to release a new generation of industrial-grade robots by 2025, featuring dust and water resistance and a range of over 20 kilometers on a single charge [4].
- The company anticipates a significant increase in robot shipments, potentially reaching one million units annually if AGI reaches a critical point; global shipments of the G1 model are expected to be around 5,000 units by the end of 2025 [5].
- Recent performances by Unitree robots on national television demonstrated advanced AI reinforcement learning capabilities, showcasing high-level movements and flexibility [5].