Open boxes, fold towels! Deploy pi0 on your robot arm from scratch!
具身智能之心· 2025-11-18 03:38
pi0 deployment is now supported. We recently got the pi0 task running end to end, and the code will be formally open-sourced to customers to help accelerate embodied-AI research. Interested readers are welcome to follow along.

A lightweight, cost-effective robot arm built for embodied-AI research. Still struggling to pick hardware for embodied intelligence? Expensive arms are out of budget, while cheap ones are hard to use and hard to learn? Don't worry: Imeta-Y1 is here, a lightweight, cost-effective robot arm designed specifically for beginners and early-stage researchers. Whether you are a student, an educator, or a developer just entering robotics, Imeta-Y1 helps you complete algorithm validation and project development at low cost and high efficiency.

It is especially beginner-friendly:
✅ A full open-source toolchain plus code examples, covering everything from data collection to model deployment;
✅ Python / C++ dual-language interfaces, so you can get started quickly in whichever language you prefer;
✅ ROS1 / ROS2 compatibility with a URDF model, for seamless switching between simulation and the real robot;
✅ 24-hour after-sales response, so you never get stuck while learning.

The arm combines high-precision motion control, a low-power design, and an open hardware/software architecture. It supports seamless sim-to-real debugging and ships with a full-pipeline open-source SDK and toolchain, helping users move quickly through algorithm validation, data collection, model training, and deployment.

| Weight | 631 g |
| --- | --- |
| Dimensions | ... |
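Since the entry advertises pi0 deployment with a Python interface, here is a minimal sketch of what a closed-loop rollout could look like on such an arm. All SDK calls (camera.read, arm.get_joint_positions, arm.set_joint_positions) and the policy.infer interface are hypothetical placeholders, not the vendor's documented API; treat this as an assumption-laden outline rather than working deployment code.

```python
# Minimal closed-loop rollout sketch for a pi0-style policy on a desktop arm.
# All SDK method names and the policy.infer() interface are hypothetical
# placeholders; adapt them to whatever SDK and checkpoint loader you use.
import time
import numpy as np

def run_policy(policy, arm, camera, prompt="fold the towel", hz=10, max_steps=500):
    """Observe, infer a chunk of actions, execute it, and repeat."""
    dt = 1.0 / hz
    for _ in range(max_steps):
        obs = {
            "image": camera.read(),              # HxWx3 RGB frame
            "state": arm.get_joint_positions(),  # current joint angles
            "prompt": prompt,                    # language instruction for the policy
        }
        action_chunk = policy.infer(obs)["actions"]  # (chunk_len, dof) array
        for action in action_chunk:
            arm.set_joint_positions(np.asarray(action, dtype=np.float32))
            time.sleep(dt)                       # hold a fixed control rate
```

The chunked execution reflects how pi0-style policies typically emit a short action trajectory per inference call rather than a single step.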
Open boxes, fold towels! Deploy pi0 on your robot arm from scratch!
具身智能之心· 2025-11-14 04:00
pi0 deployment is now supported. We recently got the pi0 task running end to end, and the code will be formally open-sourced to customers to help accelerate embodied-AI research. Interested readers are welcome to follow along.

A lightweight, cost-effective robot arm built for embodied-AI research. Still struggling to pick hardware for embodied intelligence? Expensive arms are out of budget, while cheap ones are hard to use and hard to learn? Don't worry: Imeta-Y1 is here, a lightweight, cost-effective robot arm designed specifically for beginners and early-stage researchers. Whether you are a student, an educator, or a developer just entering robotics, Imeta-Y1 helps you complete algorithm validation and project development at low cost and high efficiency.

It is especially beginner-friendly:
✅ A full open-source toolchain plus code examples, covering everything from data collection to model deployment;
✅ Python / C++ dual-language interfaces, so you can get started quickly in whichever language you prefer;
✅ ROS1 / ROS2 compatibility with a URDF model, for seamless switching between simulation and the real robot;
✅ 24-hour after-sales response, so you never get stuck while learning.

The arm combines high-precision motion control, a low-power design, and an open hardware/software architecture. It supports seamless sim-to-real debugging and ships with a full-pipeline open-source SDK and toolchain, helping users move quickly through algorithm validation, data collection, model training, and deployment. Its compact structure and modular interfaces make it especially well suited to developing and promoting embedded-AI and robot-learning platforms. ...
A researcher on a ByteDance robotics team has been fired for leaking confidential information
Nan Fang Du Shi Bao· 2025-11-12 08:24
Reported by Nandu N-Video journalist Yang Liu. At major internet companies, dismissals over leaked confidential information are not uncommon. A recent case that drew wide attention was Wang Teng, former general manager of Xiaomi Group's China-region marketing department, who was dismissed in September this year. The official notice said he had committed serious violations, including leaking confidential company information and conflicts of interest.

ByteDance also disclosed in early September that it had dismissed 100 employees in the second quarter of this year. One case notice on "violations of the information-security system" mentioned ten employees who were penalized for taking part in paid external interviews, in breach of the company's Employee Code of Conduct and information-security rules. ByteDance warned in the notice that external consulting firms solicit paid interviews under labels such as "expert interviews" and "industry research" through platforms like Maimai, LinkedIn, and Xiaohongshu in order to obtain confidential company information: "To protect the company's information and data security, and to safeguard your own career, please decline such invitations."

GR-3 is an embodied-intelligence VLA (vision-language-action) model released in July this year. According to the technical report posted on the arXiv platform, the model can be fine-tuned efficiently with a very small amount of human trajectory data, enabling fast, low-cost adaptation to new scenarios. A researcher surnamed Ren, the dismissed employee, was one of the authors of that technical report.

After GR-3 was released, he wrote on Zhihu that the technology should not be overestimated in the short term: whether it is GR-3, or the pi0 and even pi0.5 models developed by the US company Physical Intelligence, compared with human-brain intelligence, current embodied ...
Deploy pi0 on your robot arm from scratch!
具身智能之心· 2025-11-12 00:03
pi0 deployment is now supported. We recently got the pi0 task running end to end, and the code will be formally open-sourced to customers to help accelerate embodied-AI research. Interested readers are welcome to follow along.

A lightweight, cost-effective robot arm built for embodied-AI research. Still struggling to pick hardware for embodied intelligence? Expensive arms are out of budget, while cheap ones are hard to use and hard to learn? Don't worry: Imeta-Y1 is here, a lightweight, cost-effective robot arm designed specifically for beginners and early-stage researchers. Whether you are a student, an educator, or a developer just entering robotics, Imeta-Y1 helps you complete algorithm validation and project development at low cost and high efficiency.

It is especially beginner-friendly:
✅ A full open-source toolchain plus code examples, covering everything from data collection to model deployment;
✅ Python / C++ dual-language interfaces, so you can get started quickly in whichever language you prefer;
✅ ROS1 / ROS2 compatibility with a URDF model, for seamless switching between simulation and the real robot;
✅ 24-hour after-sales response, so you never get stuck while learning.

The arm combines high-precision motion control, a low-power design, and an open hardware/software architecture. It supports seamless sim-to-real debugging and ships with a full-pipeline open-source SDK and toolchain, helping users move quickly through algorithm validation, data collection, model training, and deployment. Its compact structure and modular interfaces make it especially well suited to developing ...
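To make the "from data collection to model deployment" claim concrete, below is a rough sketch of recording a teleoperated demonstration episode for later training. The arm and camera method names are, again, hypothetical placeholders rather than the actual SDK.

```python
# Sketch: record synchronized camera frames and joint states while a human
# teleoperates the arm, then save one episode as a compressed .npz file.
# Arm/camera method names are hypothetical placeholders.
import time
import numpy as np

def record_episode(arm, camera, out_path, hz=30, seconds=20):
    """Sample observations at a fixed rate and persist one demonstration."""
    frames, states = [], []
    for _ in range(int(hz * seconds)):
        frames.append(camera.read())               # HxWx3 RGB frame
        states.append(arm.get_joint_positions())   # joint angles at this timestep
        time.sleep(1.0 / hz)                       # fixed sampling rate
    np.savez_compressed(
        out_path,
        images=np.stack(frames),
        states=np.stack(states).astype(np.float32),
    )
```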
Without an advisor, how quickly can you produce a paper in the embodied-intelligence field?
具身智能之心· 2025-09-28 07:00
Core Insights
- The article emphasizes the importance of building a solid foundation in research before diving into complex topics like VLA (Vision-Language-Action) in embodied intelligence [1][6]
- VLA is highlighted as a transformative model that allows robots to perform tasks based on language instructions, breaking the limitations of traditional single-task training [4][7]
- The article discusses the rapid development of the embodied-intelligence sector, with various teams transitioning from research to commercialization and major tech companies actively investing in this field [6]

Summary by Sections

VLA Overview
- VLA enables robots to make autonomous decisions in diverse environments, significantly enhancing their adaptability and their application across industries such as manufacturing and logistics [4][6]
- The model has become a research hotspot, fostering collaboration between academia and industry through projects like pi0, RT-2, and OpenVLA [4][7]

Industry Development
- The embodied-intelligence field is experiencing robust growth, with companies like Unitree and Zhiyuan, as well as major tech players like Huawei and Tencent, making significant strides [6]
- There is growing interest in VLA-related research, with many seeking guidance to quickly enter or transition within this domain [6]

Course Offerings
- A specialized course on VLA research is introduced, focusing on the theoretical and practical aspects of embodied intelligence, including simulation-environment setup and experimental design [10][12]
- The course aims to cultivate independent research capabilities, guiding students from idea generation to the completion of a research paper [12][17]

Learning Outcomes
- Participants will gain comprehensive knowledge of VLA models, practical experience in simulation, and skills in academic writing and research methodology [17]
- The course is designed to help students identify research opportunities and navigate the complexities of the embodied-intelligence landscape [12][16]
VLA papers now account for nearly half of the embodied-intelligence field...
具身智能之心· 2025-09-18 04:00
Core Insights
- The article emphasizes the significance of Vision-Language-Action (VLA) models in embodied intelligence, highlighting their ability to let robots make autonomous decisions in diverse environments and thus break the limitations of traditional single-task training [1][4].

Industry Development
- The embodied-intelligence sector is growing rapidly, with teams such as Unitree, Zhiyuan, Xinghaitu, and Galaxy General transitioning from laboratory research to commercialization, alongside major tech companies such as Huawei, JD, and Tencent and international firms like Tesla and Figure AI [3].

Research Opportunities
- VLA is identified as a current research hotspot with many unresolved problems, making it a promising area for academic papers. The article mentions the launch of a specialized VLA research guidance course aimed at helping individuals quickly enter or transition within this field [3][4].

Course Content and Structure
- The course focuses on how agents interact effectively with the physical world through a perception-cognition-action loop, covering the evolution of VLA technology from early grasp-pose detection to recent models like Diffusion Policy and multimodal foundation models [7][8].
- It addresses core challenges in embodied intelligence, such as cross-domain generalization and long-horizon planning, and explores how to integrate large language models with robotic control systems [8].

Learning Outcomes
- Upon completion, participants are expected to master the theoretical foundations and technical evolution of VLA models, gain proficiency with simulation environments, and develop independent research capabilities [14].
- The course aims to guide students from idea generation to the completion of a high-quality academic paper, ensuring they can identify research opportunities and design effective experiments [10][14].
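Because the course excerpt name-checks Diffusion Policy, a compact sketch of its core sampling idea may help: an action trajectory is drawn by iteratively denoising Gaussian noise, conditioned on an observation embedding. The toy denoiser, the linear beta schedule, and all dimensions below are illustrative assumptions, not the published implementation.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Stand-in noise predictor; a real Diffusion Policy uses a conditional
    U-Net or transformer over the action sequence."""
    def __init__(self, act_dim=7, obs_dim=64):
        super().__init__()
        self.net = nn.Linear(act_dim + obs_dim + 1, act_dim)

    def forward(self, actions, t, obs_emb):
        B, H, _ = actions.shape
        t_feat = t.float().view(B, 1, 1).expand(B, H, 1)      # diffusion timestep
        obs = obs_emb.unsqueeze(1).expand(B, H, -1)            # broadcast conditioning
        return self.net(torch.cat([actions, obs, t_feat], dim=-1))

@torch.no_grad()
def sample_actions(denoiser, obs_emb, horizon=16, act_dim=7, steps=50):
    """Deterministic DDIM-style denoising of an action trajectory."""
    betas = torch.linspace(1e-4, 2e-2, steps)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    actions = torch.randn(obs_emb.shape[0], horizon, act_dim)  # start from pure noise
    for t in reversed(range(steps)):
        t_batch = torch.full((obs_emb.shape[0],), t, dtype=torch.long)
        eps = denoiser(actions, t_batch, obs_emb)               # predicted noise
        a_t = alpha_bar[t]
        a_prev = alpha_bar[t - 1] if t > 0 else torch.tensor(1.0)
        x0 = (actions - (1 - a_t).sqrt() * eps) / a_t.sqrt()    # predicted clean actions
        actions = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps
    return actions

# Usage: one conditioning vector yields one 16-step, 7-DoF action trajectory.
obs = torch.randn(1, 64)
traj = sample_actions(ToyDenoiser(), obs)   # shape (1, 16, 7)
```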
Working on VLA? Here are some reference directions...
具身智能之心· 2025-09-15 10:00
Core Insights
- The Vision-Language-Action (VLA) model represents a new paradigm in embodied intelligence, enabling robots to generate executable actions from language instructions and visual signals, thus enhancing their adaptability to complex environments [1][3].
- VLA breaks the traditional single-task limitations, allowing robots to make autonomous decisions in diverse scenarios, which is applicable in manufacturing, logistics, and home services [3].
- The VLA model has become a research hotspot, driving collaboration between academia and industry, with various cutting-edge projects like pi0, RT-2, OpenVLA, QUAR-VLA, and HumanVLA emerging [3][5].

Industry Development
- The embodied-intelligence sector is experiencing robust growth, with teams like Unitree, Zhiyuan, Xinghaitu, Galaxy General, and Zhujidongli transitioning from laboratories to commercialization [5].
- Major tech companies such as Huawei, JD.com, and Tencent are actively investing in this field, alongside international firms like Tesla and Figure AI [5].

Educational Initiatives
- A specialized VLA research guidance course has been launched to assist students in quickly entering or transitioning into the VLA research area, addressing the complexity of the related systems and frameworks [5].
- The course focuses on the perception-cognition-action loop, providing a comprehensive understanding of VLA's theoretical foundations and practical applications [7][8].

Course Structure and Outcomes
- The curriculum covers the entire research process, from theoretical foundations to experimental design and paper writing, ensuring students develop independent research capabilities [15].
- Students will learn to identify research opportunities, analyze unresolved challenges in the field, and receive personalized guidance tailored to their backgrounds and interests [15].
- The course aims to help students produce a complete research idea and a preliminary experimental validation, culminating in a draft of a high-quality academic paper [15][18].
After my advisor pointed me to VLA as a research direction...
具身智能之心· 2025-09-10 11:00
Group 1
- The VLA (Vision-Language-Action) model represents a new paradigm in embodied intelligence, enabling robots to generate executable actions from language instructions and visual signals, thus enhancing their understanding of and adaptability to complex environments [1][3]
- The VLA model breaks the limitations of traditional single-task training, allowing robots to make autonomous decisions in diverse scenarios, with applications in manufacturing, logistics, and home services [3][5]
- The VLA model has become a research hotspot, driving the development of several cutting-edge projects such as pi0, RT-2, OpenVLA, QUAR-VLA, and HumanVLA, and fostering collaboration between academia and industry [3][5]

Group 2
- The embodied-intelligence sector is experiencing rapid growth, with teams like Unitree, Zhiyuan, Xinghaitu, and Galaxy General transitioning from laboratories to commercialization, while tech giants like Huawei, JD.com, and Tencent are actively investing in this field [5]
- The course on VLA research aims to equip students with comprehensive skills in academic research, including theoretical foundations, experimental design, and paper writing, focusing on independent research capabilities [13][15]
- The curriculum emphasizes identifying research opportunities and points of innovation, guiding students to develop their research ideas and complete preliminary experiments [14][15]

Group 3
- The course covers the technical evolution of the VLA paradigm, from early grasp-pose detection to recent advances like Diffusion Policy and multimodal foundation models, focusing on end-to-end mapping from visual input and language instructions to robotic actions [8][9]
- Core challenges in embodied intelligence, such as cross-domain generalization and long-horizon planning, are analyzed, along with strategies for combining large-language-model reasoning with robotic control systems [9]
- The course aims to help students master the latest research methods and technical frameworks in embodied intelligence, addressing current limitations and advancing toward truly general robotic intelligence [9][15]
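As a heavily simplified illustration of the "end-to-end mapping from visual input and language instructions to robotic actions" described above, the sketch below fuses toy image and instruction encoders and regresses a short action chunk. Every layer size, the EmbeddingBag text encoder, and the chunked action head are placeholder assumptions rather than any published VLA architecture.

```python
import torch
import torch.nn as nn

class TinyVLA(nn.Module):
    """Toy end-to-end mapping: (image, instruction tokens) -> action chunk."""
    def __init__(self, vocab_size=1000, act_dim=7, chunk=8, d=128):
        super().__init__()
        self.vision = nn.Sequential(                       # toy image encoder
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, d))
        self.text = nn.EmbeddingBag(vocab_size, d)         # toy instruction encoder (mean-pooled)
        self.head = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(),
                                  nn.Linear(d, chunk * act_dim))
        self.chunk, self.act_dim = chunk, act_dim

    def forward(self, image, token_ids):
        v = self.vision(image)                             # (B, d) visual features
        l = self.text(token_ids)                           # (B, d) language features
        a = self.head(torch.cat([v, l], dim=-1))           # fused features -> actions
        return a.view(-1, self.chunk, self.act_dim)

# Usage: one image plus a tokenized instruction yields an 8-step, 7-DoF chunk.
model = TinyVLA()
img = torch.randn(1, 3, 224, 224)
tokens = torch.randint(0, 1000, (1, 12))
actions = model(img, tokens)   # shape (1, 8, 7)
```

Real VLA models replace the toy encoders with pretrained vision-language backbones and the regression head with diffusion, flow-matching, or autoregressive action decoders, but the input-to-action mapping stays the same in spirit.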