Renowned roboticist: the future of humanoid robots is to not look human
36Kr · 2025-09-30 08:43
Group 1
- The article discusses the challenges humanoid robots face in achieving dexterity, despite significant investment from venture capital firms and large tech companies [2][3][5]
- Humanoid robots are designed to mimic the human body and to perform tasks in human environments, with the goal of a versatile robot capable of handling a wide range of jobs [5][6]
- Companies like Tesla and Figure are optimistic about the economic potential of humanoid robots, predicting trillions in revenue, but the timeline for reaching human-level dexterity remains uncertain [6][7]

Group 2
- Humanoid robot development spans more than six decades, with significant contributions from many researchers and institutions, including early models from Waseda University and Honda [8][9]
- Despite these advances, no humanoid robot has demonstrated dexterity comparable to human capability, and existing designs have not been successfully deployed in practical industrial settings [20][21]
- The article emphasizes the importance of tactile feedback to dexterity, arguing that current training methods, which rely on visual data, are insufficient to reach the desired level of skill [23][24][44]

Group 3
- The article critiques the reliance on "learning from demonstration" methods, highlighting that current approaches incorporate no tactile or force feedback [23][24][25]; the sketch after this list makes the distinction concrete
- Companies like Figure and Tesla are shifting toward training humanoid robots on first-person videos of humans performing tasks, betting that visual learning will be sufficient [26][27]
- The article concludes that true dexterity in humanoid robots will require a deeper understanding of tactile perception and the integration of such feedback into training methodologies [44][45]
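To make the Group 3 critique concrete, here is a minimal, illustrative PyTorch sketch contrasting a vision-only behavior-cloning policy with one that also conditions on tactile/force readings. Nothing here is any company's actual training stack; the class names, tensor shapes, and tactile dimensionality are all assumptions.

```python
import torch
import torch.nn as nn

class VisionOnlyPolicy(nn.Module):
    """Behavior cloning from first-person video: frames in, actions out.
    Nothing in the input tells the policy how hard it is pressing or
    whether contact has occurred -- the signal the article says is missing."""
    def __init__(self, action_dim=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten())
        self.head = nn.LazyLinear(action_dim)

    def forward(self, frames):                  # frames: (B, 3, H, W)
        return self.head(self.encoder(frames))

class VisuoTactilePolicy(VisionOnlyPolicy):
    """Same policy, but fused with a tactile/force vector (e.g. assumed
    fingertip pressure readings), so contact is observable during training."""
    def __init__(self, tactile_dim=12, action_dim=7):
        super().__init__(action_dim)
        self.touch_mlp = nn.Sequential(nn.Linear(tactile_dim, 64), nn.ReLU())

    def forward(self, frames, touch):           # touch: (B, tactile_dim)
        fused = torch.cat([self.encoder(frames), self.touch_mlp(touch)], dim=-1)
        return self.head(fused)
```

The imitation loss (e.g. mean-squared error against demonstrated actions) would be identical for both; the article's point is that the second input channel simply does not exist in video-only demonstration data.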
The largest open-source GraphRAG: fully autonomous knowledge graph construction | HKUST & Huawei
QbitAI · 2025-06-12 01:37
Core Viewpoint
- The article discusses AutoSchemaKG, a framework for fully autonomous knowledge graph construction that eliminates the need for predefined schemas, improving scalability, adaptability, and domain coverage [1][7]

Group 1: Innovation and Methodology
- AutoSchemaKG uses large language models to extract knowledge triples directly from text and to induce a schema dynamically, modeling both entities and events [7][9]; a minimal extraction sketch follows this summary
- The induced schema achieves 95% semantic alignment with human-designed schemas, without any manual intervention [2]
- The framework supports zero-shot reasoning across domains and reduces knowledge-graph sparsity by building semantic bridges between seemingly unrelated information [7][15]

Group 2: Knowledge Graph Construction
- Construction runs as a multi-stage pipeline that extracts entity-entity, entity-event, and event-event relationships from unstructured text [9][11]
- Extracted triples are serialized into JSON files for further processing [10]
- The pipeline supports a range of large language models and is optimized for accuracy and GPU acceleration [9][10]

Group 3: Performance and Evaluation
- AutoSchemaKG has been tested on multiple datasets, showing high precision, recall, and F1 scores across the three triple types, with most metrics above 90% [22]; the scoring sketch below shows how such metrics are computed
- The knowledge graph retains information well: on multiple-choice questions, information from the original paragraphs is preserved effectively [23]
- Its classification of entities, events, and relationships has been evaluated at recall rates above 80%, often reaching 90% [26]

Group 4: Application and Results
- AutoSchemaKG outperforms traditional retrieval methods on multi-hop question answering, with gains of 12-18% in complex reasoning scenarios [29]; a toy multi-hop traversal closes this section
- The framework's variants show distinct strengths by domain: ATLAS-Pes2o excels in medicine and the social sciences, while ATLAS-Wiki performs well on general knowledge [35][36]
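The Group 1-2 pipeline (LLM extraction of the three triple types, then JSON serialization) can be sketched as follows. This is an illustration, not AutoSchemaKG's actual code: the prompt wording, the model name, and the output schema are assumptions, and an OpenAI-compatible endpoint is assumed for the LLM call.

```python
import json
from openai import OpenAI  # assumes an OpenAI-compatible endpoint

client = OpenAI()

# Hypothetical prompt; AutoSchemaKG's real prompts are more elaborate.
EXTRACTION_PROMPT = """Extract knowledge triples from the passage below.
Return a JSON object with three lists:
  "entity_entity": [[head, relation, tail], ...],
  "entity_event":  [[entity, relation, event], ...],
  "event_event":   [[event, relation, event], ...]

Passage:
{passage}"""

def extract_triples(passage: str) -> dict:
    """One stage of the multi-stage pipeline: the LLM reads raw text and
    emits entity-entity, entity-event, and event-event triples."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper's models differ
        messages=[{"role": "user",
                   "content": EXTRACTION_PROMPT.format(passage=passage)}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

def serialize(triples: dict, path: str) -> None:
    """Write the extracted triples to a JSON file, as Group 2 describes,
    for downstream schema induction and graph loading."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(triples, f, ensure_ascii=False, indent=2)
```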
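Group 3's precision, recall, and F1 numbers are ordinary set statistics over triples. A minimal sketch, assuming exact-match comparison of gold and predicted triples (the paper's matching criterion may well be looser, e.g. semantic rather than exact):

```python
def triple_prf(gold: set, pred: set) -> tuple:
    """Precision, recall, and F1 for one triple type, by exact match."""
    tp = len(gold & pred)                      # true positives
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Tiny worked example with made-up triples:
gold = {("HKUST", "developed", "AutoSchemaKG"),
        ("AutoSchemaKG", "needs", "no predefined schema")}
pred = {("HKUST", "developed", "AutoSchemaKG"),
        ("AutoSchemaKG", "runs on", "GPUs")}
print(triple_prf(gold, pred))  # (0.5, 0.5, 0.5)
```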
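Finally, the multi-hop gains in Group 4 come from following graph edges between retrieval steps instead of re-querying flat text. The toy breadth-first traversal below shows the idea; it is a simplification, not the framework's actual retriever, and the bidirectional-edge and fixed-hop-budget choices are assumptions.

```python
from collections import defaultdict, deque

def build_adjacency(triples):
    """Index (head, relation, tail) triples so neighbors are O(1) to find."""
    adj = defaultdict(list)
    for head, rel, tail in triples:
        adj[head].append((rel, tail))
        adj[tail].append((rel, head))  # assume edges traverse both ways
    return adj

def multihop_neighborhood(adj, seeds, hops=2):
    """Collect every node within `hops` edges of the seed entities; in a
    GraphRAG setup this neighborhood is handed to the LLM as context."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for _, nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen
```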