Hugging Face
Greater Bay Area International Maker Summit and Maker Faire Shenzhen 2025 Held in Shenzhen | Frontline
36Kr· 2025-11-17 08:02
Core Insights
- The theme of the event is "AI Without Boundaries, New Life for All" and it marks the 14th year of Maker Faire Shenzhen, which was introduced to the city in 2012 [1][3]

Group 1: Event Overview
- The summit features two innovation forums, one maker super-evolution live event, 145 cutting-edge technology application displays, and 29 industry innovation and interactive experience workshops [3]
- Participants include technology leaders, maker pioneers, industry experts, and university teams from over 30 countries, showcasing nearly a thousand hardware projects in fields such as smart manufacturing, smart agriculture, smart cities, and cultural entertainment [3]

Group 2: Key Highlights
- The integration of AI and hardware has transitioned from concept to practical application, with a focus on modular hardware and open-source collaboration driving the democratization of edge AI [4]
- A significant number of robotics applications were showcased, emphasizing embodied intelligence as a focal point of innovation and indicating a shift of AI from the cloud to physical entities [4]
- Notable products include the Reachy Mini desktop robot from Hugging Face, which features dynamic antennas and small display screens for AI application development, and the Centauri Carbon 2 3D printer from ELEGOO, capable of supporting high-temperature materials [4]

Group 3: Discussions and Perspectives
- A panel of 15 influential innovators discussed how AI can be implemented in low-power, high-efficiency ways across sectors including medical diagnostics, industrial automation, and smart agriculture, benefiting remote and developing regions [6]
- The summit aims to provide a platform for makers to showcase their creativity and spirit, promoting a global "super-evolution" in the maker community [6]
AI Without Boundaries, New Life for All: Greater Bay Area International Maker Summit Opens in Shenzhen
Nan Fang Du Shi Bao· 2025-11-16 00:48
Core Insights
- The Maker Faire Shenzhen 2025, themed "AI Without Boundaries, New Life for All," was inaugurated on November 15, showcasing a blend of cutting-edge ideas, technology exchanges, and cross-industry collaboration [1][3]
- The event has evolved over 14 years, impacting nearly 100 countries and regions, and has become a core platform for dialogue and collaboration between industries and makers [3][4]

Event Highlights
- The summit gathered nearly 100 international innovation ambassadors and showcased around 1,000 AI hardware projects from over 30 countries, featuring two innovation forums, a live maker evolution event, and 145 cutting-edge technology applications [3][4]
- Keynote speakers discussed the shift from centralized large-model competition to fragmented AI scenarios as a new opportunity for makers, emphasizing the importance of ecosystem collaboration in AI development [4][10]

Innovation and Technology
- The summit featured 145 innovative projects from various countries, demonstrating AI applications in smart manufacturing, smart agriculture, smart cities, and cultural entertainment [8][10]
- Notable projects included open-source desktop robots and modular handheld computers, highlighting the trend of embodied intelligence where AI integrates with mechanical control and environmental interaction [10][11]

Interactive Experience
- The event emphasized immersive and interactive learning experiences, with workshops on AI hardware and robotics development allowing participants to engage in hands-on creation [11][13]
- Various interactive experience workshops were set up, covering 3D printing, AI customization, and robotics performances, catering to participants of all ages and backgrounds [11][13]

Community Engagement
- The summit included micro-events like book signings and a "Maker Evolution Party," fostering deep exchanges among diverse interest groups and promoting collaboration in a relaxed atmosphere [13]
- The integration of AI with hardware has progressed from concept to practical application, with expectations for deeper integration into sensors and wearable devices, aiming for ubiquitous intelligence [13]
X @Avi Chawla
Avi Chawla· 2025-10-30 19:45
RT Avi Chawla (@_avichawla): voyage-3-large embedding model just topped the RTEB leaderboard! It's a big deal because it:
- ranks first across 33 eval datasets
- outperforms OpenAI and Cohere models
- supports quantization to reduce storage costs
Here's another reason that makes this model truly superior: most retrieval benchmarks test models on academic datasets that don't reflect real-world data. RTEB, on the other hand, is a newly released leaderboard on HuggingFace that evaluates retrieval models across enterpri ...
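The storage-cost bullet above is worth making concrete: quantizing float32 embeddings to int8 cuts storage by 4x while barely moving similarity scores. A minimal NumPy sketch of generic scalar quantization (illustrative only; this is not Voyage's API or its actual quantization scheme, and the random vectors stand in for real embeddings):

```python
import numpy as np

def quantize_int8(embs: np.ndarray) -> tuple[np.ndarray, float]:
    """Scalar-quantize float32 embeddings to int8 with one global scale."""
    scale = float(np.abs(embs).max()) / 127.0
    q = np.clip(np.round(embs / scale), -127, 127).astype(np.int8)
    return q, scale

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 1024)).astype(np.float32)  # stand-in embeddings
q, scale = quantize_int8(docs)
deq = q.astype(np.float32) * scale  # dequantize for comparison

print(docs.nbytes // q.nbytes)  # -> 4, the dtype alone gives the 4x saving
print(abs(cosine(docs[0], docs[1]) - cosine(deq[0], deq[1])))  # tiny drift
```

Real deployments typically score directly on the quantized vectors; the point here is only that the storage saving comes from the dtype, not from anything model-specific.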
X @Avi Chawla
Avi Chawla· 2025-10-30 06:31
General Information
- There is a leaderboard on Hugging Face [1]
Tencent Research Institute AI Digest 20251030
Tencent Research Institute· 2025-10-29 17:07
Group 1: Generative AI Developments
- Nvidia showcased the Vera Rubin superchip at the GTC Washington conference, featuring an 88-core Vera CPU and two Rubin GPUs, expected to be mass-produced in Q3 or Q4 of 2026 [1]
- Following the announcement, Nvidia's stock price surged by 4.98%, increasing its market capitalization by over $230 billion to reach $4.89 trillion, making it the first company to approach a $5 trillion valuation [1]
- Key highlights from the conference included NVQLink quantum interconnect technology, collaboration with the U.S. Department of Energy to build seven new supercomputers, and a partnership with Uber to deploy approximately 100,000 autonomous vehicles [1]

Group 2: AI Voice Synthesis and Interaction
- The Soul App AI team launched the open-source podcast voice synthesis model SoulX-Podcast, supporting multiple dialects and capable of generating over 60 minutes of multi-turn dialogue [2]
- The model features zero-shot cloning capabilities for multi-turn conversations, allowing for dialect-specific voice generation using only standard Mandarin reference audio [2]
- The model is based on Qwen3-1.7B and employs LLM + Flow Matching for voice generation, achieving optimal results in voice intelligibility and tonal similarity in podcast scenarios [2]

Group 3: Adobe's AI Innovations
- Adobe introduced Firefly Image 5 at the MAX conference, capable of generating photo-realistic images at a native resolution of 4MP without requiring upscaling [3]
- The Adobe CC 2026 suite was officially released for Windows, including updates to Photoshop 2026 and Illustrator 2026 [3]
- The new version allows for image editing through simple prompts, enabling precise modifications while maintaining the integrity of other pixels, with a focus on commercial safety [3]

Group 4: Interactive AI Podcasting
- Tencent's Hunyuan launched the first interactive AI podcast in China, allowing listeners to interrupt hosts and guests with questions via voice or text during the show [4]
- The system utilizes large-model intent recognition and multi-turn dialogue capabilities to provide accurate answers based on context and background information, transforming the traditional one-way podcast format [4]
- The AI podcast supports three modes: default, deep exploration, and speculative discussion, offering eight different voice tones and accommodating both solo and dual-host formats [4]

Group 5: PayPal and OpenAI Collaboration
- PayPal announced a partnership with OpenAI to integrate ChatGPT into its digital wallet, enabling users to complete shopping payments directly through the chatbot [5]
- Starting next year, consumers and merchants within the PayPal ecosystem will have access to ChatGPT, allowing for product purchases and inventory listings on the platform [5]
- Following the announcement, PayPal's stock surged over 15% in pre-market trading, and the company raised its full-year earnings forecast while declaring its first dividend in 27 years [6]

Group 6: Adoption of Chinese AI Models
- American AI programming product Windsurf was found to be utilizing a new model from China's Zhipu GLM, with Cerebras also offering GLM-4.6 inference services [7]
- Several U.S. AI companies are opting for Chinese large models due to their cost-effectiveness, as OpenAI and Anthropic models are perceived as too expensive despite their quality [7]
- Platforms like Together AI and Vercel have also deployed GLM-4.6 and other domestic models, indicating the rising value of "Made in China" large models [7]

Group 7: Home Robotics
- 1X Technologies launched the world's first humanoid household robot, NEO, available for an early bird price of $20,000 or a monthly rental of $500, with shipments expected in 2026 [8]
- NEO, standing 168 cm tall and weighing 30 kg, is equipped with the Redwood AI system to perform household tasks such as vacuuming, dishwashing, and pet feeding, with a battery life of four hours and a maximum load of 68 kg [8]
- A Wall Street Journal reporter noted that current operations are controlled remotely by experts via VR, with a promise from 1X that NEO will be able to autonomously handle most household tasks by 2026 [8]

Group 8: Advancements in Robotics Learning
- Hugging Face released LeRobot v0.4.0, introducing support for scalable Datasets v3.0 for ultra-large datasets and new dataset editing tools [9]
- The new version integrates cutting-edge VLA models like PI0.5 and GR00T N1.5, and adds support for LIBERO and Meta-World simulation environments, simplifying multi-GPU training [9]
- A new plugin system was launched to streamline hardware integration, allowing users to connect any robotic device with a simple pip install command, alongside the release of Hugging Face's robotics learning courses [9]

Group 9: AGI Assessment and Future Directions
- Turing Award winner Yoshua Bengio and others proposed a new definition of AGI as AI that matches or exceeds the cognitive diversity and proficiency of well-educated adults [10]
- A framework based on the Cattell-Horn-Carroll theory was developed to evaluate general intelligence across ten core cognitive domains, including general knowledge, literacy, and mathematical ability [10]
- Assessment results indicated that GPT-4 scored only 27% on the AGI scale, while GPT-5 achieved a score of 57%, highlighting significant gaps in essential cognitive abilities for human-like general intelligence [10]

Group 10: OpenAI's Strategic Roadmap
- OpenAI restructured to become a public benefit corporation, with the non-profit board OpenAI Foundation holding 26% of shares valued at approximately $130 billion, and Microsoft as the largest shareholder with about 27% [11]
- CEO Sam Altman revealed that the company anticipates cash expenditures exceeding $115 billion by 2029, with a projected financial commitment of $1.4 trillion to build 30 GW of infrastructure, and an IPO being the most likely direction [11]
- Chief Scientist Jakub Pachocki announced goals to develop an AI research assistant capable of significantly accelerating research by September 2026 and to achieve fully automated AI researchers by March 2028 [11]
X @TechCrunch
TechCrunch· 2025-10-28 20:06
Event Information
- The TechCrunch Disrupt 2025 event is more than halfway through [1]
- Attendees can get a 50% discount on tickets [1]
- Event speakers include representatives from companies such as Cluely, Solana, Hugging Face, and Character AI [1]
HuggingFace and Oxford University's New Tutorial Open-Sources a SOTA Resource Library!
具身智能之心· 2025-10-27 00:02
Core Viewpoint
- The article emphasizes the significant advancements in robotics, particularly in robot learning, driven by the development of large models and multi-modal AI technologies, which have transformed traditional robotics into a more learning-based paradigm [3][4]

Group 1: Introduction to Robot Learning
- The article introduces a comprehensive tutorial on modern robot learning, covering foundational principles of reinforcement learning and imitation learning and leading up to general-purpose, language-conditioned models [4][12]
- HuggingFace and Oxford University researchers have created a valuable resource for newcomers to the field, providing an accessible guide to robot learning [3][4]

Group 2: Classic Robotics
- Classic robotics relies on explicit modeling through kinematics and control planning, while learning-based methods utilize deep reinforcement learning and expert demonstrations for implicit modeling [15]
- Traditional robotic systems follow a modular pipeline including perception, state estimation, planning, and control [16]

Group 3: Learning-Based Robotics
- Learning-based robotics integrates perception and control more closely, adapts to tasks and embodiments, and reduces the need for expert modeling [26]
- The tutorial highlights the challenges of safety and efficiency in real-world applications, particularly during initial training phases, and discusses techniques like simulation training and domain randomization to mitigate risks [34][35]

Group 4: Reinforcement Learning
- Reinforcement learning allows robots to autonomously learn optimal behavior strategies through trial and error, showcasing significant potential in various scenarios [28]
- The tutorial discusses the complexity of integrating multiple system components and the limitations of traditional physics-based models, which often oversimplify real-world phenomena [30]

Group 5: Imitation Learning
- Imitation learning offers a more direct learning path for robots by replicating expert actions through behavior cloning, avoiding complex reward function design [41]
- The tutorial addresses challenges such as compounding errors and handling multi-modal behaviors in expert demonstrations [41][42]

Group 6: Advanced Techniques in Imitation Learning
- The article introduces advanced imitation learning methods based on generative models, such as Action Chunking with Transformers (ACT) and Diffusion Policy, which effectively model multi-modal data [43][45]
- Diffusion Policy demonstrates strong performance across various tasks with minimal demonstration data, requiring only 50-150 demonstrations for training [45]

Group 7: General Robot Policies
- The tutorial envisions the development of general robot policies capable of operating across tasks and devices, inspired by large-scale open robot datasets and powerful vision-language models [52][53]
- Two cutting-edge vision-language-action (VLA) models, π₀ and SmolVLA, are highlighted for their ability to understand visual and language instructions and generate precise control commands [53][56]

Group 8: Model Efficiency
- SmolVLA represents a trend toward model miniaturization and open-sourcing, achieving high performance with significantly reduced parameter counts and memory consumption compared to π₀ [56][58]
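Behavior cloning, as summarized under imitation learning above, is at its core supervised regression from observed states to expert actions. A minimal sketch with a hypothetical linear expert fit by least squares (ACT and Diffusion Policy replace this linear map with expressive generative models, but the data flow is the same):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear expert: action = W_true @ state
W_true = rng.normal(size=(2, 4))      # 4-d state -> 2-d action
states = rng.normal(size=(150, 4))    # 150 demonstrations (the tutorial's upper range)
actions = states @ W_true.T           # expert-labeled actions

# Behavior cloning: regress actions on states; no reward function is needed
W_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(s: np.ndarray) -> np.ndarray:
    """Cloned policy: maps a batch of states to predicted actions."""
    return s @ W_hat

test_state = rng.normal(size=(1, 4))
print(np.allclose(policy(test_state), test_state @ W_true.T, atol=1e-6))  # True
```

Compounding error arises when the cloned policy drifts into states absent from the demonstrations; that failure mode, not the regression step itself, is what the generative-model methods target.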
A Hands-On Introduction to Robot Learning: HuggingFace and Oxford University's New Tutorial Open-Sources a SOTA Resource Library
机器之心· 2025-10-26 07:00
Core Viewpoint
- The article emphasizes the significant advancements in the field of robotics, particularly in robot learning, driven by the development of artificial intelligence technologies such as large models and multi-modal models; this shift has transformed traditional robotics into a learning-based paradigm, opening new potential for autonomous decision-making robots [2]

Group 1: Introduction to Robot Learning
- The article highlights the evolution of robotics from explicit modeling to implicit modeling, marking a fundamental change in motion generation methods: traditional robotics relied on explicit modeling, while learning-based methods utilize deep reinforcement learning and expert demonstration learning for implicit modeling [15]
- A comprehensive tutorial from HuggingFace and researchers at Oxford University serves as a valuable resource for newcomers to modern robot learning, covering foundational principles of reinforcement learning and imitation learning [3][4]

Group 2: Learning-Based Robotics
- Learning-based robotics simplifies the process from perception to action by training a unified high-level controller that can directly handle high-dimensional, unstructured perception-motion information without relying on a dynamics model [33]
- The tutorial addresses challenges in real-world applications, such as safety and efficiency issues during initial training phases and the high cost of trial and error in physical environments, and introduces techniques like simulator training and domain randomization to mitigate these risks [34][35]

Group 3: Reinforcement Learning
- Reinforcement learning allows robots to autonomously learn optimal behavior strategies through trial and error, showcasing significant potential across various scenarios [28]
- The tutorial discusses the "Offline-to-Online" reinforcement learning framework, which enhances sample efficiency and safety by utilizing pre-collected expert data; the HIL-SERL method exemplifies this approach, enabling robots to master complex real-world tasks with near-100% success rates in just 1-2 hours of training [36][39]

Group 4: Imitation Learning
- Imitation learning offers a more direct learning path for robots by replicating expert actions through behavior cloning, avoiding complex reward function design and ensuring training safety [41]
- The tutorial presents advanced imitation learning methods based on generative models, such as Action Chunking with Transformers (ACT) and Diffusion Policy, which effectively model multi-modal data by learning the latent distribution of expert behaviors [42][43]

Group 5: Universal Robot Policies
- The article envisions the future of robotics in developing universal robot policies capable of operating across tasks and devices, inspired by the emergence of large-scale open robot datasets and powerful vision-language models (VLMs) [52]
- Two cutting-edge VLA models, π₀ and SmolVLA, are highlighted for their ability to understand visual and language instructions and generate precise robot control commands, with SmolVLA being a compact, open-source model that significantly lowers the barrier to application [53][56]
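The "Offline-to-Online" framework described above can be shown at toy scale: warm-start a value function from logged expert transitions, then refine it with online exploration. A tabular Q-learning sketch on a hypothetical 1-D corridor task (illustrative only, not HIL-SERL itself, which trains real-robot policies with human-in-the-loop corrections):

```python
import random

N, GOAL = 5, 4                   # corridor states 0..4, reward only at the goal
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(s: int, a: int):
    """Move left (a=0) or right (a=1); the episode ends at the goal."""
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    return s2, float(s2 == GOAL), s2 == GOAL

Q = [[0.0, 0.0] for _ in range(N)]

def update(s, a, r, s2, done):
    target = r + (0.0 if done else GAMMA * max(Q[s2]))
    Q[s][a] += ALPHA * (target - Q[s][a])

# Offline phase: replay logged expert transitions (the expert always moves right)
logged, s = [], 0
while s != GOAL:
    s2, r, done = step(s, 1)
    logged.append((s, 1, r, s2, done))
    s = s2
for _ in range(50):
    for t in logged:
        update(*t)

# Online phase: epsilon-greedy refinement starting from the warm-started table
random.seed(0)
for _ in range(100):
    s, done = 0, False
    while not done:
        a = random.randrange(2) if random.random() < EPS else max((0, 1), key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        update(s, a, r, s2, done)
        s = s2

print(all(Q[s][1] > Q[s][0] for s in range(GOAL)))  # greedy policy moves right
```

The offline replay is what buys sample efficiency: the greedy policy is already correct before the first online step, so exploration only has to confirm and refine it.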
X @TechCrunch
TechCrunch· 2025-10-21 15:28
This is your chance to meet the minds building AI's future. The second day of AI Stage at Disrupt 2025 features @character_ai, @huggingface, @runwayml, @Tinder, @GoogleCloud, and more. It's a stacked lineup tackling everything from autonomous vehicles and generative AI to national security and vibe coding. Check out the full lineup and head here to get your tickets to see them all October 27-29 in San Francisco: https://t.co/TrvBc8T2nn ...
The Value of Open Source for Robotics Far Exceeds Imagination | Tang Wenbin in Deep Conversation with a Hugging Face Co-Founder
具身智能之心· 2025-10-21 00:03
Core Insights
- The article discusses the challenges in the field of robotics, particularly the gap between simulation and real-world application, and introduces RoboChallenge.ai as a standardized evaluation platform for embodied intelligence [2][42][51]

Group 1: Current Challenges in Robotics
- Many models perform well in simulation but fail in real-world scenarios, a significant pain point in robotics research [2][42]
- A unified, open, and reproducible evaluation system for robotics is needed, as current benchmarks are primarily simulation-based [50][44]

Group 2: Introduction of RoboChallenge.ai
- RoboChallenge.ai is launched as an open, standardized platform for evaluating robotic models in real-world environments, allowing researchers to remotely test their models on physical robots [6][51]
- The platform enables users to control local models through an API, facilitating remote testing without the need to upload models [8][53]

Group 3: Importance of Open Source in Robotics
- Open source is identified as a crucial driver of advancements in AI and robotics, enabling collaboration and innovation across global teams [10][19]
- The article argues that open source in robotics may be even more critical than in large language models (LLMs) because hardware accessibility is a prerequisite for applying models [20][22]

Group 4: Future Directions and Community Involvement
- The article anticipates that the next three to five years will see significant evolution in embodied intelligence research, with robots capable of executing longer and more complex tasks [82]
- Community participation is encouraged, with the expectation that diverse contributions will enhance data availability and model robustness [66][68]
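The remote-testing setup described above implies a simple client-side control loop: fetch an observation from the platform, run the local model, and post the chosen action back, so model weights never leave the researcher's machine. A hedged sketch of that loop with mocked transport (all function names and payload fields here are hypothetical, not RoboChallenge.ai's actual API):

```python
def fetch_observation(session):
    """Mock of a GET to the evaluation platform; a real client would use HTTP."""
    return {"step": session["step"],
            "joint_angles": [0.0, 0.1],
            "done": session["step"] >= 3}

def post_action(session, action):
    """Mock of a POST back to the platform; logs the action and advances time."""
    session["log"].append(action)
    session["step"] += 1

def local_policy(obs):
    """Stand-in for the researcher's model, running entirely on local hardware."""
    return {"delta": [a * 0.5 for a in obs["joint_angles"]]}

def run_episode():
    session = {"step": 0, "log": []}
    while True:
        obs = fetch_observation(session)
        if obs["done"]:
            break
        post_action(session, local_policy(obs))
    return session["log"]

log = run_episode()
print(len(log))  # 3 actions sent before the episode ended
```

Only observations and actions cross the wire in this pattern, which is why such a platform can evaluate proprietary models without requiring uploads.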