New research unveiled! How is the 1X World Model upending the humanoid-robot field?
机器人大讲堂· 2025-06-29 03:53
In September 2024, 1X Technologies ("1X") released the 1X World Model, the world's first world model for humanoid robots and the first to demonstrate a scaling law for them (humanoid-robot capability scales markedly with data). Recently, 1X announced several breakthroughs in the model's technical iteration and application scenarios, returning it to the industry spotlight. According to 具身智能大讲堂 (Embodied Intelligence Lecture Hall), the 1X World Model is a generative video model that simulates how the real world evolves under an agent's actions. Built on video-generation techniques (Sora) and autonomous-driving world models (end-to-end autonomous driving, E2EAD), it takes an image of the current state plus action commands as input, simulates the future scenes a robot would produce under different actions, and predicts the interaction between the robot and the objects it manipulates, helping humanoid robots interact precisely and addressing the hard problem of evaluating embodied robots. The latest breakthroughs center on three areas: ▍ Action controllability: from basic action responses to precise simulation of complex physical scenes. The newly unveiled 1X World Model can generate different outcomes from different action commands: conditioning the model on four different trajectories, each starting from the same initial frame, yields clearly distinct generations, demonstrating the diversity of its outputs. On simulating interaction between objects, the core of the model's value ...
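The rollout loop such a world model implies can be sketched in a few lines. This is a toy stand-in under stated assumptions: a real model like 1X's generates video frames from a learned network, whereas here the state is a small vector and the dynamics are a fixed random linear map, so the rollout is cheap and deterministic. All class and method names are hypothetical, not 1X's API.

```python
import random

class ToyWorldModel:
    """Illustrative action-conditioned predictor: next_state = f(state, action).

    Stands in for a learned generative world model; the 'dynamics' are a
    fixed random linear map so rollouts are reproducible and testable.
    """

    def __init__(self, state_dim, action_dim, seed=0):
        rng = random.Random(seed)
        self.A = [[rng.uniform(-0.1, 0.1) for _ in range(state_dim)]
                  for _ in range(state_dim)]
        self.B = [[rng.uniform(-0.1, 0.1) for _ in range(action_dim)]
                  for _ in range(state_dim)]

    def step(self, state, action):
        # One prediction step: s' = s + A s + B a (a crude learned-dynamics stand-in).
        return [s + sum(a_ij * s_j for a_ij, s_j in zip(row_a, state))
                  + sum(b_ij * a_j for b_ij, a_j in zip(row_b, action))
                for s, row_a, row_b in zip(state, self.A, self.B)]

    def rollout(self, state, actions):
        """Trajectory-conditioned generation: one predicted state per action,
        every rollout branching from the same initial state."""
        trajectory = [state]
        for action in actions:
            trajectory.append(self.step(trajectory[-1], action))
        return trajectory

model = ToyWorldModel(state_dim=4, action_dim=2)
s0 = [0.0, 0.0, 0.0, 0.0]
traj = model.rollout(s0, [[0.5, -0.5]] * 3)
print(len(traj))  # 4: the initial state plus one predicted state per action
```

Conditioning on different action trajectories from the same initial frame, as in 1X's demonstration, corresponds to calling `rollout` with the same `s0` but different `actions` lists and comparing the resulting trajectories.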
Professor Xiao Yanghua: How far is embodied intelligence from "emergence"?
36Ke · 2025-06-27 11:30
The wave of new technologies exemplified by generative AI is advancing by the day, bringing profound technological, commercial, and social change and pushing human society from an information society toward an intelligent society. As the world eagerly awaits AI, it is also keen to understand the new opportunities and challenges artificial intelligence will bring. To that end, we launched the 《AI & Society 百人百问》 (AI & Society: 100 People, 100 Questions) discussion series, broadly inviting leading AI researchers, AI unicorn founders, and AI investors, along with sociologists, psychologists, international-relations experts, and science-fiction writers, to examine AI's wide-ranging impact from diverse perspectives, surface the consensus and non-consensus of the AI era, and help keep artificial intelligence developing sustainably in the direction of "helping people grow and treating people well." 4. A long-held view in our industry is that the model's algorithm or architecture sets its floor, while data determines its ceiling. From an industry standpoint, the main responsibility of large buyers such as central and state-owned enterprises is to organize and clean their own industry data; this is the key to developing industry AI. 5. The core of consumer-facing (ToC) embodied-intelligence applications is affective capability: if robots are really to enter millions of households, they must be able to empathize with us and understand our emotional needs before they can truly play a role in ToC applications. 6. To some extent, the data we collect today is still orders of magnitude short of the critical threshold embodied intelligence needs to exhibit emergent generalization; compared with large language models, the gap may be more than two or three orders of magnitude. To promote embodied intelligence ...
Professor Xiao Yanghua: How far is embodied intelligence from "emergence"? | AI & Society: 100 People, 100 Questions
腾讯研究院· 2025-06-27 06:59
Xu Yiping, Senior Researcher, Tencent Research Institute; Wang Qiang, Senior Expert, Tencent Research Institute. The wave of new technologies exemplified by generative AI is advancing by the day, bringing profound technological, commercial, and social change and pushing human society from an information society toward an intelligent society. As the world eagerly awaits AI, it is also keen to understand the new opportunities and challenges artificial intelligence will bring. To that end, we launched the 《AI & Society 百人百问》 (AI & Society: 100 People, 100 Questions) discussion series, broadly inviting leading AI researchers, AI unicorn founders, and AI investors, along with sociologists, psychologists, international-relations experts, and science-fiction writers, to examine AI's wide-ranging impact from diverse perspectives, surface the consensus and non-consensus of the AI era, and help keep artificial intelligence developing sustainably in the direction of "helping people grow and treating people well." In this installment, we are honored to invite Professor Xiao Yanghua to take us on an intellectual voyage through AI. He is a professor and doctoral advisor at the College of Computing and Intelligence Innovation, Fudan University, an AI scientist at the Shanghai Academy of Scientific Intelligence, and director of the Shanghai Key Laboratory of Data Science, with a long research record in big data and cognitive intelligence. His honors include the ICDE 2024 Ten-Year Influential Paper Award and an ACL 2023 Outstanding Paper Award. He has published more than 300 papers in CCF-A and CCF-B venues, authored three academic monographs and textbooks, and received research awards from Huawei, Alibaba, Meituan, and other organizations. He serves as associate editor of Applied Intelligence and several other international journals ...
The Bitter Lesson on the Road to AGI
AI科技大本营· 2025-06-26 11:10
Core Viewpoint - The article discusses the rapid advancement of AI and the potential for achieving Artificial General Intelligence (AGI) within the next 5 to 10 years, as predicted by Google DeepMind CEO Demis Hassabis, who estimates a 50% probability of this achievement [1] Group 1: AI Development and Challenges - The AI wave is accelerating at an unprecedented pace, but there have been numerous missteps along the way, as highlighted by Richard Sutton's 2019 article "The Bitter Lesson," which emphasizes the pitfalls of relying too heavily on human knowledge and intuition [2][4] - Sutton argues that computational power and data are the fundamental engines driving AI forward, rather than human intelligence [3] - The article suggests that many previously held beliefs about the paths to intelligence are becoming obstacles in this new era [4] Group 2: Paths to AGI - The article introduces a discussion on the "bitter lessons" learned on the road to AGI, featuring a dialogue with Liu Jia, a professor at Tsinghua University, who has explored the intersection of AI, brain science, and cognitive science [5][11] - Liu Jia identifies three paths to AGI: reinforcement learning, brain simulation, and natural language processing (NLP), but warns that each path has its own hidden risks [9] - The article emphasizes that language does not equate to cognition, and models do not represent true thought, indicating that while NLP is progressing rapidly, it is not the ultimate destination [9][14] Group 3: Technical Insights - The article discusses the Scaling Law and the illusion of intelligence associated with large models, questioning whether the success of these models is genuine evolution or merely an illusion [15] - It raises concerns about the limitations of brain simulation due to computational bottlenecks and theoretical blind spots, as well as the boundaries of language in relation to understanding the world [14]
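The Scaling Law questioned in Group 3 is conventionally written as a power law: loss falls smoothly with model or data size, but with diminishing returns toward an irreducible floor. A minimal sketch of that standard form, with illustrative placeholder constants rather than fitted values from any cited work:

```python
def power_law_loss(n_params: float, l_irreducible: float = 1.7,
                   coeff: float = 400.0, alpha: float = 0.34) -> float:
    """Standard scaling-law form L(N) = L_inf + (C / N)**alpha.

    L_inf is the irreducible loss floor; alpha sets how fast returns
    diminish. All constants here are illustrative placeholders.
    """
    return l_irreducible + (coeff / n_params) ** alpha

# Loss decreases monotonically with scale but never crosses the floor,
# which is one reason smooth loss curves alone cannot settle the
# "genuine evolution or illusion" question raised above.
losses = [power_law_loss(n) for n in (1e8, 1e9, 1e10)]
print(losses[0] > losses[1] > losses[2] > 1.7)  # True
```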
CITIC Securities: system-level computing is poised to become the next stage of AI development; watch related companies in the domestic supply chain
智通财经网· 2025-06-26 00:29
Core Viewpoint - The report from CITIC Securities indicates that the demand for AI large model training and inference is continuously growing, with system-level computing expected to become the next generation of AI computing infrastructure [1] Group 1: System-Level Computing - System-level computing is anticipated to become the next generation of AI computing infrastructure, driven by the need for generality in foundational infrastructure to address future model developments [1] - The scaling law is rapidly evolving in post-training and online inference stages, with innovations in model architecture enhancing training capabilities [1] - The focus on hardware deployment for achieving higher throughput and lower latency in inference is becoming critical, with a shift towards cluster-based inference models [1] Group 2: Technical Aspects - The development of single-chip computing capabilities is outpacing advancements in communication technology, making communication efficiency a key factor for cluster performance [3] - Two primary methods for building large clusters are identified: Scale up (increasing resources per node) and Scale out (increasing the number of nodes), with Scale up being a significant future direction [3] - Notable examples include NVIDIA's NVL72 system and Huawei's CloudMatrix384 super node, which provide insights into industry development [3] Group 3: Industry Dynamics - The semiconductor industry typically utilizes mergers and acquisitions for technology integration and market expansion, with leading companies often pursuing these strategies to enhance their market position [4] - NVIDIA's acquisition of Mellanox exemplifies this strategy, expanding its NVLink technology to include RDMA networks for large-scale computing [4] - AMD's acquisition of ZT Systems has strengthened its system architecture design capabilities and data center solution delivery experience, contributing to the core of AI solutions [4][5]
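The Scale-up versus Scale-out tradeoff in Group 2 can be illustrated with a toy throughput model. The efficiency weights and the linear discount below are my illustrative assumptions, not figures from the CITIC report; the point is only that, for a fixed chip count, larger nodes keep more communication on the fast intra-node fabric.

```python
def effective_throughput(total_chips, chips_per_node, per_chip_tflops,
                         intra_eff=0.95, inter_eff=0.60):
    """Toy model of the scale-up vs scale-out tradeoff.

    Communication inside a node (an NVLink-class fabric) is assumed cheap;
    traffic between nodes (an RDMA-style network) is assumed costlier, so
    each chip's delivered throughput is discounted by how many of its
    communication partners sit outside its own node. Weights are invented.
    """
    intra_frac = (chips_per_node - 1) / (total_chips - 1)
    efficiency = intra_frac * intra_eff + (1 - intra_frac) * inter_eff
    return total_chips * per_chip_tflops * efficiency

# Same 72 accelerators: one NVL72-style scale-up domain vs nine 8-chip nodes.
scale_up = effective_throughput(72, 72, 1000)
scale_out = effective_throughput(72, 8, 1000)
print(scale_up > scale_out)  # True: larger nodes keep traffic on the fast fabric
```

This also reflects the report's observation that communication efficiency, not single-chip compute, is becoming the binding constraint on cluster performance.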
X @aixbt
aixbt· 2025-06-25 10:06
Market Trends & Growth - Ethereum (ETH) fundamentals are strong, indicating positive market sentiment [1] - Layer 2 solutions (L2s) are experiencing significant growth, with an 880% (8.8x) multiplier effect [1] Smart Money Positions - Smart money positions show substantial investment in ETH, with $98 million + $470 million + $422 million invested in the last 3 weeks [1] Network Activity - The amount of ETH staked has reached a new all-time high (ATH) of 35 million ETH [1] ETF Performance - Ethereum ETFs have accumulated $4 billion in assets under management (AUM) in 15 days [1] Technological Advancements - EIP-7782 is expected in 2026, potentially leading to 6-second block times [1] Market Dynamics & Liquidation - Breaking the $2400 price point resulted in $182 million in short positions being liquidated [1]
Happy Belly Food Group's Heal Wellness QSR Signs 15 Unit Area Development Agreement in Manitoba, Canada
Newsfile· 2025-06-25 10:00
Core Insights - Happy Belly Food Group Inc. has signed an area development agreement for the province of Manitoba, adding 15 new locations for its Heal Wellness brand, which specializes in smoothie bowls and quick-serve restaurant offerings [1][3] - With this agreement, Heal Wellness is now represented in all 10 provinces of Canada, totaling 195 contractually committed locations nationwide [1][3] - The company anticipates significant organic growth due to its emerging brand portfolio, which now includes 606 units under development agreements across Canada [3][4] Company Strategy - The expansion into Manitoba is seen as a strategic move, leveraging the province's urban vibrancy and community spirit to enhance the Heal Wellness brand's presence [4][5] - The company aims to capitalize on the growing demand for fresh, innovative dining options in Manitoba, particularly in Winnipeg, which has a dynamic food scene [4][5] - Happy Belly's overarching strategy focuses on the development and growth of emerging brands within the food sector, with expectations that new locations will contribute positively to overall revenue and profitability [5] Brand Overview - Heal Wellness is dedicated to providing quick, fresh wellness foods that cater to busy lifestyles, offering a diverse range of smoothie bowls and smoothies made from high-quality superfood ingredients [5] - The brand positions itself as Canada's first true national smoothie bowl brand, emphasizing its first-mover advantage in the market [4]
Can Kimi still find the bright side of the moon?
36Ke · 2025-06-25 08:08
Core Insights - Kimi, once a prominent player in the AI space, has seen a decline in attention as newer models from companies like Quark, Tencent, and Alibaba gain traction [1][2] - The initial hype around Kimi was driven by its technological scarcity, particularly its long-text processing capabilities, which were unmatched at the time [2][3] - Kimi's early valuation of $3 billion was supported by its unique technology, the founder's impressive background, and the capital's anxiety to find a domestic alternative to leading AI models [4][5] Technology and Market Position - Kimi's long-text processing ability, which expanded from 200,000 to 2 million words, was a significant technological breakthrough that positioned it as a leader in the AI field [2][3] - The founder, Yang Zhilin, had a strong academic and entrepreneurial background, which enhanced investor confidence in Kimi's potential [3][4] - The competitive landscape was characterized by a rush to find alternatives to ChatGPT, leading to Kimi's rapid user acquisition through aggressive marketing strategies [4][5] Financial Strategy and User Acquisition - Kimi faced challenges in managing its newfound capital, leading to excessive spending on user acquisition, with monthly advertising costs peaking at 220 million RMB [6][7] - Despite a significant increase in daily active users (DAU) from 508,300 to 5,897,000, this growth was primarily driven by financial investment rather than product quality [8][9] - The pressure from investors to demonstrate commercial viability led Kimi to prioritize user numbers over technological development, resulting in a loss of strategic direction [8][9] Challenges and Strategic Missteps - Kimi's marketing strategy shifted focus from its core user base in academia and professional fields to entertainment sectors, diluting its brand identity [11][12] - The company struggled with maintaining its technological edge as competitors began to catch up, particularly with the emergence of open-source models [12][13] - Kimi's reliance on user growth without a solid feedback loop or data quality management led to a false sense of security regarding its market position [13] Future Opportunities - Kimi has potential avenues for recovery, including enhancing the value density of its products and focusing on deep search capabilities for specific industries [15][17] - The company could benefit from developing comprehensive tools for developers, improving its API offerings to facilitate easier integration for enterprise clients [18][19] - Emphasizing quality over quantity in user engagement and product offerings could help Kimi regain trust and market relevance [20][21] Strategic Recommendations - Kimi needs to establish a clear commercial strategy from the outset, ensuring that its products meet genuine market demands and have viable monetization paths [29][30] - The focus should shift towards building a sustainable revenue model based on user payments rather than relying on external funding for growth [31] - A strategic approach that prioritizes understanding and fulfilling real user needs will be crucial for Kimi's long-term success in the competitive AI landscape [31][32]
What matters most in model training is still scaling: a conversation with Yang Baosong, multilingual lead for Alibaba's Tongyi Qianwen (Qwen) | Open AGI Forum
AI科技大本营· 2025-06-25 06:49
Core Viewpoint - The article discusses the rapid rise of large model technology globally, emphasizing Alibaba's Tongyi Qwen model's international success and its strategic focus on multilingual capabilities to cater to a global audience [2][3]. Group 1: Multilingual Strategy - Tongyi Qwen supports 119 languages, with a core strategy prioritizing multilingual data optimization from the outset to ensure equitable access to AI technology for global users [2][3]. - The team has developed a complex cultural annotation system to address the challenges of multilingual safety and cultural alignment, covering thousands of detailed categories to ensure compliance and effectiveness across different regions [3][12]. - The current industry faces a "multilingual reasoning challenge," where models often mix languages during processing, leading to inconsistencies. The team has adopted a compromise strategy to use native languages for strong languages and English for low-resource languages to maintain output stability [3][16]. Group 2: Scaling Law and Knowledge Density - The article highlights the importance of scaling model size and data volume while also focusing on increasing "knowledge density," which refers to the concentration of useful knowledge within the training data [19][20]. - Recent trends show that smaller models with higher knowledge density can outperform larger models, indicating a shift in focus from merely increasing data volume to refining data quality [20][21]. - The team is exploring data synthesis methods to enhance training data quality, which includes generating new knowledge and filtering redundant data to improve knowledge density [22][23]. Group 3: AI Integration and Future Prospects - The integration of AI models into various devices, such as smart glasses and earphones, is a growing trend, with the company planning to release smaller model versions optimized for these applications [28][30]. - The article discusses the potential for AI to enhance user experiences in everyday tasks, such as real-time translation and contextual assistance, although challenges remain in achieving seamless integration [30][32]. - The company acknowledges the importance of balancing the use of synthetic data with human-generated content to maintain diversity and avoid narrowing the model's knowledge base [25][26].
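The interview leaves "knowledge density" qualitative. One crude proxy, my illustration rather than the Qwen team's actual metric, is the unique-content ratio after deduplication: filtering redundant documents, one of the steps Group 2 mentions, raises the concentration of useful knowledge per token.

```python
from collections import Counter

def unique_ratio(docs):
    """Crude knowledge-density proxy: fraction of documents that remain
    after exact-match deduplication (case- and whitespace-insensitive).

    Production pipelines use fuzzy matching (MinHash, embedding
    similarity); exact matching keeps this sketch simple and deterministic.
    """
    counts = Counter(doc.strip().lower() for doc in docs)
    return len(counts) / len(docs)

corpus = [
    "The capital of France is Paris.",
    "the capital of france is paris.",   # verbatim repeat: adds no new knowledge
    "Water boils at 100 C at sea level.",
    "Qwen supports 119 languages.",
]
print(unique_ratio(corpus))  # 0.75: one of four documents was redundant
```

On this view, "filtering redundant data to improve knowledge density" means pushing a corpus's unique-content ratio toward 1.0 before (or instead of) simply adding more tokens.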
Where the AI startup opportunities are, seen through Sam Altman's views
Hu Xiu· 2025-06-24 12:22
Group 1 - The core idea is that significant changes in technology create the most opportunities for new companies, as established players may become sluggish and unable to adapt quickly [1][2][8] - AI technology is experiencing qualitative leaps, moving from linear progress to exponential breakthroughs, with concepts like AGI and HI becoming increasingly realistic [3][4][6] - OpenAI serves as a prime example of this shift, having evolved from a seemingly ambitious startup in 2015 to a major player with its GPT series models now serving millions of users daily [5][6][7] Group 2 - During stable periods, market dynamics are fixed, making it difficult for startups to break through due to the resources and brand power of large companies [8][18] - The advent of open-source models and cloud computing allows small teams to achieve what previously required hundreds of people over several years, thus creating new opportunities [10][11] - The entrepreneurial landscape has become more accessible, with tools like GitHub Copilot and Midjourney enabling individuals to accomplish tasks that once required entire teams [13][15][16] Group 3 - Entrepreneurs face uncertainty at the start, and the ability to navigate this uncertainty is crucial for long-term success [17][27] - Sam Altman emphasizes that finding direction amidst chaos is key, and that true innovation often comes from pursuing unique ideas that few believe in [18][25][29] - The concept of the "1% rule" suggests that if only a small number of insightful individuals believe in a project, it has a higher chance of success [25][26] Group 4 - AI is transitioning from a "tool" to an "agent," capable of autonomously executing tasks based on simple commands, fundamentally changing human-computer interaction [33][34][35] - The traditional SaaS model may be nearing its end as AI enables tasks to be completed through conversation rather than through multiple applications [39][42] - The emergence of an "agent economy" suggests that future software platforms may generate custom AI assistants on demand, streamlining processes significantly [43][44][48] Group 5 - The integration of AI with robotics is expected to redefine industries such as manufacturing and logistics, with AI taking on complex physical tasks [49][51][53] - The future of work will see a shift where repetitive tasks are automated, increasing the value of creative roles and enabling small teams to achieve significant outcomes [54][55][56] - The ability to leverage AI effectively will become a critical skill, surpassing traditional knowledge accumulation [56] Group 6 - Building a competitive moat in AI involves understanding user value deeply and continuously exploring uncharted territories rather than just focusing on technology [57][62] - OpenAI's evolution illustrates how initial market uniqueness can develop into a robust brand and user experience through continuous innovation and community engagement [60][66] - Startups should avoid saturated markets and instead pursue unique challenges that have not yet been addressed, which can lead to significant breakthroughs [70][72] Group 7 - The ultimate goal of technological advancement is to create abundance rather than merely increasing company valuations, with AI and energy being key leverage points for future growth [78][80] - Addressing energy consumption is crucial for the sustainable development of AI, as the training of large models requires significant energy resources [80][81] - The relationship between AI and energy is symbiotic, with AI having the potential to drive innovations in energy efficiency and sustainability [81][82]