Unsupervised Learning

What Does the Brain Think About When It Is Alone?
Hu Xiu· 2025-10-08 01:33
The core difference between the two modes of learning is that supervised learning depends on explicit external guidance, whereas unsupervised learning arises from a system's own autonomous exploration and discovery of regularities. The traditional view holds that learning must rely on reward signals (such as food or praise) or explicit feedback (such as error correction), but the astonishing abilities of the infant brain are prompting neuroscientists to reconsider this assumption. So where does the biological brain's capacity for unsupervised learning actually come from?

I. Unsupervised Learning Is the Brain's Preview Lesson

The capacity for unsupervised learning is not unique to humans. Mice exploring a new environment, for example, spontaneously form spatial memories without any reward. To observe this process precisely, a team led by Marius Pachitariu and Carsen Stringer at the Howard Hughes Medical Institute (HHMI) in the United States designed an elegant experiment [1].

When an infant gazes at a spinning mobile, his or her brain is quietly decoding the regularities in the changing light and shadow. Without reward or punishment, this understanding of the world takes root in the neural circuitry: a vivid instance of "unsupervised learning."

By contrast, even today's most advanced artificial intelligence must be "fed" large quantities of labeled data before it can tell pictures of "cats" from "dogs." This guided form of learning is called "supervised learning."

In neuroscience, supervised learning appears as the strengthening of neural connections guided by external reward and punishment (as in conditioned reflexes), whereas unsupervised learning is the brain autonomously extracting features of the environment (such as spontaneously forming representations of lines ...
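The contrast described above can be sketched in code. Below is a minimal, illustrative example (not from the article): an unsupervised k-means routine discovers two groups in unlabeled data purely from the data's own regularities, with no labels or rewards involved. The data, parameters, and initialization are all hypothetical choices for this demo.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated "environments"; note that NO labels are provided.
data = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                  rng.normal(5.0, 0.5, (50, 2))])

def kmeans(points, k=2, iters=20):
    """Minimal k-means: groups emerge with no labels or reward signal."""
    # Deterministic init from the first and last point (for this demo,
    # these are assumed to lie in different regions of the data).
    centers = points[[0, len(points) - 1]].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest center (self-organized grouping).
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(data)
# The two clusters are recovered from structure alone, the way the article
# describes the brain extracting environmental features without feedback.
```

A supervised learner solving the same task would instead be handed the group label for every point, the analogue of the labeled "cat"/"dog" images mentioned above.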
The Most Comprehensive Survey of Speech Separation Yet: Tsinghua and Other Teams Analyze 200+ Papers, Systematically Dissecting the "Cocktail Party Problem"
机器之心· 2025-09-03 04:33
Core Viewpoint
- The article discusses the revolutionary advancements in the field of speech separation, particularly addressing the "cocktail party problem" through the development of deep neural networks (DNNs) [2].

Group 1: Overview of Speech Separation
- Speech separation has become crucial for enhancing speech clarity in complex acoustic environments and serves as a preprocessing method for other speech processing tasks [2].
- Researchers from various institutions conducted a comprehensive survey of over 200 representative papers, analyzing the latest research methods across multiple dimensions, including deep learning methods, model architectures, evaluation metrics, datasets, and future challenges [2].

Group 2: Problem Definition
- The authors categorize speech separation tasks into known- and unknown-speaker separation, depending on whether the number of speakers is fixed or variable, and highlight the challenges of each scenario [6].
- In the unknown-speaker setting, dynamically determining the number of output channels and balancing separation quality against termination timing are emphasized as significant challenges [6].

Group 3: Learning Paradigms
- The article compares supervised and unsupervised learning methods, detailing the advantages and limitations of each approach in the context of speech separation [10].
- Supervised learning is currently the most mature paradigm, training on paired mixed audio and clean source audio, while unsupervised methods explore training models directly on unlabeled mixed audio [12].

Group 4: Model Architectures
- The core components and evolution of speech separation models are summarized: encoder, separation network, and decoder [14].
- Architectures based on RNNs, CNNs, and Transformers are discussed, showcasing their respective strengths in capturing long-range dependencies and extracting local features [17][18].

Group 5: Evaluation Metrics
- A comprehensive evaluation metric system, spanning both subjective and objective metrics, is necessary for assessing model performance [19].
- The article compares various metrics, highlighting the trade-off between subjective evaluations that reflect human experience and objective metrics that are efficient but may emphasize different aspects [20].

Group 6: Datasets
- The article summarizes publicly available datasets for speech separation research, categorized into single-channel and multi-channel formats [22].
- Understanding the coverage and difficulty of these datasets helps researchers select appropriate datasets for algorithm evaluation and identify gaps in current research [22].

Group 7: Performance Comparison
- The authors compare different models' performance on standard datasets, illustrating the progress in speech separation technology over recent years [24].
- Notable improvements in performance metrics such as SDR are highlighted, with advanced architectures achieving SDR levels around 20 dB [24][25].

Group 8: Tools and Platforms
- The article introduces various open-source tools and platforms that facilitate the development and application of speech separation, comparing their functionalities and limitations [28].
- These tools provide convenient interfaces for researchers to replicate results and build prototype systems, accelerating the transition from research to application [28].

Group 9: Challenges and Future Directions
- Current challenges in the field include long-duration audio processing, mobile and embedded applications, real-time speech separation, and the rise of generative methods [32][33].
- The integration of pre-training techniques and the focus on target speaker extraction are also identified as key areas for future exploration [33].
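The SDR figures cited above (around 20 dB for advanced architectures) measure how closely a separated signal matches its clean reference. A minimal sketch of the basic ratio is below; note this is the simplified form 10·log10(‖s‖² / ‖s − ŝ‖²), whereas the full BSS Eval SDR additionally projects the estimate onto the reference to distinguish distortion from interference and artifacts. The signals here are synthetic stand-ins.

```python
import numpy as np

def sdr(reference, estimate):
    """Simplified signal-to-distortion ratio in dB: higher is better."""
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

# Toy example: a 440 Hz tone as the "clean source", plus a small error.
t = np.linspace(0.0, 1.0, 8000)
clean = np.sin(2 * np.pi * 440 * t)
estimate = clean + 0.01 * np.random.default_rng(0).normal(size=t.shape)
score = sdr(clean, estimate)  # small residual error -> high SDR
```

A larger residual error drives the ratio, and hence the dB score, down, which is why SDR gains track real progress in separation quality.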
Back to School: Start Your AI Journey with This First Lesson
机器之心· 2025-09-01 08:46
Core Viewpoint
- The article emphasizes the importance of understanding AI and its underlying principles, suggesting that individuals should start their journey into AI by grasping fundamental concepts and practical skills.

Group 1: Understanding AI
- AI is defined through various learning methods, including supervised learning, unsupervised learning, and reinforcement learning, which allow machines to learn from data without rigid programming rules [9][11][12].
- The core idea of modern AI revolves around machine learning, particularly deep learning, which enables machines to learn from vast amounts of data and make predictions [12].

Group 2: Essential Skills for AI
- Three essential skills for entering the AI field are mathematics, programming, and practical experience. Mathematics provides the foundational understanding, while programming, particularly in Python, is crucial for implementing AI concepts [13][19].
- Key mathematical areas include linear algebra, probability and statistics, and calculus, which are vital for understanding AI algorithms and models [13].

Group 3: Practical Application and Tools
- Python is highlighted as the primary programming language for AI because of its simplicity and extensive ecosystem, including libraries like NumPy, Pandas, Scikit-learn, TensorFlow, and PyTorch [20][21].
- Hands-on projects, such as data analysis or machine learning tasks, are encouraged to solidify understanding and build a portfolio [27][46].

Group 4: Career Opportunities in AI
- Career paths in AI include machine learning engineers, data scientists, and algorithm researchers, each focusing on different aspects of AI development and application [38][40].
- AI skills can enhance many fields, creating opportunities for interdisciplinary applications in finance, healthcare, the arts, and beyond [41][43].

Group 5: Challenges and Future Directions
- The rapid evolution of AI technology presents challenges, including the need for continuous learning and adaptation to new developments [34][37].
- The article concludes by encouraging individuals to embrace uncertainty and find their passion within the AI landscape, highlighting the importance of human creativity and empathy in the technological realm [71][73].
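A concrete first project of the kind the article recommends might look like this: fitting a line to noisy data with NumPy's least-squares solver, which exercises the linear algebra, Python, and hands-on practice named above in one place. The data and parameters are made up for illustration.

```python
import numpy as np

# Toy "first project": fit y = w*x + b by least squares, the same
# linear-algebra machinery that underpins many machine learning models.
rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 50)
y = 3.0 * x + 2.0 + rng.normal(0.0, 0.5, size=x.shape)  # noisy line

# Design matrix [x, 1] so lstsq solves for slope and intercept jointly.
A = np.stack([x, np.ones_like(x)], axis=1)
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
# w and b should land close to the true values 3.0 and 2.0.
```

From here the natural next steps are the ones the article lists: redo the fit with Scikit-learn's `LinearRegression`, then graduate to a small classification task.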
Microsoft VP "Teaches a Course" on X, Posting Continuously on Everything RL: A Must-Read for LLM Practitioners
机器之心· 2025-05-26 01:28
Core Viewpoint
- The article discusses the educational series on artificial intelligence initiated by Nando de Freitas, focusing on reinforcement learning (RL) and its applications in large language models (LLMs) [1][2].

Summary by Sections

Introduction to AI Education
- Nando de Freitas aims to educate readers on AI through a series of posts on X, starting with reinforcement learning and gradually covering diffusion and flow-matching techniques [1][2].

Learning Types
- The article highlights that there is no ultimate conclusion on unsupervised learning, supervised learning, and reinforcement learning [8][19].
- Supervised learning is described as basic imitation, requiring high-quality expert data for effective learning [9].
- Reinforcement learning focuses on selective imitation, allowing agents to learn from suboptimal experiences and improve their performance [10][11].

Distributed Reinforcement Learning Systems
- Modern distributed RL systems consist of two main components, Actors and Learners: Actors interact with the environment and collect data, while Learners update the policy network based on this data [23][24].
- The importance of measuring operation durations and communication bandwidth in such systems is emphasized [24][27].

Offline Reinforcement Learning
- Offline RL has unique value in scenarios like post-training LLMs, where it can leverage historical data for learning [28][29].

Single-step and Multi-step RL
- The article differentiates between single-step and multi-step RL problems, with single-step focusing on immediate actions and multi-step involving planning over a series of interactions [35][39].
- The complexity of multi-step RL is noted, particularly credit-assignment issues where multiple decisions jointly affect outcomes [40][41].

Policy Gradient and Techniques
- Policy gradient methods are discussed, including baseline subtraction to reduce variance in reward signals [49][56].
- The article also covers the significance of KL divergence in keeping the policy close to the supervised fine-tuning strategy during post-training [69].

Importance Sampling and PPO
- Importance sampling is introduced as a method to correct off-policy sample bias, with Proximal Policy Optimization (PPO) as a key technique to manage policy updates [73][78].
- The integration of these techniques in training models like DeepSeek-R1 is highlighted, showcasing the complexity of modern RL systems [81].

Future Directions
- Freitas plans to expand the discussion from single-step to multi-step RL, indicating ongoing developments in the field [82].
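Baseline subtraction, the variance-reduction trick mentioned above, can be illustrated on the single-step (bandit) setting the series starts from. The sketch below is not Freitas's code: it is a minimal REINFORCE loop with a softmax policy over three hypothetical actions and a running-mean baseline, with all rewards and hyperparameters invented for the demo.

```python
import numpy as np

# Single-step policy-gradient sketch: softmax policy over 3 actions,
# REINFORCE update with a running-mean baseline to reduce variance.
rng = np.random.default_rng(0)
true_rewards = np.array([1.0, 2.0, 3.0])   # action 2 is best (hypothetical)
logits = np.zeros(3)
baseline, lr = 0.0, 0.1

for step in range(2000):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    a = rng.choice(3, p=probs)                 # sample an action
    r = true_rewards[a] + rng.normal(0, 0.1)   # noisy reward
    advantage = r - baseline                   # baseline subtraction
    grad = -probs
    grad[a] += 1.0                             # grad of log pi(a) for softmax
    logits += lr * advantage * grad            # REINFORCE update
    baseline += 0.01 * (r - baseline)          # running-mean baseline

final_probs = np.exp(logits - logits.max())
final_probs /= final_probs.sum()               # policy concentrates on action 2
```

PPO, covered in the later posts, replaces the raw advantage-weighted update with an importance-sampling ratio that is clipped to keep each policy step small.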
GPT-5 Takes Shape; OpenAI's and Manus's Agent Development Lessons; Big Chinese Companies Expand Compute Investment | AI Monthly Report
晚点LatePost· 2025-03-08 12:17
The major global AI trends of February 2025.

By He Qianming

In this AI monthly report for February 2025, you will find:

- Silicon Valley giants' new consensus: reasoning is part of the large model
- Agent development lessons from OpenAI and Manus
- DeepSeek is pushing big Chinese companies to increase compute spending; Alibaba and ByteDance together will exceed 200 billion yuan this year alone
- 3 AI companies sold for over 100 million yuan, and 23 AI companies that raised more than $50 million
- OpenAI is hiring experts at $100 an hour to produce data that improves model capabilities

Starting with this issue, we invite researchers, founders, and investors to contribute first-hand commentary and insight on each month's AI trends and landmark events.

The LatePost AI monthly report selects the AI signals most worth knowing each month. Below is our fourth issue; feel free to add important trends we missed in the comments.

Technology | An embryonic GPT-5 appears, and a new industry consensus is born

The shockwaves from DeepSeek continue to spread, and the world's large-model companies are locked in a melee: whether it is Grok 3, which Musk trained on more than 100,000 GPUs, GPT-4.5, on which OpenAI may have spent $1 billion, or Anthropic's latest model with integrated reasoning capability, Claude 3 ...