Artificial Neural Networks
Evolving Like a Large Model
Tencent Research Institute · 2026-01-05 08:44
This article is excerpted from Professor Liu Jia's new book, "General Artificial Intelligence". The success of large models is no accident: from the failure of early symbolic AI, to the rise of deep learning, to the triumph of the Transformer, each step of this evolution was born, with difficulty, out of countless discarded algorithms and models. Throughout this arduous and winding exploration, the nuggets of human wisdom have been a guiding lamp above AI's head. Conversely, can the evolutionary experience of large models nourish the evolution of our own human cognition? If so, we may break out of the cocoon, resonate in step with the AI era, and begin a leap in cognition and wisdom.

Defining an Objective Function for Your Life

Before training begins, every machine learning system must specify an objective function (also called a loss function or cost function). This function defines the ideal state the model hopes to reach, and the whole point of training is to keep optimizing the parameters so that the model moves ever closer to that goal. As the saying goes: before learning sets out, the objective leads the way.

As a branch of machine learning, artificial neural networks were an outlier from the start, because their objective function was too grand and too ambitious. When Hinton asked the president of the University of Toronto, where he worked, to recruit one more artificial neural network researcher, the president replied: "One lunatic is enough." Indeed, each pioneer of artificial neural networks had an objective function that looked nearly insane to outsiders: the "crude" neuron proposed by McCulloch and Pitts in 1943 was meant to model "a logical calculus of the ideas immanent in nervous activity"; in 1958, Rosenblatt ...
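To make the book's "objective function first" point concrete, here is a minimal sketch in Python, not taken from the book: a toy linear model is fitted to made-up data by gradient descent on a mean-squared-error loss. The data, the choice of loss, the learning rate, and the step count are all illustrative assumptions.

```python
# A minimal sketch of "objective function first": fit y = w*x + b
# to toy data by gradient descent on a mean-squared-error loss.
# All data and hyperparameters here are illustrative.

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # points on y = 2x + 1

def loss(w: float, b: float) -> float:
    """Objective (loss) function: mean squared error over the data."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def grad(w: float, b: float) -> tuple[float, float]:
    """Analytic gradient of the MSE loss with respect to w and b."""
    n = len(data)
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / n
    db = sum(2 * (w * x + b - y) for x, y in data) / n
    return dw, db

w, b, lr = 0.0, 0.0, 0.05   # initial parameters and learning rate
for step in range(2000):    # "training": walk downhill on the loss
    dw, db = grad(w, b)
    w, b = w - lr * dw, b - lr * db

print(f"w={w:.3f}, b={b:.3f}, loss={loss(w, b):.6f}")  # converges near w=2, b=1
```

Everything the passage calls "training" happens inside the loop: the objective is fixed up front, and optimization only ever moves the parameters toward it.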
21 Book Review | Hinton, the "Father of Deep Learning": A Leap of Faith
21st Century Business Herald · 2025-07-31 09:32
Group 1
- Geoffrey Hinton, known as the "father of deep learning," received the Nobel Prize in Physics in 2024 for his foundational discoveries in machine learning using artificial neural networks [1]
- Hinton's journey in artificial intelligence faced significant challenges, including skepticism from academia during the AI winter, yet he persisted and contributed to the emergence of large models in AI [1][10]
- The narrative highlights the importance of belief and perseverance in the face of adversity, as Hinton's commitment to neural networks ultimately led to breakthroughs in AI [10][11]

Group 2
- Liu Jia, a professor at Tsinghua University, published a book titled "General Artificial Intelligence: Reconstruction of Cognition, Education, and Ways of Living," which discusses Hinton's story and the underlying logic of persistence in AI research [2][9]
- The book aims to explore the connections between brain science and artificial intelligence, suggesting that this integration may aid in achieving true general artificial intelligence [2]
- Hinton's early academic struggles and eventual return to AI research serve as a backdrop for understanding the evolution of AI and the significance of his contributions [6][7]
A New Type of Transistor
Semiconductor Industry Observation · 2025-04-04 03:46
Core Viewpoint
- Researchers from the National University of Singapore (NUS) have demonstrated that a single standard silicon transistor can mimic the behavior of biological neurons and synapses, bringing hardware-based artificial neural networks (ANNs) closer to reality [1][2]

Group 1: Research Findings
- The NUS research team, led by Professor Mario Lanza, has provided a scalable, energy-efficient route to hardware-based ANNs, making neuromorphic computing more feasible [1][2]
- The study, published in Nature on March 26, 2025, notes that the human brain, with approximately 90 billion neurons and around 100 trillion connections, is far more energy-efficient than electronic processors [1][2]

Group 2: Neuromorphic Computing
- Neuromorphic computing aims to replicate the brain's computational capability and energy efficiency, which requires redesigning system architecture so that memory and computation happen in the same place [2]
- Current neuromorphic systems are held back by the need for complex multi-transistor circuits or emerging materials that have not been validated for large-scale manufacturing [2]

Group 3: Technological Advancements
- The NUS team showed that a single standard silicon transistor can reproduce both neural firing and synaptic weight changes when its terminal resistance is tuned to specific values [3]
- They developed a dual-transistor cell called "Neuro-Synaptic Random Access Memory" (NS-RAM) that can operate in either a neuron mode or a synapse mode [3]
- The approach uses commercial CMOS technology, ensuring scalability, reliability, and compatibility with existing semiconductor manufacturing processes [3]

Group 4: Performance and Applications
- The NS-RAM cell demonstrated low power consumption, stable performance over many operating cycles, and consistent, predictable behavior across devices, all essential for building reliable ANN hardware for practical applications [3]
- This breakthrough marks a significant step toward compact, energy-efficient AI processors that enable faster, more responsive computing [3]
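The summary above stays at the device level, but the two behaviors the NS-RAM cell is reported to reproduce can be sketched in software: neuron-like integrate-and-fire spiking, and synapse-like strengthening and weakening of a bounded weight. Below is a hypothetical toy model; the threshold, leak rate, and update sizes are invented for illustration, not measurements from the Nature paper.

```python
# Toy software analogue of the two NS-RAM operating modes described
# above: a leaky integrate-and-fire "neuron" and a bounded analog
# "synapse". All constants are illustrative, not device parameters.

class ToyNeuron:
    """Integrates weighted input and fires once a threshold is crossed."""
    def __init__(self, threshold: float = 1.0, leak: float = 0.9):
        self.v = 0.0                  # membrane-potential analogue
        self.threshold = threshold
        self.leak = leak              # fraction of charge kept per step

    def step(self, current: float) -> bool:
        self.v = self.v * self.leak + current
        if self.v >= self.threshold:
            self.v = 0.0              # reset after the "spike"
            return True
        return False

class ToySynapse:
    """Holds a bounded weight that pulses can strengthen or weaken."""
    def __init__(self, w: float = 0.5):
        self.w = w

    def potentiate(self, dw: float = 0.05):
        self.w = min(1.0, self.w + dw)   # akin to lowering resistance

    def depress(self, dw: float = 0.05):
        self.w = max(0.0, self.w - dw)   # akin to raising resistance

neuron, synapse = ToyNeuron(), ToySynapse()
for t in range(10):                      # fixed pre-synaptic spike train
    if t % 2 == 0:
        fired = neuron.step(synapse.w)   # input scaled by the weight
        if fired:
            synapse.potentiate()         # strengthen after a spike
        else:
            synapse.depress()            # weaken otherwise
        print(f"t={t}  w={synapse.w:.2f}  fired={fired}")
```

In the hardware version, the weight analogue is the transistor's terminal resistance; here it is just a clamped float, which is enough to show why one element per neuron and per synapse makes such a compact substrate for an ANN.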