Artificial Neural Networks
21 Book Review︱Hinton, "Father of Deep Learning": A Leap of Faith
Group 1
- Geoffrey Hinton, known as the "father of deep learning," received the Nobel Prize in Physics in 2024 for his foundational discoveries in machine learning using artificial neural networks [1]
- Hinton's journey in artificial intelligence faced significant challenges, including skepticism from academia during the AI winter, yet he persisted and contributed to the emergence of large AI models [1][10]
- The narrative highlights the importance of belief and perseverance in the face of adversity, as Hinton's commitment to neural networks ultimately led to breakthroughs in AI [10][11]

Group 2
- Liu Jia, a professor at Tsinghua University, published a book titled "General Artificial Intelligence: Reconstruction of Cognition, Education, and Ways of Living," which discusses Hinton's story and the underlying logic of persistence in AI research [2][9]
- The book explores the connections between brain science and artificial intelligence, suggesting that this integration may help achieve true general artificial intelligence [2]
- Hinton's early academic struggles and eventual return to AI research serve as a backdrop for understanding the evolution of AI and the significance of his contributions [6][7]
A New Type of Transistor
半导体行业观察· 2025-04-04 03:46
Core Viewpoint
- Researchers from the National University of Singapore (NUS) have demonstrated that a single standard silicon transistor can mimic the behavior of biological neurons and synapses, bringing hardware-based artificial neural networks (ANNs) closer to reality [1][2]

Group 1: Research Findings
- The NUS research team, led by Professor Mario Lanza, has provided a scalable, energy-efficient solution for hardware-based ANNs, making neuromorphic computing more feasible [1][2]
- The study, published in Nature on March 26, 2025, notes that the human brain, with approximately 90 billion neurons and around 100 trillion connections, is far more energy-efficient than electronic processors [1][2]

Group 2: Neuromorphic Computing
- Neuromorphic computing aims to replicate the brain's computational capability and energy efficiency, which requires redesigning system architecture so that memory and computation occur in the same location [2]
- Current neuromorphic systems face challenges because they rely on either complex multi-transistor circuits or emerging materials that have not been validated for large-scale manufacturing [2]

Group 3: Technological Advancements
- The NUS team showed that a single standard silicon transistor can replicate neural firing and synaptic weight changes when the resistance of its terminal is adjusted to specific values [3]
- They developed a two-transistor cell called "Neuro-Synaptic Random Access Memory" (NS-RAM), which can operate in either a neuron state or a synapse state [3]
- The approach uses commercial CMOS technology, ensuring scalability, reliability, and compatibility with existing semiconductor manufacturing processes [3]

Group 4: Performance and Applications
- The NS-RAM cell demonstrated low power consumption, stable performance over many operating cycles, and consistent, predictable behavior across different devices, all essential for building reliable ANN hardware for practical applications [3]
- This breakthrough marks a significant advance toward compact, energy-efficient AI processors, enabling faster and more responsive computing [3]
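To make the two behaviors being mimicked in hardware concrete (neural firing and synaptic weighting), here is a minimal software sketch of a leaky integrate-and-fire neuron driven through a weighted synapse. The model and every parameter in it (`tau`, `v_th`, the input drive) are illustrative textbook choices, not taken from the Nature paper or the NS-RAM design.

```python
# Minimal leaky integrate-and-fire (LIF) neuron with a weighted synapse.
# All parameters are illustrative, not from the NUS NS-RAM work.

def simulate_lif(input_current, weight, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Integrate a weighted input current; emit a spike (1) whenever the
    membrane potential crosses the threshold, then reset the potential."""
    v = v_reset
    spikes = []
    for i in input_current:
        # Leaky integration: potential decays toward rest, driven by weighted input.
        v += dt * (-v / tau + weight * i)
        if v >= v_th:
            spikes.append(1)   # the neuron "fires"
            v = v_reset        # reset after the spike
        else:
            spikes.append(0)
    return spikes

# The synaptic weight plays the role of the tunable resistance: the same
# constant drive fires repeatedly through a strong synapse but never
# reaches threshold through a weak one.
strong = simulate_lif([0.2] * 50, weight=1.0)
weak = simulate_lif([0.2] * 50, weight=0.1)
print("strong spikes:", sum(strong), "weak spikes:", sum(weak))
```

Changing `weight` stands in for the synaptic plasticity the article describes; in the hardware cell this corresponds to tuning device resistance rather than a software variable.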