Brain-Inspired Computing (类脑计算)
Decoding the Brain's Mysteries: Science Fiction Becomes Reality
(Header image AI-generated; photo by Shen Yue, People's Vision)

From "mind-driven limbs" becoming reality to breakthroughs in "brain-simulating computation," from "filming the thought process" to screen for disease to convenient self-tests that use "olfactory dysfunction as an early warning of brain disease," independent innovations in China's brain and cognitive science are emerging in quick succession, opening new frontiers for decoding the brain and safeguarding health. Recently, the brain and cognitive science session of the "New Tiangong Kaiwu — Science and Technology Achievements Conference" was held at the National Science and Technology Communication Center in Beijing, announcing four breakthrough innovations recommended by the Chinese Society for Cognitive Science.

"Chinese brain science has entered a stage of rapid development over the past decade or so," said Duan Shumin, academician of the Chinese Academy of Sciences and of The World Academy of Sciences, in an interview. "Technological breakthroughs spilling over from intersecting fields such as imaging, cellular and molecular biology, and artificial intelligence have greatly propelled the development of brain science."

1. The "North Brain No. 1" (北脑一号) intelligent brain-machine system: driving action by thought, restarting patients' lives. Recently, the world's first multicenter clinical trial of brain-computer interfaces for neurocritical care was launched. Targeting the international challenge of precise diagnosis and treatment of hydrocephalus, the project is led by the Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration at Tianjin University and Tianjin Huanhu Hospital, joined by leading domestic medical institutions including Xuanwu Hospital and Tiantan Hospital of Capital Medical University, Peking Union Medical College Hospital, West China Hospital of Sichuan University, Xiangya Hospital of Central South University, Henan Provincial People's Hospital, the First Affiliated Hospital of the University of Science and Technology of China, and Tianjin Medical University General Hospital. Taking precise diagnosis and treatment of hydrocephalus as its entry point, the project ...
From Mind-Driven Control to Olfactory Early Warning: Four Tools Making Brain Science Applications More Widely Accessible
Huan Qiu Wang Zi Xun· 2025-09-26 02:56
Core Insights
- The rapid development of brain science in China over the past decade has been significantly driven by breakthroughs in interdisciplinary technologies such as imaging, molecular biology, and artificial intelligence [1]

Group 1: Innovations in Brain Science
- The "North Brain No. 1" intelligent brain-machine system is the first semi-invasive brain-machine interface product internationally to achieve over 100 channels of high-throughput, wireless implantation, aimed at helping patients left unable to move or speak by spinal cord injuries, strokes, and ALS [1][3]
- The "Wukong" ultra-large-scale neuromorphic brain-like computer, developed by Zhejiang University, features a neuron count close to that of a macaque brain and can complete in one minute simulation tasks that take traditional computers a day [4]
- A wearable atomic-magnetometer magnetoencephalography (MEG) system has been developed to overcome the high cost and poor flexibility of traditional MEG, making it applicable to fields such as brain science research and brain-machine interfaces [4][6]

Group 2: Clinical Applications and Benefits
- "North Brain No. 1" has been implanted in five patients, primarily those with motor disabilities from spinal cord injuries and strokes, demonstrating not only functional replacement but also rehabilitation effects [3]
- Signal quality has remained over 98% stable six months post-implantation, indicating the system's effectiveness in clinical applications [1]
- A localized olfactory function assessment and training system provides early-warning solutions for neurodegenerative diseases such as Alzheimer's and Parkinson's, achieving internationally leading levels of reliability and validity [6]
Homegrown Brain-Inspired Large Model Runs on Domestic MetaX GPUs! Over 100x Faster Long-Sequence Inference, Matching Mainstream Models with Only 2% of the Data
量子位· 2025-09-11 10:19
Contributed by the SpikingBrain team; QbitAI (WeChat official account QbitAI)

How can the enormous overhead of ultra-long-sequence inference be reduced? The brain-inspired spiking large model SpikingBrain-1.0 ("Shunxi"), released by the team of Li Guoqi and Xu Bo at the Institute of Automation, Chinese Academy of Sciences, proposes a new approach.

Borrowing from the brain's information-processing mechanisms, SpikingBrain has linear/near-linear complexity and a marked speed advantage on ultra-long sequences. On GPU, its time to first token (TTFT) at 1M context length is 26.5x faster than mainstream large models, and at 4M length the speedup is conservatively estimated at over 100x; on mobile-phone CPUs, its decoding at 64k/128k/256k lengths is 4.04x/7.52x/15.39x faster than a same-size Llama 3.2 model.

SpikingBrain has been adapted with an efficient training and inference framework, a Triton operator library, model-parallelism strategies, and cluster communication primitives for domestic MetaX (沐曦) GPU clusters, demonstrating the feasibility of building an independently controllable domestic ecosystem for new non-Transformer large-model architectures. SpikingBrain-1.0 is a first attempt along this line of thinking.

A new perspective for the large-model era: the human brain is the only known general-intelligence system. It contains roughly 100 billion neurons and about 1,000 trillion synapses, with a rich variety of neuron types, each neuron itself richly structured, yet its power draw is only about 20 W ...
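The linear/near-linear complexity claim is the core of SpikingBrain's long-sequence speedup. As a rough illustration only (not the team's actual architecture), a minimal linear-attention-style decoder shows why carrying a running key-value summary makes per-token decoding cost independent of context length; the feature map and normalization here are illustrative choices:

```python
import numpy as np

def linear_attention_decode(q_seq, k_seq, v_seq):
    """Decode with a running key-value summary: O(d^2) work per token,
    O(n*d^2) total, versus O(n^2*d) for softmax attention over the prefix."""
    d = q_seq.shape[1]
    state = np.zeros((d, d))   # running sum of feature(k) outer v
    norm = np.zeros(d)         # running sum of feature(k), for normalization
    outputs = []
    for q, k, v in zip(q_seq, k_seq, v_seq):
        phi_k = np.maximum(k, 0.0)        # simple positive feature map
        state += np.outer(phi_k, v)       # fold this token into the summary
        norm += phi_k
        phi_q = np.maximum(q, 0.0)
        denom = phi_q @ norm + 1e-6       # avoid division by zero
        outputs.append((phi_q @ state) / denom)
    return np.array(outputs)
```

Full softmax attention recomputes over the whole prefix at every step, so cost grows with context; here the fixed-size `state` matrix carries the prefix, so a 4M-token context costs the same per decoded token as a 4k one.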
When the Stars of High-Performance Computing Shine
雷峰网· 2025-08-18 11:37
Core Viewpoint
- The article emphasizes the critical role of high-performance computing (HPC) in the development and optimization of large language models (LLMs), highlighting the synergy between hardware and software in achieving efficient model training and inference [2][4][19]

Group 1: HPC's Role in LLM Development
- HPC has become essential for LLMs, with a significant increase in researchers from HPC backgrounds contributing to system-software optimization [2][4]
- The evolution of HPC in China has gone through three main stages, from self-developed computers to the current era of supercomputers built on self-developed processors [4][5]
- Tsinghua University's HPC research institute has played a pioneering role in China's HPC development, focusing on software optimization for large-scale cluster systems [5][11]

Group 2: Key Figures in HPC and AI
- Zheng Weimin is recognized as a pioneer in China's HPC and storage fields, contributing significantly to scalable storage solutions and cloud computing platforms [5][13]
- Tsinghua's HPC research focus has shifted from traditional computing to storage optimization, driven by the growing importance of data handling in AI applications [12][13]
- Key researchers such as Chen Wenguang and Zhai Jidong have turned their attention to AI systems software, contributing frameworks for optimizing large models [29][31]

Group 3: Innovations in Model Training and Inference
- The "Eight Trigrams Furnace" (八卦炉) system for training large models significantly improved the efficiency of training processes [37][39]
- Frameworks such as FastMoE and SmartMoE have emerged to optimize the training of mixture-of-experts (MoE) models, showcasing ongoing advances in training techniques [41][42]
- The Mooncake and KTransformers systems enhance inference efficiency for large models, using shared storage to reduce computational costs [55][57]
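FastMoE and SmartMoE, mentioned above, optimize mixture-of-experts training at scale. The routing idea at the heart of any MoE layer can be sketched as a generic top-k gate in plain NumPy; this is not either framework's API, and all names below are illustrative:

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Route each token to its top-k experts, mixing outputs by gate weight.
    x: (tokens, d); gate_w: (d, n_experts); experts: list of (d, d) matrices."""
    logits = x @ gate_w                                # (tokens, n_experts)
    top = np.argsort(logits, axis=1)[:, -top_k:]       # indices of the k best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        weights = np.exp(chosen - chosen.max())
        weights /= weights.sum()                       # softmax over chosen experts only
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ experts[e])          # weighted expert output
    return out
```

Because each token activates only `top_k` of the experts, parameter count can grow far faster than per-token compute; the systems work in FastMoE/SmartMoE is largely about scheduling these sparse, uneven expert calls efficiently across devices.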
Chinese Scientists Develop New-Generation Neuromorphic Brain-Like Computer
Ren Min Ri Bao· 2025-08-15 21:46
Core Insights
- Zhejiang University has launched a new-generation neuromorphic brain-like computer named "Darwin Monkey," or "Wukong" [1]
- Built on specialized neuromorphic chips, the computer supports over 2 billion spiking neurons and over 100 billion synapses, closely approaching the scale of a macaque monkey's brain [1]

Neuromorphic Computing
- Neuromorphic computing applies the working mechanisms of biological neural networks to computer-system design, aiming to create low-power, highly parallel, efficient, and intelligent computing systems [1]
- "Wukong" is equipped with 960 Darwin 3rd-generation neuromorphic computing chips developed jointly by Zhejiang University and Zhejiang Lab [1]
- Each chip supports over 2.35 million spiking neurons and hundreds of millions of synapses, along with a dedicated instruction set for brain-like computing and an online learning mechanism [1]

Technological Breakthroughs
- "Wukong" has achieved breakthroughs in key technologies such as large-scale neural interconnection and integration architecture [1]
Multi-Synaptic Neuron Model Debuts: Domestic Team Builds New Engine for Brain-Like Computing, Published in Nature Communications
机器之心· 2025-08-15 03:29
Core Viewpoint
- The rapid development of artificial intelligence (AI) is accompanied by growing concern over high energy consumption, motivating the exploration of Spiking Neural Networks (SNNs) as a more biologically plausible and energy-efficient computational paradigm [2][3]

Summary by Sections

Current Challenges in SNNs
- The field lacks a spiking-neuron model that effectively balances computational efficiency and biological plausibility, a key limitation on the development and application of SNNs [3]
- Existing spiking-neuron models, such as Leaky Integrate-and-Fire (LIF), Adaptive LIF (ALIF), Hodgkin-Huxley (HH), and multi-compartment models, primarily focus on simulating neuronal dynamics and assume single-channel connections between neurons, leading to information loss in spatiotemporal tasks [3][9]

Introduction of the Multi-Synaptic Firing Neuron Model
- A new spiking-neuron model called the Multi-Synaptic Firing (MSF) neuron has been proposed, which can encode spatial and temporal information simultaneously without adding computational delay or significantly raising power consumption [5][10]
- The MSF model is inspired by the biological phenomenon of "multi-synaptic connections," in which a single axon establishes multiple synapses with different firing thresholds on the same target neuron, a feature observed across biological brains [9]

Theoretical and Experimental Findings
- Theoretical analysis shows the MSF neuron is a universal, more refined abstraction of neurons: traditional LIF neurons and classic ReLU units are special cases under specific parameters, revealing an intrinsic connection between ANNs and SNNs [10]
- The study provides an optimal synaptic-threshold selection scheme and an alternative parameter-optimization criterion that avoid gradient vanishing or explosion when training deep SNNs, enabling scaling without performance degradation [10][13]

Performance and Applications
- Experimental results show the MSF neuron can simultaneously encode spatial intensity distributions and temporal dynamics through independent frequency and temporal coding, outperforming traditional LIF neurons on various benchmark tasks [13]
- On continuous event-stream tasks, SNNs built on MSF neurons even surpassed ANNs with the same network structure while delivering higher energy efficiency [13][14]
- The MSF model has been successfully deployed on domestic neuromorphic hardware platforms, validating its compatibility in real-world scenarios such as event-driven object detection for autonomous driving [14][15]

Future Directions
- The research team aims to explore the application potential of MSF neurons across a broader range of tasks, advancing AI toward more intelligent, green, and sustainable development [19]
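The multi-synaptic firing idea described above can be sketched as a leaky integrate-and-fire membrane that emits a spike *count* per time step, one spike for each threshold crossed. This is a toy reading of the article's description, not the paper's exact equations; the thresholds, leak constant, and reset rule below are illustrative:

```python
def msf_neuron(inputs, thresholds=(0.5, 1.0, 1.5), tau=0.9):
    """Leaky integrate-and-fire membrane; at each step emit one spike per
    threshold crossed, so a single neuron outputs a small graded spike count
    (multi-synaptic firing) instead of a binary 0/1."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = tau * v + x                        # leaky integration of input current
        n = sum(v >= th for th in thresholds)  # how many firing thresholds crossed
        spikes.append(n)
        if n > 0:
            v -= n * thresholds[0]             # simple graded reset (illustrative)
    return spikes
```

With the threshold tuple collapsed to a single value, the loop reduces to a standard LIF neuron, mirroring the paper's claim that LIF is a special case of the MSF abstraction.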
Farewell, Transformer: Shanghai Jiao Tong University's First "Human-Brain-Like" Large Model Reshapes the Machine Learning Paradigm
机器之心· 2025-08-13 09:29
Core Viewpoint
- The article introduces BriLLM, a new language model inspired by human brain mechanisms that aims to overcome the limitations of traditional Transformer-based models: high computational demands, lack of interpretability, and context-size restrictions [3][8]

Group 1: Limitations of Current Models
- Transformer-based models face three main issues: high computational requirements, black-box interpretability, and context-size limitations [6][8]
- The self-attention mechanism has O(n²) time and space complexity, so computational cost grows steeply with input length [7]
- The internal logic of Transformers lacks transparency, making the model's decision-making process difficult to understand [7][8]

Group 2: Innovations of BriLLM
- BriLLM introduces a learning mechanism called SiFu (Signal Fully-connected Flowing), which replaces traditional prediction operations with signal transmission, mimicking the way neural signals propagate in the brain [9][13]
- The model architecture is based on a directed graph in which all nodes are interpretable, unlike traditional models that offer limited interpretability only at the input and output layers [9][19]
- BriLLM supports unlimited context processing without increasing model parameters, allowing efficient handling of long sequences [15][16]

Group 3: Model Specifications
- BriLLM has two versions, BriLLM-Chinese and BriLLM-English, each with a non-sparse size of 16.90 billion parameters [21]
- The sparse Chinese model has 2.19 billion parameters and the sparse English model 0.96 billion, a parameter reduction of roughly 90% [21]
- The design allows integration of multiple modalities, enabling the model to process not just language but also visual and auditory inputs [25][26]

Group 4: Future Prospects
- The team aims to develop a multi-modal brain-inspired AGI framework integrating perception and motion [27]
- BriLLM has been selected for funding under Shanghai Jiao Tong University's "SJTU 2030" plan, which supports groundbreaking research projects [27]
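SiFu's "signal flowing over a fully interpretable directed graph" can be caricatured as routing a signal vector along whichever outgoing edge responds most strongly, with the destination node read off as the next token. This is a heavily simplified sketch of the idea as summarized above; the edge transform, energy rule, and graph layout are assumptions for illustration, not BriLLM's actual mechanism:

```python
import numpy as np

def sifu_next_token(graph, current, signal):
    """Pick the successor whose edge transforms the signal with the most
    energy -- a toy stand-in for 'signal fully-connected flowing'.
    graph: {node: {successor: weight_matrix}}; signal: 1-D numpy vector."""
    best, best_energy, best_signal = None, -np.inf, None
    for nxt, W in graph[current].items():
        out = np.tanh(W @ signal)            # signal transformed along the edge
        energy = float(np.linalg.norm(out))  # route to the strongest response
        if energy > best_energy:
            best, best_energy, best_signal = nxt, energy, out
    return best, best_signal

def generate(graph, start, signal, steps=3):
    """Walk the graph: every node visited IS an output token, so the whole
    path is interpretable, and sequence length never grows the parameters."""
    seq = [start]
    for _ in range(steps):
        nxt, signal = sifu_next_token(graph, seq[-1], signal)
        seq.append(nxt)
    return seq
```

Note how the sketch reflects the two claims in the summary: interpretability comes from every node on the path being a visible token, and context length is unbounded because only the current signal vector is carried forward.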
SMIC Capacity in Short Supply; SK Hynix Reportedly Raises HBM4 Prices Sharply; Samsung Reportedly Delays DDR4 End-of-Production... Weekly Chip News Roundup (Aug 4-10)
芯世相· 2025-08-11 06:46
Key Events
- Trump announced that the U.S. will impose tariffs of approximately 100% on chips and semiconductors [7]
- WSTS reported that the global semiconductor market is expected to grow 18.9% year-on-year in the first half of 2025, reaching $346 billion [10]
- SMIC's Zhao Haijun stated that the capacity shortage will last at least until October this year [7][14]
- Samsung is reportedly extending its DDR4 production plan until December 2026 [7][18]
- SK Hynix has significantly raised the pricing for HBM4 [7][19]

Industry Trends
- The Chinese government is pushing for breakthroughs in key brain-machine interface chips, focusing on high-speed, low-power signal processing [9]
- Shanghai is accelerating the development of specialized chips for embodied intelligence [9]
- Global semiconductor sales in Q2 2025 are projected to reach $179.7 billion, up nearly 20% year-on-year [11]

Company Updates
- SMIC reported Q2 revenue of $2.21 billion, a 16% year-on-year increase, with a capacity utilization rate of 92.5% [13][14]
- Hua Hong Semiconductor achieved Q2 revenue of $566 million with a gross margin of 10.9% [13]
- Samsung is investing in a new 1c DRAM production line, aiming for a monthly capacity of 150,000 to 200,000 wafers by the middle of next year [15]

Market Dynamics
- The average trading price of PC DRAM products has risen for four consecutive months, with July's price reaching $3.90, a 50% month-on-month increase [19]
- The advanced IC substrate market is expected to reach $31 billion by 2030, driven by AI and other emerging applications [11]

Technological Advancements
- Zhejiang University announced a breakthrough in neuromorphic computing with the launch of a new-generation brain-like computer supporting over 2 billion neurons [21]
"Darwin Monkey" Is Out of the Cage! China's Brain-Like Computer Upends AI's Underlying Logic
Jin Tou Wang· 2025-08-06 06:19
Core Insights
- The world's first brain-like computer of its kind, "Darwin Monkey," has been unveiled by Chinese engineers; comprising over 2 billion artificial neurons, it aims to advance brain-inspired artificial intelligence (AI) [1]
- The system is built on 960 Darwin 3 brain-like computing chips capable of generating over 100 billion synapses, marking a significant step toward more advanced brain-like intelligence [1]
- Darwin Monkey has successfully completed tasks such as content generation, logical reasoning, and mathematics using a large brain-like model developed by a pioneering Chinese AI company [1]

Group 1
- Darwin Monkey's neural and synaptic resources can simulate various animal brains, including those of macaques, mice, and zebrafish, potentially advancing brain-science research [1]
- Neuromorphic computing, also known as brain-like computing, draws inspiration from the brain's neural networks and processing capabilities to achieve more efficient information processing [1]
- The system's ability to simulate cognitive functions such as decision-making, learning, and memory can enable faster, more adaptive problem-solving as well as more advanced AI systems [1]

Group 2
- The Darwin 3 chip, developed jointly by Zhejiang University and Zhejiang Lab, supports over 2.35 million spiking neurons and hundreds of millions of synapses, featuring a dedicated brain-like computing instruction set and an online learning mechanism [2]
- Under typical operating conditions the system consumes approximately 2,000 watts, showcasing its low power consumption [2]
- The director of the National Key Laboratory of Brain-Machine Intelligence at Zhejiang University stated that Darwin Monkey's large scale, high parallelism, and low power will provide a new computing paradigm for existing computing scenarios [2]
Zhejiang University Unveils Neuromorphic Brain-Like Computer "Wukong"
Hang Zhou Ri Bao· 2025-08-06 03:27
Core Insights
- The launch of Darwin Monkey ("Wukong") represents a significant advance in neuromorphic computing, achieving over 2 billion neurons and placing China at the international forefront of the technology [1][2]
- The system is designed to address the high energy consumption and computational demands of existing deep networks and large models, providing a new computational paradigm [2]

Group 1: Technology and Features
- Darwin Monkey consists of 15 blade-type neuromorphic servers, each integrating 64 Darwin 3rd-generation neuromorphic chips, closely mimicking the neuron count of a macaque brain [1]
- The system operates at approximately 2,000 watts under typical conditions, showcasing its low power consumption [1]
- A new-generation Darwin neuromorphic operating system has been developed to optimize resource management and enable efficient concurrent scheduling of neuromorphic tasks [1]

Group 2: Applications and Implications
- The system can perform intelligent tasks such as logical reasoning, content generation, and mathematical problem-solving through the DeepSeek application [1]
- It serves as a simulation tool for neuroscientists, allowing modeling of various animal brains and providing new experimental methods while reducing the need for experiments on live animals [2]
- Its brain-like operating mechanisms and processing speeds surpassing the human brain are expected to accelerate the development of general artificial intelligence [2]