Artificial Neural Networks
A Threat Assessment Model for Air Defense Systems Using Artificial Neural Networks
2026-02-27 07:40
Investment Rating
- The report does not explicitly provide an investment rating for the industry.

Core Insights
- The study introduces a dynamic threat assessment model for air defense systems that uses artificial neural networks (ANN) to speed up decision-making, improve accuracy, and reduce human error [2][3]
- The model incorporates 26 distinct threat criteria, significantly more parameters than traditional static models, which typically use far fewer criteria [3][21]
- Model performance is validated with mean square errors (MSE) ranging from 0.0005 to 0.0072 and a correlation coefficient (R) exceeding 95%, indicating high accuracy in threat-level predictions [3][56]

Summary by Sections

Introduction
- The study emphasizes the importance of automating threat assessment and weapon assignment in air defense systems to improve decision-making under time constraints [2]
- A novel "Combined Geometric Threat Score" is developed to align threat values with weighted scores based on the significance of each criterion [2]

Literature Review
- The literature covers a variety of threat assessment methods, categorized into four main types, and highlights the need for a combined approach to improve performance [5][6]
- The study identifies gaps in existing research, particularly the limited number of criteria used in previous models [3][18]

Methodology
- Data collection covered 26 criteria, with a total of 5,798 data points compiled from 56 studies, including both readily available and imputed data [21][24]
- The model architecture consists of an input layer for normalized criteria, a hidden layer with a variable number of neurons, and an output layer producing threat scores [42]

Simulation and Empirical Results
- The model achieved optimal results with a 70% training, 10% validation, and 20% test split, demonstrating high accuracy and efficiency [56][66]
- Comparative analysis shows that this study considered more criteria than previous studies, yielding faster and more effective threat assessment [66][69]

Future Directions
- The study suggests that future research could focus on automating threat-based assignments for air defense systems, enhancing operational efficiency [73]
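The architecture summarized above (26 normalized input criteria, one hidden layer, a single threat-score output, and a 70/10/20 data split) can be sketched as a small multilayer perceptron. The sketch below is illustrative only: it uses synthetic stand-in data, an assumed hidden-layer size of 10, and sigmoid activations, none of which come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 26 normalized threat criteria per track (the
# paper's criterion count). The "true" threat score here is an arbitrary
# squashed weighted sum, purely for illustration.
n_samples, n_criteria = 500, 26
X = rng.random((n_samples, n_criteria))
true_w = rng.random(n_criteria)
y = 1.0 / (1.0 + np.exp(-((X - 0.5) @ true_w)))  # threat score in (0, 1)
y = y.reshape(-1, 1)

# 70/10/20 train/validation/test split, as in the study.
i_tr, i_va = int(0.7 * n_samples), int(0.8 * n_samples)
X_tr, y_tr = X[:i_tr], y[:i_tr]
X_te, y_te = X[i_va:], y[i_va:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Input layer -> 10 hidden neurons (assumed size) -> 1 threat-score output.
W1 = rng.normal(0, 0.5, (n_criteria, 10)); b1 = np.zeros(10)
W2 = rng.normal(0, 0.5, (10, 1));          b2 = np.zeros(1)

lr = 0.5
for epoch in range(2000):
    # Forward pass
    h = sigmoid(X_tr @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y_tr
    # Backward pass for a mean-squared-error loss
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X_tr); b2 -= lr * d_out.mean(0)
    W1 -= lr * X_tr.T @ d_h / len(X_tr); b1 -= lr * d_h.mean(0)

# Report the same metrics the study uses: MSE and correlation R on held-out data.
test_pred = sigmoid(sigmoid(X_te @ W1 + b1) @ W2 + b2)
mse = float(np.mean((test_pred - y_te) ** 2))
r = float(np.corrcoef(test_pred.ravel(), y_te.ravel())[0, 1])
print(f"test MSE = {mse:.4f}, R = {r:.3f}")
```

On this toy target the network reaches a small MSE and a high R, mirroring the shape (though not the values) of the metrics reported in the study.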
Musk Wasn't Bluffing: Tesla Can Now Recognize Hand Gestures, but China Will Have to Keep Waiting!
Sou Hu Cai Jing· 2026-02-22 15:28
Core Viewpoint
- Tesla's recent advancements in its Full Self-Driving (FSD) system, particularly the ability to recognize hand gestures, represent a significant milestone in autonomous driving technology [3][6]

Group 1: Technological Advancements
- Tesla's FSD system can now interpret human gestures, allowing it to navigate complex driving situations, such as narrow streets with parked cars [3][6]
- The latest version, FSD v14.2, utilizes an end-to-end artificial neural network, enhancing the vehicle's ability to perceive and understand its environment [6][7]
- The FSD system has accumulated over 80 billion miles of driving data, resulting in a significant safety improvement: a major collision every 5.3 million miles, compared with the average of 0.66 million miles for human drivers [9]

Group 2: Market Implications
- Elon Musk has indicated that as FSD capabilities improve, the $99-per-month subscription fee may increase, suggesting a potential for higher revenue streams [12]
- Tesla is actively working on adapting its FSD technology for the Chinese market, with plans for a local training center to better understand local traffic conditions and regulations [14][16]
- The anticipation of FSD's full capabilities in China indicates a strategic move to enhance Tesla's market position in a key region [16]
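The safety comparison quoted above reduces to simple arithmetic; a quick sanity check of the ratio implied by the two figures:

```python
# Back-of-the-envelope check of the safety figures quoted above:
# one major collision per 5.3 million FSD miles vs. one per 0.66 million
# miles for the average human driver (both figures as cited in the article).
fsd_miles_per_collision = 5.3e6
human_miles_per_collision = 0.66e6

ratio = fsd_miles_per_collision / human_miles_per_collision
print(f"FSD covers roughly {ratio:.1f}x more miles per major collision")
```

The quoted numbers imply roughly an eightfold difference in miles driven per major collision.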
Evolving Like a Large Model
Tencent Research Institute · 2026-01-05 08:44
Group 1
- The core idea of the article emphasizes the evolution of AI models, particularly the transition from early symbolic AI to deep learning and the success of Transformer models, suggesting that this evolution can inform human cognitive development [1]
- The article discusses the importance of defining a clear objective function in machine learning, which guides the optimization of models, and compares this to the necessity of setting long-term goals in personal development [3][4]
- It highlights the concept of a "local optimum" in both machine learning and personal growth, warning against settling for short-term achievements that may limit future opportunities [4][5]

Group 2
- The article references Abraham Maslow's insights on self-actualization and the fear of success, suggesting that individuals often hesitate to pursue greatness due to self-doubt and societal pressures [5]
- It recounts Sam Altman's experience in establishing OpenAI's ambitious goal of achieving AGI, illustrating how bold objectives can attract talent and drive innovation [6]
- The importance of building a personal knowledge system is emphasized, as it enables individuals to engage deeply with the world and develop irreplaceable skills in the age of AI [7]

Group 3
- The article explains the process of stochastic gradient descent (SGD) in machine learning, which involves iterative optimization based on error correction, and draws parallels to how humans learn from mistakes [10][12]
- It discusses the significance of embracing errors as a means of growth, suggesting that mistakes provide valuable feedback that can enhance cognitive flexibility and adaptability [12][13]
- The concept of "random exploration" is presented as a strategy for personal development, encouraging individuals to seek diverse experiences and knowledge to avoid cognitive stagnation [15][16]

Group 4
- The article stresses the importance of attention in learning, likening it to the attention mechanism in Transformers, and advocates focusing on high-quality data and relationships to enhance understanding [19][20]
- It advises against rigid rule-based learning, promoting the idea of learning through examples and experiences, which allows for deeper understanding and adaptability [22][23]
- The article concludes with the notion of selective forgetting as a cognitive strategy, emphasizing the need to prioritize valuable information while letting go of less useful knowledge [25][26]
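The SGD process the article draws its analogy from, iterative error correction over randomly ordered samples, can be sketched in a few lines. The 1-D linear model, data, and learning rate below are illustrative assumptions, not anything from the article:

```python
import random

random.seed(1)

# Noiseless samples from the line y = 2x + 1; the goal is to recover
# w = 2 and b = 1 purely by correcting per-sample errors.
data = [(x, 2.0 * x + 1.0) for x in [i / 10 for i in range(-20, 21)]]

w, b, lr = 0.0, 0.0, 0.05
for epoch in range(200):
    random.shuffle(data)          # visit samples in random order each pass
    for x, y in data:
        err = (w * x + b) - y     # how wrong the current model is here
        w -= lr * err * x         # nudge each parameter against its
        b -= lr * err             # contribution to the error
print(f"w = {w:.3f}, b = {b:.3f}")
```

Each mistake produces a small correction, and thousands of small corrections converge on the true parameters, which is exactly the "learning from errors" loop the article maps onto personal growth.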
21 Book Review | Hinton, the "Father of Deep Learning": A Leap of Faith
21 Shi Ji Jing Ji Bao Dao · 2025-07-31 09:32
Group 1
- Geoffrey Hinton, known as the "father of deep learning," received the Nobel Prize in Physics in 2024 for his foundational discoveries in machine learning using artificial neural networks [1]
- Hinton's journey in artificial intelligence faced significant challenges, including skepticism from academia during the AI winter, yet he persisted and contributed to the emergence of large models in AI [1][10]
- The narrative highlights the importance of belief and perseverance in the face of adversity, as Hinton's commitment to neural networks ultimately led to breakthroughs in AI [10][11]

Group 2
- Liu Jia, a professor at Tsinghua University, published a book titled "General Artificial Intelligence: Reconstruction of Cognition, Education, and Ways of Living," which discusses Hinton's story and the underlying logic of persistence in AI research [2][9]
- The book aims to explore the connections between brain science and artificial intelligence, suggesting that this integration may aid in achieving true general artificial intelligence [2]
- Hinton's early academic struggles and eventual return to AI research serve as a backdrop for understanding the evolution of AI and the significance of his contributions [6][7]
A New Type of Transistor
Semiconductor Industry Observation · 2025-04-04 03:46
Core Viewpoint
- Researchers from the National University of Singapore (NUS) have demonstrated that a single standard silicon transistor can mimic the behavior of biological neurons and synapses, bringing hardware-based artificial neural networks (ANN) closer to reality [1][2]

Group 1: Research Findings
- The NUS research team, led by Professor Mario Lanza, has provided a scalable and energy-efficient solution for hardware-based ANNs, making neuromorphic computing more feasible [1][2]
- The study, published in Nature on March 26, 2025, highlights that the human brain, with approximately 90 billion neurons and around 100 trillion connections, is far more energy-efficient than electronic processors [1][2]

Group 2: Neuromorphic Computing
- Neuromorphic computing aims to replicate the brain's computational capabilities and energy efficiency, requiring a redesign of system architecture so that memory and computation occur in the same location [2]
- Current neuromorphic systems face challenges due to the need for complex multi-transistor circuits or emerging materials that have not been validated for large-scale manufacturing [2]

Group 3: Technological Advancements
- The NUS team has shown that a single standard silicon transistor can replicate neural firing and synaptic weight changes when the resistance of its terminal is adjusted to specific values [3]
- They developed a dual-transistor cell called "Neuro-Synaptic Random Access Memory" (NS-RAM), which can operate in either a neuron state or a synapse state [3]
- The approach utilizes commercial CMOS technology, ensuring scalability, reliability, and compatibility with existing semiconductor manufacturing processes [3]

Group 4: Performance and Applications
- The NS-RAM cell demonstrated low power consumption, stable performance over many operational cycles, and consistent, predictable behavior across devices, all essential for building reliable ANN hardware for practical applications [3]
- This breakthrough marks a significant advancement in the development of compact, energy-efficient AI processors, enabling faster and more responsive computing [3]
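For intuition about the neuron-like firing behavior such a device reproduces, here is a generic leaky integrate-and-fire model in software. This is a textbook illustration of threshold firing only, not the NUS transistor physics or the NS-RAM circuit; the threshold and leak values are arbitrary assumptions:

```python
def lif_spikes(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: accumulate input, leak over time,
    and emit a spike whenever the potential crosses the threshold."""
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i          # leaky integration of incoming drive
        if v >= threshold:        # potential crosses the firing threshold...
            spikes.append(1)      # ...so the neuron fires
            v = 0.0               # and resets
        else:
            spikes.append(0)
    return spikes

# A steady drive yields a regular spike train; stronger drive fires more often.
weak = lif_spikes([0.2] * 20)
strong = lif_spikes([0.6] * 20)
print(sum(weak), sum(strong))
```

The stronger input produces proportionally more spikes over the same window, the same integrate-leak-fire dynamic that the single-transistor device realizes directly in silicon.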