Machinery Industry Research: Bullish on Commercial Aerospace, Robotics, Nuclear Fusion, Shipbuilding, and Construction Machinery
SINOLINK SECURITIES · 2026-01-11 05:53
Market Review
Sector performance this week: over the five trading days of last week (2026/1/5–2026/1/9), the SW Machinery & Equipment index rose 5.39%, ranking 10th among the 31 Shenwan first-level industry categories; the CSI 300 index rose 2.79% over the same period. Performance in 2026 to date: the SW Machinery & Equipment index is up 5.39%, ranking 10th among the 31 Shenwan first-level industry categories; the CSI 300 index is up 2.79% over the same period.
Core Viewpoints
Investment recommendation: see "Stock Portfolio". Risk warnings: risk of macroeconomic changes; risk of raw material price fluctuations; risk of policy changes.
Ceres-1 (谷神星一号) will carry out 2026's first commercial aerospace launch; we expect 2026 to be the breakout year for domestic rocket launch volume. The Ceres-1 Haiyao-7 carrier rocket is scheduled to launch from the sea off Rizhao between January 16 and January 18, marking private aerospace's first rocket launch of 2026. According to the ITU website, China filed with the ITU for more than 200,000 satellite frequency-orbit resources in December 2025; we expect the urgent need to deploy domestic satellites at scale to push domestic rocket launch volumes higher. Entering 2026, recovery attempts will enter an intensive phase: Zhuque-3 (朱雀三号) and Long March 12A continue to push toward recovery; new models such as Long March 12B, Pallas-1 (智神星一号), Hyperbola-3 (双曲线三号), Long March 10B, Nebula-1 (星云一号), Gravity-2 (引力二号), and Yuanxingzhe-1 (元行者一号) will make maiden flights one after another; Tianlong-3 (天龙三号) and Lijian-2 (力箭二号) reusable ...
The AI Singularity and the End of Moore's Law
半导体芯闻 · 2025-03-10 10:23
Core Viewpoint
- The article discusses the end of Moore's Law and the rise of artificial intelligence (AI), highlighting the shift from traditional computing to AI-driven systems that can self-improve and process vast amounts of data more efficiently [1][3][6].
Group 1: The End of Moore's Law
- Moore's Law, which predicted that the number of transistors on a chip would double every two years, is losing its effectiveness as transistors reach atomic limits, making further miniaturization costly and complex [1][3].
- Traditional computing faces challenges such as heat accumulation, power limitations, and rising chip production costs, which hinder further advancement [3][4].
Group 2: Rise of AI and Self-Learning Systems
- AI is not constrained by the need for smaller transistors; instead, it uses parallel processing, machine learning, and specialized hardware to enhance performance [3][4].
- Demand for AI computing power is increasing rapidly, with AI capabilities growing fivefold annually, significantly outpacing the doubling every two years predicted by Moore's Law [3][6].
- Companies like Tesla, Nvidia, Google DeepMind, and OpenAI are leading the transition with powerful GPUs, custom AI chips, and large-scale neural networks [2][4].
Group 3: Approaching the AI Singularity
- The AI singularity refers to a point where AI surpasses human intelligence and begins self-improvement without human input, potentially occurring as early as 2027 [2][6].
- Experts differ on when Artificial General Intelligence (AGI), and subsequently Artificial Superintelligence (ASI), will be achieved, with predictions ranging from 2027 to 2029 [6][7].
Group 4: Implications of ASI
- ASI has the potential to revolutionize various industries, particularly healthcare, economics, and environmental sustainability, by accelerating drug discovery, automating repetitive tasks, and optimizing resource management [8][9][10].
- However, the rapid advancement of ASI also poses significant risks, including the potential for AI to make decisions that conflict with human values, leading to unpredictable or dangerous outcomes [10][12].
Group 5: Safety Measures and Ethical Considerations
- Organizations like OpenAI and DeepMind are actively researching AI safety measures to ensure alignment with human values, including reinforcement learning from human feedback [12][13].
- Ethical guidelines and regulatory frameworks are critical to guiding AI development responsibly and ensuring it benefits humanity rather than becoming a threat [13][14].