Edge AI Chips
Malaysia Launches Its First AI Device Chip, Joining the Global Race
Ministry of Commerce Website · 2025-08-26 17:42
(Original title: Malaysia Launches Its First AI Device Chip, Joining the Global Race) According to an August 26 report in the Bangkok Post, Malaysia unveiled its own AI processor on Monday, joining the global race for the hottest electronic components in artificial intelligence development. At an industry association event attended by senior government officials, the Malaysia Semiconductor Industry Association presented the MARS1000 chip from local design firm SkyeChip. In a statement, the association said the chip is the country's first edge AI processor, meaning it is a component that drives devices, from cars to robots, from within. The Southeast Asian nation is seeking a larger role in the global chip supply chain and looking to capitalize on the artificial intelligence (AI) boom. It is already a key global player in semiconductor packaging and a manufacturing hub for equipment suppliers including Lam Research Corp. It is also a thriving hub for AI data centers, backed by major investments from companies including Oracle Corp and Microsoft Corp. Officials in Kuala Lumpur are carrying out a multi-year mission to strengthen Malaysia's capabilities in chip design, wafer fabrication, and AI data centers. The government, led by Prime Minister Anwar, has pledged at least 25 billion ringgit (192 billion baht) to move up the global value chain. The Trump administration has proposed restrictions targeting Malaysia ...
Global Edge AI Chip Market: Producer Rankings and Market Shares
QYResearch · 2025-08-25 09:38
Rapidly growing demand for low-latency, real-time intelligent processing is the main driver of the edge AI chip market. As autonomous driving, industrial automation, smart security, wearables, and similar applications depend ever more heavily on instant decision-making, the limitations of traditional cloud computing architectures in bandwidth, latency, and privacy have become increasingly apparent, pushing AI processing capability toward the endpoint. By performing efficient inference and data processing locally, edge AI chips markedly improve system response times, reduce network load, and strengthen data security, making them a key engine for the adoption and deployment of AI applications.

Major edge AI chip producers worldwide include NVIDIA, Ambarella, Horizon Robotics, Intel, AMD, Xilinx, NXP, Qualcomm, Google, Black Sesame Technologies, and STMicroelectronics. In 2024, the global top ten vendors held roughly 79.0% of the market.

Edge AI chips are specialized processors designed to perform artificial intelligence (AI) computations directly on edge devices (such as smartphones, IoT devices, drones, and autonomous vehicles) without relying on cloud servers. They enable real-time data processing and decision-making at the source where data is generated, reducing latency, bandwidth usage, and dependence on cloud infrastructure. According to the latest report from the QYResearch research team, "2025-2031 Global and China Edge AI Chip Market Status and Future Development ...
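The latency argument above can be made concrete with a back-of-envelope sketch. All figures below (network round-trip time, per-inference compute times) are illustrative assumptions, not measurements from the report:

```python
# Back-of-envelope comparison of cloud vs. on-device (edge) inference latency.
# All numbers are illustrative assumptions, not measured values.

def cloud_latency_ms(rtt_ms: float, server_infer_ms: float) -> float:
    """End-to-end latency when data is shipped to a cloud server: one
    network round trip plus inference time on a datacenter GPU."""
    return rtt_ms + server_infer_ms

def edge_latency_ms(local_infer_ms: float) -> float:
    """End-to-end latency when inference runs locally on the device."""
    return local_infer_ms

# Assumed figures: 60 ms network round trip, 10 ms inference on a
# datacenter GPU, 25 ms on a slower embedded NPU.
cloud = cloud_latency_ms(rtt_ms=60.0, server_infer_ms=10.0)  # 70.0 ms
edge = edge_latency_ms(local_infer_ms=25.0)                  # 25.0 ms

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")
```

Even though the datacenter GPU is faster per inference, the network round trip dominates under these assumptions, which is why latency-sensitive workloads such as autonomous driving and industrial control push inference to the endpoint.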
Shangdao Venture Capital Network · Member News | Xindongli Technology Completes Nearly RMB 100 Million B2 Funding Round
Sohu Finance · 2025-08-24 16:33
On August 24, 2025, Shangdao Venture Capital Network (商道创投网) learned from official sources that Zhuhai Xindongli Technology Co., Ltd. (芯动力科技) recently completed a nearly RMB 100 million B2 funding round, with Feitu Ventures as the sole investor.

Company profile: Founded in 2017, Xindongli Technology operates R&D centers in Zhuhai, Shenzhen, Xi'an, and Silicon Valley. The company created the reconfigurable parallel processor (RPP) architecture, which combines the general-purpose programmability of a GPU with the energy efficiency of an NPU while remaining compatible with the CUDA ecosystem, offering high-compute, low-power, easy-to-migrate one-stop chip and accelerator-card solutions for the edge AI inference market.

How will this round be used? CEO Li Yuan said the funds will mainly go toward ramping RPP chip volume production, taping out the second-generation 7 nm high-efficiency product, and scaling deployment with flagship customers in edge computing scenarios, ensuring the hardware-software platform lands quickly in security, industrial vision, robotics, and other fields.

Why invest? Zhao Ge, managing partner at Feitu Ventures, said the RPP architecture breaks the traditional boundary between GPU and NPU, with measured energy efficiency more than three times that of comparable products; the team combines Tsinghua academic backgrounds with AMD and NVIDIA engineering experience, has a clear commercialization roadmap, and has already won POC orders from several leading customers, giving it the potential to overtake incumbents.

Investment outlook: Wang Shuai, founder of Shangdao Venture Capital Network, said that with the Ministry of Industry and Information Technology's "Action Plan for High-Quality Development of Computing Infrastructure" just taking effect, Xindongli ...
2025: Who Will Be King of Edge AI Chip Architectures?
36Kr · 2025-05-22 11:12
Core Insights
- The semiconductor industry is undergoing significant structural changes driven by the rise of edge generative AI, marking 2025 as the "Year of Edge Generative AI" [1]
- The global edge AI chip market is projected to grow by 217% year-on-year in Q1 2025, outpacing the cloud AI chip market [1]
- Different architectures such as GPU, NPU, and FPGA are evolving along distinct paths, reflecting varying technological philosophies among semiconductor companies regarding future computing paradigms [1]

GPU Insights
- General-purpose GPUs have excelled in AI applications due to their strong sparse computing capabilities and programmability [2]
- Edge hardware must handle multiple tasks beyond single-model inference, necessitating a global perspective in AI design [2]
- Power efficiency (TOPS/W) will become more critical than absolute performance (TOPS) in future edge AI applications [2]
- Imagination's E-series GPU IP has achieved a 400% performance increase to 200 TOPS with a 35% improvement in power efficiency [3]

NPU Insights
- NPUs are increasingly valuable in edge computing, addressing the limitations of traditional processors like CPUs and GPUs in power consumption and latency [4]
- NPUs excel at accelerating AI model inference, significantly improving execution efficiency in real-time applications such as object detection and voice recognition [4]
- NXP's i.MX 95 series processor integrates an NPU with 2 TOPS, achieving a fourfold speed increase in image recognition tasks while reducing power consumption by 30% [4]

FPGA Insights
- FPGAs play a unique role in edge AI due to their reconfigurability and low-latency characteristics [5]
- FPGAs can handle large data processing tasks, such as 8K video, more efficiently than CPUs and GPUs [5]
- Development barriers for FPGAs are falling as vendors provide specialized IP modules and complete solutions [6]

Vendor Strategies
- Companies like STMicroelectronics and Renesas are combining MCU and NPU strategies to capture IoT market share [7]
- Imagination is leveraging its GPU architecture to support complex automotive applications, while NVIDIA's Jetson series is popular among robot developers [7]
- Altera focuses on data centers and edge inference markets, while Lattice targets low-power FPGA applications in smart cameras and sensors [8]

M&A Activities
- STMicroelectronics acquired DeepLite to enhance its AI algorithm optimization capabilities [9]
- Qualcomm's acquisition of Edge Impulse aims to simplify AI development for edge devices [10]
- NXP's acquisition of Kinara strengthens its position in high-performance AI inference for smart automotive and industrial applications [10]

Conclusion
- The semiconductor industry is experiencing profound changes driven by edge generative AI, with diverse architectures exploring future computing forms [11]
- The evolution of technology is not linear but adaptive, requiring a combination of software and hardware advantages for efficient and flexible system solutions [11]
- Companies are accelerating resource integration through mergers and acquisitions, enhancing their competitive edge in a rapidly changing market [11]
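The point above that TOPS/W will matter more than raw TOPS at the edge can be illustrated with a toy calculation. The chip parameters below are hypothetical, not the Imagination or NXP parts named in the article: under a fixed power envelope, sustained throughput is capped by efficiency times budget, so a lower-peak but more efficient part can deliver more.

```python
# Toy model: sustained throughput under a fixed power envelope.
# Chip parameters below are hypothetical, not real products.

def deliverable_tops(peak_tops: float, tops_per_watt: float,
                     power_budget_w: float) -> float:
    """Throughput is capped both by the silicon's peak rate and by
    what the power budget allows (efficiency * budget)."""
    return min(peak_tops, tops_per_watt * power_budget_w)

BUDGET_W = 5.0  # e.g. a fanless smart camera or wearable

# Chip A: high peak, modest efficiency. Chip B: lower peak, high efficiency.
chip_a = deliverable_tops(peak_tops=200.0, tops_per_watt=10.0,
                          power_budget_w=BUDGET_W)  # min(200, 50) = 50.0
chip_b = deliverable_tops(peak_tops=80.0, tops_per_watt=20.0,
                          power_budget_w=BUDGET_W)  # min(80, 100) = 80.0

print(f"chip A delivers {chip_a} TOPS, chip B delivers {chip_b} TOPS")
```

In this sketch, the part with the smaller headline TOPS number sustains 60% more throughput inside the 5 W envelope, which is exactly why edge vendors compete on TOPS/W.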
The AI Inference Era: Edge Computing Becomes the New Competitive Focus
Huan Qiu Wang · 2025-03-28 06:18
Core Insights
- The competition in the AI large model sector is shifting towards AI inference, marking the beginning of the AI inference era, with edge computing emerging as a new battleground in this field [1][2]

AI Inference Era
- Major tech companies have been active in the AI inference space since last year, with OpenAI launching the o1 inference model, Anthropic introducing the "Computer Use" agent feature, and DeepSeek's R1 inference model gaining global attention [2]
- NVIDIA showcased its first inference model and software at the GTC conference, indicating a clear shift in focus towards AI inference capabilities [2][4]

Demand for AI Inference
- According to a Barclays report, demand for AI inference computing is expected to rise rapidly, potentially accounting for over 70% of the total computing demand for general artificial intelligence and surpassing training computing needs by 4.5 times [4]
- NVIDIA founder Jensen Huang predicts that the computational power required for inference could exceed last year's estimates by 100 times [4]

Challenges and Solutions in AI Model Deployment
- Before DeepSeek's arrival, deploying and training large AI models faced challenges such as high capital requirements and extensive computational resources, making it difficult for small and medium enterprises to build their own ecosystems [4]
- DeepSeek's approach uses large-scale cross-node expert parallelism and reinforcement learning to reduce reliance on manual input and to offset data deficiencies, while its open-source model sharply lowers deployment costs, bringing requirements down to the range of hundred-card to thousand-card GPU clusters [4]

Advantages of Edge Computing
- AI inference requires low latency and proximity to end users, making edge or edge-cloud environments advantageous for running such workloads [5]
- Edge computing enhances data interaction and AI inference efficiency while ensuring information security, as it is geographically closer to users [5][6]

Market Competition and Player Strategies
- The AI inference market is rapidly evolving, with key competitors including AI hardware manufacturers, model developers, and AI service providers focusing on edge computing [7]
- Companies like Apple and Qualcomm are developing edge AI chips for applications in AI smartphones and robotics, while Intel and Alibaba Cloud offer edge AI inference solutions to improve speed and efficiency [7][8]

Case Study: Wangsu Technology
- Wangsu Technology, a leading player in edge computing, has been exploring the field since 2011 and has built a comprehensive layout from resources to applications [8]
- With nearly 3,000 global nodes and abundant GPU resources, Wangsu can improve model interaction efficiency by a factor of 2 to 3 [8]
- The company's edge AI platform has been applied across industries including healthcare and media, demonstrating the potential of AI inference to drive innovation and efficiency [8]
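The Barclays figures quoted above are internally consistent, as a quick check shows: if inference demand is 4.5 times training demand, inference's share of total compute is 4.5 / (1 + 4.5) ≈ 82%, comfortably above the "over 70%" cited.

```python
# Sanity check on the quoted demand figures: if inference compute is
# 4.5x training compute, its share of the total is 4.5 / (1 + 4.5).
inference_multiple = 4.5
share = inference_multiple / (1.0 + inference_multiple)

print(f"inference share of total compute: {share:.1%}")
```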
Market Competition and Player Strategies - The AI inference market is rapidly evolving, with key competitors including AI hardware manufacturers, model developers, and AI service providers focusing on edge computing [7]. - Companies like Apple and Qualcomm are developing edge AI chips for applications in AI smartphones and robotics, while Intel and Alibaba Cloud are offering edge AI inference solutions to enhance speed and efficiency [7][8]. Case Study: Wangsu Technology - Wangsu Technology, a leading player in edge computing, has been exploring this field since 2011 and has established a comprehensive layout from resources to applications [8]. - With nearly 3,000 global nodes and abundant GPU resources, Wangsu can significantly improve model interaction efficiency by 2 to 3 times [8]. - The company's edge AI platform has been applied across various industries, including healthcare and media, demonstrating the potential for AI inference to drive innovation and efficiency [8].