AI ASIC
105.21 yuan per share: 37 institutions vie for the allocation
After publishing its inquiry-based transfer announcement last Friday, VeriSilicon (芯原股份) had already locked in preliminary transfer pricing by Monday of this week. On the evening of August 25, VeriSilicon disclosed an indicative announcement on the pricing of its shareholders' inquiry-based transfer: based on the inquiry subscriptions received on August 25, the preliminary transfer price was set at 105.21 yuan per share. Shares acquired by transferees through the inquiry-based transfer may not be resold within six months of the transfer.

Inquiry-based transfer preliminarily priced within four days

According to the company's disclosure, 37 institutional investors submitted quotes in this inquiry-based transfer, spanning fund management companies, insurance companies, securities firms, private fund managers, qualified foreign investors, and other professional institutional investors. The shares proposed for transfer have been fully subscribed.

| No. | Proposed transferring shareholder | Shares to be transferred | % of total share capital | % of shares held | Reason for transfer |
| --- | --- | --- | --- | --- | --- |
| 1 | VeriSilicon Limited | 15,771,398 | 3.000% | 20.84% | Own funding needs |
| 2 | 共青城时兴投资合伙企业(有限合伙) | 2,986,492 | 0.568% | 12.18% | Own funding needs |
| 3 | 嘉兴海橙创业投资合伙 ... | | | | |
Soochow Securities Morning Meeting Notes - 20250808
Soochow Securities· 2025-08-08 01:32
Macro Strategy
- The report analyzes three historical cases of capacity adjustment over the past century, highlighting the negative feedback loop created by capacity imbalance and the importance of government intervention in restoring balance [1][12]
- It emphasizes that supply-demand rebalancing requires simultaneous efforts to control capacity, restore credit, and stabilize employment, rather than relying solely on supply-side or demand-side policies [1][12]

Fixed Income
- The new bond value-added tax regulation enhances the relative attractiveness of credit bonds, as their interest income is not subject to the tax, while government bonds lose their tax exemption [2][3][13]
- The tax adjustment is expected to narrow the yield spread between credit bonds and other interest rate bonds by approximately 10 basis points, with the relative value of credit bonds potentially improving by 5-15 basis points for proprietary trading departments [2][3][14]

Industry Analysis
- The asset operation and maintenance (O&M) industry is gaining importance now that capital formation has peaked, with growth driven more by product development than by personnel or capital [4][15]
- The O&M market is projected to grow significantly, from a current size of approximately 2.44 trillion to around 5.5 trillion in ten years (a back-of-envelope check of the implied growth rate follows this summary) [4][15]
- The report suggests focusing on companies such as Borui Data, Rongzhi Rixin, and Xianheng International, which are positioned to benefit from increasing demand for high-quality O&M services [4][15]

Electronic Industry
- The ASIC business model requires service providers to have strong IP design and SoC design capabilities, with major players such as Broadcom and Marvell holding significant market shares [5][16][17]
- The custom chip market is projected to reach $55.4 billion by 2028, a compound annual growth rate (CAGR) of 53% from 2023 to 2028, driven by demand for AI acceleration [5][16][17]
- The report highlights potential margin pressure in the custom chip business due to increased competition from domestic firms entering the AI ASIC market [5][16][17]
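The O&M market figures quoted above (roughly 2.44 trillion today, about 5.5 trillion in ten years) imply a growth rate the report does not state explicitly. A minimal Python sketch backs it out; the endpoint values come from the summary, and the arithmetic is only an illustrative cross-check.

```python
# Back out the compound annual growth rate implied by the O&M market figures
# quoted in the summary above (inputs from the report; arithmetic illustrative).

def implied_cagr(start: float, end: float, years: int) -> float:
    """CAGR implied by a start value, an end value, and a horizon in years."""
    return (end / start) ** (1 / years) - 1

om_cagr = implied_cagr(start=2.44, end=5.5, years=10)
print(f"Implied O&M market CAGR over ten years: {om_cagr:.1%}")  # roughly 8.5% per year
```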
A Look at TSMC Ahead of Its Q2 Earnings Report
傅里叶的猫· 2025-07-14 15:43
Group 1: TSMC's Investment and Pricing Strategy
- TSMC plans to invest $165 billion in capacity expansion in the U.S., which may increase its chances of tariff exemptions [1]
- TSMC's management indicated that potential semiconductor tariffs could suppress demand for electronic products and reduce company revenue [1]
- Due to inflation and potential tariff costs, TSMC expects profit margins at overseas fabs to erode by 3-4 percentage points in the later years of the next five years [1]

Group 2: Wafer Pricing and Currency Impact
- TSMC is expected to raise wafer prices by 3%-5% globally due to strong demand for advanced processes and structural currency trends [2]
- U.S. customers are reportedly locking in higher quotes for 4nm capacity at TSMC's U.S. fabs, with plans to raise wafer prices by at least 10% [2]

Group 3: 2nm Capacity Expansion
- TSMC plans to start mass production on 2nm in the second half of 2025, with significant demand anticipated [5]
- Projected 2nm capacity is 10 kwpm (thousand wafers per month) in 2024, rising to 40-50 kwpm in 2025 and reaching 90 kwpm by the end of 2026 [5]
- Major clients for 2nm will include Apple, AMD, and Intel, with Apple expected to adopt the node in Q4 2025 [5][6]

Group 4: AI and Cryptocurrency Demand
- By the end of 2026, AI ASICs will begin using 2nm capacity, with usage increasing in 2027 [6]
- The contribution of the cloud AI semiconductor business to TSMC's revenue is projected to rise from 13% in 2024 to 25% in 2025, and further to 34% by 2027 (the implied segment growth is sketched after this summary) [12]

Group 5: B30 GPU and Market Demand
- TSMC's Blackwell chip production is expected to align with demand from NVL72 server rack shipments, with roughly 30,000 racks projected for 2025 [10]
- The design of the China-specific B30 GPU is anticipated to be similar to the RTX PRO 6000, with demand continuing to grow [12]
- If the B30 can be sold in China, it could account for 20% of TSMC's revenue growth in 2026 [12]
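The revenue-mix shift in Group 4 (cloud AI rising from 13% of revenue in 2024 to 34% in 2027) can be translated into how much faster that segment must grow than TSMC's total revenue. The share figures come from the article; treating the shift as smooth three-year compounding is a simplifying assumption for this sketch.

```python
# How much faster must TSMC's cloud AI segment grow than total revenue for its
# revenue share to rise from 13% (2024) to 34% (2027)?  Shares are from the
# article; smooth three-year compounding is a simplifying assumption.

share_2024, share_2027, years = 0.13, 0.34, 3

# If total revenue grows by (1 + g) per year and the segment by (1 + g) * (1 + x),
# then share_2027 = share_2024 * (1 + x) ** years, independent of g.
excess_growth = (share_2027 / share_2024) ** (1 / years) - 1
print(f"Cloud AI must outgrow total revenue by about {excess_growth:.0%} per year")  # ~38%
```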
Morgan Stanley: AI ASIC - Reconciling Trainium2 Chip Shipments
Morgan Stanley· 2025-07-11 01:13
Investment Rating
- The industry investment rating is classified as In-Line [8]

Core Insights
- The report addresses the mismatch in AWS Trainium2/2.5 chip shipments, attributed to unstable PCB yield rates, and expects approximately 1.1 million chip shipments in 2025 [1][3]
- Supply chain checks estimate total shipments over the Trainium2/2.5 life cycle (2H24 to 1H26) at 1.9 million units, with a focus on production and consumption in 2025 [2][11]
- The report highlights a significant gap between upstream chip production and downstream consumption, and suggests that yield-rate improvements may narrow this gap by 2H25 [6][11]

Upstream - Chip Output Perspective
- As of late 2024, 0.3 million Trainium2 chips had been produced, with a projected total of 1.1 million shipments in 2025, packaged primarily by TSMC (70%) and ASE (30%) [3][11]
- An additional 0.5 million Trainium2.5 chips are expected to be produced in 1H26, bringing total life-cycle shipments to 1.9 million units [3]

Midstream - PCB Perspective
- Downstream checks indicate potential shipments exceeding 1.8 million Trainium chips, averaging around 200K per month since April [4][11]
- Key suppliers for PCBs include Gold Circuit and King Slide, which provide essential components for Trainium computing trays [4]

Downstream - Server Rack System Perspective
- Wiwynn is identified as a key supplier for server rack assembly, with revenue from AWS Trainium2 servers increasing in 1Q25, consistent with the upstream chip production estimates [5][11]
- Each server rack can accommodate 32 chips, supporting the projected consumption figures (the implied rack count is sketched after this summary) [5]

Component Suppliers
- Major suppliers for Trainium2 AI ASIC servers include AVC for thermal solutions, Lite-On Tech for power supply, and Samsung for memory components [10][18]
- Other notable suppliers include King Slide for rail kits and Bizlink for interconnect solutions [10][18]

Future Projections
- Trainium3 shipments are estimated at 650K for 2026, with production managed by Alchip [12][13]
- Trainium4 is anticipated to enter small-volume production by late 2027, with a rapid ramp-up expected in 2028 [14]
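The chip and rack figures above can be tied together with simple arithmetic: at 32 Trainium chips per rack, the life-cycle shipment estimate maps to a rack count, and the roughly 200K-per-month downstream run-rate can be checked against the 1.8-million-unit downstream figure. A minimal sketch using only the numbers quoted in the summary:

```python
# Tie together the Trainium2 chip and rack figures quoted in the summary above.

chips_lifecycle = 1_900_000   # estimated Trainium2/2.5 life-cycle shipments (2H24-1H26)
chips_per_rack = 32           # chips per server rack, per the report

racks_lifecycle = chips_lifecycle / chips_per_rack
print(f"Implied life-cycle rack count: about {racks_lifecycle:,.0f}")  # 59,375 racks

# Run-rate check: ~200K chips per month since April is consistent with the
# >1.8 million downstream figure over roughly nine months.
monthly_rate = 200_000
print(f"Nine months at ~200K/month: {monthly_rate * 9:,} chips")       # 1,800,000
```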
Electronics Industry Weekly Tracker: Marvell Raises Its Data Center TAM; Watch How the ASIC Trend Drives the Copper Interconnect Market - 20250622
Soochow Securities· 2025-06-22 10:50
Investment Rating
- The report maintains an "Overweight" investment rating for the industry [1]

Core Insights
- Marvell has raised its 2028 potential market size (TAM) for data centers from $75 billion to $94 billion, with custom acceleration chips expected to reach $55.4 billion, growing at a CAGR of 53% from 2023 to 2028 [2]
- The report emphasizes the importance of the "XPU attach market," which includes components such as NICs and power management ICs and is projected to grow significantly [2]
- The trend toward copper cables for short-distance interconnects in CSP ASIC server solutions is becoming clear, with major companies such as AWS and Microsoft adopting AEC copper cables [3]

Summary by Sections

Market Size and Growth
- Marvell's updated TAM includes $55.4 billion for custom acceleration chips, $19 billion for interconnect chips, and $13.2 billion for switching chips, with respective CAGRs of 53%, 35%, and 17% from 2023 to 2028 (the implied 2023 bases are backed out in the sketch after this summary) [2]

Industry Trends
- The report highlights a clear trend toward AI ASIC chips, with increasing demand for copper cables among major CSPs, indicating a robust growth opportunity for related suppliers [3]

Key Companies in the Supply Chain
- Companies involved in the copper cable and connector market include Bochuang Technology, Zhaolong Interconnect, and Huafeng Technology, which are expected to benefit from growing demand for AI ASIC chips and related components [4]
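Since each 2028 segment size above is paired with a 2023-2028 CAGR, the implied 2023 base of each segment can be backed out. The sketch below does so with the figures quoted in the summary; the derived bases are an illustrative cross-check, not numbers from the report.

```python
# Back out the implied 2023 segment sizes from the 2028 TAM figures and the
# stated 2023-2028 CAGRs (inputs from the summary; outputs illustrative only).

segments = {
    "custom acceleration chips": (55.4, 0.53),  # (2028 TAM in $bn, CAGR)
    "interconnect chips":        (19.0, 0.35),
    "switching chips":           (13.2, 0.17),
}

for name, (tam_2028, cagr) in segments.items():
    base_2023 = tam_2028 / (1 + cagr) ** 5   # five years of compounding, 2023 -> 2028
    print(f"{name:26s} implied 2023 base: ~${base_2023:.1f}bn")
```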
Morgan Stanley: Global Technology - AI Supply Chain ASIC Update - Trainium and TPU
Morgan Stanley· 2025-06-19 09:46
Investment Rating
- The report maintains an "Overweight" (OW) rating on several companies in the AI ASIC supply chain, including Accton, Wiwynn, Bizlink, and King Slide in downstream systems, as well as TSMC, Broadcom, Alchip, MediaTek, Advantest, KYEC, Aspeed, and ASE in upstream semiconductors [1][11]

Core Insights
- The AI ASIC market is expected to grow significantly; although NVIDIA is outpacing the ASIC market in 2025, enthusiasm for ASIC vendors is building. Asian design service providers such as Alchip and MediaTek are expected to gain market share thanks to efficient operations and quality service [2][21]
- The global semiconductor market is projected to reach $1 trillion by 2030, with AI semiconductors as a major growth driver, estimated to reach $480 billion, comprising $340 billion from cloud AI semiconductors and $120 billion from edge AI semiconductors [21][22]

Summary by Sections

AI ASIC Market Developments
- AWS Trainium: Alchip has taped out the Trainium3 design, with wafers already produced. Alchip is considered to have a strong chance of winning the 2nm Trainium4 project [3][15]
- Google TPU: Broadcom is expected to tape out a new 3nm TPU after the Ironwood (TPU v7p) enters mass production in 1H25, while MediaTek is also preparing for a 3nm TPU tape-out [4][18]
- Meta MTIA: Preliminary volume forecasts for MTIAv3 are expected in July, with larger packaging under consideration for MTIAv4 [5]

Downstream and Upstream Suppliers
- Downstream suppliers for AWS Trainium2 include Gold Circuit for PCBs, King Slide for rail kits, and Bizlink for active electrical cables. Wiwynn is expected to derive 30-35% of its total revenue from Trainium2 servers in 2025 [6][11]
- Key upstream suppliers include TSMC for foundry services, Broadcom for IP and design services, and Alchip for back-end design services [11][10]

Market Size and Growth Projections
- The AI ASIC market is projected to grow to $50 billion by 2030, representing roughly 15% of cloud AI semiconductors. This indicates a significant opportunity for AI ASIC vendors despite NVIDIA's dominance in the AI GPU market (see the sketch after this summary) [21][24]
- The report estimates that the global AI capex total addressable market (TAM) for 2025 could reach around $199 billion, driven by major cloud service providers [26][58]

Financial Implications
- Alchip's revenue from Trainium3 chips is estimated at $1.5 billion in 2026, with continued growth expected in the AI ASIC market [18][21]
- MediaTek's revenue from TPU projects is projected to grow significantly, with estimates of $1 billion in 2026 and potential growth to $2-3 billion in 2027 [19][21]
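The $50 billion ASIC figure and the roughly 15% share mentioned above can be cross-checked against the $340 billion cloud AI semiconductor estimate. A small sketch using the summary's numbers; labelling the remainder as largely GPUs is my inference from the summary's note on NVIDIA's GPU dominance.

```python
# Cross-check the 2030 AI ASIC estimate against the cloud AI semiconductor TAM
# (inputs from the summary above; the arithmetic is an illustrative check).

cloud_ai_2030 = 340.0   # $bn, cloud AI semiconductors by 2030
asic_2030 = 50.0        # $bn, AI ASIC estimate by 2030

asic_share = asic_2030 / cloud_ai_2030
print(f"ASIC share of cloud AI semiconductors: {asic_share:.0%}")                    # ~15%
print(f"Remaining cloud AI semiconductors (largely GPUs): ${cloud_ai_2030 - asic_2030:.0f}bn")
```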
Nomura: Meta Has Ambitious ASIC Server Plans; Its MTIA AI Servers Could Mark a Milestone in 2026
Nomura· 2025-06-19 09:46
Investment Rating
- The report does not explicitly provide an investment rating for the industry or the specific companies involved in AI ASIC servers

Core Insights
- The AI ASIC market is expected to grow significantly, with Meta's MTIA AI servers potentially marking a milestone in 2026; total AI ASIC volume could exceed nVidia's AI GPU volume by 2026 [1][3][12]
- nVidia currently dominates the AI server market with over 80% of market value, while ASIC AI servers hold roughly an 8-11% value share. On a unit basis, however, Google's and AWS's AI ASICs could reach 40-60% of nVidia's GPU volume by 2025 (the gap between value share and unit share is illustrated in the sketch after this summary) [1][3]
- Meta's MTIA AI servers are projected to ramp significantly, with 1 to 1.5 million units of MTIA V1 and V1.5 expected by late 2025 to 2026 [10][12]

Summary by Sections

AI ASIC Market Dynamics
- The AI ASIC market is growing aggressively, with more cloud service providers (CSPs), including Meta and Microsoft, developing their own ASIC solutions [1][3]
- nVidia is responding to the competition with NVLink Fusion, which allows inter-chip connections between its GPUs and third-party CPUs, a proactive move to maintain its market position [2]

Meta's MTIA AI Server Development
- Meta's first MTIA ASIC (MTIA T-V1) is set to launch by late 2025, with subsequent versions (V1.5 and V2) expected in mid-2026 and 2027, respectively [9][10]
- The MTIA T-V1.5 is anticipated to be significantly more powerful than V1, with a larger interposer and a more complex design [9][10]

Supply Chain and Component Insights
- Key suppliers for Meta's MTIA projects include Quanta, Unimicron, EMC, WUS, and Bizlink, which are expected to benefit from growing demand for AI ASICs [12][13][14][15][19]
- The report highlights the importance of baseboard management controllers (BMCs) in Meta's ASIC AI server development, estimating 23 BMCs per rack [17][18]

Competitive Landscape
- Despite nVidia's current leadership in AI computing, the report suggests the gap is narrowing as ASIC solutions improve in specifications and performance [3][7]
- The specifications of AI ASICs from companies such as Google and AWS are catching up to nVidia's offerings, although nVidia still holds advantages in connectivity and ecosystem [7][8]
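The gap between the value share and the unit share quoted above implies a large per-unit value difference between ASIC servers and nVidia GPU servers. The sketch below uses midpoints of the quoted ranges; treating the two shares as directly comparable is a simplifying assumption, so the result is only a back-of-envelope illustration, not a figure from the report.

```python
# Back-of-envelope: why a 40-60% unit ratio can coexist with an 8-11% value
# share.  Midpoints of the quoted ranges are used; treating value shares and
# unit ratios as directly comparable is a simplifying assumption.

nvidia_value_share = 0.80   # "over 80% market value" -> use 0.80
asic_value_share = 0.095    # midpoint of the 8-11% range
asic_unit_ratio = 0.50      # midpoint of 40-60% of nVidia's GPU volume

value_ratio = asic_value_share / nvidia_value_share     # ~0.12
implied_per_unit_ratio = value_ratio / asic_unit_ratio  # ~0.24
print(f"Implied per-unit value of ASIC servers vs nVidia: about {implied_per_unit_ratio:.0%}")
```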
Marvell (MRVL.US) Ignites Expectations of Surging AI ASIC Demand - Is Broadcom (AVGO.US) the Biggest Beneficiary?
Zhitong Finance· 2025-06-18 14:40
Core Viewpoint
- Marvell Technology (MRVL.US) has seen a significant stock price increase on positive evaluations from top Wall Street analysts of its custom AI ASIC chip activity and potential market announcements [1][3]

Group 1: Market Opportunities
- Analysts at Evercore ISI expect the new AI ASIC chip designs to ramp quickly between 2026 and 2027, pointing to strong future demand [1]
- Marvell expects each custom AI chip design win to generate billions of dollars in lifecycle revenue within 1.5 to 2 years, while each XPU Attach win could contribute hundreds of millions within 2 to 4 years [2]
- The total addressable market (TAM) for custom data center chips has been raised to $94 billion, roughly 26% above the figure presented last year [3]

Group 2: Financial Projections
- Marvell has raised its financial targets, with analysts noting that earnings per share could reach $8 by 2028, about 60% above Wall Street estimates (see the sketch after this summary for the implied figures) [4]
- The company aims to capture at least 20% of the TAM, with over 50% of its data center revenue expected to come from AI ASIC-related demand [3][5]

Group 3: Competitive Landscape
- Broadcom (AVGO.US) is identified as the long-term beneficiary of Marvell's AI activity, holding a dominant market share of approximately 60% in AI ASICs, versus 13%-15% for Marvell [6][7]
- The AI ASIC market is expected to grow significantly, with major tech companies such as Google, Microsoft, and Amazon investing heavily in AI ASIC chips, signaling a shift in market dynamics away from GPU dominance [7]
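Two of the targets quoted above can be turned into implied dollar and per-share figures with simple arithmetic. A minimal sketch using the summary's numbers; the derived values are illustrative, not guidance from Marvell or the analysts.

```python
# Implied figures behind the Marvell targets quoted in the summary above
# (inputs from the summary; derived values are illustrative only).

tam_2028 = 94.0        # $bn, custom data center chip TAM
target_share = 0.20    # Marvell's stated goal of at least 20% of the TAM
print(f"Implied data center revenue at a 20% share: ${tam_2028 * target_share:.1f}bn")  # ~$18.8bn

# "$8 EPS by 2028, about 60% above Wall Street estimates" implies a consensus
# of roughly $8 / 1.6 = $5 per share.
eps_2028, beat = 8.0, 0.60
print(f"Implied Wall Street consensus EPS: ${eps_2028 / (1 + beat):.2f}")  # ~$5.00
```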
Survey of Huawei Ascend 910 Series Shipments in 2025
傅里叶的猫· 2025-05-20 13:00
Core Viewpoint
- The Mizuho Securities report analyzes Broadcom, NVIDIA, AMD, Supermicro, and Huawei, highlighting expected growth and challenges in the AI ASIC and GPU markets

Group 1: Broadcom and NVIDIA
- Mizuho expects deployment of Broadcom's custom ASIC chips (TPUv7p/MTIA2) to accelerate by 2026, with potential use in OpenAI's Strawberry and Apple's Baltra projects in the second half of 2026 [1]
- In 2024, Broadcom's custom ASIC chips are projected to account for 70%-80% of usage, establishing it as the leader in AI ASICs, excluding self-manufactured AI ASICs such as Google's TPU [1]
- The HUMAIN project in Saudi Arabia plans to deploy 4,000 GB200 NVL72 racks, corresponding to 280,000 NVIDIA GPUs, plus 350,000 AMD GPUs over the next five years (the rack-to-GPU arithmetic is sketched after this summary) [1]
- The G42 project in the UAE has committed to importing 500,000 NVIDIA GB200 GPUs annually, valued at $15 billion, although the sustainability of this figure is questioned [1]

Group 2: Huawei
- The report expects Huawei's Ascend 910 orders to exceed 700,000 units in 2025, with the next-generation Ascend 920 expected to launch in 2026 [2]
- However, the current yield rate for the Ascend 910 is low, at only 30%, a figure corroborated by earlier reports [2][3]
- Other estimates likewise put shipments of the Ascend 910 series at over 700,000 units this year [5]
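The rack-to-GPU correspondence in Group 1 follows from each GB200 NVL72 rack containing 72 Blackwell GPUs. The sketch below checks the figure; the 72-GPU-per-rack count is the standard NVL72 configuration and is stated here as background rather than taken from the report.

```python
# Check the rack-to-GPU correspondence for the Saudi deployment figures above.
# Each GB200 NVL72 rack holds 72 Blackwell GPUs (standard NVL72 configuration);
# the rack count comes from the summary.

racks = 4_000
gpus_per_rack = 72

print(f"4,000 NVL72 racks -> {racks * gpus_per_rack:,} GPUs")  # 288,000, i.e. ~280,000 as quoted
```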
Nvidia (NVDA.US) Unwilling to Give Up the Chinese Market: Plans Another "China-Specific" AI Chip
Zhitong Finance· 2025-05-02 14:15
Core Viewpoint
- Nvidia is modifying its AI chip design architecture to comply with new U.S. export restrictions while continuing to supply AI chips to major Chinese clients like ByteDance, Alibaba, and Tencent [1][2]

Group 1: Nvidia's AI Chip Strategy
- Nvidia's CEO Jensen Huang announced a new AI chip plan for the Chinese market during a recent visit, indicating the company's commitment to developing chips that meet regulatory restrictions [1][2]
- The U.S. government has expanded its AI chip export restrictions, affecting the sales path for Nvidia's H20 chips, which are a customized version with significantly reduced performance compared to H100/H200 [1][2]
- Nvidia expects to incur up to $5.5 billion in additional costs due to these restrictions, which has led to a nearly 7% drop in its stock price [1]

Group 2: Market Impact and Sales
- In the first three months of this year, Chinese tech giants ordered over $16 billion worth of H20 AI chips, but the impact of the new U.S. ban on these orders remains unclear [2]
- Nvidia's sales in the Chinese market reached $17.11 billion for the fiscal year ending January 26, 2025, accounting for approximately 13% of its total revenue of $130.5 billion [2]

Group 3: AI Chip Technology Shift
- Analysts suggest that Nvidia may shift its AI chip technology from general-purpose GPUs to AI-specific ASICs to comply with U.S. export restrictions [3]
- The potential transition to ASICs could lead to performance reductions that may affect competitiveness against domestic AI chips, although some analysts believe Nvidia might focus on moderate downgrades to avoid regulatory issues [3]

Group 4: ASIC vs. GPU
- AI ASICs, also known as custom AI chips, are designed for specific AI tasks and offer efficiency advantages over traditional processors like CPUs and GPUs [4]
- Companies like Google have successfully implemented AI ASICs, such as TPUs, to optimize deep learning tasks, showcasing the potential of ASICs in the AI landscape [4][5]
- The future may see Nvidia's GPUs focusing on large-scale exploratory training and complex tasks, while ASICs target stable, high-throughput AI inference workloads [6]