傅里叶的猫

The AI race: everyone is competing hard
傅里叶的猫· 2025-07-06 15:23
Core Viewpoint
- The article discusses the intense competition for AI talent in Silicon Valley, highlighting the rapid advancements in AI technology and the aggressive recruitment strategies employed by major tech companies to attract top experts in the field [1][5][6].

Group 1: AI Talent Competition
- Since the launch of ChatGPT at the end of 2022, demand for mid- to senior-level AI talent has risen sharply, while demand for entry-level tech jobs has dropped by 50% [5][6].
- Silicon Valley and New York attract over 65% of AI engineers, despite high living costs and the flexibility of remote work [5][6].
- The scarcity of top AI talent is a critical factor in the competition, with estimates suggesting that only a few dozen to a thousand researchers can drive significant breakthroughs in AI technology [6].

Group 2: Recruitment Strategies
- Major tech companies such as Meta, OpenAI, Google DeepMind, and Anthropic are offering exorbitant salaries, stock incentives, and strategic acquisitions to secure AI talent [6][7].
- Meta has notably led a recruitment drive, successfully hiring several key researchers from OpenAI and strengthening its AI development capabilities [7][8].
- Meta's recruitment offers include signing bonuses of up to $150 million and total contract values reaching $300 million, which are considered highly competitive in the industry [9].

Group 3: AI Chip Development
- AI chip manufacturers are releasing new platforms almost annually, with Nvidia's roadmap indicating new products based on the Rubin architecture expected to ship in the second half of next year [1][3].
- AMD is also set to release its MI400 chip in the first half of next year, indicating ongoing advancements in AI hardware [2].
A High-Speed Data Transmission System Based on PCIe XDMA
傅里叶的猫· 2025-07-05 11:41
Core Viewpoint
- The article describes the design of a high-speed data transmission system for CXP (CoaXPress) acquisition cards based on the PCIe interface, emphasizing the need for high bandwidth and reliability in video data transmission.

Group 1: System Design
- CXP acquisition cards typically use PCIe to move data at rates of 12.5G in 4-lane/8-lane configurations, or 40G/100G over optical connections, which requires PCIe Gen3 x8/x16 to push the data quickly to the host computer [1]
- A DMA write module (Multi_ch_dma_wr) is integrated between the CXP host and the DDR4 cache to manage multi-channel block caching, allowing flexible data handling [2]

Group 2: Performance Metrics
- PCIe Gen3 x8 can achieve over 6.5 GB/s of bandwidth, while Gen3 x16 can reach over 12 GB/s, ensuring high-speed data transfer capability [5]
- The system is designed to support 1-4 cameras connected simultaneously, providing flexibility and reliable long-duration transmission without data loss [5]

Group 3: Data Handling
- Data is organized into blocks whose size is the transfer size set by the host, with a defined reading and writing sequence to keep data management efficient [6]
- In high-speed scenarios the read pointer follows the write pointer, so a block can be read immediately after it is written, optimizing data flow (a minimal sketch of this pointer scheme follows after this list) [8]

Group 4: Testing and Validation
- Testing with DDR4 (64-bit x 2400 MT/s) shows a combined read/write bandwidth limit of around 16 GB/s, while using UltraRAM with PCIe Gen3 x16 yields a read bandwidth of approximately 11-12 GB/s [8]
- The system has been tested on Windows 10, Ubuntu, and CentOS for long periods without data loss or errors, indicating robust performance [22]
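The per-channel block cache in Group 3 behaves like a ring of fixed-size blocks in DDR4 in which the read pointer trails the write pointer. The sketch below is a minimal host-side model of that pointer scheme, not the actual Multi_ch_dma_wr implementation; the class name BlockRing, the block count, and the block size are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class BlockRing:
    """Toy model of one channel's block cache in DDR4.

    The write pointer advances as the DMA engine fills whole blocks; the
    read pointer trails it and may consume a block as soon as it has been
    completely written (the "read follows write" behaviour described above).
    """
    num_blocks: int     # blocks reserved for this channel (illustrative)
    block_size: int     # bytes per block, i.e. the host-set transfer size
    wr_ptr: int = 0     # next block index to be written
    rd_ptr: int = 0     # next block index to be read
    filled: int = 0     # blocks written but not yet read

    def write_block(self) -> bool:
        """DMA engine finished one block; False means the ring is full."""
        if self.filled == self.num_blocks:
            return False            # back-pressure: host is reading too slowly
        self.wr_ptr = (self.wr_ptr + 1) % self.num_blocks
        self.filled += 1
        return True

    def read_block(self) -> bool:
        """Host reads one block over PCIe; False means nothing is ready."""
        if self.filled == 0:
            return False
        self.rd_ptr = (self.rd_ptr + 1) % self.num_blocks
        self.filled -= 1
        return True

# One ring per camera channel (the system supports 1-4 cameras).
channels = [BlockRing(num_blocks=8, block_size=4 << 20) for _ in range(4)]
channels[0].write_block()   # camera 0 fills a block ...
channels[0].read_block()    # ... and the host can read it right away
```

In the real design this bookkeeping would live in FPGA logic and a kernel driver rather than Python, but the invariant is the same: the read pointer never overtakes the write pointer, which is what allows loss-free long-duration capture.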
Semiconductor & AI Professional Data Sharing
傅里叶的猫· 2025-07-05 11:41
In this era of information overload, a huge amount of information pours in every day. In our Knowledge Planet community (Global Semi Research) we also share industry news and key data daily, but most members do not analyze these figures in depth or make a point of remembering them; when a number is eventually needed, it is hard to recall which document contained it.

To avoid this, we have recently begun organizing the key data we see each day into a cloud drive, which makes it easy to look the figures up again and also gives members a more systematic set of reference material.

We also push a daily selection of high-quality research reports from foreign investment banks and domestic brokerages, along with semiconductor industry information and data, to support the semiconductor and AI discussions in the community.

With a coupon, membership currently costs only 390 yuan, which is well worth it whether for our own investing or for deeper study of the industry. Scan the QR code in the image below to join.

The data inside is not yet extensive, but the cloud drive will be updated continuously.

[Image: QR code for joining the community]

| Category | 2024 | 2025e | 2026e | 2027e |
| --- | --- | --- | --- | --- |
| Capacity for local GPU (kwpm) | 2 | 10 | 20 | 26 |
| B c ...
Semiconductor & AI Professional Data Sharing
傅里叶的猫· 2025-07-04 12:41
In this era of information overload, a huge amount of information pours in every day. In our Knowledge Planet community (Global Semi Research) we also share industry news and key data daily, but most members do not analyze these figures in depth or make a point of remembering them; when a number is eventually needed, it is hard to recall which document contained it.

With a coupon, membership currently costs only 390 yuan, which is well worth it whether for our own investing or for deeper study of the industry. Scan the QR code in the image below to join.

[Image: QR code for joining the community]

| Category | 2024 | 2025e | 2026e | 2027e |
| --- | --- | --- | --- | --- |
| Capacity for local GPU (kwpm) | 2 | 10 | 20 | 26 |
| B capacity (kwpm) | 2 | 9 | 0 | O |
| C C capacity (kwpm) | 0 | 1 | 10 | ნ |
| Clork capacity (kwpm) | 0 | 0 | 10 | 20 |
| Die per wafer 13 | 78 | 78 | 78 | 78 |
| Die per ...
How Is DeepSeek Doing After Its Explosive Rise?
傅里叶的猫· 2025-07-04 12:41
Group 1
- The core viewpoint of the article is that DeepSeek R1's disruptive pricing strategy has significantly impacted the AI market, triggering a price war that may challenge the industry's sustainability [3][4].
- DeepSeek R1 was launched on January 20, 2025, and its input/output token price is only $10, which has driven a general decline in the prices of inference models, including a drop of more than $8 in OpenAI's output token price [3].
- The report highlights that DeepSeek's low-cost strategy relies on high batch processing, which reduces inference compute usage but may compromise user experience through higher latency and lower throughput [10].

Group 2
- Technological advances in DeepSeek R1 include significant upgrades through reinforcement learning, resulting in improved performance, particularly in coding tasks, where accuracy rose from 70% to 87.5% [5].
- Despite a nearly 20-fold increase in usage on third-party hosting platforms, user growth for DeepSeek's self-hosted model has been sluggish, indicating that users prioritize service quality and stability over price [6].
- The tokenomics of AI models involves balancing pricing against performance; DeepSeek's strategy leads to higher latency and lower throughput than competitors, which may explain the slow growth of its self-hosted model (a toy model of this batching trade-off follows after this list) [7][9].

Group 3
- DeepSeek's low-cost strategy is aimed at expanding its global influence and promoting the development of artificial general intelligence (AGI), rather than at profitability or user experience [10].
- The report mentions that DeepSeek R2's delay is rumored to be related to export controls, but the impact on training capability appears minimal, with the latest version, R1-0528, showing significant improvements [16].
- DeepSeek's monthly active users fell from 614.7 million in February 2025 to 436.2 million in May 2025, a decline of 29%, while competitors such as ChatGPT saw a 40.6% increase in users over the same period [14].
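As a rough illustration of the batching trade-off described in Group 2, the toy model below treats per-token cost as GPU-hour cost divided by delivered throughput, and assumes that larger batches raise aggregate throughput toward a saturation point while splitting it across more concurrent requests. All constants and the saturation curve are hypothetical assumptions for illustration, not DeepSeek's (or anyone's) measured serving data.

```python
# Toy model of the batching trade-off: larger batches amortize GPU cost over
# more tokens (cheaper per token) but each individual request is served more
# slowly. Every constant below is an illustrative assumption, not measured data.

GPU_COST_PER_HOUR = 2.0        # hypothetical $/GPU-hour
PEAK_TOKENS_PER_SEC = 1000.0   # hypothetical aggregate throughput at saturation

def aggregate_throughput(batch_size: int) -> float:
    """Tokens/s across the whole batch; saturates as the batch grows."""
    return PEAK_TOKENS_PER_SEC * batch_size / (batch_size + 32)

def cost_per_million_tokens(batch_size: int) -> float:
    """GPU cost amortized over the tokens delivered in one hour."""
    tokens_per_hour = aggregate_throughput(batch_size) * 3600
    return GPU_COST_PER_HOUR / tokens_per_hour * 1e6

def per_request_tokens_per_sec(batch_size: int) -> float:
    """What a single user sees: the aggregate rate split across requests."""
    return aggregate_throughput(batch_size) / batch_size

for batch in (1, 8, 64, 256):
    print(f"batch={batch:3d}  "
          f"cost/Mtok=${cost_per_million_tokens(batch):6.2f}  "
          f"per-user tok/s={per_request_tokens_per_sec(batch):5.1f}")
```

Under these made-up numbers, moving from a batch of 1 to a batch of 256 cuts the cost per million tokens by roughly 30x while the per-user token rate drops by roughly 9x, which is the shape of the trade-off the report attributes to DeepSeek's high-batch serving.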
2025 Q2 China Semiconductor Market Analysis
傅里叶的猫· 2025-07-03 13:03
Omdia is a consultancy focused on the semiconductor market. Its 2025 quarterly semiconductor market briefing provides detailed market analysis and forecasts, covering (among other things) growth trends in the global and mainland-China semiconductor markets, conditions across application categories (smartphones, PCs, data-center servers, automotive, and so on), the performance of major end markets, and the impact of tariff policy on China's semiconductor industry. Each report is built on extensive market research, so the content is very substantive.

With the author's permission, this article shares selected content from Omdia's 2025 quarterly semiconductor market briefing. We also recommend following and subscribing to Omdia; readers interested in Omdia's content can scan the WeChat contact at the end of the article.

Semiconductor Market Overview

China Market

| | Industry avg. gross margin | Industry avg. operating margin | Industry avg. inventory turnover | Total revenue in scope (RMB 100M) |
| --- | --- | --- | --- | --- |
| 2025Q1 | 32.68% | 9.25% | 0.53 | 379.66 |
| 2024Q1 | 34.11% | 9.23% | 0.51 | 315.67 |
| Y ...
Semiconductor & AI Professional Data Sharing
傅里叶的猫· 2025-07-03 13:03
| Category | 2024 | 2025e | 2026e | 2027e |
| --- | --- | --- | --- | --- |
| Capacity for local GPU (kwpm) | 2 | 10 | 20 | 26 |
| B capacity (kwpm) | 2 | 9 | 0 | O |
| C C capacity (kwpm) | 0 | 1 | 10 | ნ |
| Clork capacity (kwpm) | 0 | 0 | 10 | 20 |
| Die per wafer 13 | 78 | 78 | 78 | 78 |
| Die per wafer 91 C | За | За | За | За |
| Die per wafer -- X | За | За | Зд | 39 |
| Average yield rate of B (%) | 30% | 30% | 50% | 70% |
| Average yield rate of 9 (%) | 0% | 15% | 30% | 50% |
| Average yield rate of (%) ...
Data Center Operating Costs and Profitability
傅里叶的猫· 2025-07-02 16:00
Core Viewpoint
- The financial analysis of Oracle's AI data center indicates that despite significant revenue, the operation is projected to incur substantial losses over five years, totaling approximately $10 billion [1][10].

Revenue
- The average annual revenue over five years is projected to be $9,041 million, totaling about $45 billion [3].

Hosting Cost
- Hosting costs, which Oracle pays to data-center service providers for GPU server placement, are expected to rise each year due to inflation and market conditions [4].

Electricity Cost
- Electricity costs, a recurring expense associated with high-load GPU operation, are also anticipated to increase slightly each year [5].

Gross Profit
- The largest cost in the financial model is server depreciation, estimated at $3.3 billion annually, with the assets fully depreciating to zero within seven years [7].

Operating Profit
- Operating profit is significantly impacted by interest expense, expected to total $3.6 billion over the first four years, with a notable reduction in the final year [8].

Contribution Profit
- After accounting for taxes, the annual contribution profit is projected to be around $2.5 billion, totaling $12.5 billion over five years (a hedged sketch of how these line items combine appears below) [10].
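To make the arithmetic concrete, the sketch below combines the figures quoted above into a simple five-year model. Hosting and electricity costs are not given numerically in the summary, so annual_hosting and annual_electricity are hypothetical placeholders chosen only so that the model lands near the quoted ~$10 billion loss; the sketch illustrates the structure of the calculation, not the article's actual cost assumptions.

```python
# Five-year P&L sketch for the Oracle AI data-center model summarized above.
# Values marked "quoted" come from the summary; hosting and electricity are
# hypothetical placeholders, so the bottom line is illustrative only.

YEARS = 5
annual_revenue      = 9.041e9   # quoted: average annual revenue (USD)
annual_depreciation = 3.3e9     # quoted: annual server depreciation (USD)
total_interest      = 3.6e9     # quoted: interest over the first four years (USD)
annual_hosting      = 5.0e9     # placeholder assumption (USD)
annual_electricity  = 2.0e9     # placeholder assumption (USD)

# "Contribution" here means revenue minus the cash operating costs,
# before depreciation, interest and taxes.
annual_contribution = annual_revenue - annual_hosting - annual_electricity
operating_total = YEARS * (annual_contribution - annual_depreciation) - total_interest

print(f"annual contribution profit: {annual_contribution / 1e9:5.1f} B USD")
print(f"five-year operating result: {operating_total / 1e9:5.1f} B USD")
```

With these placeholder costs the cash-level contribution stays positive (around $2 billion a year before taxes, in the same ballpark as the quoted post-tax $2.5 billion), while depreciation and interest push the five-year result to roughly a $10 billion loss, matching the overall shape of the article's conclusion.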
Did Google Persuade OpenAI to Use TPUs to Counter Nvidia?
傅里叶的猫· 2025-06-30 13:44
The following article is from 傅里叶的猫AI, by 猫叔.

Over the past couple of days everyone has been discussing the news that OpenAI will use Google TPUs. The story originates from a report by The Information:

Google began developing TPUs roughly ten years ago and, starting in 2017, opened them to cloud customers that need to train their own AI models. In the AI hardware and software ecosystem, Google is the only major company with technology or businesses across all nine categories (AI server chips, training clusters, cloud server rental, AI APIs, and more), building a full-stack ecosystem from chips to AI and strengthening its competitive moat.

What does the report cover?

OpenAI's chip strategy adjustment

As one of Nvidia's largest AI-chip customers, OpenAI has long rented Nvidia server chips mainly through Microsoft and Oracle to develop and train its models and to power ChatGPT. Over the past year it spent more than $4 billion on such servers, split roughly evenly between training and inference, and its spending on AI chip servers is expected to approach $14 billion in 2025.

As ChatGPT has grown, its paid subscribers have increased from 15 million at the start of the year to more than 25 million, and every week ...
Looking Back at AMD's Acquisition of Xilinx Three Years Ago
傅里叶的猫· 2025-06-30 13:44
Core Viewpoint
- The article discusses the acquisition of Xilinx by AMD, focusing on Xilinx's developments and performance post-acquisition, particularly in the context of AI, data centers, and FPGA technology.

Group 1: Acquisition Rationale
- AMD's acquisition of Xilinx for $49 billion was aimed primarily at strengthening its position in AI, data centers, and edge computing, rather than in traditional markets such as 5G and automotive [2][4].
- Xilinx's FPGA and AI Engine technologies complement AMD's CPU and GPU offerings, providing efficient solutions for data-intensive applications [2].

Group 2: Historical Context
- The article references Intel's acquisition of Altera, which was influenced by Microsoft's promotion of FPGAs in data centers and ultimately contributed to Intel's underperformance in the FPGA market [3].
- Despite initial expectations, FPGAs in data centers did not meet Microsoft's needs, which led it to prefer NVIDIA GPUs for AI model training [3].

Group 3: Post-Acquisition Developments
- AMD established the Adaptive and Embedded Computing Group (AECG), led by former Xilinx CEO Victor Peng, to own the FPGA and SoC roadmaps [4].
- Xilinx's product updates since the acquisition have been moderate, with the FPGA market expected to grow steadily rather than deliver major breakthroughs [8][11].

Group 4: Financial Performance
- Xilinx's revenue for fiscal year 2021 was $3.15 billion, holding steady despite global supply-chain challenges [11].
- AMD's Embedded segment revenue was approximately $4.53 billion in 2022 and rose 17% to $5.3 billion in 2023, reflecting the integration of Xilinx's revenue [17][18].
- Embedded segment revenue is, however, projected to decline to $3.6 billion in 2024, a 33% decrease from 2023, driven by weaker market demand and U.S. export restrictions [19][22].

Group 5: Market Outlook
- The article concludes that three years after the acquisition, no groundbreaking products have emerged from the integration, and the FPGA market remains stable [22].
- AMD's data center business grew strongly, reaching $12.6 billion in 2024, a 94% increase, but the specific contribution of FPGA technology remains unclear [22].