RTX 5090
Insane gaming-laptop mod: with one resistor, an RTX 4090 beats the RTX 5090
36Kr· 2025-11-12 03:47
Core Viewpoint
- A modification involving the addition of a single resistor has allowed an RTX 4090 gaming laptop to outperform an RTX 5090 in certain benchmarks, highlighting the significant impact of power consumption on performance [1][10].

Group 1: Power Consumption and Performance
- Power consumption is a critical factor that directly influences the performance of gaming laptops, with high-end models often boasting total power consumption exceeding 200W [3][5].
- Total power consumption typically refers to the combined power of the CPU and GPU; higher power limits generally mean better performance, since they require stronger cooling and power delivery [5][12].
- A user modified their ROG Zephyrus M16 by adding a resistor, effectively lowering the sense-circuit resistance and allowing the RTX 4090 to draw nearly double its original power limit, resulting in performance that rivals the RTX 5090 (a back-of-the-envelope sketch follows this summary) [9][10].

Group 2: Benchmark Comparisons
- After the modification, the RTX 4090 in the ROG M16 surpassed the RTX 5090 in most 3DMark tests, with a 9.6% lead in the Speedway benchmark [10][11].
- The overall performance improvement from the modification exceeded 20% in most benchmarks, with some tests showing gains of more than 35% [11][12].

Group 3: Manufacturer Limitations
- NVIDIA sets the power consumption limits for mobile GPUs, which restricts manufacturers from fully exploiting the hardware's potential [13][15].
- Despite the potential for higher performance through raised power limits, manufacturers generally adhere to NVIDIA's restrictions to maintain product differentiation and avoid market conflicts [15][16].
- There are indications that NVIDIA may consider lifting power limits on future high-end models to cater to hardcore gaming enthusiasts seeking significant performance gains [15][16].
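The summary only says the modder "lowered the sense-circuit resistance"; the usual way such mods work is by soldering a resistor in parallel with the GPU's current-sense shunt so the firmware under-reads current. The sketch below illustrates that reading with hypothetical values; the 5 mΩ shunt, the 5 mΩ added resistor, and the 175 W nominal cap are assumptions, not figures from the article.

```python
# Hypothetical shunt-mod arithmetic; none of these component values come from the article.
# Firmware estimates current as I = V_shunt / R_assumed. A resistor soldered in parallel
# lowers the real sense resistance, so the measured voltage drop (and therefore the
# reported power) understates the real draw, and the card runs past its original cap.

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

R_SHUNT = 0.005   # ohms, assumed original current-sense shunt
R_ADDED = 0.005   # ohms, assumed resistor added in parallel
CAP_W = 175       # watts, assumed nominal laptop power cap

r_eff = parallel(R_SHUNT, R_ADDED)
multiplier = R_SHUNT / r_eff          # real power per watt of reported power
print(f"effective sense resistance: {r_eff * 1000:.2f} mΩ")
print(f"real draw when firmware reports {CAP_W} W: ~{CAP_W * multiplier:.0f} W "
      f"({multiplier:.1f}x, consistent with the 'nearly double' claim)")
```

With an added resistor equal to the shunt, the effective resistance halves, which is why a single well-chosen component is enough to roughly double the power budget.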
X @vitalik.eth
vitalik.eth· 2025-10-16 01:23
RT Justin Drake (@drakefjustin): Progress toward real-time proving for Ethereum L1 is nothing short of extraordinary. In May, SP1 Hypercube proved 94% of L1 blocks in under 12 seconds using 160 RTX 4090s. Five months later, Pico Prism proves 99.9% of the same blocks in under 12 seconds, with just 64 RTX 5090s. Average proving latency is now 6.9 seconds. Performance has outpaced Moore's law ever since Zcash pioneered practical SNARKs a decade ago. Today's Pico Prism results are a striking reminder of that exponen ...
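As a rough, illustrative normalization of the figures in the thread (the 12-second reference is Ethereum's slot time; treating "share of blocks proven in-slot" as a comparable metric across the two setups is an assumption):

```python
# Back-of-the-envelope comparison of the two proving setups quoted above.
gpus_may, gpus_oct = 160, 64          # RTX 4090s in May vs RTX 5090s five months later
coverage_may, coverage_oct = 0.94, 0.999
avg_latency = 6.9                     # seconds, current average proving latency
SLOT = 12.0                           # seconds, Ethereum's slot time

print(f"GPU count shrank {gpus_may / gpus_oct:.1f}x while in-slot coverage "
      f"rose from {coverage_may:.0%} to {coverage_oct:.1%}")
print(f"an average proof now lands with {SLOT - avg_latency:.1f} s of slot headroom")
```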
Advanced Micro Devices, Inc. (AMD): A Bull Case Theory
Yahoo Finance· 2025-09-28 23:43
Core Thesis
- Advanced Micro Devices, Inc. (AMD) is positioned as a strong investment opportunity due to its potential market share gains and the challenges faced by competitor Nvidia, with a target price range of $168–$187 over the next 12–18 months [2][5].

Financial Performance
- AMD reported 32% year-over-year revenue growth in Q2 2025, reaching $7.7 billion, driven by a 73% increase in gaming revenue to $1.1 billion and a 14% rise in data center revenue to $3.2 billion [3].
- Wall Street forecasts suggest a 15–20% compound annual growth rate (CAGR) for earnings per share (EPS) through 2027, despite near-term margin pressure from export controls [3].

Competitive Landscape
- Nvidia's structural GPU reliability issues, such as problems with RTX 4090 connectors, create a competitive opening for AMD, which is positioned as a stable alternative [4].
- AMD's RX 9070 XT shows strong performance and improved power efficiency, while its open-source ROCm platform strengthens its data center positioning [4].

Market Opportunities
- AMD could capture $3.6–$6 billion in incremental revenue from potential share gains in the $120 billion discrete GPU segment, although Nvidia's ecosystem dominance poses challenges (see the arithmetic sketch below) [5].
- The company's diversified revenue streams and competitive GPU offerings support the potential for multiple expansion, despite macroeconomic risks such as Federal Reserve rate hikes [5].

Historical Context
- AMD's stock price has appreciated approximately 39% since May 2025, reflecting strong revenue growth driven by data center and Ryzen processor sales, as well as AI demand [6].
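For context on the thesis numbers, a quick back-of-the-envelope check; the two-year compounding window and the share-gain reading of the incremental-revenue range are interpretations for illustration, not figures from the note.

```python
# Illustrative arithmetic around the bull-case figures quoted above.
SEGMENT = 120e9                          # cited discrete-GPU segment size
for incremental in (3.6e9, 6.0e9):
    print(f"${incremental / 1e9:.1f}B incremental revenue ≈ "
          f"{incremental / SEGMENT:.1%} of the $120B segment")

YEARS = 2                                # assumed compounding window through 2027
for cagr in (0.15, 0.20):
    print(f"{cagr:.0%} EPS CAGR over {YEARS} years -> "
          f"{(1 + cagr) ** YEARS - 1:.0%} cumulative growth")
```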
BluSky AI Inc. and Lilac Sign Letter of Intent to Launch Strategic GPU Marketplace Partnership
Globenewswire· 2025-08-26 13:42
Core Viewpoint
- BluSky AI Inc. has signed a Letter of Intent (LOI) with Lilac to form a strategic partnership aimed at enhancing cloud compute provisioning and monetizing idle capacity in the AI ecosystem [1][2][3].

Group 1: Partnership Details
- The LOI allows BluSky AI to offer its GPU cloud computing resources, including unallocated inventory and customer capacity, for rent through Lilac's marketplace [2].
- The collaboration is expected to increase utilization of BluSky AI's compute assets while expanding Lilac's supplier base with high-performance GPU models such as NVIDIA B200, H200, H100, A100, L40, RTX 5090, and RTX 4090 [2][3].
- The partnership includes a multi-pronged engagement strategy covering engineering integration, co-marketing efforts, and a customer acquisition framework, with a definitive agreement anticipated in the coming months [3].

Group 2: Company Missions and Goals
- BluSky AI aims to democratize access to AI compute and optimize resource efficiency by integrating idle capacity into Lilac's platform, enabling customers to generate new revenue streams [3][6].
- Lilac's mission is to democratize access to critical AI infrastructure and create a more efficient cloud economy by connecting idle GPU capacity to AI developers and enterprises [5][7].

Group 3: Marketing and Transparency Initiatives
- BluSky AI will designate Lilac as a 'Preferred Marketplace Partner' and promote the platform within its ecosystem [6].
- Both companies will collaborate on marketing initiatives including joint press releases, social media activations, and event partnerships [6].
- BluSky AI will provide quarterly transparency reports on available GPU inventory to inform marketplace strategy and performance tracking [6].
Lower specs, same price? NVIDIA launches a "twice-cut-down" graphics card in China
Guan Cha Zhe Wang· 2025-08-15 07:32
Core Viewpoint
- NVIDIA has launched the new GeForce RTX 5090D v2 in mainland China, replacing the banned RTX 5090D, with a 25% reduction in memory capacity and bandwidth to comply with U.S. export regulations, while keeping the same price of RMB 16,499 (approximately $2,298) [1][9].

Product Specifications
- The RTX 5090D v2 retains the same GB202-240 GPU chip as the RTX 5090D, with a base clock of 2.01 GHz and a boost clock of 2.41 GHz. The memory, however, has been cut from 32GB of GDDR7 on a 512-bit interface to 24GB of GDDR7 on a 384-bit interface, reducing bandwidth from 1792 GB/s to 1344 GB/s so that the card falls under the U.S. export-control ceiling of 1.4 TB/s (see the bandwidth arithmetic below) [5][8].
- The CUDA core count is unchanged at 21,760, but rated AI performance drops from 3,352 TOPS to 2,375 TOPS, a reduction of approximately 29.15% [7][8].

Market Response
- The RTX 5090D v2 is available from manufacturers such as ASUS, Colorful, and GIGABYTE, with starting prices at RMB 16,499, while some flagship models reach RMB 17,499 to nearly RMB 19,000 [1][3].
- The previously banned RTX 5090 and 5090D are now in high demand on the second-hand market, commanding price premiums [1].
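The bandwidth figures above follow directly from bus width and per-pin data rate; the sketch below back-solves the per-pin rate from the article's numbers (the 28 Gbps value is inferred, not stated in the piece).

```python
# GDDR bandwidth: bus_width_bits * per-pin data rate (Gbps) / 8 = GB/s.
def gddr_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

rate = 1792 * 8 / 512   # Gbps per pin implied by the RTX 5090D figures -> 28.0
print(f"implied per-pin rate: {rate:.0f} Gbps")
print(f"RTX 5090D    (512-bit): {gddr_bandwidth_gbs(512, rate):.0f} GB/s")
print(f"RTX 5090D v2 (384-bit): {gddr_bandwidth_gbs(384, rate):.0f} GB/s  "
      f"# under the ~1.4 TB/s export-control ceiling")
```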
Challenging NVIDIA's RTX 5090: a GPU startup builds a monster card with 13x the path-tracing performance
36Kr· 2025-08-06 02:50
Core Viewpoint
- Bolt Graphics, a lesser-known chip startup, claims that its first GPU module, Zeus 4C, achieves 13 times the performance of the RTX 5090 in path-tracing scenarios [1][4].

Performance Comparison
- In path-tracing tasks at 4K resolution and 120 frames per second, Zeus 4C outperforms the RTX 5090 significantly, but it is not designed for gaming (a rough ray-budget calculation follows this summary) [4][6].
- The Zeus GPU models (2c26-064 and 2c26-128) have advantages in board power and cache over the RTX 4090 and RTX 5090, but lag in floating-point throughput (FP64/FP32/FP16 vector TFLOPS) [5][6].
- Zeus uses LPDDR5X memory, which is designed for mobile devices and offers lower bandwidth than the GDDR7 used in the RTX 5090, a likely bottleneck in gaming scenarios [6][9].

Target Applications
- Bolt Graphics focuses on high-precision graphics rendering rather than gaming or AI performance, targeting industries such as film visual effects, game rendering, and high-performance computing (HPC) [6][9].
- Path tracing is highlighted as Bolt Graphics' key technology, providing realistic rendering effects widely used across these industries [7][9].

Architectural Design
- The Zeus GPU series uses a chiplet architecture, with models combining multiple compute and I/O dies to scale performance [13].
- The design addresses memory-bandwidth limitations by offering multiple SODIMM slots for additional memory [13].

Market Position and Future Outlook
- Bolt Graphics is a niche player in the GPU market, potentially challenging established companies like NVIDIA and AMD in specific applications [20].
- The company has not yet disclosed benchmark details or how its performance compares to competitors; developer kits are expected in 2026 and full production in 2027 [20][21].
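To put the "4K at 120 fps" path-tracing claim in perspective, here is a rough ray-budget calculation; the samples-per-pixel and bounce counts are assumed illustrative values, not Bolt's figures.

```python
# Rough ray budget for path tracing at 4K / 120 fps; spp and bounce depth are assumptions.
width, height, fps = 3840, 2160, 120
primary_rays = width * height * fps
print(f"primary rays/s at 1 sample per pixel: {primary_rays / 1e9:.2f} billion")

samples_per_pixel, bounces = 4, 3        # assumed path-tracing budget
total_rays = primary_rays * samples_per_pixel * bounces
print(f"with {samples_per_pixel} spp and {bounces} bounces: {total_rays / 1e9:.1f} billion rays/s")
```

Even a modest sampling budget lands in the tens of billions of rays per second, which illustrates why the article treats real-time path tracing as a dedicated-hardware problem.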
Challenging NVIDIA's RTX 5090! A GPU startup builds a monster card with 13x the path-tracing performance
量子位· 2025-08-05 13:34
Core Viewpoint
- Bolt Graphics, a lesser-known chip startup, claims that its first GPU module, Zeus 4C, outperforms NVIDIA's RTX 5090 by 13 times in path-tracing scenarios [1][8].

Group 1: Performance Claims
- Zeus 4C achieves 120 frames per second at 4K resolution in path-tracing tasks, significantly surpassing the RTX 5090 [8].
- Despite its impressive path-tracing performance, Zeus GPUs fall short of the RTX 4090 and RTX 5090 in floating-point operations [9].
- The Zeus GPU series is designed for high-precision graphics rendering rather than gaming or AI performance, focusing on applications like film visual effects and high-performance computing (HPC) [12][14].

Group 2: Technical Specifications
- Zeus GPUs use LPDDR5X memory, which is optimized for low power consumption but has lower bandwidth than the GDDR7 used in the RTX 5090, a likely weakness in gaming scenarios [10].
- The architecture includes multiple compute and I/O cores, similar to AMD's chiplet design, to scale computational capability [17][19].

Group 3: Market Position and Future Prospects
- Bolt Graphics is positioned as a niche player targeting specific high-demand applications rather than competing directly with established players like NVIDIA and AMD [26][27].
- The company has not yet disclosed benchmark details or competitive comparisons; developer kits for Zeus GPUs are expected in 2026, with mass production in 2027 [29][30].
One GPU takes on NVIDIA
半导体芯闻· 2025-07-23 09:59
Core Viewpoint
- The article discusses the emergence of Bolt Graphics, a startup aiming to redefine the GPU landscape with its new Zeus GPU, which targets path-tracing workloads to challenge established giants like NVIDIA, AMD, and Intel [1][6].

Group 1: Path Tracing as a Breakthrough
- Path tracing represents a major advance in game graphics, modeling light interactions more realistically than traditional real-time ray tracing [2].
- Traditional methods sacrifice physical accuracy for performance, whereas path tracing offers "no-compromise quality" at a high computational cost [2].
- The technique traces back to Jim Kajiya's 1986 paper, which laid the foundation for modern rendering theory (a toy estimator sketch follows this summary) [3].

Group 2: Bolt Graphics and Zeus GPU
- Bolt Graphics was founded by engineers from major companies like NVIDIA and AMD who saw untapped potential in path-tracing hardware [6].
- The Zeus GPU comes in three versions: single-chip (Zeus 1c), dual-chip (Zeus 2c), and quad-chip (Zeus 4c), with varying power and performance specifications [6][7].
- Zeus 1c has a TDP of approximately 120W and can process around 7.7 billion rays per second, while Zeus 4c targets data centers with a 500W TDP and up to 2TB of DDR5 memory [6][7][10].

Group 3: Advantages of Zeus
- Zeus pairs LPDDR5X for bandwidth with DDR5 for capacity, allowing total memory of up to 2.25TB, which benefits both path tracing and HPC datasets [10].
- Bolt claims Zeus can outperform NVIDIA's RTX 5090 by a factor of 10 in efficiency for 4K path-traced scenes [10][11].
- Zeus supports the IEEE-754 FP64 standard, making it suitable for high-performance computing (HPC), a competitive angle against NVIDIA's focus on AI [11][12].

Group 4: Ecosystem Development
- Bolt is building an open, customizable ecosystem based on the RISC-V architecture, which allows community adoption and design flexibility [14][15].
- The company is developing a proprietary path-tracing engine called Glow Stick, which aims for compatibility with mainstream rendering tools [15][16].
- Bolt plans to integrate its technology with industry software and is working on DirectX and Vulkan drivers, although challenges remain in the Windows ecosystem [16][17].

Group 5: Future Prospects and Challenges
- Bolt aims to deliver its first development kits by Q3 2025 and enter mass production by the end of 2026, facing typical startup pressures [17][18].
- The company intends to target film and design professionals before expanding into gaming, and will need successful case studies to build credibility [18][19].
- The potential for Zeus to reshape graphics rendering and simulation integration is significant, but the path from concept to production is fraught with challenges [19].
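For readers unfamiliar with the Kajiya-style rendering equation mentioned above, here is a toy Monte Carlo estimator that shows the basic shape of path tracing and why cost scales with samples per pixel and bounce depth. The "scene" is a stub with made-up constants; it is not Bolt's Glow Stick engine or any real renderer.

```python
import random

ALBEDO = 0.7        # assumed diffuse reflectance of every surface in the toy scene
SKY_EMISSION = 1.0  # assumed radiance carried by rays that escape to the "sky"
ESCAPE_PROB = 0.2   # assumed chance per bounce that a ray leaves the scene

def radiance(depth: int, max_depth: int) -> float:
    """One random light path: either escape to the sky or bounce off a diffuse surface."""
    if depth >= max_depth:
        return 0.0
    if random.random() < ESCAPE_PROB:
        return SKY_EMISSION
    return ALBEDO * radiance(depth + 1, max_depth)

def pixel_value(samples: int, max_depth: int = 5) -> float:
    """Monte Carlo estimate: average many random paths, as in Kajiya's rendering equation."""
    return sum(radiance(0, max_depth) for _ in range(samples)) / samples

if __name__ == "__main__":
    random.seed(0)
    for spp in (4, 64, 1024):
        print(f"{spp:>5} samples/pixel -> estimate {pixel_value(spp):.3f}")
```

More samples reduce estimator noise and deeper recursion captures more indirect light; both multiply the ray count, which is the computational cost the articles describe.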
An ambitious GPU
半导体行业观察· 2025-07-23 00:53
Core Viewpoint
- The article discusses the emergence of Bolt Graphics, a startup aiming to redefine the GPU landscape with its new Zeus GPU, which focuses on path-tracing technology to challenge established giants like NVIDIA, AMD, and Intel [1][19].

Group 1: Path Tracing as a Breakthrough
- Path tracing represents a significant advance in rendering technology, providing a more accurate representation of light behavior than traditional real-time ray tracing [2].
- The computational demands of path tracing are substantially higher, requiring ten to a hundred times the power of standard GPUs for real-time use [2].

Group 2: Bolt Graphics and Zeus GPU
- Bolt Graphics was founded by engineers from major companies like NVIDIA, AMD, and Intel, with a mission to build a high-performance path-tracing GPU [7].
- The Zeus GPU comes in three versions, Zeus 1c, Zeus 2c, and Zeus 4c, with varying power and performance specifications, including path-tracing throughput of approximately 7.7 billion rays per second for the 1c version [7][8].
- The Zeus 4c version is designed for data centers, featuring a 500W TDP and up to 2TB of DDR5 memory, aimed at high-performance computing (HPC) and rendering farms [8][10].

Group 3: Advantages of Zeus
- Zeus GPUs use a memory architecture combining LPDDR5X for bandwidth and DDR5 for capacity, allowing total memory of up to 2.25TB, which benefits both path tracing and HPC datasets [10].
- Bolt claims its GPUs can outperform NVIDIA's RTX 5090 by a factor of 10 in certain scenarios, significantly reducing the number of GPUs needed for complex rendering tasks [10][11].

Group 4: Ecosystem Development
- Bolt is building an open, customizable ecosystem based on the RISC-V architecture, offering greater flexibility and community engagement than traditional closed architectures [14].
- The company is developing a proprietary path-tracing engine called Glow Stick, intended to integrate with popular rendering tools and provide high-precision sampling and physically based Monte Carlo integration [15][16].

Group 5: Challenges and Future Outlook
- Bolt faces significant challenges, including a mass-production timeline projected for late 2026 and the need to establish a robust software ecosystem around its hardware [17][18].
- If it delivers on its promises of visual fidelity and performance, the Zeus GPU could reshape graphics rendering in gaming and HPC applications [19].
The compute-leasing industry from CoreWeave's perspective
傅里叶的猫· 2025-06-09 13:40
Core Viewpoints
- The article discusses the rapid growth and potential of the computing-power leasing industry, viewed through the lens of CoreWeave, a major player in the sector [2][11].

Company Overview
- CoreWeave was founded in 2017, originally as a cryptocurrency mining company, and has since pivoted to AI cloud and infrastructure services, operating 32 data centers by the end of 2024 [2][3].
- The company has deployed over 250,000 GPUs, primarily NVIDIA products, and is a key provider of high-performance infrastructure services [2][3].

Business Model
- CoreWeave offers three main services: bare-metal GPU leasing, management software services, and application services, with GPU leasing as the core offering [3][4].
- Revenue comes primarily from two models: commitment contracts (96% of revenue) and on-demand payment, giving clients flexibility [4][5].

Financial Performance
- In 2024, CoreWeave's revenue reached $1.915 billion, a year-over-year increase of more than seven times; Q1 2025 revenue was $982 million, a roughly fourfold increase [8][9].
- The company has remaining performance obligations of $15.1 billion, indicating strong future revenue potential [8].

Competitive Advantages
- CoreWeave has optimized GPU utilization and efficiency, achieving significant performance gains in AI training and inference workloads [7].
- The company has a close relationship with NVIDIA, ensuring priority access to cutting-edge chips and technology [6][7].

Market Outlook
- The AI infrastructure market is projected to grow from $79 billion in 2023 to $399 billion by 2028, a compound annual growth rate of about 38% (see the quick check below) [11].
- The computing-power leasing sector is expected to play a crucial role in the digital economy, driven by increasing demand for AI capabilities [11][14].

Future Growth Strategies
- CoreWeave plans to expand its customer base, explore new industries, and deepen vertical integration through strategic partnerships [10].
- Management aims to leverage existing contracts and maintain a low-leverage asset structure to support growth [10].
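A quick arithmetic check of the growth figures quoted above (illustrative only; the ratio of Q1 2025 revenue to full-year 2024 is a comparison made here, not one drawn in the article):

```python
# Verify the quoted CAGR and put the quarterly run-rate in context.
market_2023, market_2028 = 79e9, 399e9
cagr = (market_2028 / market_2023) ** (1 / 5) - 1
print(f"implied AI-infrastructure CAGR 2023-2028: {cagr:.1%}")   # ~38%, matching the projection

revenue_2024, revenue_q1_2025 = 1.915e9, 0.982e9
print(f"Q1 2025 revenue is {revenue_q1_2025 / revenue_2024:.0%} of all of 2024's revenue")
```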