NVIDIA GB200 NVL72
Bitdeer Technologies (BTDR) Climbs 37.6% on AI Computing Expansion
Yahoo Finance· 2026-01-20 02:06
Core Insights
- Bitdeer Technologies Group (NASDAQ:BTDR) posted a significant stock price increase of 37.6% week-on-week following its deployment of NVIDIA GB200 NVL72 systems in Malaysia, which aligns with its goal to simplify and scale AI computing globally [1][2]

Group 1: Company Developments
- The deployment of NVIDIA GB200 NVL72 infrastructure is a crucial step in Bitdeer's strategy to strengthen its global AI infrastructure, enabling support for demanding AI workloads [2][4]
- Bitdeer is expanding its AI data center footprint with multiple projects, including a 13 MW data center in Washington, a 37 MW center in Tennessee, a 570 MW center in Clarington, and a 175 MW facility in Tydal, Norway; the Washington and Tennessee facilities are being converted from cryptocurrency mining to GPU-optimized AI data centers [3]

Group 2: Strategic Vision
- The Head of AI at Bitdeer emphasized that the company is building a robust foundation for the entire AI lifecycle, from model training to application deployment, underlining the vision of "AI Power, Simplified" [4]
A Year Later, DeepSeek-R1's Per-Token Cost Has Fallen to 1/32 of Its Original Level
机器之心 (Jiqizhixin)· 2026-01-09 06:16
Core Insights
- DeepSeek recently updated its R1 paper, expanding it from 22 pages to 86 pages and providing more detailed insights into its training pipeline and data validation methods [1]

Group 1: Model Specifications and Performance
- DeepSeek-R1, released on January 20, 2025, features 671 billion parameters and employs a MoE architecture, significantly enhancing training efficiency [4]
- The cost per token for the R1 model has fallen to 1/32 of its launch-time level within a year, showcasing remarkable cost-efficiency improvements [6][18]
- NVIDIA's collaboration with DeepSeek has led to a 36-fold increase in throughput since January 2025, further reducing inference costs [18]

Group 2: Technological Innovations
- NVIDIA's GB200 NVL72 system, designed for high-density workloads, connects 72 Blackwell GPUs and provides up to 1800 GB/s of bidirectional bandwidth [11]
- The Blackwell architecture includes hardware acceleration for the NVFP4 data format, enhancing precision and performance during token generation [12]
- The latest NVIDIA TensorRT-LLM software significantly boosts inference performance across a range of input/output sequence lengths [10][14]

Group 3: Performance Metrics and Enhancements
- The throughput of DeepSeek-R1 has improved dramatically, with Blackwell GPUs achieving up to 2.8 times higher throughput over the last three months [17]
- The use of multi-token prediction (MTP) and NVFP4 on the NVIDIA HGX B200 platform has delivered substantial performance gains while maintaining accuracy [21][24]
- Continuous optimization of the entire technology stack by NVIDIA aims to improve the efficiency of large language models and increase token throughput on existing hardware [30]
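The cost claims above follow from simple arithmetic: at a fixed hourly hardware cost, per-token cost is inversely proportional to serving throughput. A minimal sketch, using made-up GPU-hour price and throughput figures (the article only reports the aggregate 36x throughput gain and roughly 1/32 cost reduction):

```python
# Illustrative only: the $40/hr price and 300 tok/s baseline are assumptions,
# not figures from the article. The point is the inverse relationship between
# throughput and per-token cost on fixed-cost hardware.

def cost_per_million_tokens(gpu_hour_cost_usd: float,
                            tokens_per_second: float) -> float:
    """Cost to generate one million tokens at a given sustained throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hour_cost_usd / tokens_per_hour * 1_000_000

# Hypothetical baseline vs. a 36x throughput improvement on the same hardware.
baseline = cost_per_million_tokens(gpu_hour_cost_usd=40.0, tokens_per_second=300)
improved = cost_per_million_tokens(gpu_hour_cost_usd=40.0, tokens_per_second=300 * 36)

print(f"baseline:  ${baseline:.2f} per 1M tokens")
print(f"improved:  ${improved:.2f} per 1M tokens")
print(f"reduction: {baseline / improved:.0f}x")
```

With these assumed inputs, a 36x throughput gain maps directly to a 36x cost reduction; the article's 1/32 figure also reflects other factors such as hardware pricing changes.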
SuperX Unveils Modular AI Factory Solution to Reshape AI Infrastructure with an Estimated Deployment Cycle of Under 6 Months
Prnewswire· 2025-10-01 10:35
Core Insights
- Super X AI Technology Limited has launched the SuperX Modular AI Factory, a data center-scale solution designed to address the challenges of traditional AI data center construction, including long lead times, high costs, and high energy consumption [1][2][5]

Group 1: Product Features and Innovations
- The SuperX Modular AI Factory offers a prefabricated, integrated solution that reduces deployment time to under six months, significantly faster than the typical 18-to-24-month cycle for traditional data centers [2][5]
- The solution features a SuperX NeuroBlock core compute unit capable of supporting up to 24 NVIDIA GB200 NVL72 systems with a power capacity of up to 3.5 MW, achieving a compute density seven times higher than traditional solutions [3][6][8]
- The modular architecture allows for on-demand scalability, enabling clients to expand their infrastructure as needed, which aligns with the rapid pace of AI business models [3][8]

Group 2: Efficiency and Sustainability
- The SuperX Modular AI Factory utilizes High-Voltage Direct Current (HVDC) technology, boosting end-to-end power efficiency to over 98.5% and driving overall Power Usage Effectiveness (PUE) as low as 1.15, resulting in over 23% energy savings compared to traditional air-cooled systems [6][8]
- Factory-prefabricated components, including the SuperX CryoPod cooling system and the SuperX Energy Vault for green energy storage, contribute to more sustainable operation by minimizing energy consumption and carbon emissions [3][4][8]

Group 3: Strategic Positioning
- The launch of the SuperX Modular AI Factory marks a strategic upgrade for the company, transitioning from an AI infrastructure integrator to a solution provider that sets new standards for AI data centers [5][6]
- By transforming complex custom projects into standardized products, the company aims to enhance clients' return on investment and reduce the market risks associated with traditional data center deployments [5][6]
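The PUE and savings figures above are consistent with the standard definition PUE = total facility energy / IT equipment energy. A short sketch of the arithmetic, assuming a traditional air-cooled baseline PUE of 1.50 (the article gives only the 1.15 figure and the >23% savings claim, so the baseline is an assumption):

```python
# PUE arithmetic sketch. The 1.50 baseline for a traditional air-cooled
# facility is an assumed value chosen for illustration; only the 1.15 PUE
# and the >23% savings claim come from the article.

def facility_energy_kwh(it_energy_kwh: float, pue: float) -> float:
    """Total facility energy required to deliver a given IT energy load."""
    return it_energy_kwh * pue

it_load = 1000.0  # kWh of IT work; arbitrary illustrative amount

traditional = facility_energy_kwh(it_load, pue=1.50)  # assumed baseline
modular = facility_energy_kwh(it_load, pue=1.15)      # figure from the article

savings = 1 - modular / traditional
print(f"traditional: {traditional:.0f} kWh, modular: {modular:.0f} kWh")
print(f"savings: {savings:.1%}")
```

With a 1.50 baseline, the savings work out to about 23.3%, matching the "over 23%" claim; a lower assumed baseline would yield a smaller figure.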
Farewell to the 54V Era, Onward to 800V: Data Centers Undergo a Power-Supply Revolution
36Kr· 2025-08-07 11:21
Core Insights
- The rapid growth of AI applications like ChatGPT and Claude is driving an exponential increase in power demand at global AI data centers, pushing them toward critical power limits [1]
- The power consumption of AI data centers is shifting from traditional levels of 20-30 kW per rack to levels reaching 500 kW and even 1 MW [1][2]
- NVIDIA has announced the formation of an 800V HVDC power supply alliance aimed at developing next-generation AI data centers capable of supporting 1 MW per rack by 2027 [4]

Group 1: Power Demand and Infrastructure
- AI workloads are causing data center power demands to surge, with traditional 54V power systems becoming inadequate for modern AI factories that require megawatt-level power [2]
- The transition to 800V HVDC systems is seen as essential to reduce energy losses and improve overall efficiency in data centers [1][3]
- The current reliance on 54V systems is running into physical limits on space and efficiency, necessitating a shift to higher-voltage systems [2][3]

Group 2: Technological Developments
- The 800V HVDC architecture is expected to improve end-to-end energy efficiency by up to 5% and reduce maintenance costs by up to 70% [5]
- NVIDIA's collaboration with partners across the energy ecosystem aims to overcome previous barriers to the widespread adoption of HVDC technology in data centers [4]
- Domestic companies like InnoSilicon and Changdian Technology are also advancing their technologies to align with the 800V HVDC trend, indicating a competitive landscape [6][7]

Group 3: Semiconductor Innovations
- The global supply of Gallium Nitride (GaN) is becoming increasingly strained, with companies like InnoSilicon positioned to leverage this scarcity within NVIDIA's supply chain [9]
- GaN devices offer superior performance in high-voltage applications compared to traditional silicon-based semiconductors, making them ideal for the evolving demands of AI data centers [11][12]
- The integration of GaN technology is expected to significantly enhance power density and efficiency in the new 800V HVDC systems [12]
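The physics behind the 54V-to-800V shift is straightforward: for a fixed rack power P, conductor current is I = P / V, and resistive loss in the distribution path scales as I²R. A minimal sketch using the 1 MW rack target cited above; the path resistance is an arbitrary assumed value:

```python
# Why higher bus voltage cuts distribution losses: I = P / V, loss = I^2 * R.
# The 0.0001-ohm path resistance is an illustrative assumption, not a figure
# from the article.

def bus_current_amps(power_watts: float, voltage_volts: float) -> float:
    return power_watts / voltage_volts

def resistive_loss_watts(current_amps: float, resistance_ohms: float) -> float:
    return current_amps ** 2 * resistance_ohms

rack_power = 1_000_000.0  # 1 MW per-rack target for 2027
r_path = 0.0001           # assumed distribution-path resistance, ohms

i54 = bus_current_amps(rack_power, 54.0)
i800 = bus_current_amps(rack_power, 800.0)

print(f"54 V bus current:  {i54:,.0f} A")
print(f"800 V bus current: {i800:,.0f} A")
loss_ratio = resistive_loss_watts(i54, r_path) / resistive_loss_watts(i800, r_path)
print(f"I^2*R loss ratio (54 V vs 800 V): {loss_ratio:.0f}x")
```

At 1 MW, a 54 V bus would have to carry over 18,000 A versus 1,250 A at 800 V, and for the same path resistance the I²R loss is (800/54)² ≈ 219 times higher at 54 V, which is why busbar bulk and heat make the lower voltage impractical at this scale.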
CoreWeave Becomes First Hyperscaler to Deploy NVIDIA GB300 NVL72 Platform
Prnewswire· 2025-07-03 16:14
Core Viewpoint
- CoreWeave is the first AI cloud provider to deploy NVIDIA's latest GB300 NVL72 systems, aiming for significant global scaling of these deployments [1][5]

Performance Enhancements
- The NVIDIA GB300 NVL72 offers a 10x boost in user responsiveness, a 5x improvement in throughput per watt compared to the previous NVIDIA Hopper architecture, and a 50x increase in output for reasoning-model inference [2]

Technological Collaboration
- CoreWeave collaborated with Dell, Switch, and Vertiv on the initial deployment of the NVIDIA GB300 NVL72 systems, enhancing speed and efficiency for AI cloud services [3]

Software Integration
- The GB300 NVL72 deployment is integrated with CoreWeave's cloud-native software stack, including CoreWeave Kubernetes Service (CKS) and Slurm on Kubernetes (SUNK), along with hardware-level data integration through Weights & Biases' platform [4]

Market Leadership
- CoreWeave continues to lead in providing first-to-market access to advanced AI infrastructure, expanding its offerings with the new NVIDIA GB300 systems alongside its existing fleet [5]

Benchmark Achievement
- In June 2025, CoreWeave set a record in the MLPerf® Training v5.0 benchmark using nearly 2,500 NVIDIA GB200 Grace Blackwell Superchips, completing training of a complex model in just 27.3 minutes [6]

Company Background
- CoreWeave, recognized as one of the TIME100 most influential companies and featured in the Forbes Cloud 100 ranking in 2024, has operated data centers across the US and Europe since 2017 [7]
Nebius Is The Only Pure Play On Europe's AI Sovereignty
Seeking Alpha· 2025-06-23 16:59
Group 1
- Nebius has successfully executed a $1 billion two-tranche convertible note in just eight weeks, a feat that larger cloud companies have taken significantly longer to achieve [1]
- The company has made immediate deliveries of NVIDIA GB200 NVL72 systems, indicating a strong operational capability in the AI and cloud computing space [1]
- Nebius is positioning itself as a key player in the AI market, leveraging its expertise in machine learning algorithms and model deployment [1]