Cango Reports Q2 Earnings: Improved Adjusted EBITDA, 50 EH/s Achieved, Now Among Largest Bitcoin Miners Globally - Cango (NYSE:CANG)
Benzinga· 2025-09-17 12:44
Cango Inc. CANG has emerged as a significant player in the competitive Bitcoin mining industry, reporting a rapid operational scaling to 50 EH/s of computing power by the close of Q2 2025. This expansion places Cango among the global leaders, with the company estimating that its deployed hash rate represented 6% of the global Bitcoin network at the end of June. For the three months ended June 30, 2025, Cango mined a total of 1,404.4 Bitcoin at an average cost to mine, excluding depreciation of mining ...
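As a back-of-envelope check on the two figures reported above (a minimal sketch; the function name is mine, and the 6% share is treated as exact), the implied size of the whole Bitcoin network follows directly:

```python
# Cango's 50 EH/s is stated to be ~6% of the global Bitcoin network.
# Dividing the deployed hash rate by its network share implies the
# total network hash rate.
def implied_global_hashrate(deployed_ehs: float, share: float) -> float:
    """Implied global hash rate in EH/s, given a deployed hash rate
    and the fraction of the network it represents."""
    return deployed_ehs / share

global_ehs = implied_global_hashrate(50.0, 0.06)
print(f"Implied global network: {global_ehs:.0f} EH/s")  # roughly 833 EH/s
```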
Prediction: This "Ten Titans" Growth Stock Will Join Nvidia, Microsoft, Apple, Alphabet, Amazon, Broadcom, and Meta Platforms in the $2 Trillion Club by 2030
Yahoo Finance· 2025-09-15 09:11
Key Points
- Oracle’s cloud investments are paying off big time.
- Oracle has set clear five-year expectations.
- If it delivers, it’s reasonable to assume the stock could more than double from here.
Oracle (NYSE: ORCL) surged a mind-numbing 36% on Wednesday to close the session at a market cap of $922 billion. Five years ago, Oracle's market cap was under $200 billion. Now, there's reason to believe it can surpass $2 trillion by 2030. If that prediction comes tru ...
AAI 2025 | Powering AI at Scale: OCI Superclusters with AMD
AMD· 2025-07-15 16:01
AI Workload Challenges & Requirements
- AI workloads differ from traditional cloud workloads in their need for high throughput and low latency, especially during large language model training, where thousands of GPUs communicate with one another [2][3][4]
- Network glitches such as packet drops, congestion, or latency spikes can stall the entire training run, increasing training time and cost [3][5]
- Networks must support clusters from small to very large for both inference and training workloads, requiring high performance and reliability [8]
- Networks should scale up within racks and scale out across data halls and data centers, while remaining autonomous and resilient with auto-recovery capabilities [9][10]
- Networks must handle growing East-West traffic, accommodating data transfer from sources such as on-premises data centers and other cloud locations, and are expected to scale 30% to 40% [10]
OCI's Solution: Backend and Frontend Networks
- OCI addresses AI workload requirements with a two-part network architecture: a backend network for high-performance AI and a frontend network for data ingestion [11][12]
- The backend network, designed for RDMA-intensive workloads, supports AI, HPC, Oracle databases, and recommendation engines [13]
- The frontend network provides high-throughput, reliable connectivity within OCI and to external networks, facilitating data transfer from various locations [14]
OCI's RDMA Network Performance & Technologies
- OCI uses RDMA powered by RoCEv2, enabling high-performance, low-latency RDMA traffic on standard Ethernet hardware [18]
- OCI's network supports multi-class RDMA workloads using queuing techniques in its switches, accommodating the differing requirements of training, HPC, and databases on the same physical network [20]
- Independent studies show OCI's RDMA network achieves near line-rate throughput (100 Gbps) with round-trip delays under 10 microseconds for HPC workloads [23]
- OCI testing demonstrates close to 96% of line rate (400 Gbps throughput) on MI300 clusters, showcasing efficient network utilization [25]
Future Roadmap: Zettascale Clusters with AMD
- OCI is partnering with AMD to build a zettascale MI300X cluster of more than 131,000 GPUs, delivering nearly triple the compute power and 50% higher memory bandwidth [26]
- The MI300X cluster will feature 288 GB of HBM3 memory, enabling customers to train larger models and improve inferencing [26]
- The new system will use AMD AI NICs, enabling innovative standards-based RoCE networking at peak performance [27]