Bandwidth
X @wale.moca 🐳
wale.moca 🐳· 2025-12-22 07:26
RT Kelano (@kelanoo): I see a lot of people jumping on the "we don't need another chain" wagon. If that's you, imo, you don't understand the fundamental idea behind this move: bandwidth. Polygon is great, no shades, but if Polymarket becomes the biggest betting site in the world, they'll need high TPS and custom block space, which eventually Polygon won't be able to accommodate. Also, sharing blockspace as the big dawg on a chain isn't sexy, and is a hindrance more often than not. This has nothing to do with a token/airdrop, ...
AMD Versal™ Network on Chip Performance Tuning
AMD· 2025-11-17 19:00
Hello, and welcome. In this video, we'll guide you through an overview of the AMD Versal Network on Chip (NoC) and discuss key strategies for its performance tuning. We will first start with an introduction to the AMD Versal NoC, followed by the NoC architecture and terminology. Then we will show you how to access the Versal NoC for your designs. And finally, we will go over the important NoC settings needed to achieve a desired bandwidth and latency. Let's get started. The AMD Versal Network on ...
X @Polyhedra
Polyhedra· 2025-11-07 18:00
4/NoC Performance Limitation: Current system performance is constrained by the on-chip Network-on-Chip (NoC) bandwidth, which is limited to around 250 MHz. ...
X @Polyhedra
Polyhedra· 2025-10-24 17:00
Technology & Performance Improvement
- FPGA is used to resolve the efficiency bottleneck in the GPU's random accumulation writes during the prepare phase [1]
- Effective bandwidth increased from 1 TB/s to 6 TB/s by using FPGA, a 500% increase [1]
X @mert | helius.dev
mert | helius.dev· 2025-08-18 13:33
Network Performance
- Latency increases [1]
- Bandwidth decreases [1]
X @Solana
Solana· 2025-07-23 01:06
Solana Ecosystem
- Solana celebrates reaching the 60 million CU (compute units) milestone [1]
- The Block Logic validator is performing strongly [1]
Technical Progress
- Internal bandwidth increased [1]
What every AI engineer needs to know about GPUs — Charles Frye, Modal
AI Engineer· 2025-07-20 07:00
AI Engineering & GPU Utilization
- AI engineering is shifting towards tighter integration and self-hosting of language models, increasing the need to understand GPU hardware [6][7]
- The industry should focus on high bandwidth, not low latency, when utilizing GPUs [8]
- GPUs optimize for math bandwidth over memory bandwidth, emphasizing computational operations [9]
- Low-precision matrix-matrix multiplications are key to fully utilizing GPU potential [10]
- Tensor cores, specialized for low-precision matrix-matrix multiplication, are crucial for efficient GPU usage [6][37]

Hardware & Performance
- GPUs achieve parallelism far exceeding CPUs: the NVIDIA H100 SXM GPU runs over 16,000 parallel threads at 5 cents per thread, compared to an AMD EPYC CPU's two threads per core at approximately 1 watt per thread [20][21]
- GPUs offer faster context switching than CPUs, happening every clock cycle [23]
- Bandwidth improvement increases as the square of latency improvement, favoring bandwidth-oriented hardware [25][26]

Model Optimization
- Small models can be more hardware-sympathetic, potentially matching the quality of larger models with techniques like verification and multiple generations [32][33]
- Multi-token prediction and multi-sample queries can become nearly "free" due to tensor core capabilities [36]
- Generating multiple samples or tokens can improve performance by leveraging matrix-matrix operations [39]
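The "nearly free" multi-sample claim above can be illustrated with a roofline-style estimate: a batch-1 decode step is memory-bound, since every weight must be read from memory once per step, so serving a whole batch reuses the same weight traffic until compute saturates. A minimal sketch, using illustrative H100-like numbers (the bandwidth and throughput constants and the `step_time` helper are assumptions for illustration, not figures from the talk):

```python
# Roofline-style estimate of time per decode step for one weight matrix,
# as a function of batch size. Hardware numbers are illustrative assumptions.
BYTES_PER_PARAM = 2          # FP16 weights
MEM_BW = 3.35e12             # bytes/s, assumed HBM bandwidth
COMPUTE = 990e12             # FLOP/s, assumed FP16 tensor-core throughput

def step_time(n, d, batch):
    """Estimated time to apply an (n x d) weight matrix to a batch of activations."""
    mem_time = n * d * BYTES_PER_PARAM / MEM_BW   # weights read once, shared by the batch
    compute_time = 2 * n * d * batch / COMPUTE    # 2 FLOPs per multiply-accumulate
    return max(mem_time, compute_time)            # whichever resource saturates first

t1 = step_time(8192, 8192, 1)
t64 = step_time(8192, 8192, 64)
# while the step stays memory-bound, 64 samples cost roughly the same as 1
```

In this model, batch size only starts to cost extra time once `compute_time` overtakes `mem_time`, which is the sense in which extra samples or tokens come "for free" on tensor-core hardware.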
X @Starlink
Starlink· 2025-07-18 19:36
BANDwidth 🛰️🎸 Bob Plankers (@plankers): Band: "Hey, is that a Starlink? Could we connect to it? Cell sucks here and we need to make a call." Me: "Absolutely." (I printed a mount for it to use with an Irwin clamp, trying it out) https://t.co/qi7pvQOoHg ...
X @Solana
Solana· 2025-07-17 14:53
RT Mike | heymike.sol 🎒🪽 (@heymike777): Increase Bandwidth, Reduce Latency ...