Expander
X @Polyhedra
Polyhedra· 2025-11-07 18:00
Expander development continues to advance with steady optimization efforts. Here’s this week’s update from the engineering team on FPGA acceleration for Expander. https://t.co/ppaarGKRPz ...
X @Polyhedra
Polyhedra· 2025-10-31 17:00
We’re keeping the momentum strong! This week, the dev team has been working behind the scenes on GPU and FPGA acceleration for Expander. https://t.co/6cTHssW3E6 ...
X @Polyhedra
Polyhedra· 2025-10-24 17:00
This week’s Expander progress focused on GPU & FPGA acceleration. Let's dive into the updates. https://t.co/qO6scaDNNz ...
X @Polyhedra
Polyhedra· 2025-10-16 17:00
Expander keeps evolving. This week’s update brings continued improvements as we move forward with our roadmap. https://t.co/2dvQkPLqFs ...
X @Polyhedra
Polyhedra· 2025-10-09 17:00
This week brought major progress in FPGA acceleration for Expander, focusing on end-to-end integration and performance scaling. Here’s a quick breakdown of what we accomplished. https://t.co/uLrNRxUV6C ...
X @Polyhedra
Polyhedra· 2025-09-18 17:00
This week we delivered key improvements across ZKML and Expander. Here’s a breakdown of what was shipped. https://t.co/i6YGwdpNqH ...
X @Polyhedra
Polyhedra· 2025-09-11 17:00
This week, Expander takes a big leap forward with Multi-GPU support! Here’s a breakdown of what this upgrade brings and how it boosts performance. https://t.co/add9c8QxWF ...
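The post above announces multi-GPU support but does not include the breakdown itself, so here is a minimal, generic sketch of the multi-GPU fan-out pattern such an upgrade typically relies on: independent chunks of proving work dispatched to each device on its own stream. All names here (e.g. run_proof_chunk) are hypothetical placeholders for illustration, not Expander's actual API.

```cuda
// Generic multi-GPU fan-out sketch: one worker per device, each with its own
// stream. run_proof_chunk is a hypothetical stand-in for real proving work.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void run_proof_chunk(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;  // placeholder computation
}

int main() {
    int device_count = 0;
    cudaGetDeviceCount(&device_count);

    const int n = 1 << 20;
    std::vector<cudaStream_t> streams(device_count);
    std::vector<float*> d_in(device_count), d_out(device_count);

    // Launch one independent chunk of work per GPU.
    for (int dev = 0; dev < device_count; ++dev) {
        cudaSetDevice(dev);
        cudaStreamCreate(&streams[dev]);
        cudaMalloc(&d_in[dev], n * sizeof(float));
        cudaMalloc(&d_out[dev], n * sizeof(float));
        cudaMemset(d_in[dev], 0, n * sizeof(float));  // dummy input data
        run_proof_chunk<<<(n + 255) / 256, 256, 0, streams[dev]>>>(d_in[dev], d_out[dev], n);
    }

    // Wait for every device to finish its chunk, then clean up.
    for (int dev = 0; dev < device_count; ++dev) {
        cudaSetDevice(dev);
        cudaStreamSynchronize(streams[dev]);
        cudaFree(d_in[dev]);
        cudaFree(d_out[dev]);
        cudaStreamDestroy(streams[dev]);
    }
    printf("dispatched %d proof chunk(s) across %d GPU(s)\n", device_count, device_count);
    return 0;
}
```

The performance benefit in this pattern comes from the chunks being independent, so each GPU runs to completion without cross-device synchronization inside the loop.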
X @Polyhedra
Polyhedra· 2025-09-02 02:00
Product Improvement
- Expander received several enhancements last week [1]
- These enhancements strengthened Expander's performance and reliability [1]
X @Polyhedra
Polyhedra· 2025-09-01 09:50
Let's dive into the Dev Update from Polyhedra last week! Expander explores GPU CI integration and the evolution of the bi-KZG implementation. Meanwhile, the Polyhedra i-D Project introduces the ZKML Recursive Verifier integration and the new zkcuda_recursion submodule. https://t.co/2Og33G3Bxl ...
X @Polyhedra
Polyhedra· 2025-08-18 02:28
Performance Improvement
- CUDA 13.0 compatibility fix for Fiat-Shamir [1]
- Shared memory optimization achieves 1 TB/s bandwidth [1]
- Achieved 9,000 zk proofs/sec on m31ext3 [1]
- GPU acceleration for MSM on KZG commitments [1]
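As a rough illustration of the shared-memory optimization mentioned in the list above, the sketch below shows the standard pattern behind such bandwidth gains: each block stages a tile of global memory in fast on-chip shared memory and reuses it there, so each element crosses the global-memory bus only once. This is an illustrative reduction with assumed names, not Expander's actual kernel or its MSM/KZG code.

```cuda
// Shared-memory staging sketch: load a tile once, reduce it on-chip.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void block_sum(const float* in, float* out, int n) {
    extern __shared__ float tile[];           // per-block shared-memory tile
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;

    tile[tid] = (i < n) ? in[i] : 0.0f;       // one coalesced global load per element
    __syncthreads();

    // Tree reduction entirely in shared memory: no further global traffic.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) tile[tid] += tile[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = tile[0];  // one global store per block
}

int main() {
    const int n = 1 << 20, threads = 256, blocks = (n + threads - 1) / threads;
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, blocks * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));   // dummy input in place of real data
    block_sum<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_out, n);
    cudaDeviceSynchronize();
    cudaFree(d_in);
    cudaFree(d_out);
    printf("reduced %d elements in %d blocks\n", n, blocks);
    return 0;
}
```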