Expander
X @Polyhedra · 2025-11-07 18:00
Development Progress
- Expander development is advancing with ongoing optimization efforts [1]
- Engineering team is focusing on FPGA acceleration for Expander [1]
X @Polyhedra · 2025-10-31 17:00
We’re keeping the momentum strong! This week, the dev team has been working behind the scenes on GPU and FPGA acceleration for Expander. https://t.co/6cTHssW3E6 ...
X @Polyhedra · 2025-10-24 17:00
This week’s Expander progress focused on GPU & FPGA acceleration. Let's dive into the updates. https://t.co/qO6scaDNNz ...
X @Polyhedra · 2025-10-16 17:00
Product Updates
- Expander continues to evolve with ongoing improvements [1]
- The update aligns with Expander's roadmap [1]
X @Polyhedra · 2025-10-09 17:00
FPGA Acceleration Progress
- Major progress in FPGA acceleration for Expander, focusing on end-to-end integration and performance scaling [1]
Project Focus
- The project is focused on end-to-end integration and performance scaling [1]
X @Polyhedra · 2025-09-18 17:00
Product Development
- This week the company delivered key improvements across ZKML and Expander [1]
Technology Focus
- The company's focus areas include ZKML (Zero-Knowledge Machine Learning) and Expander [1]
X @Polyhedra · 2025-09-11 17:00
Performance Improvement
- Expander introduces Multi-GPU support, significantly boosting performance [1]
- The upgrade represents a big leap forward for Expander [1]
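The post announces multi-GPU support without implementation detail. As a purely hypothetical sketch of how independent proving tasks could be fanned out across several devices (all names here, such as `prove_on_gpu`, are illustrative stand-ins, not Expander's API), assuming a simple round-robin assignment:

```python
# Hypothetical sketch: round-robin distribution of independent proof
# tasks across GPUs. NOT Expander's actual multi-GPU scheduler.
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle

NUM_GPUS = 4

def prove_on_gpu(gpu_id: int, task: int) -> tuple:
    # A real implementation would launch the proof kernel on device
    # `gpu_id`; here we just record the assignment.
    return (gpu_id, task)

def prove_all(tasks):
    gpu_ids = cycle(range(NUM_GPUS))  # round-robin device assignment
    with ThreadPoolExecutor(max_workers=NUM_GPUS) as pool:
        futures = [pool.submit(prove_on_gpu, g, t)
                   for g, t in zip(gpu_ids, tasks)]
        return [f.result() for f in futures]

results = prove_all(range(8))
# Task t lands on GPU t % NUM_GPUS.
assert results == [(t % NUM_GPUS, t) for t in range(8)]
```

The design choice illustrated is that proofs for distinct statements are embarrassingly parallel, so near-linear scaling across GPUs is plausible for batched workloads; intra-proof parallelism would require a very different scheme.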
X @Polyhedra · 2025-09-02 02:00
Product Improvement
- Expander received several enhancements last week [1]
- These enhancements strengthened Expander's performance and reliability [1]
X @Polyhedra · 2025-09-01 09:50
Let's dive into the Dev Update from Polyhedra last week! Expander explores GPU CI integration and the evolution of the bi-KZG implementation. Meanwhile, the Polyhedra i-D Project introduces the ZKML Recursive Verifier integration and the new zkcuda_recursion submodule. https://t.co/2Og33G3Bxl ...
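For context on the bi-KZG mention: bi-KZG is a bivariate variant of the KZG polynomial commitment scheme. The standard univariate construction it builds on (textbook KZG, not Expander-specific) works as follows, over a pairing $e : G_1 \times G_2 \to G_T$ with generators $g, h$ and a trusted setup publishing $(g, g^{\tau}, \dots, g^{\tau^d})$ and $h, h^{\tau}$ for a secret $\tau$:

```latex
\begin{align*}
  C &= g^{\varphi(\tau)}
    && \text{(commitment to } \varphi(X),\ \deg \varphi \le d\text{)} \\
  q(X) &= \frac{\varphi(X) - \varphi(z)}{X - z}
    && \text{(quotient for an opening at } z\text{)} \\
  \pi &= g^{q(\tau)}
    && \text{(evaluation proof)} \\
  e\!\left(C \cdot g^{-\varphi(z)},\, h\right)
    &= e\!\left(\pi,\, h^{\tau} \cdot h^{-z}\right)
    && \text{(pairing check by the verifier)}
\end{align*}
```

The bivariate version commits to $\varphi(X, Y)$ using a setup over two secrets; the tweet does not specify which design decisions Expander's bi-KZG implementation makes on top of this.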
X @Polyhedra · 2025-08-18 02:28
Performance Improvement
- CUDA 13.0 compatibility fix for Fiat-Shamir [1]
- Shared memory optimization achieves 1 TB/s bandwidth [1]
- Achieved 9,000 zk proofs/sec on m31ext3 [1]
- GPU acceleration for MSM on KZG commitments [1]
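For readers unfamiliar with "m31ext3": it denotes a degree-3 extension of the Mersenne-31 field, $p = 2^{31} - 1$, whose special shape makes modular reduction cheap (since $2^{31} \equiv 1 \pmod p$, the high bits fold back additively). A minimal sketch of base-field arithmetic, assuming inputs already reduced below $p$ (illustrative only, not Expander's implementation):

```python
# Sketch of Mersenne-31 field arithmetic, p = 2^31 - 1.
# Because 2^31 ≡ 1 (mod p), reduction is a shift-and-add "fold"
# rather than a general division. Not Expander's actual code.

P = (1 << 31) - 1  # Mersenne prime 2^31 - 1

def m31_add(a: int, b: int) -> int:
    """Addition mod p, for a, b < p."""
    s = a + b
    return s - P if s >= P else s

def m31_mul(a: int, b: int) -> int:
    """Multiplication mod p via two Mersenne folds."""
    x = a * b
    x = (x >> 31) + (x & P)  # first fold: < 2^62 down to < 2^32
    x = (x >> 31) + (x & P)  # second fold: down to <= 2^31
    return x - P if x >= P else x

# Sanity check against plain modular arithmetic.
a, b = 2059196144, 305419896
assert m31_add(a, b) == (a + b) % P
assert m31_mul(a, b) == (a * b) % P
```

The fold trick is why Mersenne-prime fields are attractive on GPUs: reduction is branch-light shift/mask/add work, which matches the throughput figures the post cites.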