LLM inference
X @Polyhedra
Polyhedra · 2025-08-11 09:34
Zero-Knowledge Proofs (ZKP) Application
- ZKP allows service providers to prove the correctness of LLM inference without revealing model parameters [1]
- ZKP can address the risk of service providers secretly deploying smaller/cheaper models than promised (a toy sketch of this model-substitution problem follows below) [1]

zkGPT Overview
- The report introduces zkGPT, new work focused on fast zero-knowledge proving of LLM inference [1]
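To make the model-substitution problem concrete, here is a toy Python sketch. It is not a zero-knowledge proof and not zkGPT's protocol: a plain hash commitment exposes a swapped model only if the weights are revealed for re-hashing, which is exactly the gap a ZKP closes. All names, shapes, and values below are illustrative assumptions.

```python
# Toy illustration of the model-substitution problem (NOT a ZK proof).
import hashlib
import numpy as np

def commit(weights: np.ndarray) -> str:
    """Hash commitment to model parameters, published once by the provider."""
    return hashlib.sha256(weights.tobytes()).hexdigest()

# Provider advertises a large model and publishes a commitment to it.
advertised = np.random.default_rng(0).standard_normal((1024, 1024)).astype(np.float32)
published_commitment = commit(advertised)

# ...but secretly serves a smaller, cheaper model instead.
deployed = advertised[:256, :256].copy()

# A naive audit catches the swap, but only by revealing the weights.
# A ZKP instead proves "this output was computed with the committed
# weights" while keeping the parameters secret.
assert commit(deployed) != published_commitment
print("Deployed model does not match the published commitment.")
```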
X @Avi Chawla
Avi Chawla · 2025-08-06 06:31
Core Technique
- KV caching speeds up autoregressive LLM inference by storing the attention keys and values of previously processed tokens, so each decoding step only computes projections for the new token instead of recomputing them for the whole sequence (a minimal sketch follows below) [1]

Explanation Resource
- Avi Chawla provides a clear explanation of KV caching in LLMs with visuals [1]
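As a rough illustration of the idea, here is a minimal single-head attention decoder with a KV cache in NumPy. The head dimension, random projections, and function names are assumptions for the sketch, not any particular model's implementation.

```python
# Minimal single-head attention decoding with a KV cache (illustrative sketch).
import numpy as np

d = 8  # head dimension (assumed for the sketch)
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

k_cache, v_cache = [], []  # keys/values of all previously generated tokens

def decode_step(x: np.ndarray) -> np.ndarray:
    """One decoding step: project only the NEW token, reuse cached K/V."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    k_cache.append(k)   # without the cache, K and V for every past token
    v_cache.append(v)   # would be recomputed at every step
    K = np.stack(k_cache)                 # (t, d)
    V = np.stack(v_cache)                 # (t, d)
    scores = K @ q / np.sqrt(d)           # attend over all cached tokens
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax
    return weights @ V                    # attention output for the new token

for t in range(4):
    decode_step(rng.standard_normal(d))
    print(f"step {t}: attended over {len(k_cache)} cached tokens")
```

The cache trades memory for compute: each step does O(t) attention work against stored keys/values rather than re-projecting all t tokens, which is why it is a standard optimization for autoregressive decoding.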