Novel Quantum Low-Density Parity-Check (LDPC) Error-Correcting Codes
New quantum error-correcting code developed, with performance very close to the hashing bound
Ke Ji Ri Bao · 2025-09-30 07:55
Core Insights
- The Tokyo University of Science team has made a significant advance in quantum error correction by developing an efficient, scalable quantum low-density parity-check (LDPC) error-correcting code that remains highly stable in systems with hundreds of thousands of logical qubits, approaching theoretical limits [1][2]
- This breakthrough provides crucial technical support for large-scale fault-tolerant quantum computing and could accelerate practical applications in quantum chemistry, cryptanalysis, and complex optimization [1]

Group 1
- Current quantum computers can manipulate dozens of qubits, but solving real-world problems often requires millions of stable, reliable logical qubits [1]
- Existing quantum error correction methods generally suffer from high resource consumption and low efficiency, requiring many physical qubits to encode a small number of logical qubits, which severely limits system scalability [1]
- Many existing error correction codes have low coding rates and limited room for improvement, leaving a significant gap to the theoretical optimum known as the hashing bound [1]

Group 2
- The team overcame these challenges by proposing a new construction method that designs prototype LDPC codes with excellent error-correction characteristics and introduces affine arrangement-based techniques to increase the diversity of code structures [2]
- Unlike traditional LDPC codes defined over the binary finite field, the new approach uses non-binary finite fields, allowing each coding symbol to carry more information and thereby improving overall error-correction capability [2]
- The team transformed these prototype codes into a CSS-type quantum error-correcting code and developed an efficient joint decoding strategy that handles bit-flip and phase-flip errors simultaneously, unlike most previous methods, which correct only one error type at a time [2]

Group 3
- In large-scale numerical simulations, the new code achieves a bit error rate of 10^-4 in systems with hundreds of thousands of logical qubits, with performance very close to the hashing bound [2]
- Importantly, the computational complexity of decoding is proportional to the number of physical qubits, so resource overhead grows only linearly as system size increases, indicating good engineering feasibility [2]
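For context on the benchmark cited above: for the standard depolarizing channel with total error probability p (each Pauli error X, Y, Z occurring with probability p/3), the hashing bound has a well-known closed form, R = 1 - H2(p) - p·log2(3), where H2 is the binary entropy. A minimal sketch of this textbook formula (the function name and structure are ours, not from the article):

```python
import math

def hashing_bound(p: float) -> float:
    """Hashing bound on the achievable coding rate k/n for the
    depolarizing channel with total error probability p, where each
    Pauli error X, Y, Z occurs with probability p/3:
        R = 1 - H2(p) - p * log2(3)
    A positive R means reliable encoding at that rate is possible."""
    if p <= 0.0:
        return 1.0  # noiseless channel: no redundancy needed
    # Binary entropy H2(p) of the error/no-error split
    h2 = -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)
    # Extra p*log2(3) term: which of the three Pauli errors occurred
    return 1.0 - h2 - p * math.log2(3)

# At 10% depolarizing noise, rates up to about 0.37 are achievable;
# the bound crosses zero near p ~ 0.189, the hashing threshold.
print(hashing_bound(0.1))
```

Codes "close to the hashing bound", as reported here, operate at rates approaching this R for the noise level simulated; the gap to the bound is a standard figure of merit for quantum LDPC code families.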