Workflow
DeepSeek "Quietly" Launches a New Model That May Trigger an Optical Computing Hardware Revolution

Core Insights
- DeepSeek has launched a new multimodal model, DeepSeek-OCR, which has sparked significant industry discussion about its potential applications in AI and quantum computing [1]
- The model's visual encoder is noted for its efficient decoding capabilities, offering a clear technical pathway for integrating optical and quantum computing into large language models (LLMs) [1][2]

Group 1: Technological Innovations
- DeepSeek-OCR introduces "Contexts Optical Compression," allowing text to be processed as images, theoretically enabling infinite context and achieving token compression of 7-20x [2][3]
- The model maintains 97% decoding accuracy at 10x compression and 60% accuracy at 20x compression, which is crucial for implementing memory and forgetting mechanisms in LLMs [2][3]

Group 2: Implications for Optical Computing
- The technique reduces the number of data segmentation and reassembly operations, lowering overall computational load and pressure on backend hardware [3][4]
- DeepSeek-OCR's approach may facilitate the integration of optical computing chips with large models, leveraging the high parallelism and low power consumption of optical technologies [3][4]

Group 3: Industry Challenges and Developments
- Current challenges for optical computing include the need for advanced photonic-electronic integration and a mature software ecosystem to support large-scale development [5]
- Key players in the optical computing space include domestic companies such as Turing Quantum and international firms such as Lightmatter and Cerebras Systems, with Turing Quantum making strides in thin-film lithium niobate technology [5]
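The compression/accuracy trade-off reported above can be made concrete with a small back-of-the-envelope sketch. This is an illustrative calculation only, not DeepSeek's actual API: the function name and accuracy table are assumptions built from the figures cited in the article (97% decoding accuracy at 10x compression, 60% at 20x).

```python
import math

# Reported operating points from the article (assumed, illustrative):
# decoding accuracy at a given optical-compression ratio.
REPORTED_ACCURACY = {10: 0.97, 20: 0.60}

def vision_token_budget(text_tokens: int, compression_ratio: int) -> int:
    """Estimate the vision tokens needed to carry `text_tokens` of text
    when text is rendered as an image and compressed by the given ratio."""
    return math.ceil(text_tokens / compression_ratio)

# A 100,000-token document at 10x compression fits in ~10,000 vision
# tokens while retaining ~97% decoding accuracy; at 20x it shrinks to
# ~5,000 vision tokens but accuracy drops to ~60%.
print(vision_token_budget(100_000, 10), REPORTED_ACCURACY[10])
print(vision_token_budget(100_000, 20), REPORTED_ACCURACY[20])
```

The steep accuracy falloff between 10x and 20x is why the article frames these ratios as a basis for graded "memory and forgetting": older context can be kept at higher compression (cheaper but lossier), while recent context stays at lower compression.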