LLM Paradigm Shift
DeepSeek's New Model Is Crazy: The Whole AI Community Is Studying the Vision Route, and Karpathy Has Dropped the Pretense
36Kr · 2025-10-21 04:12
Core Insights
- The introduction of DeepSeek-OCR has the potential to revolutionize the paradigm of large language models (LLMs) by suggesting that all inputs should be treated as images rather than text, which could lead to significant improvements in efficiency and context handling [1][3][8].

Group 1: Model Performance and Efficiency
- DeepSeek-OCR can compress a 1000-word article into 100 visual tokens, achieving a compression efficiency ten times better than traditional text tokenization while maintaining a 97% accuracy rate [1][8].
- A single NVIDIA A100 GPU can process 200,000 pages of data daily using this model, indicating its high throughput [1].
- Using visual tokens instead of text tokens could allow for a more efficient representation of information, potentially expanding the effective context size of LLMs significantly [9][10].

Group 2: Community Reception and Validation
- The open-source release of DeepSeek-OCR garnered over 4000 stars on GitHub within a single night, reflecting strong interest and validation from the AI community [1].
- Notable figures in the AI field, such as Andrej Karpathy, have praised the model, indicating its potential impact and effectiveness [1][3].

Group 3: Theoretical Implications
- The model's ability to represent text as visual tokens raises questions about how this might affect the cognitive capabilities of LLMs, particularly in reasoning and language expression [9][10].
- The concept aligns with human cognition, where visual memory plays a significant role in recalling information, suggesting a more natural way for models to process and retrieve data [9].

Group 4: Historical Context and Comparisons
- While DeepSeek-OCR presents a novel approach, similar ideas were previously explored in the 2022 paper "Language Modelling with Pixels," which proposed a pixel-based language encoder [14][16].
- Ongoing work in this area includes various research papers that build on the foundational ideas of visual tokenization and its applications in multi-modal learning [16].

Group 5: Criticism and Challenges
- Some researchers have criticized DeepSeek-OCR for lacking progressive development compared to human cognitive processes, suggesting that the model may not fully replicate human-like understanding [19].
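The compression figures above can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below is purely illustrative: the 100-vision-tokens-per-1000-words figure is from the article, while the roughly one-text-token-per-word tokenization rate and the 128k context window are assumptions for the sake of the calculation.

```python
# Back-of-the-envelope check of the compression ratio reported for
# DeepSeek-OCR. The vision-token count comes from the article; the
# ~1 text token per word rate is an assumed, illustrative figure.

WORDS_PER_ARTICLE = 1000
TEXT_TOKENS_PER_WORD = 1.0        # assumption, not from the article
VISION_TOKENS_PER_ARTICLE = 100   # reported in the article

text_tokens = WORDS_PER_ARTICLE * TEXT_TOKENS_PER_WORD
compression_ratio = text_tokens / VISION_TOKENS_PER_ARTICLE
print(f"compression ratio: {compression_ratio:.0f}x")  # matches the reported 10x

# If that ratio held in general, a hypothetical 128k-token context window
# filled with vision tokens could represent ~10x more text than today.
CONTEXT_WINDOW = 128_000          # assumed window size for illustration
effective_words = CONTEXT_WINDOW * compression_ratio
print(f"effective text capacity: ~{effective_words:,.0f} words")
```

This is only a consistency check on the article's numbers, not a description of how DeepSeek-OCR itself measures compression; the 97% accuracy figure refers to decoding precision at this 10x ratio.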
DeepSeek's New Model Is Crazy: The Whole AI Community Is Studying the Vision Route, and Karpathy Has Dropped the Pretense
机器之心· 2025-10-21 03:43
机器之心 report. Editors: 泽南, Panda.

"I really like the new DeepSeek-OCR paper… Perhaps it makes more sense for all inputs to LLMs to be images. Even when the input happens to be pure text, you should render it first and then feed it in."

Overnight, the paradigm of large models seems to have been upended by DeepSeek's newly released model.

Yesterday afternoon, the brand-new model DeepSeek-OCR was suddenly open-sourced. In the model's processing pipeline, a 1000-character article can be compressed into 100 visual tokens; even at 10x compression, accuracy reaches 97%, and a single NVIDIA A100 can process 200,000 pages of data per day.

This approach may solve the long-context efficiency problem that currently plagues the large-model field. More importantly, if "seeing" text rather than "reading" text is eventually confirmed as the right direction, it would mean a major shift in the large-model paradigm.

On GitHub, the DeepSeek-OCR project collected over 4000 stars in a single night.

Because it is an open-source small model, DeepSeek-OCR was immediately put to the test by the entire AI community; many prominent figures shared their views after reading the paper, barely containing their excitement.

Andrej Karpathy, a founding member of OpenAI and former director of Tesla's Autopilot program, said it is a good OCR ...