DRIFT
X @CoinDesk
CoinDesk· 2026-04-01 22:03
UPDATE: The Drift Protocol exploit losses have now reached $285M. $DRIFT is down over 20% following the exploit. https://t.co/7gExjgCB71
Quoting CoinDesk (@CoinDesk): WARNING: @DriftProtocol reports unusual activity detected on its protocol. Users are being advised not to deposit funds at this time. Stay safe. https://t.co/nrM3F2IlLY ...
Small model reads, large model thinks: Shanghai AI Lab proposes DRIFT, an efficient and jailbreak-resistant method for decoupling knowledge from reasoning
机器之心· 2026-03-14 06:33
Core Insights
- The article discusses the limitations of current long-context reasoning models and questions whether knowledge acquisition and logical reasoning should be performed by the same model [3][11][12].

Group 1: DRIFT Framework
- DRIFT is introduced as a dual-model framework that decouples knowledge acquisition from reasoning: a lightweight knowledge model extracts relevant information from long documents and compresses it into a high-density representation for the reasoning model [4][12][15].
- Experimental results indicate that DRIFT significantly improves reasoning efficiency while maintaining or even enhancing task performance under high compression settings [5][21].
- The structure of DRIFT enhances robustness against security risks: because the reasoning model does not directly interact with the original text, its exposure to malicious content is reduced [6][26].

Group 2: Existing Methods and Limitations
- Current methods for handling long contexts include compression, retrieval, and memory, but they often struggle with determining who reads and how the information is read effectively [9][10].
- Compression techniques either delete low-importance tokens or map text to latent representations; both have limitations in retaining critical information [10].
- Retrieval-augmented generation (RAG) methods depend heavily on the performance of the retrieval system, which can limit overall effectiveness [10].

Group 3: Training and Performance
- DRIFT employs a three-stage training strategy that teaches the knowledge model how to read and compress query-relevant information, while the reasoning model focuses on reasoning over the compressed knowledge [18][20].
- The model demonstrates a 32× compression rate while achieving performance comparable to or exceeding full-context models, with lower reasoning latency across various context lengths [24][21].
- The reasoning model retains its general capabilities, effectively handling complex reasoning tasks, knowledge questions, code generation, and instruction following [22][23].

Group 4: Applications and Broader Implications
- The decoupling of reading and reasoning is exemplified in the protein understanding task, where a specialized model interprets protein sequences, allowing a language model to focus on reasoning [28][29].
- This structural decoupling not only enhances efficiency but may also provide additional security benefits by minimizing the reasoning model's exposure to potential attacks [34][26].
- The overarching theme from DRIFT to BioBridge is that extracting domain knowledge into a representation suited for reasoning is more effective than having reasoning models directly process raw knowledge inputs [33][34].
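The dual-model pipeline described above can be illustrated with a toy sketch. Everything here is hypothetical: the real DRIFT components are trained neural models, whereas this stand-in "knowledge model" just scores sentences by query-term overlap to mimic query-conditioned compression, and the "reasoner" is a stub that only ever sees the compressed context, never the raw document.

```python
# Toy sketch of a DRIFT-style decoupled read/reason pipeline.
# NOTE: function names and logic are illustrative assumptions, not the
# actual DRIFT implementation (which uses trained neural models).

def knowledge_model(document: str, query: str, budget: int = 2) -> list[str]:
    """Lightweight 'reader': keep only the sentences most relevant to the
    query, a crude stand-in for compressing a long document into a
    high-density, query-conditioned representation."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    query_terms = set(query.lower().split())
    # Rank sentences by how many query terms they share (stable sort,
    # so ties keep document order), then keep the top `budget`.
    ranked = sorted(
        sentences,
        key=lambda s: len(query_terms & set(s.lower().split())),
        reverse=True,
    )
    return ranked[:budget]

def reasoning_model(compressed: list[str], query: str) -> str:
    """'Reasoner' stub: operates only on the compressed knowledge. In
    DRIFT this isolation is what yields both lower latency and reduced
    exposure to malicious content in the raw text."""
    return f"Answer to {query!r} based on: " + " | ".join(compressed)

doc = ("Drift is a dual-model framework. The knowledge model reads long "
       "documents. The reasoning model never sees raw text. Compression "
       "reaches high ratios in the reported experiments")
query = "what does the knowledge model do"
ctx = knowledge_model(doc, query, budget=2)
print(reasoning_model(ctx, query))
```

The point of the sketch is the interface, not the scoring: only the small reader touches the full document, and the reasoner's input length is bounded by `budget` regardless of how long the source document grows.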
X @Ansem
Ansem 🧸💸· 2025-07-03 02:27
RT BidClub (@bidclubio)
🔈 BidCast - New interview with @cindy_leowtt from @DriftProtocol on $DRIFT
We discuss:
📌 How $DRIFT is making $25–35M in annualized revenue, with 100% flowing to the DAO-held treasury.
📌 Why it targets $500M–$1B in daily volume by year-end, backed by the upcoming Aug patch + SOL upgrades.
📌 Token buybacks based on PE ratio w/ treasury.
📌 Apollo private credit fund / RWA and more.
Timestamps 👇
00:00 Opening
00:50 Introduction to Drift Protocol and Trading Volume
03:57 Revenue Structure and Token Holder Dyna ...