Tokens
X @Trust Wallet
Trust Wallet· 2025-10-24 18:47
What tokens are you watching this weekend? 👀 Whether it’s a few or a few dozen, Watchlists in Trust Wallet make your favourite tokens easy to follow. Check out our YouTube tutorial to see how: https://t.co/O2RCx8X6O4 ...
X @BitMart
BitMart· 2025-10-23 05:59
BitMart Pre-Market Platform Overview
- BitMart introduces Pre-Market trading, enabling users to trade tokens before official listing [1]
- The platform allows early access to new assets [1]
Pre-Market Trading Mechanics (a minimal sketch of this flow follows after this item)
- Users stake USDT to mint PreTokens [1]
- Settlement happens automatically when the project officially lists [1]
User Engagement & Incentives
- BitMart is running a quiz with a $100 USDT prize pool to educate users about Pre-Market [1]
- Five winners will be chosen at random from those who answer correctly and follow the participation guidelines [1]
Participation Requirements
- To join the quiz, users need to follow BitMart, retweet the post, and comment with their answers and CID [1]
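The mechanics summarized above (stake USDT, mint PreTokens, settle automatically at listing) can be sketched as a simple model. This is an illustrative sketch only, not BitMart's actual system: the class and method names (`PreMarket`, `stake`, `settle`), the 1:1 mint ratio, and the settlement rule are all assumptions.

```python
# Minimal sketch of the pre-market flow described above.
# NOT BitMart's real implementation: names, the 1:1 USDT-to-PreToken mint
# ratio, and the settlement rule are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class PreMarketPosition:
    user: str
    usdt_staked: float
    pretokens: float  # minted 1:1 against staked USDT (assumption)


@dataclass
class PreMarket:
    token_symbol: str
    positions: list = field(default_factory=list)
    listed: bool = False

    def stake(self, user: str, usdt: float) -> PreMarketPosition:
        """User stakes USDT and receives PreTokens before the official listing."""
        pos = PreMarketPosition(user=user, usdt_staked=usdt, pretokens=usdt)
        self.positions.append(pos)
        return pos

    def settle(self, listing_price: float) -> dict:
        """At official listing, positions settle automatically; here each
        position's PreTokens convert to spot tokens at the listing price."""
        self.listed = True
        return {p.user: p.pretokens / listing_price for p in self.positions}


# Example: stake 100 USDT pre-listing, then the token lists at $0.50.
market = PreMarket("NEWTOKEN")
market.stake("alice", 100.0)
print(market.settle(listing_price=0.50))  # {'alice': 200.0}
```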
New DeepSeek just did something crazy...
Matthew Berman· 2025-10-22 17:15
DeepSeek OCR Key Features
- DeepSeek OCR is a novel approach to image-based text recognition that compresses text by 10x while maintaining 97% accuracy [2]
- The model uses a vision language model (VLM) to render text into an image, fitting roughly 10 times more text into the same token budget [6][11]
- The method achieves 96%+ OCR decoding precision at 9-10x text compression, about 90% at 10-12x compression, and about 60% at 20x compression [13] (a quick back-of-envelope check follows this summary)
Technical Details
- The model splits the input image into 16x16 patches [9]
- It uses SAM, an 80-million-parameter model, to capture local details [10]
- It uses CLIP, a 300-million-parameter model, to store information about how the patches fit together [10]
- The output is decoded by DeepSeek-3B, a 3-billion-parameter mixture-of-experts model with 570 million active parameters [10]
Training Data
- The model was trained on 30 million pages of diverse PDF data from the internet covering approximately 100 languages [21]
- Chinese and English account for approximately 25 million pages; other languages account for 5 million [21]
Potential Impact
- This technology could potentially 10x the context window of large language models [20]
- Andrej Karpathy suggests that pixels might be better inputs to LLMs than text tokens [17]
- An entire encyclopedia could be compressed into a single high-resolution image [20]
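The compression figures quoted in the summary can be turned into a quick sanity check. The sketch below is illustrative only: the function names and the band boundaries between the quoted data points (96%+ at 9-10x, ~90% at 10-12x, ~60% at 20x) are assumptions, not part of DeepSeek OCR's actual API.

```python
# Back-of-envelope check of the compression/accuracy trade-off quoted above.
# Illustrative only: function names and band boundaries are assumptions.

def compression_ratio(text_tokens: int, vision_tokens: int) -> float:
    """How many text tokens are packed into each vision token."""
    return text_tokens / vision_tokens


def expected_precision(ratio: float) -> str:
    """Rough OCR decoding precision at a given compression ratio,
    using the figures quoted in the summary."""
    if ratio <= 10:
        return "~96%+ decoding precision (9-10x regime)"
    if ratio <= 12:
        return "~90% decoding precision (10-12x regime)"
    if ratio <= 20:
        return "degrading toward ~60% at 20x"
    return "beyond the reported range (<60% expected)"


# Example: a 10,000-token document rendered into an image that costs
# 1,000 vision tokens gives a 10x compression ratio.
r = compression_ratio(text_tokens=10_000, vision_tokens=1_000)
print(f"{r:.1f}x -> {expected_precision(r)}")
```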
X @Phantom
Phantom· 2025-10-20 16:39
Note: These represent the top tokens swapped with our in-wallet swapper between 10/13 and 10/19, excluding SOL and stablecoins. ...
X @BSCN
BSCN· 2025-10-17 09:00
RT BSCN (@BSCNews)💡 SPOTLIGHT: NEW TOKENS ON $BNB CHAIN 💡@BNBChain season is well-and-truly in! 💰 Here are some of the recently-launched tokens making waves on the industry's top L1... ⬇️- @4onbsc $4- @akedofun $AKEDO- @creditslink $CDL- @EGLL_american $EGL1- @hajimi_CTO_BNB https://t.co/07PA9yX33M ...
Token usage up more than 200x; compute consumption by domestic large-model leaders may surge
Xuan Gu Bao· 2025-10-16 23:23
*Disclaimer: This article is for reference only and does not constitute investment advice. *Risk warning: The stock market carries risk; invest with caution.

Orient Securities also notes that on the compute side, the performance upgrades of the Doubao large model and the rapid growth of inference demand make sustained compute build-out critical. Against this backdrop, server and liquid-cooling vendors, PCB makers, and others stand to benefit. In addition, continually rising demand for data storage and transport creates more market opportunities for storage, optical-module, and optical-chip companies, pushing the entire industry chain to keep optimizing and upgrading to meet ever-growing AI compute demand.

On the company side, Huachuang Securities notes that Runze Technology, Baosight Software, Sinnet Technology, and others have related partnerships.

Zheshang Securities points out that ByteDance's 2024 capital expenditure reached RMB 80 billion, close to the combined total of Baidu, Alibaba, and Tencent (RMB 100 billion). In 2025, ByteDance's capex is expected to reach RMB 160 billion, aimed at building large-scale, independently controlled data center clusters, with roughly RMB 90 billion going to AI compute procurement and RMB 70 billion to IDC infrastructure and network equipment such as optical modules and switches. ByteDance's estimate of future token consumption is high, and it is expected to keep increasing its compute investment.

Zheshang also estimated the GPU/server and data-center equipment demand that the Doubao model brings to the compute supply chain. Assuming 50 million daily active users and average daily token usage of 50 trillion by 2027, and applying a 2.5x peak-token multiplier to meet user demand, 2027 compute demand reaches 1 ...
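Zheshang's stated assumptions (50 million DAU, 50 trillion tokens per day, a 2.5x peak multiplier by 2027) imply the throughput figures below. This is a plain arithmetic restatement of those assumptions, not an independent estimate; the per-user and per-second breakdowns are derived values.

```python
# Arithmetic restatement of the Zheshang Securities assumptions quoted above.
# Derived values only; no new data is introduced.

DAU = 50_000_000                    # assumed daily active users in 2027
DAILY_TOKENS = 50_000_000_000_000   # 50 trillion tokens per day
PEAK_MULTIPLIER = 2.5               # peak-to-average throughput ratio

tokens_per_user_per_day = DAILY_TOKENS / DAU
avg_tokens_per_second = DAILY_TOKENS / 86_400
peak_tokens_per_second = avg_tokens_per_second * PEAK_MULTIPLIER

print(f"Tokens per DAU per day: {tokens_per_user_per_day:,.0f}")       # 1,000,000
print(f"Average throughput:     {avg_tokens_per_second:,.0f} tok/s")   # ~578.7 million
print(f"Peak throughput:        {peak_tokens_per_second:,.0f} tok/s")  # ~1.45 billion
```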
X @wale.moca 🐳
wale.moca 🐳· 2025-10-15 16:26
I spent $10k USD on an imaginary box in the hopes of winning a valuable JPEG or a lot of valuable tokens. Reporting back tomorrow ...